paper_id: muller-etal-2023-considerations
title: Considerations for meaningful sign language machine translation based on glosses
url: https://aclanthology.org/2023.acl-short.60
## Considerations For Meaningful Sign Language Machine Translation Based On Glosses
Mathias Müller1, Zifan Jiang1, Amit Moryossef1,2, Annette Rios1 and Sarah Ebling1
1 Department of Computational Linguistics, University of Zurich, Switzerland
2 Bar-Ilan University, Israel
{mmueller,jiang,rios,ebling}@cl.uzh.ch, [email protected]
## Abstract
Automatic sign language processing is gaining popularity in Natural Language Processing
(NLP) research (Yin et al., 2021). In machine translation (MT) in particular, sign language translation based on *glosses* is a prominent approach. In this paper, we review recent works on neural gloss translation. We find that limitations of glosses in general and limitations of specific datasets are not discussed in a transparent manner and that there is no common standard for evaluation.
To address these issues, we put forward concrete recommendations for future research on gloss translation. Our suggestions advocate awareness of the inherent limitations of gloss-based approaches, realistic datasets, stronger baselines and convincing evaluation.
## 1 Introduction
Automatic sign language processing is becoming more popular in NLP research (Yin et al., 2021).
In machine translation (MT) in particular, many recent publications have proposed sign language translation (SLT) based on *glosses*. Glosses provide semantic labels for individual signs. They typically consist of the base form of a word in the surrounding spoken language written in capital letters (see Table 1). Even though glosses are not a complete representation of signs (see e.g. Pizzuto et al. 2006), they are often adopted in MT because, by virtue of being textual, they fit seamlessly into existing MT pipelines and existing methods seemingly require the least modification.
In this paper, we review recent works on neural gloss translation. We find that limitations of gloss-based approaches in general and limitations of specific datasets are not transparently discussed as inherent shortcomings. Furthermore, among gloss translation papers there is no common standard for evaluation, especially regarding the exact method to compute BLEU scores.
| Glosses (DSGS) | KINDER FREUEN WARUM FERIEN NÄHER-KOMMEN |
|---|---|
| Translation (DE) | Die Kinder freuen sich, weil die Ferien näher rücken. |
| Glosses (EN) | ('CHILDREN REJOICE WHY HOLIDAYS APPROACHING') |
| Translation (EN) | ('The children are happy because the holidays are approaching.') |

Table 1: Example of sign language glosses. DSGS=Swiss German Sign Language, DE=German, EN=English. English translations are provided for convenience. Example is adapted from a lexicon of the three sign languages of Switzerland, where a sign language video of this sentence is available (https://signsuisse.sgb-fss.ch/de/lexikon/g/ferien/).
Experiments in SLT should be informed by sign language expertise and should be performed according to the best practices already established in the MT community.
To alleviate these problems going forward, we make practical recommendations for future research on gloss translation.
Our paper makes the following contributions:
- We provide a review of recent works on gloss translation (§2).
- We outline recommendations for future work which promote awareness of the inherent limitations of gloss-based approaches, realistic datasets, stronger baselines and convincing evaluation (§3).
## 2 Related Work
For a general, interdisciplinary introduction to sign language processing see Bragg et al. (2019). For an overview in the context of NLP see Yin et al. (2021) and Moryossef and Goldberg (2021), and see De Coster et al. (2022) for a comprehensive survey
Table 2: Overview of the reviewed works on gloss translation (P = PHOENIX, O = other; B = BLEU, R = ROUGE).

| paper | P | O | other dataset | DGS→DE | DE→DGS | other direction | code | B 1-3 | B-4 | R | other metrics | BLEU tool |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Camgöz et al. (2018) | - | ✔ | - | ✔ | - | - | ✔ | ✔ | ✔ | ✔ | - | Tensorflow |
| Stoll et al. (2018) | - | ✔ | - | - | ✔ | - | - | ✔ | ✔ | ✔ | - | (unclear) |
| Camgöz et al. (2020b) | ✔ | ✔ | - | ✔ | - | - | ✔ | ✔ | ✔ | - | WER | (unclear) |
| Camgöz et al. (2020a) | ✔ | ✔ | - | ✔ | - | - | - | - | ✔ | ✔ | - | (unclear) |
| Yin and Read (2020) | ✔ | ✔ | ASLG-PC12 | ✔ | - | ASL→EN | ✔ | ✔ | ✔ | ✔ | METEOR | NLTK |
| Saunders et al. (2020) | ✔ | ✔ | - | - | ✔ | - | ✔ | ✔ | ✔ | ✔ | - | (unclear) |
| Stoll et al. (2020) | ✔ | ✔ | - | - | ✔ | - | - | ✔ | ✔ | ✔ | WER | (unclear) |
| Orbay and Akarun (2020) | - | ✔ | - | ✔ | - | - | - | ✔ | ✔ | ✔ | - | (unclear) |
| Moryossef et al. (2021) | - | ✔ | NCSLGR | ✔ | - | ASL→EN | (✔) | - | ✔ | - | COMET | SacreBLEU |
| Zhang and Duh (2021) | - | ✔ | - | ✔ | ✔ | - | - | - | ✔ | - | - | (unclear) |
| Egea Gómez et al. (2021) | - | ✔ | - | - | ✔ | - | ✔ | - | ✔ | ✔ | METEOR, TER | SacreBLEU |
| Saunders et al. (2022) | - | ✔ | DGS Corpus | - | ✔ | - | - | - | ✔ | ✔ | - | (unclear) |
| Angelova et al. (2022) | ✔ | ✔ | DGS Corpus | ✔ | - | - | ✔ | - | ✔ | - | - | SacreBLEU |
| Walsh et al. (2022) | - | ✔ | DGS Corpus | - | ✔ | - | - | ✔ | ✔ | ✔ | - | (unclear) |
of sign language machine translation (including, but not limited to, gloss-based approaches).
We conduct a more narrow literature review of 14 recent publications on gloss translation. We report characteristics such as the datasets used, translation directions, and evaluation details (Table 2).
Our informal procedure of selecting papers is detailed in Appendix A.
## 2.1 **Awareness Of Limitations Of Gloss Approach**
We find that 8 out of 14 reviewed works do not include an adequate discussion of the limitations of gloss approaches, inadvertently overstating the potential usefulness of their experiments.
In the context of sign languages, glosses are unique identifiers for individual signs. However, a linear sequence of glosses is not an adequate representation of a signed utterance, where different channels (manual and non-manual) are engaged simultaneously. Linguistically relevant cues such as non-manual movement or use of three-dimensional space may be missing (Yin et al., 2021).
The gloss transcription conventions of different corpora vary greatly, as does the level of detail (see Kopf et al. (2022) for an overview of differences and commonalities between corpora). Therefore, glosses in different corpora or across languages are not comparable. Gloss transcription is an enormously laborious process done by expert linguists.
Besides, glosses are a linguistic tool, not a writing system established in Deaf communities. Sign language users generally do not read or write glosses in their everyday lives.
Taken together, this means that gloss translation suffers from an inherent and irrecoverable information loss, that creating an abundance of translations transcribed as glosses is unrealistic, and that gloss translation systems are not immediately useful to end users.
## 2.2 Choice Of Dataset
All reviewed works use the RWTH-PHOENIX Weather 2014T (hereafter abbreviated as PHOENIX) dataset (Forster et al., 2014; Camgöz et al., 2018), while other datasets are used far less frequently. Besides, we note a distinct paucity of languages and translation directions: 12 out of 14 works are concerned only with translation between German Sign Language (DGS) and German (DE), the language pair of the PHOENIX dataset.
While PHOENIX was a breakthrough when it was published, it is of limited use for current research. The dataset is small (8k sentence pairs)
and contains only weather reports, covering a very narrow linguistic domain. It is important to discuss the exact nature of glosses, how the corpus was created and how it is distributed.
| corpus | domains | language pair | # signs | # hours | # signers | signing origin | glosses? |
|---|---|---|---|---|---|---|---|
| PHOENIX (Forster et al., 2014; Camgöz et al., 2018) | weather | DGS↔DE | 1066 | 11 | 9 | LI | ✔ |
| Public DGS Corpus (Hanke et al., 2020) | conversation, storytelling | DGS↔DE | 8580∗ | 50 | 330 | OS | ✔ |
| BOBSL (Albanie et al., 2021) | general broadcast programs | BSL↔EN | 2281 | 1467 | 39 | LI | - |
| FocusNews (Müller et al., 2022) | general news | DSGS↔DE | - | 19 | 12 | OS | - |

Table 3: Alternative corpora compared to PHOENIX (LI = live interpretation, OS = original signing).
Glossing PHOENIX is based on German weather reports interpreted into DGS and broadcast on the TV station Phoenix. The broadcast videos served as input for the DGS side of the parallel corpus. Compared to the glossing conventions of other well-known corpora, PHOENIX glosses are simplistic and capture mostly manual features (with mouthings as the only non-manual activity), which is not sufficient to represent meaning (§2.1).
## Live Interpretation And Translationese Effects
The fact that PHOENIX data comes from interpretation in a live setting has two implications: Firstly, since information was conveyed at high speed, the sign language interpreters omitted pieces of information from time to time. This leads to an information mismatch between some German sentences and their DGS counterparts. Secondly, due to the high speed of transmission, the (hearing) interpreters sometimes followed the grammar of German more closely than that of DGS, amounting to a translationese effect.
Preprocessing of spoken language The German side of the PHOENIX corpus is available only already tokenized, lowercased and with punctuation symbols removed. From an MT perspective this is unexpected since corpora are usually distributed without such preprocessing.
PHOENIX is popular because it is freely available and is a benchmark with clearly defined data splits introduced by Camgöz et al. (2018). SLT as a field is experiencing a shortage of free and open datasets and, with the exception of PHOENIX,
there are no agreed-upon data splits.
Essentially, from a scientific point of view, achieving higher gloss translation quality on the PHOENIX dataset is near meaningless. The apparent overuse of PHOENIX is reminiscent of the overuse of MNIST (LeCun et al., 2010) in machine learning, or the overuse of the WMT 14 English-German benchmark in the MT community, popularized by Vaswani et al. (2017).
Alternative corpora In Table 3 we list several alternatives to PHOENIX, to exemplify how other corpora are preferable in different ways. For example, in PHOENIX the sign language data is produced by hearing interpreters in a live interpretation setting. In contrast, the Public DGS Corpus and FocusNews contain original (non-translated) signing material produced by deaf signers. PHOENIX is limited to weather reports, while all other corpora listed in Table 3 feature much broader domains.
The number of different signs found in PHOENIX
is also small compared to alternative corpora. For instance, the sign vocabulary of BOBSL is twice as large as for PHOENIX, which corroborates that the language data in BOBSL indeed is more varied. Besides, BOBSL also is vastly bigger than PHOENIX and features more individual signers.
## 2.3 Evaluation
As evaluation metrics, all works use some variant of BLEU (Papineni et al., 2002), and ten out of 14 use some variant of ROUGE (Lin, 2004). All but four papers do not contain enough information about how exactly BLEU was computed. Different BLEU implementations, settings (e.g. ngram orders, tokenization schemes) and versions are used.
| Reference | VIEL1A FAMILIE1* JUNG1 FAMILIE1 GERN1* IN1* HAMBURG1* STADT2* WOHNUNG2B* FAMILIE1 |
|---|---|
| Hypothesis | VIEL1B JUNG1 LEBEN1 GERN1* HAMBURG1* STADT2* $INDEX1 |
| BLEU with tokenization | 25.61 |
| BLEU without tokenization | 10.18 |

Table 4: Impact of applying or disabling internal tokenization (13a) when computing BLEU on gloss outputs. Example taken from the Public DGS Corpus (Hanke et al., 2020).
Non-standard metrics ROUGE is a metric common in automatic summarization but not in MT, and was never correlated with human judgement in a large study. In eight out of 14 papers, BLEU is used with a non-standard maximum ngram order, producing variants such as BLEU-1, BLEU-2, etc. Similar to ROUGE, these variants of BLEU have never been validated as metrics of translation quality, and their use is scientifically unmotivated.
Tokenization BLEU requires tokenized machine translations and references. Modern tools therefore apply a tokenization procedure internally and implicitly (independently of the MT system's preprocessing). Computing BLEU with tokenization on glosses leads to seemingly better scores but is misleading since tokenization creates trivial matches. For instance, in corpora that make use of the character $ in glosses (e.g. the DGS Corpus (Konrad et al., 2022)), $ is split off as a single character, inflating the ngram sub-scores. For an illustration see Table 4 (and Appendix B for a complete code listing) where we demonstrate that using or omitting tokenization leads to a difference of 15 BLEU.
Spurious gains Different implementations of BLEU or different tokenizations lead to differences in BLEU bigger than what many papers describe as an "improvement" over previous work (Post, 2018).
Incorrectly attributing such improvements to, for instance, changes to the model architecture amounts to a "failure to identify the sources of empirical gains" (Lipton and Steinhardt, 2019). In a similar vein, we observe that papers on gloss translation tend to copy scores from previous papers without knowing whether the evaluation procedures are in fact the same. This constitutes a general trend in recent MT literature (Marie et al., 2021).
In summary, some previous works on gloss translation have used 1) automatic metrics that are not suitable for MT or 2) well-established MT metrics in ways that are not recommended. BLEU with standard settings and tools is inappropriate for gloss outputs.
The recommended way to compute BLEU on gloss output is to use the tool SacreBLEU (Post, 2018) and to disable internal tokenization. Nevertheless, even with these precautions, it is important to note that BLEU was never validated empirically as an evaluation metric for gloss output. Some aspects of BLEU may not be adequate for a sequence of glosses, such as its emphasis on whitespaces to mark the boundaries of meaningful units that are the basis of the final score.
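To make the recommended setup concrete, here is a minimal sketch of scoring gloss output with SacreBLEU while disabling internal tokenization and reporting the metric signature; the gloss strings are invented placeholders, not corpus data:

```python
# Minimal sketch: BLEU on gloss output with SacreBLEU's internal tokenization disabled.
# pip install sacrebleu
from sacrebleu.metrics import BLEU

hypotheses = ["KINDER FREUEN FERIEN KOMMEN"]                 # system output (placeholder glosses)
references = [["KINDER FREUEN WARUM FERIEN NAEHER-KOMMEN"]]  # one reference stream (placeholder glosses)

# tokenize="none" treats glosses as whitespace-separated units and avoids
# splitting off characters such as "$", which would create trivial matches.
bleu = BLEU(tokenize="none")
result = bleu.corpus_score(hypotheses, references)

print(result)                # the BLEU score
print(bleu.get_signature())  # report this signature alongside the score
```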
Other string-based metrics such as CHRF (Popović, 2016) may be viable alternatives for gloss evaluation. CHRF is a character-based metric and its correlation with human judgement is at least as good as BLEU's (Kocmi et al., 2021).
On a broader note, we do not advocate BLEU in particular, but advocate that any evaluation metric is used according to best practices in MT. Some of the best practices (such as reporting the metric signature) equally apply to all metrics. A key limitation regarding choosing a metric is that many metrics that are indeed advocated today, such as COMET (Rei et al., 2020), cannot be used for gloss outputs because this "language" is not supported by COMET. There are also hardly any human judgement scores to train new versions of neural metrics.
## 2.4 Further Observations
More informally (beyond what we show in Table 2), we observe that most papers do not process glosses in any corpus-specific way, and that particular modeling and training decisions may not be ideal for low-resource gloss translation.
Preprocessing glosses Glosses are created for linguistic purposes (§2.1), not necessarily with machine translation in mind. Particular gloss parts are not relevant for translation and, if kept, make the problem harder unnecessarily. For instance, a corpus transcription and annotation scheme might prescribe that meaning-equivalent, minor form variants of signs are transcribed as different glosses.
As the particular nature of glosses is specific to each corpus, it is necessary to preprocess glosses in a corpus-specific way. We illustrate corpus-specific gloss processing in Appendix C, using the Public DGS Corpus (Hanke et al., 2020) as an example.
Modeling and training decisions Gloss translation experiments are certainly low-resource scenarios and therefore, best practices for optimizing MT systems on low-resource datasets apply (Sennrich and Zhang, 2019). For example, dropout rates or label smoothing should be set accordingly, and the vocabulary of a subword model should be generally small (Ding et al., 2019).
Gloss translation models are often compared to other approaches as baselines; it is therefore problematic if those gloss baselines are weak and unoptimized (Denkowski and Neubig, 2017).
## 3 Recommendations For Gloss Translation
Based on our review of recent works on gloss translation, we make the following recommendations for future research:
- Demonstrate awareness of limitations of gloss approaches (§2.1) and explicitly discuss them.
- Focus on datasets beyond PHOENIX. Openly discuss the limited size and linguistic domain of PHOENIX (§2.2).
- Use metrics that are well-established in MT. If BLEU is used, compute it with SacreBLEU, report metric signatures and disable internal tokenization for gloss outputs. Do not compare to scores produced with a different or unknown evaluation procedure (§2.3).
- Given that glossing is corpus-specific (§2.1), process glosses in a corpus-specific way, informed by transcription conventions (§2.4).
- Optimize gloss translation baselines with methods shown to be effective for low-resource MT (§2.4).
We also believe that publishing reproducible code makes works on gloss translation more valuable to the scientific community.
Justification for recommendations There is an apparent tension between making recommendations for future work on gloss translation and at the same time claiming that the paradigm of gloss translation is inadequate to begin with (§2.1). But importantly, further works on gloss translation are likely because MT researchers have a preference for text-based translation problems and little awareness of sign linguistics. If further research is conducted, it should be based on sound scientific methodology.
## 4 Alternatives To Gloss Translation
In previous sections we have established that glosses are a lossy representation of sign language.
We also argued that the most prominent benchmark corpus for gloss translation (PHOENIX) is inadequate, but other, preferable corpora do not contain glosses. This begs the question: if not gloss translation, what other approach should be pursued?
Representing sign language Alternatives include translation models that extract features directly from video, generate video directly or use pose estimation data as a sign language representation (Tarrés et al., 2023; Müller et al., 2022). A distinct advantage of such systems is that they produce a sign language output that is immediately useful to a user, whereas glosses are only an intermediate output that are not intelligible by themselves.
If a system generates a continuous output such as a video, then evaluating translation quality with an automatic metric is largely an unsolved problem.
Even though there are recent proposals for metrics (e.g. Arkushin et al., 2023), more fundamental research in this direction is still required.
## 5 Conclusion
In this paper we have shown that some recent works on gloss translation lack awareness of the inherent limitations of glosses and common datasets, as well as a standardized evaluation method (§2). In order to make future research on gloss translation more meaningful, we make practical recommendations for the field (§3).
We urge researchers to spell out limitations of gloss translation approaches, e.g. in the now mandatory limitation sections of *ACL papers, and to strengthen their findings by implementing existing best practices in MT.
Finally, we also caution that researchers should consider whether gloss translation is worthwhile, and if time and effort would be better spent on basic linguistic tools (such as segmentation, alignment or coreference resolution), creating training corpora or translation methods that do not rely on glosses.
## Limitations
Our approach to surveying the research literature has limitations. Firstly, some characterizations of the published works we survey are subjective. For example, it is somewhat subjective whether a paper
"includes an adequate discussion of the limitations of glosses" and somewhat subjective whether the evaluation procedure is explained in enough detail.
Furthermore, it is likely that our survey missed some existing publications, especially if published in other contexts than NLP and machine learning conferences and journals. This may have skewed our findings.
Finally, the statements and recommendations in this paper are valid only as long as automatic glossing from video is not feasible. If a scientific breakthrough is achieved in the future, the relevance of glosses for sign language translation may need to be re-evaluated.
## Data Licensing
The license of the Public DGS Corpus (https://www.sign-lang.uni-hamburg.de/meinedgs/ling/license_en.html), which we use only as examples in Table 4 and Appendix C, does not allow any computational research except if express permission is given by the University of Hamburg.
## Acknowledgements
This work was funded by the EU Horizon 2020 project EASIER (grant agreement no. 101016982),
the Swiss Innovation Agency (Innosuisse) flagship IICT (PFFS-21-47) and the EU Horizon 2020 project iEXTRACT (grant agreement no. 802774).
We thank the DGS Corpus team at the University of Hamburg for helpful discussions on gloss preprocessing. Finally, we thank the anonymous reviewers for their help in improving this paper.
## References
Samuel Albanie, Gül Varol, Liliane Momeni, Hannah Bull, Triantafyllos Afouras, Himel Chowdhury, Neil Fox, Bencie Woll, Rob Cooper, Andrew McParland, and Andrew Zisserman. 2021. BOBSL: BBC-Oxford British Sign Language Dataset.
Galina Angelova, Eleftherios Avramidis, and Sebastian Möller. 2022. Using neural machine translation methods for sign language translation. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop*, pages 273–284, Dublin, Ireland. Association for Computational Linguistics.
Rotem Shalev Arkushin, Amit Moryossef, and Ohad Fried. 2023. Ham2pose: Animating sign language notation into pose sequences. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21046–21056.
Danielle Bragg, Oscar Koller, Mary Bellard, Larwan Berke, Patrick Boudreault, Annelies Braffort, Naomi Caselli, Matt Huenerfauth, Hernisa Kacorri, Tessa Verhoef, et al. 2019. Sign language recognition, generation, and translation: An interdisciplinary perspective. In *Proceedings of the 21st International ACM*
SIGACCESS Conference on Computers and Accessibility, pages 16–31.
Necati Cihan Camgöz, Simon Hadfield, Oscar Koller, Hermann Ney, and Richard Bowden. 2018. Neural sign language translation. In *2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition*,
pages 7784–7793.
Necati Cihan Camgöz, Oscar Koller, Simon Hadfield, and Richard Bowden. 2020a. Multi-channel transformers for multi-articulatory sign language translation. In Computer Vision - ECCV 2020 Workshops:
Glasgow, UK, August 23–28, 2020, Proceedings, Part IV, pages 301–319, Berlin, Heidelberg. Springer-Verlag.
Necati Cihan Camgöz, Oscar Koller, Simon Hadfield, and Richard Bowden. 2020b. Sign language transformers: Joint end-to-end sign language recognition and translation. In *IEEE Conference on Computer* Vision and Pattern Recognition (CVPR).
Mathieu De Coster, Dimitar Shterionov, Mieke Van Herreweghe, and Joni Dambre. 2022. Machine translation from signed to spoken languages: State of the art and challenges. *arXiv preprint arXiv:2202.03086*.
Michael Denkowski and Graham Neubig. 2017.
Stronger baselines for trustable results in neural machine translation. In *Proceedings of the First Workshop on Neural Machine Translation*, pages 18–27, Vancouver. Association for Computational Linguistics.
Shuoyang Ding, Adithya Renduchintala, and Kevin Duh.
2019. A call for prudent choice of subword merge operations in neural machine translation. In Proceedings of Machine Translation Summit XVII: Research Track, pages 204–213, Dublin, Ireland. European Association for Machine Translation.
Santiago Egea Gómez, Euan McGill, and Horacio Saggion. 2021. Syntax-aware transformers for neural machine translation: The case of text to sign gloss translation. In *Proceedings of the 14th Workshop on Building and Using Comparable Corpora (BUCC 2021)*,
pages 18–27, Online (Virtual Mode). INCOMA Ltd.
Jens Forster, Christoph Schmidt, Oscar Koller, Martin Bellgardt, and Hermann Ney. 2014. Extensions of the sign language recognition and translation corpus RWTH-PHOENIX-weather. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 1911–
1916, Reykjavik, Iceland. European Language Resources Association (ELRA).
Thomas Hanke, Marc Schulder, Reiner Konrad, and Elena Jahn. 2020. Extending the Public DGS Corpus in size and depth. In Proceedings of the LREC2020 9th Workshop on the Representation and Processing of Sign Languages: Sign Language Resources in the Service of the Language Community, Technological Challenges and Application Perspectives, pages 75–
82, Marseille, France. European Language Resources Association (ELRA).
Tom Kocmi, Christian Federmann, Roman Grundkiewicz, Marcin Junczys-Dowmunt, Hitokazu Matsushita, and Arul Menezes. 2021. To ship or not to ship: An extensive evaluation of automatic metrics for machine translation. In Proceedings of the Sixth Conference on Machine Translation, pages 478–494, Online. Association for Computational Linguistics.
Reiner Konrad, Thomas Hanke, Gabriele Langer, Susanne König, Lutz König, Rie Nishio, and Anja Regen. 2022. Public DGS Corpus: Annotation Conventions / Öffentliches DGS-Korpus: Annotationskonventionen.
Maria Kopf, Marc Schulder, Thomas Hanke, and Sam Bigeard. 2022. Specification for the harmonization of sign language annotations.
Yann LeCun, Corinna Cortes, and CJ Burges. 2010.
Mnist handwritten digit database. *ATT Labs [Online].*
Available: http://yann.lecun.com/exdb/mnist, 2.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain.
Association for Computational Linguistics.
Zachary C. Lipton and Jacob Steinhardt. 2019. Troubling trends in machine learning scholarship: Some ml papers suffer from flaws that could mislead the public and stymie future research. *Queue*,
17(1):45–77.
Benjamin Marie, Atsushi Fujita, and Raphael Rubino.
2021. Scientific credibility of machine translation research: A meta-evaluation of 769 papers. In *Proceedings of the 59th Annual Meeting of the Association for* Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 7297–7306, Online.
Association for Computational Linguistics.
Amit Moryossef and Yoav Goldberg. 2021. Sign Language Processing. https://sign-language-processing.github.io/.
Amit Moryossef, Kayo Yin, Graham Neubig, and Yoav Goldberg. 2021. Data augmentation for sign language gloss translation. In Proceedings of the 1st International Workshop on Automatic Translation for Signed and Spoken Languages (AT4SSL), pages 1–11, Virtual. Association for Machine Translation in the Americas.
Mathias Müller, Sarah Ebling, Eleftherios Avramidis, Alessia Battisti, Michèle Berger, Richard Bowden, Annelies Braffort, Necati Cihan Camgöz, Cristina España-bonet, Roman Grundkiewicz, Zifan Jiang, Oscar Koller, Amit Moryossef, Regula Perrollaz, Sabine Reinhard, Annette Rios, Dimitar Shterionov, Sandra Sidler-miserez, and Katja Tissi. 2022. Findings of the first WMT shared task on sign language translation (WMT-SLT22). In Proceedings of the Seventh Conference on Machine Translation (WMT),
pages 744–772, Abu Dhabi, United Arab Emirates
(Hybrid). Association for Computational Linguistics.
Alptekin Orbay and Lale Akarun. 2020. Neural sign language translation by learning tokenization. In *2020* 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020), pages 222–
228.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Elena Antinoro Pizzuto, Paolo Rossini, and Tommaso Russo. 2006. Representing signed languages in written form: Questions that need to be posed. In *Proceedings of the LREC2006 2nd Workshop on the Representation and Processing of Sign Languages: Lexicographic Matters and Didactic Scenarios*, pages 1–6, Genoa, Italy. European Language Resources Association (ELRA).
Maja Popović. 2016. chrF deconstructed: beta parameters and n-gram weights. In *Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers*, pages 499–504, Berlin, Germany. Association for Computational Linguistics.
Matt Post. 2018. A call for clarity in reporting BLEU
scores. In *Proceedings of the Third Conference on* Machine Translation: Research Papers, pages 186–
191, Brussels, Belgium. Association for Computational Linguistics.
Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT
evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics.
Ben Saunders, Necati Cihan Camgöz, and Richard Bowden. 2020. Progressive Transformers for End-to-End
Sign Language Production. In *Proceedings of the* European Conference on Computer Vision (ECCV).
Ben Saunders, Necati Cihan Camgöz, and Richard Bowden. 2022. Signing at Scale: Learning to CoArticulate Signs for Large-Scale Photo-Realistic Sign Language Production. In Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition (CVPR).
Rico Sennrich and Biao Zhang. 2019. Revisiting lowresource neural machine translation: A case study.
In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 211–
221, Florence, Italy. Association for Computational Linguistics.
Stephanie Stoll, Necati Cihan Camgöz, Simon Hadfield, and Richard Bowden. 2018. Sign language production using neural machine translation and generative adversarial networks. In *Proceedings of the 29th* British Machine Vision Conference (BMVC 2018).
British Machine Vision Association.
Stephanie Stoll, Necati Cihan Camgöz, Simon Hadfield, and Richard Bowden. 2020. Text2sign: towards sign language production using neural machine translation and generative adversarial networks. *International* Journal of Computer Vision, 128(4):891–908.
Laia Tarrés, Gerard I Gállego, Amanda Duarte, Jordi Torres, and Xavier Giró-i Nieto. 2023. Sign language translation from instructional videos. *arXiv preprint* arXiv:2304.06371.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In *Advances in Neural Information Processing Systems 30*, pages 5998–6008.
Harry Walsh, Ben Saunders, and Richard Bowden. 2022.
Changing the representation: Examining language representation for neural sign language production.
In *Proceedings of the 7th International Workshop on* Sign Language Translation and Avatar Technology:
The Junction of the Visual and the Textual: Challenges and Perspectives, pages 117–124, Marseille, France. European Language Resources Association.
Kayo Yin, Amit Moryossef, Julie Hochgesang, Yoav Goldberg, and Malihe Alikhani. 2021. Including signed languages in natural language processing. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7347–
7360, Online. Association for Computational Linguistics.
Kayo Yin and Jesse Read. 2020. Better sign language translation with STMC-transformer. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 5975–5989, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Xuan Zhang and Kevin Duh. 2021. Approaching sign language gloss translation as a low-resource machine translation task. In Proceedings of the 1st International Workshop on Automatic Translation for Signed and Spoken Languages (AT4SSL), pages 60–70, Virtual. Association for Machine Translation in the Americas.
## A Informal Procedure Of Selecting Papers For Review
Since our paper is first and foremost a position paper we did not follow a rigorous process when selecting papers to review. Our informal criteria are as follows:
- Discover papers indexed by the ACL anthology, published at a more general machine learning conference or published in a computational linguistics journal.
- Limit our search to papers on gloss translation (as opposed to other MT papers on sign language).
- Only consider neural approaches to gloss translation, excluding statistical or rule-based works.
- Limit to recent works published in the last five years.
B Impact of internal tokenization when computing BLEU on gloss sequences
# ! pip install sacrebleu==2.2.0
>>> from sacrebleu.metrics import BLEU

# English translation: Many young families like living in the city of Hamburg.
# German translation: Viele junge Familien leben gerne in Hamburg in der Stadt.
Listing 1: Impact of enabling or disabling internal tokenization (13a) when computing BLEU on gloss outputs.
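The body of Listing 1 was embedded as an image and did not survive extraction; the following is a minimal reconstruction of the comparison it illustrates, using the gloss pair from Table 4 and SacreBLEU's tokenize argument (the variable names are ours):

```python
from sacrebleu.metrics import BLEU

hypothesis = ["VIEL1B JUNG1 LEBEN1 GERN1* HAMBURG1* STADT2* $INDEX1"]
reference = [["VIEL1A FAMILIE1* JUNG1 FAMILIE1 GERN1* IN1* HAMBURG1* STADT2* WOHNUNG2B* FAMILIE1"]]

bleu_tokenized = BLEU(tokenize="13a")   # SacreBLEU's default internal tokenization
bleu_raw = BLEU(tokenize="none")        # glosses kept as whitespace-separated units

# With 13a tokenization, characters such as "*" and "$" are split off,
# creating trivial n-gram matches and a misleadingly high score (cf. Table 4).
print(bleu_tokenized.corpus_score(hypothesis, reference))
print(bleu_raw.corpus_score(hypothesis, reference))
```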
## C Example For Corpus-Specific Gloss Preprocessing
For this example, we recommend downloading and processing release 3.0 of the corpus. To DGS glosses we suggest to apply the following modifications derived from the DGS Corpus transcription conventions (Konrad et al., 2022):
- Removing entirely two specific gloss types that cannot possibly help the translation: $GEST-OFF and $$EXTRA-LING-MAN.
- Removing *ad-hoc* deviations from citation forms, marked by *. Example: ANDERS1* → ANDERS1.
- Removing the distinction between type glosses and subtype glosses, marked by ˆ. Example: WISSEN2Bˆ → WISSEN2B.
- Collapsing phonological variations of the same type that are meaning-equivalent. Such variants are marked with uppercase letter suffixes. Example: WISSEN2B → WISSEN2.
- Deliberately keeping numerals ($NUM), list glosses ($LIST) and finger alphabet ($ALPHA) intact, except for removing handshape variants.
See Table 5 for examples for this preprocessing step. Overall these simplifications should reduce the number of observed forms while not affecting the machine translation task. For other purposes such as linguistic analysis our preprocessing would of course be detrimental.
| before | $INDEX1 ENDE1ˆ ANDERS1* SEHEN1 MÜNCHEN1B* BEREICH1A* |
|---|---|
| after | $INDEX1 ENDE1 ANDERS1 SEHEN1 MÜNCHEN1 BEREICH1 |
| before | ICH1 ETWAS-PLANEN-UND-UMSETZEN1 SELBST1A* KLAPPT1* $GEST-OFFˆ BIS-JETZT1 GEWOHNHEIT1* $GEST-OFFˆ* |
| after | ICH1 ETWAS-PLANEN-UND-UMSETZEN1 SELBST1 KLAPPT1 BIS-JETZT1 GEWOHNHEIT1 |
Table 5: Examples for preprocessing of DGS glosses.
While this preprocessing method provides a good baseline, it can certainly be refined further. For instance, the treatment of two-handed signs could be improved. If a gloss occurs simultaneously on both hands, we either keep both glosses or remove one occurrence. In both cases, information about the simultaneity of signs is lost during preprocessing and preserving it could potentially improve translation.
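A minimal sketch of these corpus-specific rules as string rewrites is given below; the function and the exact regular expressions are our reading of the conventions above, not the authors' released code, and corpus-specific handling of handshape variants for $-glosses is omitted:

```python
import re

def preprocess_dgs_glosses(gloss_sequence: str) -> str:
    """Simplify Public DGS Corpus glosses along the lines described above (sketch)."""
    kept = []
    for gloss in gloss_sequence.split():
        # Drop gloss types that cannot help translation.
        if gloss.startswith("$GEST-OFF") or gloss.startswith("$$EXTRA-LING-MAN"):
            continue
        # Remove ad-hoc deviations from citation forms (*) and the type/subtype marker (^).
        gloss = gloss.rstrip("*^")
        # Collapse meaning-equivalent phonological variants, e.g. WISSEN2B -> WISSEN2.
        if not gloss.startswith("$"):
            gloss = re.sub(r"(\d+)[A-Z]+$", r"\1", gloss)
        kept.append(gloss)
    return " ".join(kept)

# First example from Table 5:
print(preprocess_dgs_glosses("$INDEX1 ENDE1^ ANDERS1* SEHEN1 MÜNCHEN1B* BEREICH1A*"))
# -> $INDEX1 ENDE1 ANDERS1 SEHEN1 MÜNCHEN1 BEREICH1
```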
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section without number, after conclusion A2. Did you discuss any potential risks of your work?
Not applicable. there are no pertinent risks in this particular paper
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
B ✓ **Did you use or create scientific artifacts?**
the only way we used artifacts is in the sense of using examples from public corpora, in Table 1, Table 3 and Appendix B
✓ B1. Did you cite the creators of artifacts you used?
Table 1, Table 3 and Appendix B
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
we will discuss the license terms explicitly in the camera-ready version. We omitted this on purpose in the review version as a precaution for anonymity
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Not applicable. Left blank.
## C ✗ **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
No response.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
No response.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
No response.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
No response.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
paper_id: sosa-etal-2023-detecting
title: Detecting Contradictory COVID-19 Drug Efficacy Claims from Biomedical Literature
url: https://aclanthology.org/2023.acl-short.61

# Detecting Contradictory COVID-19 Drug Efficacy Claims from Biomedical Literature
Daniel N. Sosa1, Malavika Suresh2, Christopher Potts3 and Russ B. Altman4,5
1Department of Biomedical Data Science, Stanford University 2School of Computing, Robert Gordon University 3Department of Linguistics, Stanford University 4Department of Bioengineering, Stanford University 5Department of Genetics, Stanford University
{dnsosa,cgpotts,russ.altman}@stanford.edu [email protected]
## Abstract
The COVID-19 pandemic created a deluge of questionable and contradictory scientific claims about drug efficacy - an "infodemic" with lasting consequences for science and society. In this work, we argue that NLP models can help domain experts distill and understand the literature in this complex, high-stakes area. Our task is to automatically identify contradictory claims about COVID-19 drug efficacy. We frame this as a natural language inference problem and offer a new NLI dataset created by domain experts. The NLI framing allows us to create curricula combining existing datasets and our own. The resulting models are useful investigative tools. We provide a case study of how these models help a domain expert summarize and assess evidence concerning remdesivir and hydroxychloroquine.1

1 Our COVID-19 NLI dataset and code are available at https://github.com/dnsosa/covid_lit_contra_claims
## 1 **Introduction**
The COVID-19 pandemic caused by the novel SARS-CoV-2 virus completely changed modern life. According to the World Health Organization Nov. 16, 2022, situation report, more than 6.5 million people have died as a result of this disease
(World Health Organization, 2022). During times of pandemic, treatment options are limited, and developing new drug treatments is infeasible in the short-term (Wouters et al., 2020).
However, if a novel disease shares biological underpinnings with another disease for which a drug treatment already exists, a doctor may be able to repurpose that drug as a treatment for the new disease with positive therapeutic effect (Pushpakom et al.,
2019). This strategy has been successful in several contexts (Corsello et al., 2017; Himmelstein et al.,
2022; Al-Saleem et al., 2021) and may be the only viable strategy during an emerging pandemic.
Decisions about repurposing drug treatments are predicated on scientific knowledge. Making predictions about how to repurpose an existing drug requires understanding the target disease's mechanism. Because SARS-CoV-2 was a new virus, our knowledge of COVID-19's mechanism rapidly evolved. The biomedical literature about the virus and disease proliferated at an unprecedented rate
(Ioannidis et al., 2022a,b). The need for knowledge about the virus and the bottleneck of limited peer reviewers led to many cases of circumventing typical quality control mechanisms for research. To inform their clinical practice, healthcare professionals relied on knowledge sources of lower scientific quality including early clinical reports with small sample sizes and non-peer reviewed manuscripts posted on preprint servers (Nouri et al., 2021). This deluge of rapidly changing information became an
"infodemic", and it became infeasible for the average clinician to stay up-to-date with the growing literature (The Lancet Infectious Diseases, 2020).
Automated methods have great potential to help domain experts fight such an infodemic. We illustrate this potential with a case study focused on automatically detecting contradictory research claims in the COVID-19 therapeutics literature. We frame this as a natural language inference (NLI) problem: given pairs of research claims in biomedical literature, we develop models that predict whether they entail, contradict, or are neutral with respect to each other. Our models are trained on a new dataset of these claim pairs extracted from the CORD-19 dataset (Wang et al., 2020a) and annotated by domain experts. Our best models are trained on curricula (Bengio et al., 2009) of existing NLI datasets and our domain-specific one. These models are effective at the NLI task, but the ultimate test of their value is whether they can help domain experts.
We show how these models could help a domain expert to see early on that hydroxychloroquine was an ineffective COVID-19 treatment and how the story of remdesivir was still emerging.
## 2 **COVID-19 NLI Dataset**
Our new COVID-19 NLI dataset consists of pairs of research claims describing COVID-19 drug treatment efficacy and safety. These claims came from the subset of the June 17, 2020 (v33) CORD-19 (Wang et al., 2020a) manuscripts containing a COVID-19-related term (e.g., "SARS-CoV-2",
"2019-nCov"). Claims were extracted from the articles' full text using the LSTM approach of Achakulvisut et al. (2020). False positive research claims were manually removed.
To begin the annotation process, we inspected pairs of claims on common drugs and topics. This led us to a set of six categories: Strict Entailment, Entailment, Possible Entailment, Strict Contradiction, Contradiction, and Neutral. Our annotation guidelines were developed and refined by clinically trained annotators (nurses and a biomedical researcher) over two preliminary rounds of annotation. In Round 1, four annotators labeled 64 claim pairs (Fleiss' κ = 0.83). The team discussed this process and refined the guidelines. In Round 2, three annotators (a subset of those from Round 1)
annotated 75 claim pairs (Fleiss' κ = 0.84) using the new guidelines, and then determined that they were ready to scale (Appendix A.1).
For the dataset itself, 1000 pairs of claims were sampled for annotation using three criteria: (1) both claims mention at least one of 7 treatment candidates ({"hydroxychloroquine", "chloroquine", "tocilizumab", "remdesivir", "vitamin D", "lopinavir", "dexamethasone"}), (2) high similarity between the claim's embedding and the embedding for a word in a predefined topic list ({"mortality", "effective treatment", "toxicity"}), using uSIF embeddings (Ethayarajh, 2018), and (3) non-zero polarities of equal or opposite sign using VADER (Hutto and Gilbert, 2014). Appendix A.3 provides further details.
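A rough sketch of this sampling filter is shown below; the embedding function and the similarity threshold are placeholders standing in for the released pipeline (uSIF embeddings, Appendix A.3):

```python
# Sketch of the three sampling criteria; embed() is a placeholder for the uSIF sentence
# embedder used by the authors, and the similarity threshold is an assumed value.
import numpy as np
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

DRUGS = {"hydroxychloroquine", "chloroquine", "tocilizumab", "remdesivir",
         "vitamin d", "lopinavir", "dexamethasone"}
TOPICS = ["mortality", "effective treatment", "toxicity"]
analyzer = SentimentIntensityAnalyzer()

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def keep_pair(claim_a, claim_b, embed, sim_threshold=0.5):
    """Return True if a claim pair satisfies the three sampling criteria (sketch)."""
    topic_vectors = [embed(topic) for topic in TOPICS]
    # (1) Both claims mention at least one of the treatment candidates.
    if not all(any(drug in claim.lower() for drug in DRUGS) for claim in (claim_a, claim_b)):
        return False
    # (2) Each claim embedding is close to at least one predefined topic embedding.
    if any(max(cosine(embed(claim), t) for t in topic_vectors) < sim_threshold
           for claim in (claim_a, claim_b)):
        return False
    # (3) Both claims carry non-zero sentiment polarity (two non-zero polarities are
    #     necessarily of equal or opposite sign).
    return all(analyzer.polarity_scores(claim)["compound"] != 0 for claim in (claim_a, claim_b))
```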
Each annotation requires a large time investment from the annotator and draws heavily on their domain expertise, so each example was annotated by a single annotator, with the inter-annotator agreement rounds and guidelines serving to ensure consistency across the dataset.
| Dataset | # Entail | # Neutral | # Contra |
|-----------|------------|-------------|------------|
| Full | 266 | 610 | 118 |
| D-Train | 129 | 265 | 40 |
| D-Val | 41 | 75 | 41 |
| D-Test | 66 | 100 | 21 |
Because some claims are present in multiple claim pairs, we selected a subset of pairs such that no claim is present in more than one train, validation, or test split to prevent test-set leakage.
From the network of claim pairs (claims are nodes, and co-occurrences in an annotated pair are edges),
we selected 3 disjoint subnetworks to comprise the train, validation, and test splits. The resulting dataset contains 778 total claim pairs. Dataset distributions are found in Table 1.
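One way to realize such a leakage-free split is sketched below with networkx: connected components of the claim graph are treated as indivisible units and whole components are assigned to train, validation, or test (the exact assignment heuristic used for the released splits is an assumption):

```python
import networkx as nx

def leakage_free_split(pairs, ratios=(0.6, 0.2, 0.2)):
    """pairs: list of (claim_a, claim_b, label); returns (train, val, test) lists of pairs."""
    pairs = list(pairs)
    graph = nx.Graph()
    graph.add_edges_from((a, b) for a, b, _ in pairs)

    splits = ([], [], [])
    targets = [r * len(pairs) for r in ratios]
    # Assign each connected component (a set of claims that share pairs) to the
    # split that is currently furthest below its target size.
    for component in sorted(nx.connected_components(graph), key=len, reverse=True):
        component_pairs = [p for p in pairs if p[0] in component]
        idx = max(range(3), key=lambda i: targets[i] - len(splits[i]))
        splits[idx].extend(component_pairs)
    return splits
```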
## 3 **Model Development**
Our goal is to develop a model to help domain experts find and adjudicate contradictory claims in the COVID-19 literature. We explored a wide range of techniques for developing the best model given our available data. The Appendices provide a full overview of all these experiments and comparisons.
Here, we provide a high-level summary.
Pretrained Parameters All our models begin with pretrained parameters created using the general architecture of BERT (Devlin et al., 2019).
Five pre-trained BERT models were evaluated for further fine-tuning: PubMedBERT (Gu et al., 2021), SciBERT (Beltagy et al., 2019), BioBERT
(Lee et al., 2020), BioClinBERT (Alsentzer et al.,
2019), and RoBERTa (Liu et al., 2019). We found that PubMedBERT was the best for our task across all fine-tuning regimes (Appendix D).
Fine-tuning Curricula For fine-tuning these parameters, we use MultiNLI (Williams et al., 2018),
MedNLI (Romanov and Shivade, 2018), ManConCorpus (Alamri and Stevenson, 2016), and our new COVID-19 NLI Dataset (with our six labels collapsed to three as in the other datasets). We found that the best models were achieved with a curriculum that arranged these in the order we gave above.
This is intuitively an arrangement from most general to most domain-specific, which aligns with existing results and intuitions for curriculum learning (Bengio et al., 2009; Xu et al., 2020; Nagatsuka et al., 2021). For detailed descriptions of these datasets, the range of curricula we explored, and our procedures for hyperparameter tuning, we refer to Appendices B, C, and E, respectively.

| Model | Curriculum | F1 | Contra. Recall |
|---|---|---|---|
| PubMedBERT | Forward | 0.690 | 0.571 |
| PubMedBERT | Reverse | 0.428 | 0.381 |
| PubMedBERT | Shuffled | 0.523 | 0.416 |
| RoBERTa | Forward | 0.544 | 0.429 |
| RoBERTa | Reverse | 0.411 | 0.476 |
| RoBERTa | Shuffled | 0.239 | 0.119 |
| PubMedBERT Hyp. only | Forward | 0.485 | 0.190 |
| RoBERTa Hyp. only | Forward | 0.433 | 0.095 |

Table 2: Core results. Figure 6 and Table 4 expand these results to include a number of other baselines, most of which perform near chance. Metrics for the shuffled category are averages of the 4 shuffled curricula.
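A compact sketch of the forward-curriculum fine-tuning loop described above is given below; the checkpoint name is the commonly used public PubMedBERT identifier, and the dataset variables, hyperparameters, and helper are placeholders rather than the released training code:

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, AutoTokenizer

CHECKPOINT = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"  # assumed variant
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=3)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def fine_tune(pairs, epochs=1, batch_size=16):
    """pairs: list of (premise, hypothesis, label) with labels in {0, 1, 2}."""
    loader = DataLoader(pairs, batch_size=batch_size, shuffle=True,
                        collate_fn=lambda batch: batch)
    model.train()
    for _ in range(epochs):
        for batch in loader:
            premises, hypotheses, labels = zip(*batch)
            enc = tokenizer(list(premises), list(hypotheses),
                            padding=True, truncation=True, return_tensors="pt")
            loss = model(**enc, labels=torch.tensor(labels)).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()

# Forward curriculum: most general NLI data first, the COVID-19 NLI dataset last.
for dataset in (multinli_pairs, mednli_pairs, mancon_pairs, covid_nli_pairs):  # assumed preloaded
    fine_tune(dataset)
```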
Results To contextualize our results on this hard, novel task, we evaluated a number of baselines using sparse feature representations and simple similarity calculations, as well as hypothesis-only variants of these models and our BERT models. These baselines are described in Appendix F.
Table 2 summarizes our results. We report F1 scores as well as Contradictions Recall, an important category for our case study. The best performance is achieved by the PubMedBERT model trained with the forward curriculum where fine-tuning takes place from general domain to complex, in-domain datasets. This setting outperforms baselines and alternative curricula by a large margin.
## 4 **Case Study: Wading Through the Sea of Drug Treatment Literature**
The value of our model lies in its potential to help domain experts tackle an infodemic. We used the model to understand the state of knowledge about the efficacy and mechanism of two controversial treatments, hydroxychloroquine and remdesivir, from the perspective of June 2020.
We first extracted all claims identified from COVID-19 manuscripts concerning a drug treatment, using the same procedure as for our COVID-19 NLI dataset (Section 2), and we filtered that set to pairs of claims that were (1) sufficiently similar (uSIF similarity > 0.5) and (2) both mentioned remdesivir or hydroxychloroquine. We sampled pairs from 50 papers yielding 5,336 total pairs. We then used our best model to make predictions about all these pairs resulting in 322 predicted contradictions. We ranked these by the model's predicted probability of this class, and we inspected the highest probability predictions.
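A sketch of this scoring-and-ranking step is shown below; the checkpoint path, the label-index mapping, and the candidate_pairs variable are assumptions, not the released configuration:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

CHECKPOINT = "path/to/fine-tuned-nli-model"   # placeholder path to the fine-tuned model
CONTRADICTION_IDX = 2                         # assumed index of the "contradiction" label

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT).eval()

def contradiction_probability(claim_a: str, claim_b: str) -> float:
    enc = tokenizer(claim_a, claim_b, truncation=True, return_tensors="pt")
    with torch.no_grad():
        probs = torch.softmax(model(**enc).logits, dim=-1)
    return probs[0, CONTRADICTION_IDX].item()

# candidate_pairs: (claim_a, claim_b) tuples passing the drug-mention and similarity filters.
scored = [(contradiction_probability(a, b), a, b) for a, b in candidate_pairs]
ranked = sorted(scored, reverse=True)  # inspect the highest-probability predicted contradictions first
```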
For remdesivir, one claim of limited efficacy from a clinical trial of 233 participants yielded several predicted contradictions:
(1) Remdesivir did not result in significant reductions in SARS-CoV-2 RNA loads or detectability in upper respiratory tract or sputum specimens in this study despite showing strong antiviral effects in preclinical models of infection with coronaviruses (Wang et al., 2020b).
Nineteen unique papers contained a claim that was predicted to contradict this claim - already a striking pattern that might have taken a researcher days to discover by hand by probing full-text articles.
The specific claims that contradict our core claim are illuminating. One reads,
(2) The present study reveals that remdesivir has the highest potential in binding and therefore competitively inhibiting RDRP of SARS-CoV2, among all known RDRP inhibitors (Choudhury et al., 2021),
indicating strong chemical and pharmacodynamic reasoning supporting a mechanism of action for remdesivir. A second claim describes,
(3) Remdesivir treatment in rhesus macaques infected with SARS-CoV-2 was highly effective in reducing clinical disease and damage to the lungs (Williamson et al., 2020),
surfacing particularly strong pre-clinical evidence.
From another ongoing clinical trial including 1,064 patients, authors note:
(4) Preliminary results of this trial suggest that a 10-day course of remdesivir was superior to placebo in the treatment of hospitalized patients with COVID-19. (Beigel et al., 2020)
Overall, we are quickly able to glean how evidence supporting the remdesivir hypothesis was strong from a variety of pre-clinical studies in vastly different settings in 2020. Our original negative claim
(1) presents real evidence against the drug. Still, though, the clinical picture was not yet clear, suggesting the need for further clinical investigation or better striation of populations or therapeutic windows for seeing efficacy.
For hydroxychloroquine, one of the earliest drugs considered, a different picture emerges. We focus in on a claim from a medRxiv preprint (5):
(5) In summary, this retrospective study demonstrates that hydroxychloroquine application is associated with a decreased risk of death in critically ill COVID-19 patients without obvious toxicity and its mechanisms of action is probably mediated through its inhibition of inflammatory cytokine storm on top of its ability in inhibiting viral replication. (Yu et al., 2020)
From its predicted contradictions, we immediately identified two clinical studies:
(6) Overall, these data do not support the addition of hydroxychloroquine to the current standard of care in patients with persistent mild to moderate COVID-19 for eliminating the virus.
(Tang et al., 2020)
(7) Although a marginal possible benefit from prophylaxis in a more at-risk group cannot be ruled out, the potential risks that are associated with hydroxychloroquine may also be increased in more at-risk populations, and this may essentially negate any benefits that were not shown in this large trial involving younger, healthier participants. (Boulware et al., 2020)
These claims reflect the challenging language typical for the domain including hedging, multiple clauses, important context qualifiers (subpopulations and adverse events), and positive and negative sentiments. From these surfaced contradictions, we find evidence of the drug's inefficacy in mild and moderate cases and are led to discover the early observations of cardiac arrest being associated with hydroxychloroquine treatment. Again, discovering these claims *de novo* is difficult given the size of the corpus of COVID-19 literature. Our NLI model greatly speeds up the process and allows domain experts to home in directly on relevant evidence.
## 5 **Stakeholders**
There are several biomedical stakeholders who would benefit from models like ours.
Epidemiologists Epidemiologists survey public health data to inform policy decisions in collaboration with authoritative bodies like the NIH and WHO. Their recommendations must be conservative, so surfacing results that dispute claims of drug efficacy is critical. Their gold standard resource for aggregating evidence is the meta-analysis, but in the early stages of the pandemic, large randomized controlled trials (RCTs) had not completed, and review articles quickly became outdated.
FDA Regulators Regulators too need to make conservative recommendations, as FDA approval signals to clinicians that a treatment is standard-ofcare. Surfacing contradictory claims of drug efficacy and safety is essential (Cassidy et al., 2020).
Researchers By identifying areas of scientific uncertainty via contradictory evidence at all stages of the pipeline (in silico, in vitro, *in vivo*, clinical), researchers could have more quickly identified fruitful areas of investigation (Sosa et al., 2021).
Drug Manufacturers Manufacturers of repurposing candidates were incentivized to understand in what settings their drug seemed to be effective and by what mechanism. For claims of inefficacy, they were interested in surfacing any mitigating factors qualifying these claims or motivating followup analyses.
We note that these models are not intended as the sole source of decision making in clinical or epidemiological settings. To be clinically translatable, further work would need to be conducted on assessing the quality of research claims by relying on contextual information including research setting, demographics, size, and level of evidence. Rather, this work is intended to augment a manual curator's capacity to distill and synthesize a large corpus of literature. This allows trained researchers to use their judgment and conduct last-mile diligence of surfaced research contradictions or corroborations, which will be beneficial to these stakeholders downstream.
## 6 **Discussion And Conclusion**
In settings where the scale of literature is insurmountable for human readers, as is the case during a pandemic, automated curatorial assistants can be transformative (Lever and Altman, 2021). During COVID-19, meta-analyses and review articles, which are written to synthesize a large body of literature, could not be comprehensive or quickly became outdated. In some cases, it was necessary to create meta-meta-analyses involving hundreds of papers (Chivese et al., 2021).
Our work shows the value of integrating NLP
into the domain of meta-science, embracing all the complexities of biomedical research as it naturally exists in literature. We presented an NLI framing for identifying contradictory or corroborating research claims in the challenging domain of COVID-19 drug efficacy. We created a new dataset and designed curricula for optimizing language model fine-tuning for the task. To illustrate the potential of our model, we showed that we were quickly able to distill the state of knowledge about hydroxychloroquine and remdesivir efficacy as of June 2020, arriving at conclusions that are extremely well-supported in 2022.
Identifying where science is inconsistent is necessary for understanding the current state of human knowledge and reveals frontiers for further research. Significant contradictions can often be found buried in biomedical articles; surfacing these instances nearly as quickly as research is publicly disseminated can generate leads that researchers and curators should pursue. Beyond facilitating search and discovery, our method can help estimate confidence in the consensus of facts in science when creating general knowledge representations (Sosa and Altman, 2022) for downstream applications like predicting novel drug repurposing opportunities *in silico* (Sosa et al., 2020).
## Limitations
We identify three limitations to our approach. First, parsing research claims and automatically classifying a sentence's purpose (its meta-discourse) are not solved problems. It is more prudent to surface novel claims supported by original research than an author's allusion to other research as background context. Second, the domain of biomedical scientific text is complicated by wordy prose, hedging, and long-distance anaphora. These aspects make natural language understanding challenging and present implementational challenges for tokenization, including truncating long sentences and extracting meaning from out-of-vocabulary tokens.
Third, commonsense reasoning for detecting contradictions in biomedical text requires expert background knowledge and a working definition of when contexts are sufficiently aligned such that two claims are called contradictory, which may differ depending on the use case. We believe that context sensitivity and interpretability analysis of LLMs for NLI in challenging domains like this using attention mechanisms or frameworks such as maieutic prompting (Jung et al., 2022) are particularly fruitful research directions.
## Ethics Statement
COVID-19 research has been misinterpreted or selectively promoted, leading to disinformation that muddles public understanding of COVID-19 science.
Any research in this space is at risk of being misapplied, and models like ours in principle could be used to distort rather than clarify the current state of research, especially by cherry picking results that fit a particular world view.
Creating a method for surfacing contradictory claims in science may also create unwanted incentives for researchers. For instance, if writing simpler and more polar claims causes our NLI model to include these claims in contradictory pairs, researchers may choose to write in such a way as to make their results more sensational, discoverable, and desirable for publishing (Ioannidis and Trikalinos, 2005). Unwanted bias may be incurred from cultural norms around how much to hedge research claims. A second important caveat is that claims surfaced with this model should be given proper due diligence. This model makes no assumptions about the quality of the underlying research and may give visibility to low-quality manuscripts. Diligence should always be maintained concerning the context, scope, relevancy, and timeliness of the research being surfaced, and our model should only serve as an initial exploratory aid.
## References
Titipat Achakulvisut, Chandra Bhagavatula, Daniel Acuna, and Konrad Kording. 2020. Claim extraction in biomedical publications using deep discourse model and transfer learning. *arXiv:1907.00962*.
Jacob Al-Saleem, Roger Granet, Srinivasan Ramakrishnan, Natalie A. Ciancetta, Catherine Saveson, Chris Gessner, and Qiongqiong Zhou. 2021. Knowledge graph-based approaches to drug repurposing for COVID-19. Journal of Chemical Information and Modeling, 61(8):4058–4067.
Abdulaziz Alamri and Mark Stevenson. 2016. A corpus of potentially contradictory research claims from cardiovascular research abstracts. *Journal of Biomedical Semantics*, 7.
Emily Alsentzer, John Murphy, William Boag, WeiHung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clinical BERT embeddings. In *Proceedings of the 2nd* Clinical Natural Language Processing Workshop,
pages 72–78, Minneapolis, Minnesota, USA. Association for Computational Linguistics.
Antonio Gonçalves, Julie Bertrand, Ruian Ke, Emmanuelle Comets, Xavier de Lamballerie, Denis Malvy, Andrés Pizzorno, Olivier Terrier, Manuel Rosa Calatrava, France Mentré, Patrick Smith, Alan S Perelson, and Jérémie Guedj. 2020. Timing of antiviral treatment initiation is critical to reduce SARS-Cov-2 viral load. *medRxiv*, page 2020.04.04.20047886.
John H. Beigel, Kay M. Tomashek, Lori E. Dodd, Aneesh K. Mehta, Barry S. Zingman, Andre C.
Kalil, Elizabeth Hohmann, Helen Y. Chu, Annie Luetkemeyer, Susan Kline, Diego Lopez de Castilla, Robert W. Finberg, Kerry Dierberg, Victor Tapson, Lanny Hsieh, Thomas F. Patterson, Roger Paredes, Daniel A. Sweeney, William R. Short, Giota Touloumi, David Chien Lye, Norio Ohmagari, Myoung-don Oh, Guillermo M. Ruiz-Palacios, Thomas Benfield, Gerd Fätkenheuer, Mark G. Kortepeter, Robert L. Atmar, C. Buddy Creech, Jens Lundgren, Abdel G. Babiker, Sarah Pett, James D.
Neaton, Timothy H. Burgess, Tyler Bonnett, Michelle Green, Mat Makowski, Anu Osinusi, Seema Nayak, and H. Clifford Lane. 2020. Remdesivir for the treatment of COVID-19: a final report. *New England* Journal of Medicine, 383(19):1813–1826.
Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text.
In *Proceedings of the 2019 Conference on Empirical* Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615–
3620, Hong Kong, China. Association for Computational Linguistics.
Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, pages 41–48, New York, NY, USA. Association for Computing Machinery.
Oliver Borchers. 2019. Fast sentence embeddings. https://github.com/oborchers/Fast_Sentence_Embeddings.
David R. Boulware, Matthew F. Pullen, Ananta S.
Bangdiwala, Katelyn A. Pastick, Sarah M. Lofgren, Elizabeth C. Okafor, Caleb P. Skipper, Alanna A.
Nascene, Melanie R. Nicol, Mahsa Abassi, Nicole W.
Engen, Matthew P. Cheng, Derek LaBar, Sylvain A.
Lother, Lauren J. MacKenzie, Glen Drobot, Nicole Marten, Ryan Zarychanski, Lauren E. Kelly, Ilan S.
Schwartz, Emily G. McDonald, Radha Rajasingham, Todd C. Lee, and Kathy H. Hullsiek. 2020. A randomized trial of hydroxychloroquine as postexposure prophylaxis for COVID-19. *New England Journal of* Medicine, 383(6):517–525.
Christine Cassidy, Danielle Dever, Laura Stanbery, Gerald Edelman, Lance Dworkin, and John Nemunaitis. 2020. FDA efficiency for approval process
of COVID-19 therapeutics. *Infectious Agents and* Cancer, 15(1):73.
Tawanda Chivese, Omran A.H. Musa, George Hindy, Noor Al-Wattary, Saif Badran, Nada Soliman, Ahmed T.M. Aboughalia, Joshua T. Matizanadzo, Mohamed M. Emara, Lukman Thalib, and Suhail A.R. Doi. 2021. Efficacy of chloroquine and hydroxychloroquine in treating COVID-19 infection:
A meta-review of systematic reviews and an updated meta-analysis. *Travel Medicine and Infectious Disease*, 43:102135.
Shuvasish Choudhury, Debojyoti Moulick, Purbajyoti Saikia, and Muhammed Khairujjaman Mazumder.
2021. Evaluating the potential of different inhibitors on RNA-dependent RNA polymerase of severe acute respiratory syndrome coronavirus 2: A molecular modeling approach. Medical Journal Armed Forces India, 77:S373–S378.
Ka-Tim Choy, Alvina Yin-Lam Wong, Prathanporn Kaewpreedee, Sin Fun Sia, Dongdong Chen, Kenrie Pui Yan Hui, Daniel Ka Wing Chu, Michael Chi Wai Chan, Peter Pak-Hang Cheung, Xuhui Huang, Malik Peiris, and Hui-Ling Yen. 2020. Remdesivir, lopinavir, emetine, and homoharringtonine inhibit SARS-CoV-2 replication in vitro. *Antiviral Research*,
178:104786.
Steven M. Corsello, Joshua A. Bittker, Zihan Liu, Joshua Gould, Patrick McCarren, Jodi E. Hirschman, Stephen E. Johnston, Anita Vrcic, Bang Wong, Mariya Khan, Jacob Asiedu, Rajiv Narayan, Christopher C. Mader, Aravind Subramanian, and Todd R.
Golub. 2017. The drug repurposing hub: A nextgeneration drug library and information resource.
Nature Medicine, 23(4):405–408.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Kawin Ethayarajh. 2018. Unsupervised random walk sentence embeddings: A strong but simple baseline.
In *Proceedings of the Third Workshop on Representation Learning for NLP*, pages 91–100, Melbourne, Australia. Association for Computational Linguistics.
Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2021. Domain-specific language model pretraining for biomedical natural language processing. *ACM Transactions on Computing* for Healthcare, 3(1):2:1–2:23.
Daniel Scott Himmelstein, Antoine Lizee, Christine Hessler, Leo Brueggeman, Sabrina L Chen, Dexter Hadley, Ari Green, Pouya Khankhanian, and Sergio E Baranzini. 2022. Systematic integration of
biomedical knowledge prioritizes drugs for repurposing. *eLife*, 6:e26726.
Oliver James Hulme, Eric-Jan Wagenmakers, Per Damkier, Christopher Fugl Madelung, Hartwig Roman Siebner, Jannik Helweg-Larsen, Quentin F.
Gronau, Thomas Lars Benfield, and Kristoffer Hougaard Madsen. 2021. A Bayesian reanalysis of the effects of hydroxychloroquine and azithromycin on viral carriage in patients with COVID-19. *PLOS ONE*, 16(2):e0245048. Publisher:
Public Library of Science.
CJ Hutto and Eric Gilbert. 2014. VADER: A parsimonious rule-based model for sentiment analysis of social media text. Proceedings of the International AAAI Conference on Web and Social Media, 8(1):216–
225.
John P. A. Ioannidis, Eran Bendavid, Maia SalholzHillel, Kevin W. Boyack, and Jeroen Baas. 2022a.
Massive covidization of research citations and the citation elite. Proceedings of the National Academy of Sciences, 119(28):e2204074119.
John P. A. Ioannidis, Maia Salholz-Hillel, Kevin W.
Boyack, and Jeroen Baas. 2022b. The rapid, massive growth of COVID-19 authors in the scientific literature. *Royal Society Open Science*, 8(9):210389.
John P. A. Ioannidis and Thomas A. Trikalinos. 2005.
Early extreme contradictory estimates may appear in published research: The proteus phenomenon in molecular genetics research and randomized trials.
Journal of Clinical Epidemiology, 58(6):543–549.
Jaehun Jung, Lianhui Qin, Sean Welleck, Faeze Brahman, Chandra Bhagavatula, Ronan Le Bras, and Yejin Choi. 2022. Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations.
In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*, pages 1266–1279, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang.
2020. BioBERT: A pre-trained biomedical language representation model for biomedical text mining.
Bioinformatics, 36(4):1234–1240.
Jake Lever and Russ B. Altman. 2021. Analyzing the vast coronavirus literature with CoronaCentral.
Proceedings of the National Academy of Sciences, 118(23):e2100766118.
Yue-hua Li, Cheng-hui Zhou, Han-jun Pei, Xian-liang Zhou, Li-huan Li, Yong-jian Wu, and Ru-tai Hui.
2013. Fish consumption and incidence of heart failure: A meta-analysis of prospective cohort studies.
Chinese Medical Journal, 126(5):942–948.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. *arXiv:1907.11692*.
Koichi Nagatsuka, Clifford Broni-Bediako, and Masayasu Atsumi. 2021. Pre-training a BERT with curriculum learning by increasing block-size of input text. In Proceedings of the International Conference on Recent Advances in Natural Language Processing
(RANLP 2021), pages 989–996.
Shayan N. Nouri, Yosef A. Cohen, Mahesh V. Madhavan, Piotr J. Slomka, Ami E. Iskandrian, and Andrew J. Einstein. 2021. Preprint manuscripts and servers in the era of coronavirus disease 2019. *Journal of Evaluation in Clinical Practice*, 27(1):16–21.
Sudeep Pushpakom, Francesco Iorio, Patrick A. Eyers, K. Jane Escott, Shirley Hopper, Andrew Wells, Andrew Doig, Tim Guilliams, Joanna Latimer, Christine McNamee, Alan Norris, Philippe Sanseau, David Cavalla, and Munir Pirmohamed. 2019. Drug repurposing: Progress, challenges and recommendations.
Nature Reviews Drug Discovery, 18(1):41–58.
Alexey Romanov and Chaitanya Shivade. 2018.
Lessons from natural language inference in the clinical domain. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1586–1596, Brussels, Belgium. Association for Computational Linguistics.
Connie Schardt, Martha B. Adams, Thomas Owens, Sheri Keitz, and Paul Fontelo. 2007. Utilization of the PICO framework to improve searching PubMed for clinical questions. BMC Medical Informatics and Decision Making, 7(1):16.
Daniel N Sosa and Russ B Altman. 2022. Contexts and contradictions: A roadmap for computational drug repurposing with knowledge inference. *Briefings in* Bioinformatics, 23(4):bbac268.
Daniel N. Sosa, Binbin Chen, Amit Kaushal, Adam Lavertu, Jake Lever, Stefano Rensi, and Russ Altman. 2021. Repurposing biomedical informaticians for COVID-19. *Journal of Biomedical Informatics*,
115:103673.
Daniel N. Sosa, Alexander Derry, Margaret Guo, Eric Wei, Connor Brinton, and Russ B. Altman. 2020. A
literature-based knowledge graph embedding method for identifying drug repurposing opportunities in rare diseases. *Pacific Symposium on Biocomputing*,
25:463–474.
Wei Tang, Zhujun Cao, Mingfeng Han, Zhengyan Wang, Junwen Chen, Wenjin Sun, Yaojie Wu, Wei Xiao, Shengyong Liu, Erzhen Chen, Wei Chen, Xiongbiao Wang, Jiuyong Yang, Jun Lin, Qingxia Zhao, Youqin Yan, Zhibin Xie, Dan Li, Yaofeng Yang, Leshan Liu, Jieming Qu, Guang Ning, Guochao Shi, and Qing Xie. 2020. Hydroxychloroquine in patients with mainly mild to moderate coronavirus disease 2019: Open label, randomised controlled trial. BMJ, 369:m1849.
The Lancet Infectious Diseases. 2020. The COVID19 infodemic. *The Lancet Infectious Diseases*,
20(8):875.
Lucy Lu Wang, Kyle Lo, Yoganand Chandrasekhar, Russell Reas, Jiangjiang Yang, Doug Burdick, Darrin Eide, Kathryn Funk, Yannis Katsis, Rodney Michael Kinney, Yunyao Li, Ziyang Liu, William Merrill, Paul Mooney, Dewey A. Murdick, Devvret Rishi, Jerry Sheehan, Zhihong Shen, Brandon Stilson, Alex D. Wade, Kuansan Wang, Nancy Xin Ru Wang, Christopher Wilhelm, Boya Xie, Douglas M. Raymond, Daniel S. Weld, Oren Etzioni, and Sebastian Kohlmeier. 2020a. CORD-19: The COVID-19 open research dataset. In Proceedings of the 1st Workshop on NLP for COVID-19 at ACL 2020. Association for Computational Linguistics.
Yeming Wang, Dingyu Zhang, Guanhua Du, Ronghui Du, Jianping Zhao, Yang Jin, Shouzhi Fu, Ling Gao, Zhenshun Cheng, Qiaofa Lu, Yi Hu, Guangwei Luo, Ke Wang, Yang Lu, Huadong Li, Shuzhen Wang, Shunan Ruan, Chengqing Yang, Chunlin Mei, Yi Wang, Dan Ding, Feng Wu, Xin Tang, Xianzhi Ye, Yingchun Ye, Bing Liu, Jie Yang, Wen Yin, Aili Wang, Guohui Fan, Fei Zhou, Zhibo Liu, Xiaoying Gu, Jiuyang Xu, Lianhan Shang, Yi Zhang, Lianjun Cao, Tingting Guo, Yan Wan, Hong Qin, Yushen Jiang, Thomas Jaki, Frederick G. Hayden, Peter W.
Horby, Bin Cao, and Chen Wang. 2020b. Remdesivir in adults with severe COVID-19: A randomised, double-blind, placebo-controlled, multicentre trial.
Lancet (London, England), 395(10236):1569–1578.
Adina Williams, Nikita Nangia, and Samuel Bowman.
2018. A broad-coverage challenge corpus for sentence understanding through inference. In *Proceedings of the 2018 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.
Brandi N. Williamson, Friederike Feldmann, Benjamin Schwarz, Kimberly Meade-White, Danielle P.
Porter, Jonathan Schulz, Neeltje van Doremalen, Ian Leighton, Claude Kwe Yinda, Lizzette Pérez-Pérez, Atsushi Okumura, Jamie Lovaglio, Patrick W. Hanley, Greg Saturday, Catharine M. Bosio, Sarah Anzick, Kent Barbian, Tomas Cihlar, Craig Martens, Dana P. Scott, Vincent J. Munster, and Emmie de Wit. 2020.
Clinical benefit of remdesivir in rhesus macaques infected with SARS-CoV-2. *Nature*, 585(7824):273–
276.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45. Association for Computational Linguistics.
World Health Organization. 2022. Weekly epidemiological update on COVID-19 - 16 November 2022.
WHO report.
Olivier J. Wouters, Martin McKee, and Jeroen Luyten.
2020. Estimated research and development investment needed to bring a new medicine to market, 2009–2018. *JAMA*, 323(9):844–853.
Benfeng Xu, Licheng Zhang, Zhendong Mao, Quan Wang, Hongtao Xie, and Yongdong Zhang. 2020.
Curriculum learning for natural language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6095–6104. Association for Computational Linguistics.
Bo Yu, Chenze Li, Peng Chen, Ning Zhou, Luyun Wang, Jia Li, Hualiang Jiang, and Dao Wen Wang. 2020.
Hydroxychloroquine application is associated with a decreased mortality in critically ill patients with COVID-19. *medRxiv*.
## Supplementary Materials

## A **Further Details About The COVID-19 NLI Dataset**
In this appendix we provide additional details about the creation of the COVID-19 NLI dataset. Our annotators are experts in the domain, having trained as healthcare providers (nursing) and in annotation.
The research annotator is a specialist in the biomedical domain with background in molecular biology and computer science. Annotators have also provided span annotations in several cases of drug mention, polarity, context, and expressions of uncertainty to aid in the annotation task. We plan to release the dataset under a Creative Commons Attribution 4.0 International license.2
## A.1 **Inter-Annotator Analysis**
Two rounds of inter-annotator analysis were conducted to converge on a set of annotation guidelines for scaling and to measure consistency between multiple annotators. In the first round four annotators
(three clinical annotators, one researcher) were presented with 64 pairs of extracted research claims and an initial set of annotation guidelines. Classification was conducted across five classes including a Strict Entailment and Strict Contradiction class indicating two claims were entailing or contradicting in a strict logical sense as opposed to a common-reasoning sense. Global Fleiss' κ for this round was 0.83. For the second round, three annotators (two clinical annotators, one researcher) annotated 75 claim pairs with updated guidelines and achieved similar consistency at κ = 0.84. Further minor modifications were made to the annotation guidelines resulting in the final guidelines used for the scaling round (Table 3).
| Criteria | Annotation |
|---|---|
| All drugs, context, and sentiment match | STRICT ENTAILMENT |
| At least one drug matches, the sentiment is the same but the context is at least similar | ENTAILMENT |
| All drugs and context match but the sentiment is opposing | STRICT CONTRADICTION |
| At least one drug matches, the sentiment is opposing but the context is at least similar | CONTRADICTION |
| The context or sentiment statement cannot be compared | NEUTRAL |
| There is no mention of a drug OR none of the drugs match | NEUTRAL |
| One claim contains both a POSITIVE and a NEGATIVE statement and the other claim contains a POSITIVE or NEGATIVE statement | CONTRADICTION |
| One claim is POSITIVE or NEGATIVE statement and the other is EXPRESSION_OF_UNCERTAINTY | NEUTRAL |
| Both claims are EXPRESSION_OF_UNCERTAINTY | ENTAILMENT |

Table 3: Annotation guidelines for the COVID-19 NLI dataset.
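Agreement in Appendix A.1 is reported as Fleiss' κ. As a quick illustration of how such a figure can be computed, here is a minimal sketch using statsmodels; the label matrix below is a toy example, not the actual annotation data.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Toy example: rows are claim pairs, columns are annotators, values are class ids
# (e.g., 0=STRICT ENTAILMENT, 1=ENTAILMENT, 2=NEUTRAL, 3=CONTRADICTION, 4=STRICT CONTRADICTION).
ratings = np.array([
    [3, 3, 3, 3],
    [2, 2, 2, 1],
    [0, 0, 0, 0],
    [4, 4, 3, 4],
])

counts, _ = aggregate_raters(ratings)   # per-item category counts
print(f"Fleiss' kappa: {fleiss_kappa(counts):.2f}")
```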
## A.2 **Qualitative Error Analysis**
Two challenges facing annotators during inter-annotator analysis rounds were making judgments about uncertainty and context. Research claims may have important meta-discourse cues describing speculation, hedging, or prior knowledge. For example, the statement:
(8) Randomized controlled trials are currently underway and will be critical in resolving this uncertainty as to whether [hydroxychloroquine] and [azithromycin] are effective as a treatment for COVID-19
(Hulme et al., 2021),
created discrepancy among annotators where one annotator indicated that "uncertainty as to whether
[hydroxychloroquine] and [azithromycin] are effective as a treatment" is a negative statement and another indicated that "are effective as a treatment for COVID-19" was a positive statement. This led to different conclusions about efficacy of hydroxychloroquine, as the authors are describing the uncertainty in the field as background knowledge without staking a claim themselves. This motivated the creation of a span annotation, EXPRESSION_OF_UNCERTAINTY, and the criterion that when one of the claims contains this type of span, the pair is called Neutral.
For two claims to be considered comparable, they need to have sufficient contextual overlap. As an example, in the pair
(9) Remdesivir, lopinavir, emetine, and homoharringtonine inhibit SARS-CoV-2 replication *in vitro*
(Choy et al., 2020)
## And
(10) Overall our results emphasize that the PK/PD properties of lopinavir/ritonavir, IFN--1a and hydroxychloroquine make them unlikely to have a dramatic impact on viral load kinetics in the nasopharynx if they are administered after symptom onset (Antonio Gonçalves et al., 2020),
the key contexts are "*in vitro*" and "viral load kinetics in the nasopharynx". The first indicates experimental results in a controlled lab setting whereas the second indicates data collected from the noses of live patients.
The decision about whether or not these contexts are sufficiently similar to decide that these claims can be compared requires the judgment of annotators, paralleling how research builds from different levels of evidence to create a grander picture about drug mechanism and efficacy. Because science is not predicated on hard and fast rules as such, annotator judgment was not always consistent.
## A.3 **Preparing Claims For Annotation**
For this work, resources were available for our team of highly skilled annotators to label 1000 pairs of claims. Sampling pairs of research claims at random from all extracted claims would yield pairs that are predominantly Neutral to one another. Thus, we biased the sampling procedure using heuristics for improving the balance of pairs across the three classes for annotators. The intuition behind the heuristic procedure is that two claims describing at least one drug in common and concerning a common topic may be an entailing pair if they have the same overall polarity or a contradictory pair if they have opposing polarity. Many annotated pairs were still expected to be neutral despite the biasing procedure. This was borne out by the annotated data distribution.
We considered three topics, t ∈ T = {"mortality", "effective treatment", "toxicity"}, and seven drugs, d ∈ D = {"hydroxychloroquine", "chloroquine", "tocilizumab", "remdesivir", "vitamin D", "lopinavir", "dexamethasone"}. For each pair (t, d), the following procedure (Algorithm 1) was used to generate candidate claim pairs from the set of true research claims, C. Additionally, given pol(·), a function for calculating the polarity of a claim; k, the number of claims to sample that are relevant to a drug and topic and have a given polarity (positive or negative); and N, the total number of pairs to subsample, we define our heuristic algorithm for generating candidate non-trivial pairs in Algorithm 1.

Algorithm 1 Heuristic sampler for generating candidate non-trivial pairs

Input: Topic set T, drug set D, claim set C, polarity function pol(·) : c → [−1, 1], drug-topic claim sample size k, total subsample size N
Output: Set of N claim pairs P_N concerning a common drug and topic and non-neutral predicted polarity

1: P ← ∅
2: for (d, t) ∈ D × T do
3:     Retrieve claims C_d := {c ∈ C : d is a substring of c}
4:     Define C_{d,t,k,pos} := top k claims c relevant to t from C_d s.t. pol(c) > 0
5:     Define C_{d,t,k,neg} := top k claims c relevant to t from C_d s.t. pol(c) < 0
6:     Enumerate all combinations of claim pairs, P_{d,t,2k}, from claims in the set C_{d,t,k,pos} ∪ C_{d,t,k,neg}
7:     Remove copy claim pairs, P_{d,t,2k} ← P_{d,t,2k} \ {(c1, c2) ∈ P_{d,t,2k} : c1 = c2}
8:     P ← P ∪ P_{d,t,2k}
9: end for
10: Sample N pairs uniformly from P, yielding P_N

We set k = 7 and N = 1000. To evaluate claim relevancy (lines 4 and 5), we calculate the cosine similarity between an embedding of the topic and sentence embeddings of claims using uSIF (Ethayarajh, 2018). Polarity, pol(·), is calculated using Vader scores (Hutto and Gilbert, 2014).
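A minimal Python sketch of Algorithm 1 is given below, assuming VADER compound scores for pol(·) as stated above and a relevance(claim, topic) function standing in for the uSIF cosine similarity; the relevance helper is a placeholder, not the paper's implementation.

```python
import random
from itertools import combinations
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def pol(claim):
    """VADER compound score in [-1, 1], standing in for pol(.) above."""
    return analyzer.polarity_scores(claim)["compound"]

def heuristic_sampler(topics, drugs, claims, relevance, k=7, n=1000, seed=0):
    """Sketch of Algorithm 1; `relevance(claim, topic)` is a placeholder for the
    uSIF cosine-similarity scorer used in the paper."""
    pairs = set()
    for d in drugs:
        c_d = [c for c in claims if d in c]  # line 3: drug is a substring of the claim
        for t in topics:
            pos = sorted((c for c in c_d if pol(c) > 0),
                         key=lambda c: relevance(c, t), reverse=True)[:k]  # line 4
            neg = sorted((c for c in c_d if pol(c) < 0),
                         key=lambda c: relevance(c, t), reverse=True)[:k]  # line 5
            for c1, c2 in combinations(pos + neg, 2):  # lines 6-7
                if c1 != c2:
                    pairs.add((c1, c2))
    random.seed(seed)
    return random.sample(sorted(pairs), min(n, len(pairs)))  # line 10
```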
## B **Curriculum Datasets**
We included four datasets for fine-tuning our language models, which comprise general language and multiple biomedically-focused domains. All our datasets use the labels Entailment, Contradiction, and Neutral. For our COVID-19 NLI dataset, we collapse Strict Entailment with Entailment and Strict Contradiction with Contradiction.
## B.1 **MultiNLI**
MultiNLI is an NLI dataset consisting of 433k premise-hypothesis pairs taken from 5 general domains
(Williams et al., 2018). To create the dataset, annotators were shown a premise and were asked to provide hypothesis statements that were entailed by, contradicted by, or were neutral to the prompt premise. In this work, we used the *matched* validation set for evaluation, which we split into two equal sized validation and test sets. The licensing situation for MultiNLI is somewhat complex (see Williams et al. 2018, section 2.2),
but the dataset is widely used in the research community.
## B.2 **MedNLI**
MedNLI is an NLI dataset consisting of 14k premise-hypothesis pairs where premises are extracted from doctor's notes in electronic medical records (Romanov and Shivade, 2018). The annotation task for generating premise-hypothesis pairs was analogous to that for MultiNLI. As far as we know, MedNLI
does not have an associated license, but it is widely used in the research community.3

3https://archive.physionet.org/physiotools/mimic-code/mednli/
## B.3 **ManConCorpus**
ManConCorpus is a dataset of research claims taken from biomedical systematic reviews (Alamri and Stevenson, 2016). These reviews compile together studies that investigate a common research question and consider their findings in aggregate. The research question, which conforms to the standardized PICO
criteria (Schardt et al., 2007), yields a binary answer, so findings from the associated review will take explicit "yes" or "no" stances. One such PICO question is "In elderly populations, does omega 3 acid from fatty fish intake, compared with no consumption, reduce the risk of developing heart failure?" (Li et al., 2013).
Pairs of claims manually annotated from these works can be paired together for NLI classification by matching claims that take the same stance on a common question as entailing pairs, those that take opposite stances on a common question as contradicting pairs, and those taken from two different reviews about different questions as neutral pairs. The dataset's 16 PICO questions are split into 12, 4, and 4 questions for the train, validation, and test splits, respectively, and the neutral class is downsampled to be the same size as the next largest class in all splits. The resulting dataset has 2.8k claim pairs in total. The ManConCorpus is covered under a CC-BY-NC-SA license.4
## C **Curriculum Design**
To create an effective curriculum for the ultimate task of detecting contradictions in the COVID-19 treatment domain, we conducted a set of experiments analyzing the effect of multiple design decisions for incorporating domain-adjacent corpora in training.
## C.1 **Experiments**

## C.1.1 **Shuffled And Combined Curricula**
To understand the importance of sequencing the curriculum, we evaluated BERT models trained using various sequences of domain-adjacent corpora in equal proportion. We consider three types of curricula:
forward, reverse, and shuffled. The forward curriculum proceeds with fine-tuning a pre-trained BERT
model in sequence from the most general domain (MultiNLI) to MedNLI to ManConCorpus to the most relevant domain (COVID-19 NLI). The reverse curriculum begins with the most relevant domain and proceeds in the opposite direction. The shuffled curricula were sampled from the 22 possible random orderings of the four domains excluding the forward and reverse sequences. We sampled three shuffled orderings to assess the background from non-intentional curriculum design. Finally, we considered a "combined" curriculum where data from the four corpora are concatenated together and shuffled, thus ablating the notion of intentional sequencing in the curriculum. To ensure no dataset dominated training, each dataset, D_train, is subsampled such that N_{D_train} = min(d, |D_train|) samples are present in the curriculum.
## C.1.2 **Ordered Curriculum Subsequence Fine-Tuning**
To assess the contribution to performance from specific domains during sequencing as well as the effect of curriculum size, we evaluated forward curriculum subsequences. Ten subsequences were evaluated:
the full forward curriculum, two three-dataset subsequences, three two-dataset subsequences, and the four single corpora. As in C.1.1, N_{D_train} samples are present in the curriculum from dataset D_train.
## C.1.3 **Perturbing Dataset Proportion In Sequential Curricula**
To assess whether changing the ratio of training data used from the various corpora yielded better performance or led to dilutive biases from larger corpora, we modulated the data ratio parameter. We define the data ratio, r, as the multiplicative factor by which a dataset is larger than the next dataset in the curriculum sequence. Specifically, given r, we calculate the sample size of the dataset D_train used in the i-th step (1-indexed) of a size-k fine-tuning curriculum as N_{D_train} = min(r^(k−i) · d, |D_train|). We considered three curricula: the full forward curriculum and the two sequential three-dataset curricula.
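As a concrete illustration of this sizing rule, the snippet below computes N_{D_train} = min(r^(k−i) · d, |D_train|) for a forward curriculum; the corpus sizes are rough figures taken from the dataset descriptions above and are for illustration only.

```python
def curriculum_sample_sizes(dataset_sizes, d=500, r=2):
    """Sample size for the i-th (1-indexed) dataset of a size-k curriculum: min(r^(k-i) * d, |D|)."""
    k = len(dataset_sizes)
    return {name: min(r ** (k - i) * d, size)
            for i, (name, size) in enumerate(dataset_sizes.items(), start=1)}

# Forward curriculum with rough corpus sizes (illustrative figures only).
sizes = {"MultiNLI": 433_000, "MedNLI": 14_000, "ManConCorpus": 2_800, "COVID-19 NLI": 1_000}
print(curriculum_sample_sizes(sizes, d=500, r=2))
# {'MultiNLI': 4000, 'MedNLI': 2000, 'ManConCorpus': 1000, 'COVID-19 NLI': 500}
```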
## C.2 **Evaluation**

## C.2.1 **Setup**
We evaluated multiple BERT models pre-trained using general and biomedical corpora for curriculum-based fine-tuning (Devlin et al., 2019). Each fine-tuning step involves training for 4 epochs with learning rate l = 10^(-5) and batch size b = 8. For all experiments, d = 500, and for data ratio experiments, r ∈ {1, 2}. Pre-trained models were loaded from the Hugging Face Transformers library (Wolf et al., 2020). All fine-tuning was conducted on 2 Tesla-V100-SXM2 and 2 Tesla-A100-PCIe GPUs. Experiments in curriculum design were evaluated with the pre-trained PubMedBERT model (Gu et al., 2021). Other pre-trained BERT models were evaluated on forward curriculum subsequences (Appendix D).
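A minimal sketch of this sequential fine-tuning loop with the Hugging Face Trainer is shown below. The four dataset variables are placeholders for tokenized NLI corpora, the PubMedBERT checkpoint id is the one commonly published on the Hugging Face hub, and only the hyperparameters reported above (4 epochs, learning rate 10^(-5), batch size 8) are taken from this appendix.

```python
from transformers import (AutoModelForSequenceClassification, Trainer,
                          TrainingArguments)

# Placeholders: tokenized NLI datasets (premise/hypothesis pairs with 3-way labels),
# subsampled to the sizes described in C.1.1 / C.1.3.
curriculum = [multinli_train, mednli_train, mancon_train, covid_nli_train]

model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext", num_labels=3
)

for step, dataset in enumerate(curriculum, start=1):
    args = TrainingArguments(
        output_dir=f"curriculum-step-{step}",
        num_train_epochs=4,
        learning_rate=1e-5,
        per_device_train_batch_size=8,
        save_strategy="no",
    )
    # The same model object is fine-tuned repeatedly, one corpus at a time (forward curriculum).
    Trainer(model=model, args=args, train_dataset=dataset).train()
```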
4http://staffwww.dcs.shef.ac.uk/people/M.Stevenson/resources/bio_contradictions/
## C.2.2 **Evaluation Metrics**
The primary NLI evaluation metric for fine-tuned BERT models was macro F1 on the COVID-19 NLI
validation set. We also investigated recall of the contradictions class as an important metric in evaluating the ability to detect contradictory research claims.
## C.2.3 **Shuffled And Collapsed Curricula**
Of the six tested four-dataset curricula, the forward curriculum performed highest with an F1 of 0.503.
The reverse curriculum, starting with the most relevant and challenging curriculum first, achieved an F1 of 0.474. The shuffled curricula yielded F1 scores of 0.380, 0.432, and 0.478. The collapsed curriculum, in which the four corpora are concatenated and shuffled, achieved competitive performance as well, yielding an F1 score of 0.475 (Figure 1).
## C.2.4 **Ordered Subsequences**
From the 10 curriculum subsequences, the model trained with the full forward curriculum yielded highest performance with an F1 of 0.503. Among the two three-domain sequences, the one including the in-domain COVID-19 NLI dataset achieved greater performance than that without, yielding F1 scores of 0.440 and 0.296 respectively. Similarly, with the two-domain subsequences, the sequence with ManConCorpus and COVID-19 performed best with F1 of 0.434, and the subsequence containing MedNLI and ManConCorpus performed worst with F1 of 0.275. Among the single domain curricula, the in-domain training on our dataset was best with F1 of 0.311 (Figure 2).
## C.2.5 **Variable Dataset Proportions**
In all three curricula, the condition with data ratio r = 2 outperformed the r = 1 equal data proportion condition. The highest performing curriculum was the r = 2 forward curriculum achieving an F1 of 0.638. In the in-domain three-dataset sequence, F1 increased from 0.416 with r = 1 to 0.461 with r = 2 (Figure 3).
## D **BERT Pretraining**
Five pre-trained BERT models were evaluated for further fine-tuning: PubMedBERT (Gu et al., 2021),
SciBERT (Beltagy et al., 2019), BioBERT (Lee et al., 2020), BioClinBERT (Alsentzer et al., 2019), and RoBERTa (Liu et al., 2019). We conducted fine-tuning experiments under the same 10 subsequences and parameter settings as in Section C.1.2 and evaluated performance on the validation split of the COVID-19 NLI dataset. For PubMedBERT, SciBERT, and RoBERTa, the full forward curriculum yielded the greatest macro F1 scores at 0.503, 0.448, and 0.590, respectively. The greatest performance was achieved by the MedNLI-ManCon-COVID-19 NLI subsequence for BioBERT and BioClinBERT models, yielding F1 scores of 0.433 and 0.354 (Figure 4). The models were used according to the licensing information provided at the Hugging Face pages for the models.5
## E **BERT Hyperparameter Tuning**
We evaluated macro F1 and contradictions recall on the COVID-19 NLI validation set over a parameter sweep of learning rates, lr ∈ {5e-6, 1e-5, 3e-5, 5e-5, 1e-4, 3e-4}, and batch sizes, b ∈ {4, 8, 16, 32}, for PubMedBERT and RoBERTa models. For both models the highest macro F1 setting was lr = 3e-5 and b = 4, yielding F1 = 0.61 and F1 = 0.64 for PubMedBERT and RoBERTa, respectively. These settings yielded the greatest contradictions recall of 0.51 for PubMedBERT, and settings of lr = 5e-6, b = 4 yielded the highest contradictions recall value of 0.39 for RoBERTa (Figure 5).
## F **Test Set Evaluation And Baselines**
We evaluated test set statistics for the COVID-19 NLI using PubMedBERT and RoBERTa (Liu et al.,
2019 ) models fine-tuned with the forward curriculum of MultiNLI ➝ MedNLI ➝ ManCon ➝ COVID-19
NLI. We set data ratio as being equal between the four corpora (r = 1) (see Appendix C.1.3), and after hyperparameter tuning of learning rate and batch size (Appendix E) set parameters l_HP = 3 × 10^(-5) and b_HP = 4.
We compared performance of our trained BERT models to several NLI baselines.
- Hypothesis-Only Unigrams Softmax classification using unigram counts in the hypothesis (single claim).
- Word Overlap Softmax classification over counts of overlapping unigrams from the two claims.
- Word Cross-Product Softmax classification over counts of pairs of words in the cross-product between the two claims.
- Similarity + Polarity Softmax classification using similarity of the two claims as calculated using uSIF sentence embeddings (Ethayarajh, 2018; Borchers, 2019) and polarity of each claim using Vader polarity scores (Hutto and Gilbert, 2014).
- Hypothesis-Only BERT BERT classification where one of the two claims has been ablated.
Figure 6 offers a comparison of these baselines with our proposed models, focusing on the forward curriculum condition. We also evaluated the optimized PubMedBERT and RoBERTa models with the reverse curriculum and four shuffled curricula (Table 4). We note the consistent result that the forward curriculum performs best overall.
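As a concrete illustration of the Word Overlap baseline described above, a minimal sketch is given below, assuming whitespace tokenization and a scikit-learn logistic regression as the softmax classifier; train_pairs and train_labels are placeholders.

```python
from collections import Counter
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def overlap_features(claim1, claim2):
    """Counts of unigrams occurring in both claims (whitespace tokenization for brevity)."""
    tokens1, tokens2 = claim1.lower().split(), claim2.lower().split()
    shared = set(tokens1) & set(tokens2)
    return dict(Counter(tok for tok in tokens1 + tokens2 if tok in shared))

# Placeholders: train_pairs is a list of (claim1, claim2) tuples, train_labels their NLI classes.
baseline = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
baseline.fit([overlap_features(a, b) for a, b in train_pairs], train_labels)
```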
| Model | Curriculum | F1 | Contra. Recall |
|---|---|---|---|
| PubMedBERT | Multi → Med → ManCon → Covid | 0.690 | 0.571 |
| PubMedBERT | Covid → ManCon → Med → Multi | 0.428 | 0.381 |
| PubMedBERT | Covid → Multi → ManCon → Med | 0.486 | 0.381 |
| PubMedBERT | Covid → ManCon → Multi → Med | 0.581 | 0.571 |
| PubMedBERT | Med → ManCon → Covid → Multi | 0.446 | 0.381 |
| PubMedBERT | ManCon → Multi → Covid → Med | 0.579 | 0.333 |
| RoBERTa | Multi → Med → ManCon → Covid | 0.544 | 0.429 |
| RoBERTa | Covid → ManCon → Med → Multi | 0.411 | 0.476 |
| RoBERTa | Covid → Multi → ManCon → Med | 0.319 | 0.476 |
| RoBERTa | Covid → ManCon → Multi → Med | 0.232 | 0 |
| RoBERTa | Med → ManCon → Covid → Multi | 0.174 | 0 |
| RoBERTa | ManCon → Multi → Covid → Med | 0.232 | 0 |

Table 4: Macro F1 and contradictions recall on the COVID-19 NLI test set for the forward, reverse and shuffled curricula.
## ACL 2023 Responsible NLP Checklist

## A **For Every Submission:**
✓ A1. Did you describe the limitations of your work?
Final required Limitations section
✓ A2. Did you discuss any potential risks of your work?
Final recommended Ethics section
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2
✓ B1. Did you cite the creators of artifacts you used?
Section 3, Appendix B, Appendix D
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Appendix A, Appendix B, Appendix D
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Appendix A, Appendix B, Appendix D
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
All the data are sampled from publicly available research papers.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 2, Appendix A
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 2, Appendix A
## C ✓ **Did You Run Computational Experiments?** Section 3, Appendix C-F
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix C.2.1
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3, Appendix C-F
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 3, Appendix C-F
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 2, Appendix A
## D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Section 2, Appendix A
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Section 2, Appendix A
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Annotators are professional clinical annotators.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. All the data are sampled from publicly available research papers.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Our research does not classify as human subjects research.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. The group of professional annotators is too small for such reporting and would violate privacy. |
amalvy-etal-2023-role | The Role of Global and Local Context in Named Entity Recognition | https://aclanthology.org/2023.acl-short.62 | Pre-trained transformer-based models have recently shown great performance when applied to Named Entity Recognition (NER). As the complexity of their self-attention mechanism prevents them from processing long documents at once, these models are usually applied in a sequential fashion. Such an approach unfortunately only incorporates local context and prevents leveraging global document context in long documents such as novels, which might hinder performance. In this article, we explore the impact of global document context, and its relationships with local context. We find that correctly retrieving global document context has a greater impact on performance than only leveraging local context, prompting for further research on how to better retrieve that context. |
## The Role Of Global And Local Context In Named Entity Recognition
Arthur Amalvy
Laboratoire Informatique d'Avignon
[email protected]

Vincent Labatut∗
Laboratoire Informatique d'Avignon
[email protected]

Richard Dufour∗
Laboratoire des Sciences du Numérique de Nantes
[email protected]
## Abstract
Pre-trained transformer-based models have recently shown great performance when applied to Named Entity Recognition (NER). As the complexity of their self-attention mechanism prevents them from processing long documents at once, these models are usually applied in a sequential fashion. Such an approach unfortunately only incorporates local context and prevents leveraging global document context in long documents such as novels, which might hinder performance. In this article, we explore the impact of global document context, and its relationships with local context. We find that correctly retrieving global document context has a greater impact on performance than only leveraging local context, prompting for further research on how to better retrieve that context.
## 1 Introduction
Named Entity Recognition (NER) is a fundamental task in Natural Language Processing (NLP), and is often used as a building block for solving higherlevel tasks. Recently, pre-trained transformerbased models such as BERT (Devlin et al., 2019)
or LUKE (Yamada et al., 2020) showed great NER
performance and have been able to push the state of the art further.
These models, however, have a relatively short range because of the quadratic complexity of selfattention in the number of input tokens: as an example, BERT (Devlin et al., 2019) can only process spans of up to 512 tokens. For longer documents, texts are usually processed sequentially using a rolling window. Depending on the document, this local window may not always include all the context needed to perform inference, which may be present at the global document level. This leads to prediction errors (Stanislawek et al., 2019): In NER, this often occurs when the type of an entity cannot be inferred from the local context. For
*These authors contributed equally.
instance, in the following sentence from the fantasy novel *Elantris*, one cannot decide if the entity Elantris is a person (PER) or a location (LOC)
without prior knowledge:
"Raoden stood, and as he did, his eyes fell on Elantris again."
In the novel, this prior knowledge comes from the fact that a human reader can recall previous mentions of Elantris, even at a very long range.
A sequentially applied vanilla transformer-based model, however, might make an error without a neighboring sentence clearly establishing the status of Elantris as a city.
While some works propose to retrieve external knowledge to disambiguate entities (Zhang et al.,
2022; Wang et al., 2021), external resources are not always available. Furthermore, external retrieval might be more costly or less relevant than performing document-level context retrieval, provided the document contains the needed information, which depends on the type of document.
Therefore, we wish to explore the relevance of document-level context when performing NER. We place ourselves at the sentence level, and we distinguish and study two types of contexts:
- *local context*, consisting of surrounding sentences. This type of context can be used directly by vanilla transformer-based models, as their range lies beyond the simple sentence.
Fully using surrounding context as in Devlin et al. (2019) is, however, computationally expensive.
- *global context*, consisting of all sentences available at the document level. To enhance NER prediction at the sentence level, we retrieve a few of these sentences and provide them as context for the model.
We seek to answer the following question: is local context sufficient when solving the NER task, or would the model obtain better performance by retrieving global document context?
To answer this question, we conduct experiments on a literary NER dataset we improved from its original version (Dekker et al., 2019). We release the annotation process, data and code necessary to reproduce these experiments under a free license.1

1https://github.com/CompNet/conivel/tree/ACL2023
## 2 Related Work

## 2.1 Sparse Transformers
Since the range problem of vanilla transformerbased models is due to the quadratic complexity of self-attention in the number of input tokens, several works on *sparse transformers* proposed alternative attention mechanisms in hope of reducing this complexity (Zaheer et al., 2020; Wang et al., 2020; Kitaev et al., 2020; Tay et al., 2020b,a; Beltagy et al., 2020; Choromanski et al., 2020; Katharopoulos et al., 2020; Child et al., 2019). While reducing self-attention complexity improves the effective range of transformers, these models still have issues processing very long documents (Tay et al.,
2020c).
## 2.2 Context Retrieval
Context retrieval in general has been widely leveraged for other NLP tasks, such as semantic parsing (Guo et al., 2019), question answering (Ding et al., 2020), event detection (Pouran Ben Veyseh et al., 2021), or machine translation (Xu et al.,
2020).
In NER, context retrieval has mainly been used in an external fashion, for example by leveraging names lists and gazetteers (Seyler et al., 2018; Liu et al., 2019), knowledge bases (Luo et al., 2015)
or search engines (Wang et al., 2021; Zhang et al.,
2022). Meanwhile, we are interested in documentlevel context retrieval, which is comparatively seldom explored. While Luoma and Pyysalo (2020)
study document-level context, their study is restricted to neighboring sentences, i.e. local context.
## 3 Method And Experiments

## 3.1 Retrieval Heuristics
We wish to understand the role of both *local* and global contexts for the NER task. We split all documents in our dataset (described in Section 3.3)
into sentences. We evaluate both local and global simple heuristics of sentence retrieval in terms of NER performance impact. We study the following local heuristics:
- before: Retrieves the closest k sentences at the left of the input sentence.
- after: Same as before, but at the right of the input sentence.
- surrounding: Retrieves the closest k/2 sentences on both sides of the input sentence.
And the following *global* heuristics:
- random: Randomly retrieves a sentence from the whole document.
- samenoun: Randomly retrieves a sentence from the set of all sentences that have at least one common noun with the input sentence2.
Intuitively, this heuristic will return sentences that contain entities of the input sentence, allowing for possible disambiguation. We use the NLTK library (Bird et al., 2009) to identify nouns.
- bm25: Retrieves sentences that are similar to the input sentences according to BM25 (Robertson, 1994). Retrieving similar sentences has already been found to increase NER performance (Zhang et al., 2022; Wang et al., 2021).
It has to be noted that global heuristics can sometimes retrieve local context, as they are not restricted in which sentences they can retrieve at the document level. For all configurations, we concatenate the retrieved sentences to the input. During this concatenation step, we preserve the global order between sentences in the document.
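A minimal sketch of the samenoun and bm25 heuristics is given below. The rank_bm25 package and the NLTK part-of-speech tagger are used here as one possible implementation; beyond the use of NLTK, the exact tooling is not specified above, so treat these details as assumptions.

```python
import random
import nltk
from rank_bm25 import BM25Okapi

# Requires: nltk.download("punkt") and nltk.download("averaged_perceptron_tagger")

def nouns(sentence):
    """Nouns of a sentence according to the NLTK POS tagger (all NN* tags, a simplification)."""
    tokens = nltk.word_tokenize(sentence)
    return {tok for tok, tag in nltk.pos_tag(tokens) if tag.startswith("NN")}

def samenoun(input_sentence, document, k):
    """Randomly retrieve up to k sentences sharing at least one noun with the input sentence."""
    target = nouns(input_sentence)
    candidates = [s for s in document if s != input_sentence and nouns(s) & target]
    return random.sample(candidates, min(k, len(candidates)))

def bm25(input_sentence, document, k):
    """Retrieve the k sentences most similar to the input sentence according to BM25."""
    candidates = [s for s in document if s != input_sentence]
    index = BM25Okapi([nltk.word_tokenize(s) for s in candidates])
    scores = index.get_scores(nltk.word_tokenize(input_sentence))
    ranked = sorted(zip(scores, candidates), reverse=True)
    return [s for _, s in ranked[:k]]
```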
## 3.2 Oracles
For each heuristic mentioned in Section 3.1, we also experiment with an *oracle* version. The oracle version retrieves 16 sentences from the document using the underlying retrieval heuristic, and retain only those that enhance the NER predictions the most. We measure this enhancement by counting the difference in numbers of NER BIO tags errors made with and without the context. In essence, the oracle setup simulates a perfect re-ranker model, and allows us to study the maximum performance of such an approach.
2If the set of sentences with a common noun is empty, the samenoun heuristic does not retrieve any sentence.
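The oracle described in Section 3.2 can be sketched as follows, assuming a predict_tags(sentence, context) helper that returns BIO tags from the fine-tuned model, plus gold BIO tags for the input sentence; both are placeholders for the actual pipeline.

```python
def tag_errors(predicted_tags, gold_tags):
    """Number of BIO tags differing from the gold annotation."""
    return sum(p != g for p, g in zip(predicted_tags, gold_tags))

def oracle_select(sentence, gold_tags, candidates, predict_tags, n=1):
    """Keep the n candidate context sentences that reduce NER tag errors the most.

    `candidates` are the 16 sentences returned by the underlying heuristic and
    `predict_tags(sentence, context)` is a placeholder for the fine-tuned NER model.
    """
    baseline = tag_errors(predict_tags(sentence, context=[]), gold_tags)
    gains = []
    for ctx in candidates:
        errors = tag_errors(predict_tags(sentence, context=[ctx]), gold_tags)
        gains.append((baseline - errors, ctx))
    gains.sort(key=lambda pair: pair[0], reverse=True)
    return [ctx for gain, ctx in gains[:n]]
```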
## 3.3 Dataset
To evaluate our heuristics, we use a corrected and improved version of the literary dataset of Dekker et al. (2019). This dataset is comprised of the first chapter of 40 novels in English, which we consider long enough for our experiments.
Dataset corrections The original dataset suffers mainly from annotation issues. To fix them, we design an annotation guide inspired by CoNLL2003 (Tjong Kim Sang and De Meulder, 2003)
and apply it consistently using a semi-automated process:
1. We apply a set of simple rules to identify obvious errors3 (for example, non-capitalized entities annotated as PER are often false positives).
Depending on the estimated performance of each rule, we manually reviewed its choices before application.
2. We manually review each difference between the predictions of a BERT (Devlin et al., 2019) model finetuned on a slightly modified version of the CoNLL-2003 dataset (Tjong Kim Sang and De Meulder, 2003)4 and the existing annotations.
3. We manually correct the remaining errors.
Further annotations The original dataset only consists of PER entities. We go further and annotate LOC and ORG entities. The final dataset contains 4476 PER entities, 886 LOC entities and 201 ORG
entities.
## 3.4 NER Training
For all experiments, we use a pretrained BERT-base (Devlin et al., 2019) model, consisting of 110 million parameters, followed by a classification head at the token level to perform NER. We finetune BERT for 2 epochs with a learning rate of 2 · 10^(-5) using the huggingface transformers library (Wolf et al., 2020), starting from the bert-base-cased checkpoint.
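A minimal sketch of this setup with the Hugging Face library is shown below; the label set matches the annotated entity types, wordpiece-to-tag alignment is elided, train_dataset is a placeholder, and the batch size is an assumption since it is not reported above.

```python
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

LABELS = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC", "B-ORG", "I-ORG"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained("bert-base-cased", num_labels=len(LABELS))

args = TrainingArguments(
    output_dir="ner-model",
    num_train_epochs=2,             # as reported above
    learning_rate=2e-5,             # as reported above
    per_device_train_batch_size=8,  # assumption: batch size is not reported above
)
# train_dataset: tokenized sentences (optionally with retrieved context) and aligned BIO label ids (placeholder).
Trainer(model=model, args=args, train_dataset=train_dataset, tokenizer=tokenizer).train()
```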
## 3.5 NER Evaluation
We perform cross-validation with 5 folds on our NER dataset. We evaluate NER performance using the default mode of the seqeval (Nakayama, 2018)
python library to ensure results can be reproduced.
3See Appendix A.2 for details.
4We modified the CoNLL-2003 dataset to include honorifics as part of PER entities to be consistent with our annotation guidelines.
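The evaluation described in Section 3.5 amounts to something like the following minimal sketch; the tag sequences are toy values.

```python
from seqeval.metrics import classification_report, f1_score

gold = [["B-PER", "I-PER", "O", "B-LOC"]]
pred = [["B-PER", "I-PER", "O", "B-PER"]]

# seqeval's default mode scores exact entity spans: both type and boundaries must match.
print(f1_score(gold, pred))
print(classification_report(gold, pred))
```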
## 4 Results

## 4.1 Retrieval Heuristics
The NER performance for retrieval heuristics can be seen in Figure 1. The samenoun heuristic performs the best among global heuristics, whereas the surrounding heuristic is the best for local heuristics. While the top results obtained with both heuristics are quite similar, we consider global heuristics as naive retrieval baselines: they could be bested by more complex approaches, which might enhance performance even more.
Interestingly, the performance of both the before and bm25 heuristics decreases sharply after four sentences, and even drops behind the no-retrieval baseline. For both heuristics, this might be due to retrieving irrelevant sentences beyond that point. The bm25 heuristic is limited by the similar sentences present in the document: if there are not enough of them, the heuristic will retrieve unrelated ones.
Meanwhile, the case of the before heuristic seems more puzzling, and could be indicative of a specific entity mention pattern that warrants further investigation.
## 4.2 Oracle Versions
NER results with the oracle versions of retrieval heuristics can be found in Figure 2.
It is worth noting that the performance of the oracle versions of the heuristics always peaks when retrieving a single sentence. This might indicate that a single sentence is usually sufficient to resolve entity type ambiguities, but it might also be a result of the oracle ranking sentences individually, thereby not taking into account their possible combinations.
Global heuristics perform better than local ones overall, with the oracle version of the random heuristic even performing better than both the before and after heuristics. These results tend to highlight the benefits of using global document context, provided it can be retrieved accurately.
Retrieved sentences To better understand which sentences are useful for predictions when performing global retrieval, we plot in Figure 3 the distribution of the distance between sentences and their retrieved contexts for the oracle versions of the samenoun and bm25 heuristics. We find that 8% and 16% of retrieved sentences (for samenoun and bm25, respectively) lie within 6 sentences of their input sentence, while the others are further away, highlighting the need for long-range retrieval.

![3_image_1.png](3_image_1.png)

![3_image_0.png](3_image_0.png)

![3_image_2.png](3_image_2.png)

![3_image_3.png](3_image_3.png)

![3_image_4.png](3_image_4.png)
Local context importance To see whether or not local context is an important component of NER performance, we perform an experiment in which we prevent the oracle version of the bm25 heuristic from retrieving local surrounding context. Results can be found in Figure 4. NER performance remains about the same without local context, which tends to show that local context is not strictly necessary for performance.
## 5 Conclusion And Future Work
In this article, we explored the role of local and global context in Named Entity Recognition. Our results tend to show that, for literary texts, retrieving global document context is more effective at enhancing NER performance than retrieving only local context, even when using relatively simple retrieval heuristics. We also showed that a re-ranker model using simple document-level retrieval heuristics could obtain significant NER performance improvements. Overall, our work calls for further research on how to accurately retrieve global context for NER.
## 6 Limitations
We acknowledge the following limitations of our work:
- While the oracle selects a sentence according to the benefits it provides when performing NER, it does not consider the interactions between selected sentences. This may lead to lower performance when several sentences are retrieved at once.
- The retrieval heuristics considered are naive on purpose, as the focus of this work is not performance. Stronger retrieval heuristics may achieve better results than those presented in this article.
- The studied documents only consist of the first chapters of a set of novels. Using complete novels would increase the amount of information available to retrieve for the presented global heuristics.
## References
I. Beltagy, M. E. Peters, and A. Cohan. 2020. Longformer: The long-document transformer. *arXiv*,
cs.CL:2004.05150.
S. Bird, E. Loper, and E. Klein. 2009. *Natural Language* Processing with Python. O'Reilly Media Inc.
R. Child, S. Gray, A. Radford, and I. Sutskever. 2019.
Generating long sequences with sparse transformers.
arXiv, cs.LG:1904.10509.
K. Choromanski, V. Likhosherstov, D. Dohan, X. Song, A. Gane, T. Sarlos, P. Hawkins, J. Davis, A. Mohiuddin, L. Kaiser, D. Belanger, L. Colwell, and A. Weller.
2020. Rethinking attention with performers. *arXiv*,
cs.LG:2009.14794.
N. Dekker, T. Kuhn, and M. van Erp. 2019. Evaluating named entity recognition tools for extracting social networks from novels. *PeerJ Computer Science*, 5:e189.
J. Devlin, M. Chang, K. Lee, and K. Toutanova. 2019.
BERT: Pre-training of deep bidirectional transformers for language understanding. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 1, pages 4171–4186.
M. Ding, C. Zhou, H. Yang, and J. Tang. 2020. CogLTX:
Applying bert to long texts. In *Advances in Neural* Information Processing Systems, volume 33, pages 12792–12804.
D. Guo, D. Tang, N. Duan, M. Zhou, and J. Yin. 2019.
Coupling retrieval and meta-learning for contextdependent semantic parsing. In *57th Annual Meeting of the Association for Computational Linguistics*,
pages 855–866.
A. Katharopoulos, A. Vyas, N. Pappas, and François Fleuret. 2020. Transformers are rnns: Fast autoregressive transformers with linear attention. In Proceedings of the 37th International Conference on Machine Learning, ICML'20.
N. Kitaev, Ł. Kaiser, and A. Levskaya. 2020. Reformer:
The efficient transformer. *arXiv*, cs.LG:2001.04451.
T. Liu, J. Yao, and C. Lin. 2019. Towards improving neural named entity recognition with gazetteers. In 57th Annual Meeting of the Association for Computational Linguistics, pages 5301–5307.
G. Luo, X. Huang, C. Lin, and Z. Nie. 2015. Joint entity recognition and disambiguation. In 2015 Conference on Empirical Methods in Natural Language Processing, pages 879–888.
J. Luoma and S. Pyysalo. 2020. Exploring crosssentence contexts for named entity recognition with BERT. In *28th International Conference on Computational Linguistics*, pages 904–914.
H. Nakayama. 2018. seqeval: A python framework for sequence labeling evaluation.
A. Pouran Ben Veyseh, M. V. Nguyen, N. Ngo Trung, B. Min, and T. H. Nguyen. 2021. Modeling document-level context for event detection via important context selection. In *Conference on Empirical Methods in Natural Language Processing*, pages 5403–5413.
S. E. Robertson and S. Walker. 1994. Some simple effective approximations to the 2-Poisson model for probabilistic weighted retrieval. In *SIGIR '94*, pages 232–241.
D. Seyler, T. Dembelova, L. Del Corro, J. Hoffart, and G. Weikum. 2018. A study of the importance of external knowledge in the named entity recognition task. In *56th Annual Meeting of the Association for* Computational Linguistics (Volume 2: Short Papers),
pages 241–246.
T. Stanislawek, A. Wróblewska, A. Wójcicka, D. Ziembicki, and P. Biecek. 2019. Named entity recognition - is there a glass ceiling? In 23rd Conference on Computational Natural Language Learning, pages 624–633.
Y. Tay, D. Bahri, D. Metzler, D. Juan, Z. Zhao, and C. Zheng. 2020a. Synthesizer: Rethinking self-attention in transformer models. *arXiv*,
cs.CL:2005.00743.
Y. Tay, D. Bahri, L. Yang, D. Metzler, and D. Juan.
2020b. Sparse sinkhorn attention. *arXiv*,
cs.LG:2002.11296.
Y. Tay, M. Dehghani, S. Abnar, Y. Shen, D. Bahri, P. Pham, J. Rao, L. Yang, S. Ruder, and D. Metzler. 2020c. Long range arena: A benchmark for efficient transformers. *arXiv*, cs.LG:2011.04006.
E. F. Tjong Kim Sang and F. De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Languageindependent named entity recognition. In *7th Conference on Natural Language Learning*, pages 142–147.
S. Wang, B. Z. Li, M. Khabsa, H. Fang, and H. Ma.
2020. Linformer: Self-attention with linear complexity. *arXiv*, cs.LG:2006.04768.
X. Wang, Y. Jiang, N. Bach, T. Wang, Z. Huang, F. Huang, and K. Tu. 2021. Improving named entity recognition by external context retrieving and cooperative learning. In 59th Annual Meeting of the Association for Computational Linguistics and 11th International Joint Conference on Natural Language Processing, volume 1, pages 1800–1812.
T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, J. Davison, S. Shleifer, P. von Platen, C. Ma, Y. Jernite, J. Plu, C. Xu, T. Le Scao, S. Gugger, M. Drame, Q. Lhoest, and A. M. Rush. 2020. Transformers:
State-of-the-art natural language processing. In Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45.
J. Xu, J. Crego, and J. Senellart. 2020. Boosting neural machine translation with similar translations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1580–
1590.
I. Yamada, A. Asai, H. Shindo, H. Takeda, and Y. Matsumoto. 2020. LUKE: Deep contextualized entity representations with entity-aware self-attention. In Conference on Empirical Methods in Natural Language Processing, pages 6442–6454.
M. Zaheer, G. Guruganesh, K. A. Dubey, J. Ainslie, C. Alberti, S. Ontanon, P. Pham, A. Ravula, Q. Wang, L. Yang, and A. Ahmed. 2020. Big bird: Transformers for longer sequences. In *Advances in Neural* Information Processing Systems, volume 33, pages 17283–17297.
X. Zhang, Y. Jiang, X. Wang, X. Hu, Y. Sun, P. Xie, and M. Zhang. 2022. Domain-specific NER via retrieving correlated samples. In *Proceedings of the 29th* International Conference on Computational Linguistics, pages 2398–2404.
## A Dataset Details

## A.1 Document Lengths
Figure 5 shows the distribution of the number of sentences per document in our NER dataset. Our NER dataset is composed of documents longer than those of typical NER datasets such as CoNLL-2003 (Tjong Kim Sang and De Meulder, 2003).
![5_image_0.png](5_image_0.png)

![5_image_1.png](5_image_1.png)

Figure 5: Distribution of document lengths in our dataset (y-axis: number of books).
## A.2 Automatic Correction Rules
We use the following rules to automatically identify obvious errors in the original dataset from Dekker et al. (2019). The original dataset only contained PER entities, so these rules only apply to them:
- If a span appears in the list of characters from its novel but is not annotated as an entity, we investigate whether or not this is a false negative.
- Similarly, if a span annotated as an entity does not appear in the list of characters from its novel, we investigate whether or not it is a false positive.
- Finally, if a span is annotated as an entity but none of its tokens are capitalized, we check whether it is a false positive.
## B Heuristics Results Breakdown By Precision/Recall
Figures 6 and 7 show precision and recall for all retrieval heuristics. Interestingly, retrieval only has a positive effect on recall, with precision being lower than the baseline except for the surrounding heuristic.
## B.1 Oracle Versions
Figures 6 and 7 show precision and recall for the oracle versions of all retrieval heuristics. While retrieval benefits recall more than precision, precision is still increased using retrieval. Together with the results from the regular heuristics, these results again highlight the potential performance gains of using a suitable re-ranker model to retrieve context.

![6_image_0.png](6_image_0.png)

![6_image_3.png](6_image_3.png)

![6_image_1.png](6_image_1.png)

![6_image_2.png](6_image_2.png)

![6_image_4.png](6_image_4.png)
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Yes, limitations are discussed in Section 6
✗ A2. Did you discuss any potential risks of your work?
We do not think our work presents any direct risk
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Yes, in the abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
In Section 3.4, we indicate that we use a BERT checkpoint. We also use a previous NER dataset, see Section 3.3. We distribute an enhanced version of this dataset and code to reproduce our experiments.
✓ B1. Did you cite the creators of artifacts you used?
See Section 3.3 and 3.4
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
We specify the license in the GitHub repository given at the end of Section 1.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
We use a dataset published for research purposes.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
The collected data does not include information that can be used to identify individuals
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
We specify that the distributed dataset covers English literature (Section 3.3). The reader can refer to Dekker et al. (2019) for more information on the dataset.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
We include the number of documents in our dataset in Section 3.3. We also include statistics about the lengths of these documents in the Appendix.

The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
## C ✓ **Did You Run Computational Experiments?**

See Section 3.4 and Section 3.5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
See Section 3.4
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
We include training hyperparameters in Section 3.4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Our results are reported in Section 4. We indicate that, for Figures 1 and 2, each point is the mean F1 of 3 runs.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
See Section 3.1 (nltk), Section 3.4 (huggingface transformers), Section 3.5 (seqeval)
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**

Section 3.3
✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
The experiments were free of any risks
✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
The authors annotated the dataset themselves
✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
The authors annotated the dataset themselves
✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
The authors annotated the dataset themselves
✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
This is not relevant since annotation was done by the authors |
spaulding-etal-2023-joint | Joint End-to-end Semantic Proto-role Labeling | https://aclanthology.org/2023.acl-short.63 | Semantic proto-role labeling (SPRL) assigns properties to arguments based on a series of binary labels. While multiple studies have evaluated various approaches to SPRL, it has only been studied in-depth as a standalone task using gold predicate/argument pairs. How do SPRL systems perform as part of an information extraction pipeline? We model SPRL jointly with predicate-argument extraction using a deep transformer model. We find that proto-role labeling is surprisingly robust in this setting, with only a small decrease when using predicted arguments. We include a detailed analysis of each component of the joint system, and an error analysis to understand correlations in errors between system stages. Finally, we study the effects of annotation errors on SPRL. | # Joint End-To-End Semantic Proto-Role Labeling
Elizabeth Spaulding2∗, Gary Kazantsev1, and Mark Dredze1,3
1 Bloomberg, L.P., New York, NY, USA; 2 University of Colorado Boulder, Boulder, CO, USA; 3 Computer Science, Johns Hopkins University, Baltimore, MD, USA
[email protected], [email protected], [email protected]
## Abstract
Semantic proto-role labeling (SPRL) assigns properties to arguments based on a series of binary labels. While multiple studies have evaluated various approaches to SPRL, it has only been studied in-depth as a standalone task using gold predicate/argument pairs. How do SPRL systems perform as part of an information extraction pipeline? We model SPRL jointly with predicate-argument extraction using a deep transformer model. We find that proto-role labeling is surprisingly robust in this setting, with only a small decrease when using predicted arguments. We include a detailed analysis of each component of the joint system, and an error analysis to understand correlations in errors between system stages. Finally, we study the effects of annotation errors on SPRL.
## 1 Introduction
Semantic analyses of text have been framed (Gildea and Jurafsky, 2000) as extracting structured information in the form of predicates, arguments, and their relations, often called semantic roles. Multiple schemas have been proposed for structuring semantic roles, each with its own benefits and challenges. Semantic proto-roles (Dowty, 1991) offer a way to decompose traditional inventories of thematic roles into simple properties that are both easier to annotate and more generalizable to unseen arguments. These emerge from Dowty's proto-role theory, which assigns properties to arguments based on how agent-like (volition, *sentience*) or patient-like (change of state, *was used*) they are.
For example, in the sentence "The boy threw a rock," categorical role inventories assign argument
"boy" the role Agent, and argument "rock" the role Patient. Work on decompositional semantics1 has formulated the task of semantic proto-role labeling as the assignment of 14 different binary properties to arguments (Reisinger et al., 2015a).
∗Work done during an internship at Bloomberg.
1http://decomp.io/
Multiple systems have been proposed for automatically assigning proto-roles to predicate-argument pairs in text (Opitz and Frank, 2019; Rudinger et al., 2018; Teichert et al., 2017; Tenney et al., 2019), which have established the feasibility and best practices for semantic proto-role labeling
(SPRL). At the same time, this task continues to be either treated in total isolation, assuming gold predicates and arguments, or included in Universal Dependency Semantics (UDS) parsing pipelines
(Stengel-Eskin et al., 2020, 2021), which has not included fine-grained analysis on semantic proto-role properties themselves. How well does SPRL
work when integrated into a semantic extraction pipeline? Are earlier errors compounded by SPRL?
Are the same tokens challenging for each stage of the pipeline?
We answer these questions by constructing a joint multi-task model for identifying predicates and arguments, and assigning proto-role properties. Competitive with state-of-the-art for *both* dependency- and span-based SPRL evaluation, a careful component-wise analysis of our system allows us to make the following contributions. 1)
Despite SPRL labeling errorful predicate argument predictions, our results are still competitive with having gold predicates and arguments, and far surpass the only previous work that predicts protoroles jointly with predicates and arguments. Future work should include SPRL scores with predicted arguments. 2) Errors in predicates and arguments do not negatively affect SPRL because the same tokens that are challenging for argument identification are challenging for SPRL. 3) We find that most SPRL errors come from arguments with annotator disagreement, which suggests that these are inherently hard; removing unskilled annotators doesn't change performance, suggesting that conflict alone is not the source of the problem. We discuss implications for future work on SPRL after our analysis.
## 2 Semantic Proto-Roles
Semantic role labeling (SRL) was first formulated as a natural language understanding task by Gildea and Jurafsky (2000) and quickly proliferated (Surdeanu et al., 2003; Xue and Palmer, 2004; Pradhan et al., 2005) as resources and common evaluation frameworks were introduced (Carreras and Màrquez, 2004, 2005; Pradhan et al., 2007; Surdeanu et al., 2008; Hajič et al., 2009). SRL assigns relationships or roles of an argument and its predicate. Various labels build on different linguistic theories: label inventories that are small and coarse versus large and fine-grained. Common roles include Agent, Patient, Goal, and Location. Neural-based systems that assign SRL jointly with other related tasks, such as predicate and argument identification, perform just as well or better than models trained on only SRL (Conia et al., 2021; Blloshmi et al., 2021; He et al., 2018; Strubell et al., 2018; Li et al., 2019).
Argument identification can be formulated as dependency-based (find the argument's syntactic head) or span-based (find the entire argument span).
The CoNLL 2004 and 2005 shared tasks (Carreras and Màrquez, 2004, 2005) used spans: an argument is correct only if all argument tokens are correctly identified with the correct argument role. CoNLL 2008 and 2009 (Surdeanu et al., 2008; Hajič et al.,
2009) used a dependency-based method, which only requires that the syntactic head of the argument be tagged with the correct argument role. Understandably, span-based is more challenging and scores lag behind dependency-based systems (Li et al., 2019).
SPRL (Dowty, 1991) offers an alternative
(Reisinger et al., 2015a) by decomposing traditional semantic roles into properties. The two proto-roles are "cluster-concepts" called Proto-Agent and Proto-Patient, which each correspond to an inventory of properties. Certain properties (such as volition or *instigation*) tend to belong to Proto-Agents, while others (such as *change of state* and *was used*)
tend to belong to Proto-Patients. This analysis offers increased granularity but without sparsification of the training data.
The state-of-the-art SPRL dependency-based system fine-tunes BERT (Devlin et al., 2019) with a multi-layer Perceptron to assign labels using a linear combination of different BERT layer embeddings (Tenney et al., 2019). For span-based, the leading system uses an attention-based ensemble and trainable "argument marker embeddings" to indicate which tokens are arguments (Opitz and Frank, 2019). Stengel-Eskin et al. (2020) and Stengel-Eskin et al. (2021) jointly predict UDS
graph structures (i.e., the spans of predicates and arguments) with all UDS properties, including semantic proto-roles. Both use a sequence-to-graph transductive model, and Stengel-Eskin et al. (2021)
is able to improve the transductive model by integrating transformer architecture. Systems are rarely evaluated in both dependency- and span-based settings, and none have been evaluated on anything but gold predicates and arguments.
## 3 Data
We report results on two English-language datasets for SPRL: SPR1 (Reisinger et al., 2015b) and SPR2
(White et al., 2016). SPR1 contains 4,912 Wall Street Journal sentences from PropBank (Kingsbury and Palmer, 2002; Palmer et al., 2005; Gildea and Palmer, 2002) annotated by a single annotator based on a set of 18 proto-role properties. 9,738 arguments were annotated for the likelihood (on a Likert scale from 1 to 5) that a property holds for that argument. SPR2 contains 2,758 English Web Treebank (Bies et al., 2012) sentences annotated for a smaller set of 14 properties using a revised, streamlined protocol. In this release, multiple annotators ensured two-way redundancy for each property judgment.
Following previous work (Opitz and Frank, 2019; Rudinger et al., 2018; Teichert et al., 2017; Tenney et al., 2019), we formulate SPRL as an 18-way (SPR1) or 14-way (SPR2) multi-label binary classification problem and map Likert labels {1, 2, 3}
to 0, and {4, 5} to 1. The task has also been formulated as a regression problem in which SPRL
scores are predicted as continuous values (Opitz and Frank, 2019; Rudinger et al., 2018), but we do not include this formulation as a part of our analysis. We additionally map judgments labeled
"inapplicable" to 0 to ensure consistency with previous work. We use standard train/dev/tests splits provided in the data. We additionally do analysis on inter-annotator agreement in SPR2 shown in Appendix B.2.
## 4 Joint End-To-End Sprl
We construct a joint end-to-end SPRL system based on BERT (Devlin et al., 2019) with classification heads for each sub-task (Figure 1). We fine-tune
![2_image_0.png](2_image_0.png)
BERT and the encoder parameters are shared across tasks. We favor BERT as opposed to newer encoders for a direct comparison to Tenney et al.
(2019) and Opitz and Frank (2019).2 We construct representations of the input sentences to enable the system to efficiently identify predicates, arguments, and SPRL. For each sentence, we construct an instance with a candidate predicate prepended onto the sentence with a separator. We use linear classification heads3 with sigmoid functions to produce classification probabilities for each token. We place a classification head on the prepended predicate token to determine if it is a predicate. For argument and SPRL, we use separate classification heads on each token in the sentence. Dependency-based models predict the argument label for the dependency head only, whereas span-based models use IOE tagging with softmax outputs. The resulting argument-tagged subwords are pooled and concatenated with the predicate representation taken from the sentence into the SPRL binary classification heads. Since we create an input for each possible predicate, we reduce the number of examples originating from incorrect predicates by only using predicate candidates that are verbs. Full model and training details appear in Appendix A.3.
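For concreteness, a condensed PyTorch sketch of this architecture is shown below. The module layout, pooling choice, and head sizes are our simplifications of the description above (e.g., mean-pooling argument subwords and assuming the candidate predicate sits at position 1, right after [CLS]); it is not the exact implementation.

```python
# Simplified sketch of the joint model: a shared BERT encoder with a predicate
# head on the prepended candidate predicate, a per-token argument head, and
# SPRL heads over the pooled argument representation concatenated with the
# predicate representation.  This is an approximation, not the authors' code.
import torch
import torch.nn as nn
from transformers import AutoModel


class JointSPRL(nn.Module):
    def __init__(self, n_properties=14, n_arg_tags=3, name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        h = self.encoder.config.hidden_size
        self.predicate_head = nn.Linear(h, 1)          # is the candidate a predicate?
        self.argument_head = nn.Linear(h, n_arg_tags)  # IOE tags (or head vs. non-head)
        self.sprl_head = nn.Linear(2 * h, n_properties)

    def forward(self, input_ids, attention_mask, arg_mask):
        # arg_mask: (batch, seq_len), 1.0 on argument subword tokens, else 0.0
        hidden = self.encoder(input_ids,
                              attention_mask=attention_mask).last_hidden_state
        pred_repr = hidden[:, 1, :]   # assumed position of the prepended predicate
        pred_logit = self.predicate_head(pred_repr)
        arg_logits = self.argument_head(hidden)
        # Mean-pool argument subwords, then classify each proto-role property.
        denom = arg_mask.sum(dim=1, keepdim=True).clamp(min=1)
        arg_repr = (hidden * arg_mask.unsqueeze(-1)).sum(dim=1) / denom
        sprl_logits = self.sprl_head(torch.cat([arg_repr, pred_repr], dim=-1))
        return pred_logit, arg_logits, sprl_logits
```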
Our model architecture is very similar to Tenney et al. (2019), except we extend it for predicate and argument identification. Additionally, we do not use a linear combination of BERT layers, instead taking BERT's last layer of BERT.4
| | SPR1 macro | SPR1 micro | SPR2 macro | SPR2 micro |
|---|---|---|---|---|
| **Dependency Prediction** | | | | |
| This Paper | 72.7 | 85.5 | 65.0 | 83.3 |
| This Paper + Arg prediction | 73.7 | 85.5 | 68.1 | 82.4 |
| This Paper + Predicate + Arg prediction | 74.0 | 85.3 | 64.7 | 83.8 |
| Rudinger et al. (2018) | 71.1 | 83.3 | - | - |
| Tenney et al. (2019) | - | 86.1 | - | 83.85 |
| **Span Prediction** | | | | |
| This Paper | 71.4 | 83.7 | 65.0 | 82.9 |
| This Paper + Arg prediction | 71.8 | 84.1 | 65.7 | 81.9 |
| This Paper + Predicate + Arg prediction | 73.0 | 84.3 | 65.2 | 81.6 |
| Opitz and Frank (2019) | 69.3 | 82.0 | 69.7 | 83.4 |
| Opitz and Frank (2019) + BERT | 73.8 | 83.5 | 67.5 | 83.9 |
| Stengel-Eskin et al. (2020) Transductive parser | - | - | 65.4 | - |
| Stengel-Eskin et al. (2021)6 TFMR + EN + BERT | - | - | 69.8 | 83.3 |
| **Span Prediction (Ensembles)** | | | | |
| Opitz and Frank (2019) Ensemble | 72.1 | 83.6 | 70.9 | 84 |
| Opitz and Frank (2019) Ensemble + BERT | 77.5 | 86.8 | 69.9 | 84.9 |

Table 1: SPRL F1 (macro/micro) compared to previous work, scored with gold predicates and arguments.
## 5 Experiments
We run multiple experiments to isolate the behavior of different components of our system, such as training on only SPRL, as well as the full pipeline.
For all experiments, we train both a dependency-based and a span-based model.
Dependency prediction:

| | SPR1 Preds (Recall) | SPR1 Arg Heads (Recall) | SPR1 Properties (Strict F1) | SPR2 Preds (Recall) | SPR2 Arg Heads (Recall) | SPR2 Properties (Strict F1) |
|---|---|---|---|---|---|---|
| Gold Predicates | - | 93.2 | 77.5 (-8) | - | 95.7 | 79.0 (-3.4) |
| + Predicate Prediction | 94.8 | 95.2 | 78.0 (-7.3) | 83.4 | 97.8 | 81.8 (-2) |

Span prediction:

| | SPR1 Preds (Recall) | SPR1 Arg Spans (Recall) | SPR1 Properties (Strict F1) | SPR2 Preds (Recall) | SPR2 Arg Spans (Recall) | SPR2 Properties (Strict F1) |
|---|---|---|---|---|---|---|
| Gold Predicates | - | 91.6 | 78.8 (-5.3) | - | 86.7 | 77.8 (-4.1) |
| + Predicate Prediction | 95.2 | 91.2 | 77.6 (-6.7) | 92.4 | 87.6 | 74.7 (-6.9) |

Table 2: Pipeline results with strict scoring; numbers in parentheses give the drop in SPRL F1 relative to using gold predicates and arguments.
Scoring SPRL is typically reported as micro/macro averaged F1 across the individual SPR
binary properties. We report Gold F1 scores that assume the previous stages of the pipeline produced correct predicates and arguments. However, when considering SPRL run on predicted predicates and arguments, we need to adjust the scoring such that we penalize the SPRL score due to mistakes earlier in the pipeline. For other tasks, such as entity linking, we can simply mark a link as "missed" if we fail to recognize an entity with a NER system.
However, because SPRL is a binary classification task, scoring is more complex.
We consider two different SPRL scoring methods for false negative predicate or arguments: (1)
a lenient score that assumes 0 for all properties, which means that missed arguments do not have any of the properties. (2) A strict score that forces the label *incorrect* for all properties, which assumes we get all property predictions wrong thereby marking them incorrect. We do not modify the SPRL
score for false positive arguments (for which there are no gold labels) since this would change the set of arguments over which each run of the system is evaluated. For example, in the sentence "Bob sat on the chair and I laid on the ground", if the model predicts that "ground" is an argument for "sat",
then the model would produce property predictions for "ground" even though there are no annotations for this token. Those predictions are ignored entirely, because it is not guaranteed that other runs will also include predictions for this token.
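This adjustment can be sketched as follows for a single property, using sklearn only for illustration; the helper name is ours.

```python
# Sketch of the strict vs. lenient scoring for one proto-role property.
# For arguments missed by the pipeline (false negatives), "lenient" assumes a
# 0 prediction for every property, while "strict" forces the prediction to be
# wrong.  False-positive arguments are ignored, so the evaluation set is fixed.
from sklearn.metrics import f1_score


def adjusted_property_predictions(gold, pred, missed_argument, mode="strict"):
    adjusted = []
    for g, p, missed in zip(gold, pred, missed_argument):
        if not missed:
            adjusted.append(p)
        elif mode == "lenient":
            adjusted.append(0)           # missed argument has no properties
        else:                            # strict: force an incorrect label
            adjusted.append(1 - g)
    return adjusted


gold = [1, 0, 1, 1]
pred = [1, 0, 0, 1]
missed = [False, False, True, True]      # last two arguments were not found
print(f1_score(gold, adjusted_property_predictions(gold, pred, missed)))
```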
We evaluate each component of our system separately to determine the effects of the pipeline. (1)
Train only the SPRL classifiers using gold predicates and arguments. (2) Train arguments and SPRL assuming gold predicates. (3) Train predicates, arguments, and SPRL using inputs first filtered to consider only verbs as predicates. We replicate this training for both span- and dependency-based predictions. For each setting, we evaluate under different conditions by decoding assuming gold or predicted labels from earlier in the pipeline.
## 6 Results
We present an overview of the results for our joint system, but full results appear in Appendix B. Table 1 shows our SPRL system compares favorably to previous work. We show three systems: trained on only SPRL, trained on arguments and SPRL, and trained on predicates, arguments, and SPRL. In all cases, we decode SPRL predictions assuming gold predicates and arguments. Our model matches or surpasses previous span and dependency results on SPR1, but lags slightly behind on span-based SPR2.
This confirms previous work that found SPR2 more difficult than SPR1, perhaps because SPR2 has less data and more complex predicates and arguments.
![4_image_0.png](4_image_0.png)

![4_image_1.png](4_image_1.png)

Table 2 shows performance using our strict pipeline scoring, in which we map proto-role property predictions to *incorrect* for false negative arguments. The drop in F1 from using gold labels is shown in parentheses. While we do worse in a pipeline, with the largest gap being 8 points for dependency-based SPR1, jointly learning predicates slightly improves strict F1 performance on SPRL in the dependency-based models, but degrades performance in the span-based models. Furthermore, SPR1 suffers a larger drop in the strict scoring regime than SPR2, perhaps because SPR2 models were already predicting many of the "harder" arguments incorrectly. How are SPRL errors related to mistakes earlier in the pipeline? SPRL performance was much lower for arguments that would have been missed earlier in the pipeline. (See Table 3.) Table 2 shows this effect: models with smaller drops from gold were *already* making errors on incorrect arguments, whereas models with larger drops were likely better at handling "difficult" examples. These difficult examples seem to correlate with annotation difficulty. We measure the performance of the system on annotation subsets based on the difference in Likert scores from the annotators. The larger the disagreement in Likert scores between annotators, the worse the model performance (Appendix B.1.2). To rule out the role of poor annotators, we removed those who had low inter-annotator agreement with others. However, this had almost no effect on F1, suggesting that it is the examples themselves that are challenging, and not the quality of the annotations. Perhaps these arguments are challenging for both tasks, or possibly the BERT
encoder learns a poor representation of them. Fortunately, this means that when arguments are correctly discovered, SPRL does a good job on them and that correcting errors may improve both tasks.
Additionally, since we follow previous work by collapsing proto-role annotations marked "inapplicable" into the 0 class, we investigate the effect of excluding "inapplicable" property annotations in Table 7 and find a consistent boost of at least 3 F1 points when they are excluded. This suggests that future work may benefit from handling applicability judgements differently, such as in Stengel-Eskin et al. (2020, 2021), who use a hurdle model in which a first classifier determines whether or not a property applies before making the property value judgement. Together, the effects of Likert disagreements and of the inapplicability of proto-role annotations additionally suggest that normalizing the different annotator responses, as in White et al. (2020), who use a mixed effects model, might lead to better outcomes in SPRL. See Appendix B.1 for a more detailed analysis of all results.
## 7 Discussion
Our end-to-end SPRL system demonstrates the efficacy of SPRL when combined with a full system. We are competitive with both span-based and dependency-based models and find that joint identification of predicates and arguments still produces a high-performing SPRL system. Future work should evaluate this setting, using both spanand dependency-based models and our proposed scoring method. Furthermore, our work points to the need for focused improvement on challenging arguments, which is harming both argument identification and SPRL. Do these errors show the limits of SPRL since annotators also get them wrong? Do we need better encoder training? Will downstream tasks that consume SPRL labels be robust to these errors? What is the feasibility of a reinforcement learning system that trains on the model's own output? These questions remain for future work.
## Limitations
Our analysis of the behavior of SPRL focused on intrinsic task scores. Higher SPRL scores suggest a better system. In practice, we do not yet understand how these scores affect downstream uses of SPRL
labels. Furthermore, SPRL datasets are relatively small and are English only. As we are limited to the labels in the existing datasets, we are uncertain about how our results would generalize to larger datasets, new domains, and other languages.
## Ethics Statement
When deploying a system such as ours on real text, e.g., news, one should carefully consider the implications of labeling real entities with certain protorole properties. For example, answering the question of whether or not an actor *instigated* some action could have serious ramifications in the real world. Care should be taken so that such cases might be, for example, flagged for human review.
## Acknowledgements
We thank Elias Stengel-Eskin, Benjamin Van Durme, Igor Malioutov, and Leslie Barrett for their helpful comments and feedback throughout conversations surrounding the project. We additionally acknowledge anonymous Bloomberg employees for assistance in reviewing the paper. Finally, we thank the ACL reviewers for their careful consideration and invaluable feedback.
## References
Ann Bies, Justin Mott, Colin Warner, and Seth Kulick.
2012. English web treebank.
Rexhina Blloshmi, Simone Conia, Rocco Tripodi, and Roberto Navigli. 2021. Generating senses and roles:
An end-to-end model for dependency- and spanbased semantic role labeling. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, pages 3786–3793. International Joint Conferences on Artificial Intelligence Organization. Main Track.
Xavier Carreras and Lluís Màrquez. 2004. Introduction to the CoNLL-2004 shared task: Semantic role labeling. In Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL2004) at HLT-NAACL 2004, pages 89–97, Boston, Massachusetts, USA. Association for Computational Linguistics.
Xavier Carreras and Lluís Màrquez. 2005. Introduction to the CoNLL-2005 shared task: Semantic role labeling. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL2005), pages 152–164, Ann Arbor, Michigan. Association for Computational Linguistics.
Simone Conia, Andrea Bacciu, and Roberto Navigli.
2021. Unifying cross-lingual semantic role labeling with heterogeneous linguistic resources. In *Proceedings of the 2021 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 338–
351, Online. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
David Dowty. 1991. Thematic Proto-Roles and Argument Selection. *Language*, 67(3):547–619. Publisher: Linguistic Society of America.
Daniel Gildea and Daniel Jurafsky. 2000. Automatic labeling of semantic roles. In *Proceedings of the 38th* Annual Meeting of the Association for Computational Linguistics, pages 512–520, Hong Kong. Association for Computational Linguistics.
Daniel Gildea and Martha Palmer. 2002. The necessity of parsing for predicate argument recognition.
In *Proceedings of the 40th Annual Meeting on Association for Computational Linguistics*, ACL '02, page 239–246, USA. Association for Computational Linguistics.
Jan Hajič, Massimiliano Ciaramita, Richard Johansson, Daisuke Kawahara, Maria Antònia Martí, Lluís Màrquez, Adam Meyers, Joakim Nivre, Sebastian Padó, Jan Štěpánek, Pavel Straňák, Mihai Surdeanu, Nianwen Xue, and Yi Zhang. 2009. The CoNLL-2009 shared task: Syntactic and semantic dependencies in multiple languages. In *Proceedings of* the Thirteenth Conference on Computational Natural Language Learning (CoNLL 2009): Shared Task, pages 1–18, Boulder, Colorado. Association for Computational Linguistics.
Luheng He, Kenton Lee, Omer Levy, and Luke Zettlemoyer. 2018. Jointly predicting predicates and arguments in neural semantic role labeling. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers),
pages 364–369, Melbourne, Australia. Association for Computational Linguistics.
Paul Kingsbury and Martha Palmer. 2002. From TreeBank to PropBank. In Proceedings of the Third International Conference on Language Resources and Evaluation (LREC'02), Las Palmas, Canary Islands
- Spain. European Language Resources Association
(ELRA).
Zuchao Li, Shexia He, Hai Zhao, Yiqing Zhang, Zhuosheng Zhang, Xi Zhou, and Xiang Zhou. 2019.
Dependency or span, end-to-end uniform semantic role labeling. In Proceedings of the ThirtyThird AAAI Conference on Artificial Intelligence and Thirty-First Innovative Applications of Artificial Intelligence Conference and Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI'19/IAAI'19/EAAI'19. AAAI Press.
Juri Opitz and Anette Frank. 2019. An argument-marker model for syntax-agnostic proto-role labeling. In *Proceedings of the Eighth Joint Conference on Lexical* and Computational Semantics (*SEM 2019), pages 224–234, Minneapolis, Minnesota. Association for Computational Linguistics.
Martha Palmer, Daniel Gildea, and Paul Kingsbury.
2005. The proposition bank: An annotated corpus of semantic roles. *Comput. Linguist.*, 31(1):71–106.
Sameer Pradhan, Kadri Hacioglu, Wayne Ward, James H. Martin, and Daniel Jurafsky. 2005. Semantic role chunking combining complementary syntactic views. In Proceedings of the Ninth Conference on Computational Natural Language Learning
(CoNLL-2005), pages 217–220, Ann Arbor, Michigan. Association for Computational Linguistics.
Sameer Pradhan, Edward Loper, Dmitriy Dligach, and Martha Palmer. 2007. SemEval-2007 task-17: English lexical sample, SRL and all words. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 87–92, Prague, Czech Republic. Association for Computational Linguistics.
Drew Reisinger, Rachel Rudinger, Francis Ferraro, Craig Harman, Kyle Rawlins, and Benjamin Van Durme. 2015a. Semantic Proto-Roles. *Transactions of the Association for Computational Linguistics*, 3:475–488.
Drew Reisinger, Rachel Rudinger, Francis Ferraro, Craig Harman, Kyle Rawlins, and Benjamin Van Durme. 2015b. Semantic proto-roles. *Transactions of the Association for Computational Linguistics*, 3:475–488.
Rachel Rudinger, Adam Teichert, Ryan Culkin, Sheng Zhang, and Benjamin Van Durme. 2018. NeuralDavidsonian semantic proto-role labeling. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 944–955, Brussels, Belgium. Association for Computational Linguistics.
Elias Stengel-Eskin, Kenton Murray, Sheng Zhang, Aaron Steven White, and Benjamin Van Durme. 2021.
Joint universal syntactic and semantic parsing. *Transactions of the Association for Computational Linguistics*, 9:756–773.
Elias Stengel-Eskin, Aaron Steven White, Sheng Zhang, and Benjamin Van Durme. 2020. Universal decompositional semantic parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8427–8439, Online. Association for Computational Linguistics.
Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguisticallyinformed self-attention for semantic role labeling.
In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 5027–5038, Brussels, Belgium. Association for Computational Linguistics.
Mihai Surdeanu, Sanda Harabagiu, John Williams, and Paul Aarseth. 2003. Using predicate-argument structures for information extraction. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 8–15, Sapporo, Japan.
Association for Computational Linguistics.
Mihai Surdeanu, Richard Johansson, Adam Meyers, Lluís Màrquez, and Joakim Nivre. 2008. The CoNLL
2008 shared task on joint parsing of syntactic and semantic dependencies. In *CoNLL 2008: Proceedings* of the Twelfth Conference on Computational Natural Language Learning, pages 159–177, Manchester, England. Coling 2008 Organizing Committee.
Adam Teichert, Adam Poliak, Benjamin Van Durme, and Matthew Gormley. 2017. Semantic proto-role labeling. *Proceedings of the AAAI Conference on* Artificial Intelligence, 31(1).
Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn
from context? probing for sentence structure in contextualized word representations. In *International* Conference on Learning Representations.
Aaron Steven White, Drew Reisinger, Keisuke Sakaguchi, Tim Vieira, Sheng Zhang, Rachel Rudinger, Kyle Rawlins, and Benjamin Van Durme. 2016. Universal decompositional semantics on Universal Dependencies. In *Proceedings of the 2016 Conference* on Empirical Methods in Natural Language Processing, pages 1713–1723, Austin, Texas. Association for Computational Linguistics.
Aaron Steven White, Elias Stengel-Eskin, Siddharth Vashishtha, Venkata Subrahmanyan Govindarajan, Dee Ann Reisinger, Tim Vieira, Keisuke Sakaguchi, Sheng Zhang, Francis Ferraro, Rachel Rudinger, Kyle Rawlins, and Benjamin Van Durme. 2020. The universal decompositional semantics dataset and decomp toolkit. In *Proceedings of the Twelfth Language Resources and Evaluation Conference*, pages 5698–5707, Marseille, France. European Language Resources Association.
Nianwen Xue and Martha Palmer. 2004. Calibrating features for semantic role labeling. In *Proceedings of* the 2004 Conference on Empirical Methods in Natural Language Processing, pages 88–94, Barcelona, Spain. Association for Computational Linguistics.
| Split | Metric | SPR1 | SPR2 |
|---|---|---|---|
| Train | Precision | 45.1 | 49.4 |
| Train | Recall | 100 | 100 |
| Train | F1 | 62.2 | 66.2 |
| Dev | Precision | 47.4 | 53.1 |
| Dev | Recall | 100 | 100 |
| Dev | F1 | 64.3 | 69.4 |
| Test | Precision | 45.1 | 54.2 |
| Test | Recall | 100 | 100 |
| Test | F1 | 62.1 | 70.3 |

Table 4: Predicate filtering results (precision, recall, F1) on each split of SPR1 and SPR2.
## A Training Details
We simplify the complex task of joint predicate-argument-proto-role learning, as the space of possible predicates, arguments, and proto-role labels is O(|R|n³) for a sentence of n tokens and a set of proto-role properties R. (There are O(n) possible predicates and O(n²) possible argument spans.) First, we made the decision not to train the model on its own output, i.e., we use an oracle to identify gold predicate and argument tokens so that non-predicate sequences and non-argument tokens are ignored in the loss step.
## A.1 Pre-Processing
We shift some of the complexity to the data processing step before any learning occurs by crafting sequences such that only one predicate is considered at a time: for example, the sentence "He stole my toy!" would be split into four separate data points:
He <SEP> He stole my toy!
stole <SEP> He stole my toy!
my <SEP> He stole my toy!
toy! <SEP> He stole my toy!
The model learns to focus on the first token as the candidate predicate of the sentence. For example, in the sequence stole <SEP> He stole my toy, the model must answer the questions: If stole is the predicate, what are the arguments of the sentence, and what are their proto-role properties?
We truncate sequences to a fixed maximum length of 50 and pad shorter sequences to the right.
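The construction of per-predicate sequences can be sketched as follows; the separator and padding tokens are illustrative placeholders.

```python
# Sketch of the preprocessing step: one input sequence per candidate
# predicate, with the candidate prepended before a separator, truncated to a
# fixed maximum length and padded to the right.
MAX_LEN = 50


def make_instances(tokens, sep="<SEP>", max_len=MAX_LEN):
    instances = []
    for i, candidate in enumerate(tokens):
        seq = [candidate, sep] + tokens
        seq = seq[:max_len] + ["<PAD>"] * max(0, max_len - len(seq))
        instances.append({"predicate_index": i, "sequence": seq})
    return instances


for inst in make_instances(["He", "stole", "my", "toy!"]):
    print(" ".join(t for t in inst["sequence"] if t != "<PAD>"))
```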
## A.2 Predicate Filtering
The number of training instances would be quite large if every token of every sentence was used as a predicate candidate, as above. We therefore apply a predicate filtering step in which we only select tokens that are labeled as verbs (i.e., anything with a POS tag beginning with VB) in the datasets. For every dataset and split, this initial predicate filtering step has a recall of 100. After filtering, the number of training instances in SPR1 is reduced to 8,999 from 83,789, and in SPR2, is reduced to 7,452 from 46,138. Model output for argument identification and SPRL on false positive predicates is ignored in the loss function and evaluations. Table 4 shows full results for the predicate filtering.
## A.3 Hyperparameters
Using the hyperparameters from previous work as a starting point, we fine-tuned the learning rate and batch size and then kept them fixed based on the highest validation macro-F1 for final experiments.
We report scores from a single run from each final experiment. We use a batch size of 8, run for 30 epochs with no early stopping, and choose scores based on the best validation macro-F1. Our learning rate is 0.00001. For each property, we apply loss weights equal to the inverse frequency of that property. Our model, which uses BERT-base, contains 109M trainable parameters, and took roughly 2-6 hours to train on a single GPU depending on the size of the dataset, whether or not we were predicting predicates, and whether or not we were predicting argument spans. We used the PyTorch Lightning7 framework to build and train our model.
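The inverse-frequency property weighting can be implemented as in the following PyTorch snippet; this is an illustrative sketch rather than our Lightning module, and the toy label matrix stands in for the training annotations.

```python
# Sketch of the per-property loss weighting: each binary proto-role property
# is weighted by the inverse frequency of its positive class.
import torch
import torch.nn as nn

# labels: (num_examples, num_properties) binary matrix of training annotations
labels = torch.tensor([[1, 0, 1], [1, 0, 0], [1, 1, 0], [0, 0, 0]]).float()
pos_freq = labels.mean(dim=0).clamp(min=1e-6)
pos_weight = 1.0 / pos_freq                  # inverse frequency per property

criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)
logits = torch.zeros(4, 3)                   # dummy model outputs
print(criterion(logits, labels))
```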
## B Full Results
The full proto-role property identification results for all model configurations using a linear classification head can be found arranged in a grid in Table 5. The grid shows the three training methods and the three scoring methods. For training, we have three columns indicating whether or not the model was trained to predict predicates, arguments, and proto-role properties. For training settings in which we do not train the model to predict predicates, note that we do not create sequences with incorrect predicates (ie, the model would never see the sequence He <SEP> He stole my toy!) and the model only sees instances with correct predicates.
7https://www.pytorchlightning.ai, Apache-2.0
| Train P | Train A | Train R | Test P | Test A | Test R | Dep. SPR1 macro | Dep. SPR1 micro | Dep. SPR2 macro | Dep. SPR2 micro | Span SPR1 macro | Span SPR1 micro | Span SPR2 macro | Span SPR2 micro |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ✗ | ✗ | ✓ | ✗ | ✗ | ✓ | 72.7 | 85.5 | 65.0 | 83.3 | 71.4 | 83.7 | 65.0 | 82.9 |
| ✗ | ✓ | ✓ | ✗ | ✗ | ✓ | 73.7 | 85.5 | 68.1 | 82.4 | 71.8 | 84.1 | 65.7 | 81.9 |
| ✗ | ✓ | ✓ | ✗ | ✓ | ✓^ | 72.9 | 84.4 | 67.6 | 81.3 | 64.0 | 85.0 | 60.0 | 80.3 |
| ✗ | ✓ | ✓ | ✗ | ✓ | ✓* | 62.2 | 77.5 | 64.1 | 79.0 | 59.4 | 78.8 | 57.1 | 74.8 |
| ✓ | ✓ | ✓ | ✗ | ✗ | ✓ | 74.0 | 85.3 | 64.7 | 83.8 | 73.0 | 84.3 | 65.2 | 81.6 |
| ✓ | ✓ | ✓ | ✗ | ✓ | ✓^ | 73.3 | 84.5 | 64.3 | 83.2 | 64.8 | 85.3 | 59.1 | 79.9 |
| ✓ | ✓ | ✓ | ✗ | ✓ | ✓* | 64.7 | 79.5 | 62.1 | 81.9 | 60.0 | 78.8 | 56.5 | 74.8 |
| ✓ | ✓ | ✓ | ✓ | ✓ | ✓^ | 72.9 | 83.9 | 64.3 | 83.2 | 64.2 | 84.9 | 58.9 | 79.8 |
| ✓ | ✓ | ✓ | ✓ | ✓ | ✓* | 63.1 | 78.0 | 62.0 | 81.8 | 59.0 | 77.6 | 56.4 | 74.7 |

Table 5: Full proto-role property results (macro/micro F1) for each training and evaluation configuration (P = predicates, A = arguments, R = proto-role properties).
| Train P | Train A | Train R | Dep. SPR1 Preds F1 | Dep. SPR1 Arg Heads P | Dep. SPR1 Arg Heads R | Dep. SPR1 Arg Heads F1 | Dep. SPR2 Preds F1 | Dep. SPR2 Arg Heads P | Dep. SPR2 Arg Heads R | Dep. SPR2 Arg Heads F1 | Span SPR1 Preds F1 | Span SPR1 Arg Spans P | Span SPR1 Arg Spans R | Span SPR1 Arg Spans F1 | Span SPR2 Preds F1 | Span SPR2 Arg Spans P | Span SPR2 Arg Spans R | Span SPR2 Arg Spans F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ✗ | ✓ | ✓ | - | 77.1 | 93.2 | 84.4 | - | 82.1 | 95.7 | 88.4 | - | 62.3 | 91.6 | 74.1 | - | 70.8 | 86.7 | 78 |
| ✓ | ✓ | ✓ | 94.8 | 73.5 | 95.2 | 83 | 83.4 | 61.9 | 97.8 | 75.8 | 95.2 | 65.7 | 91.2 | 76.4 | 92.4 | 74.1 | 87.6 | 80.3 |

Table 6: Predicate and argument identification results for each training configuration (P = predicates, A = arguments, R = proto-role properties).
For scoring, we show three different scoring methods: (1) gold scores, which assume correct predicates and arguments earlier in the pipeline, for direct comparison to previous work; (2) lenient scores, which assume 0 for all proto-role properties, treating SPRL as a "proto-role retrieval" task; and
(3) strict scores, which map proto-role properties to the wrong label if predicates and arguments are falsely predicted as 0 earlier in the pipeline. We do not modify the SPRL score for false positives in predicate and argument identification since this would change the set of arguments over which the system is evaluated. The corresponding results for predicate and argument identification can be found in Table 6.
## B.1 Evaluating On Subsets Of Data
To attempt to tease out the reasons for various errors in the model predictions, we take varying subsets of the data and evaluate separately on each subset. We report sizes of the different subsets we evaluate in Table 8.
## B.1.1 Arguments Predicted Correctly And Incorrectly
To further investigate the question of how errors earlier in the pipeline propagate later in the pipeline, we take a subset of arguments which the model predicted correctly and a subset of arguments which the model predicted incorrectly, and calculate the F1 scores for each subset. We report these scores in Table 3. We see large differences in the F1 scores between these subsets, suggesting that arguments that are difficult for the model to identify are also difficult for proto-role property classification.
An example of an argument that all configurations of our SPR2 models struggled with is italicized in the sentence below, with the predicate in bold:
I **like** *I Move CA - Los Angeles Movers*,
they moved me before, but this time they were awesome :)
None of the models were able to retrieve this argument correctly (neither the head, nor the span).
They all made mistakes on at least some proto-role property predictions: common mistakes among all configurations of the model included false negatives for *sentient*, false positives for *awareness*, and false positives for *change of location*.

![9_image_0.png](9_image_0.png)
Interestingly, only the span-based models predicted that the argument was not *sentient*, showing that the non-head tokens in the span confused the model. On the other hand, both of the dependency-based models understandably predicted that the head of the argument, *Movers*, changed location, while the span-based models did not make this mistake.
We notice that this sentence might have been difficult for annotators to judge, so we proceed with evaluations of subsets based on annotator judgements and agreement to tease out the association between examples that are difficult for annotators and examples that are difficult for the model.
## B.1.2 Differences In Likert Ratings
For SPR2, which is doubly-annotated, we hypothesized that we could locate examples that are difficult for the model to classify by the difference between the two annotators' Likert ratings in each property judgement. We construct several different subsets of data, which we refer to as LDi. LDi is the subset of property annotations in which the difference between Likert ratings between two annotators is exactly i. We show F1 scores for different combinations of these subsets in Figure 2, and provide the sizes of each subset in Table 8. We see that the score on the subset containing only property judgements with complete agreement between annotators is far higher than all other scores. As we add property judgements with larger and larger disagreement between annotators, the scores drop substantially.
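Building and scoring the LDi subsets amounts to grouping doubly-annotated judgements by the absolute Likert difference, as in the toy sketch below (sklearn is used only for illustration, and the values are invented).

```python
# Sketch of building the LD_i subsets: group doubly-annotated property
# judgements by the absolute difference between the two Likert ratings and
# evaluate F1 separately on each group.
from collections import defaultdict
from sklearn.metrics import f1_score

# each item: (likert_a, likert_b, gold_binary, predicted_binary) -- toy values
items = [(5, 5, 1, 1), (4, 2, 0, 1), (1, 1, 0, 0), (5, 1, 1, 0)]

subsets = defaultdict(list)
for a, b, gold, pred in items:
    subsets[abs(a - b)].append((gold, pred))

for diff, pairs in sorted(subsets.items()):
    gold, pred = zip(*pairs)
    print(f"LD{diff}: F1 = {f1_score(gold, pred, zero_division=0):.2f}")
```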
## B.1.3 Pairwise Inter-Annotator Agreement
In Section B.2, we show inter-annotator agreement κ scores averaged over each property. We also calculate κ pairwise for each annotator and report these scores in Table 10. We then investigate the extent to which including annotations by annotators with low pairwise agreement affects F1 by creating subsets of data which exclude annotations by annotators with an inter-annotator agreement below some cutoff. We report these scores in Figure 3. Surprisingly, we note that excluding annotators with low pairwise inter-annotator agreement has almost no effect on the F1 score, suggesting that annotator "skill" is less important than the difficulty of each example in SPRL F1.
## B.1.4 Applicability Judgements
Finally, we investigate the extent to which the applicability judgements correlate with difficulty of property prediction. For both SPR1 and SPR2, an
"applicable" judgement, indicating whether or not the proto-role property was applicable to the argument in the context of the sentence, was collected for each property in addition to the Likert judgements. As a reminder, annotations marked inapplicable were collapsed into the 0 class regardless of the Likert rating. Thus, in Table 7, we show F1 scores on the subset of annotations marked "applicable" by both annotators (or, in the case of SPR1, the single annotator) versus F1 scores on the entire
| Dataset | Model | Applicable | All |
|------------------------|------------|--------------|-------|
| SPR1 | Dependency | 88.9 | 85.5 |
| + Predicate prediction | 88.5 | 85.3 | |
| Span | 88.2 | 84.1 | |
| + Predicate prediction | 88.2 | 84.3 | |
| SPR2 | Dependency | 86.3 | 82.4 |
| + Predicate prediction | 86.9 | 83.8 | |
| Span | 86.6 | 81.9 | |
| + Predicate prediction | 86.0 | 81.6 | |
dataset. We see a consistent boost of at least 3 F1 points by only evaluating on applicable annotations.
| Subset | SPR1 | SPR2 |
|----------|--------|--------|
| A0 | 8,925 | 997 |
| A1 | 10,083 | 1,775 |
| A2 | n/a | 5,348 |
| LD0 | n/a | 4,611 |
| LD1 | n/a | 1,563 |
| LD2 | n/a | 830 |
| LD3 | n/a | 570 |
| LD4 | n/a | 546 |

Table 8: Sizes of each subset.
## B.2 Inter-Annotator Agreement
A possible limitation of the currently available SPR
data is relatively low average inter-annotator agreement. White et al. (2016) report an agreement of 0.617 using Spearman's rank correlation coefficient for SPR2. However, this agreement was measured over the Likert scores, which our model will not be predicting. We re-measured both the Likert data and the collapsed binary data using Cohen's kappa on a per-property basis. We see in Table 9 that when measuring agreement using Cohen's kappa, collapsing the Likert labels to {0, 1}
improves the agreement significantly, resulting in every property having κ ≥ 0.64.
We also calculated each annotator's Cohen's kappa score pairwise against every other annotator
(and averaged). We then experimented with scoring our models on a subset of the data in which only judgements by annotators with a certain inter-annotator agreement were kept. The inter-annotator agreement scores used in these experiments can be found in Table 10.
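A minimal sketch of this measurement is shown below; the record fields are illustrative stand-ins for however the doubly-annotated judgements are stored, and the binarization threshold shown in the usage comment is likewise illustrative:

```python
from collections import defaultdict
from sklearn.metrics import cohen_kappa_score

def per_property_kappa(records, binarize=None):
    """Cohen's kappa between the two annotators, computed separately per property.

    `binarize` optionally maps a Likert rating to {0, 1}; if None, kappa is
    computed directly on the Likert labels.
    """
    by_property = defaultdict(lambda: ([], []))
    for r in records:
        a, b = r["likert_a"], r["likert_b"]
        if binarize is not None:
            a, b = binarize(a), binarize(b)
        by_property[r["property"]][0].append(a)
        by_property[r["property"]][1].append(b)
    return {p: cohen_kappa_score(xs, ys) for p, (xs, ys) in by_property.items()}

# e.g., likert_kappas = per_property_kappa(records)
#       binary_kappas = per_property_kappa(records, binarize=lambda x: int(x >= 4))
```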
| Dataset | Property | Likert κ | Binary κ |
|-----------|----------------------------|------------|------------|
| 1&2 | instigation | 0.61 | 0.69 |
| 1&2 | volition | 0.77 | 0.86 |
| 1&2 | awareness | 0.82 | 0.88 |
| 1&2 | sentient | 0.82 | 0.88 |
| 1&2 | change of location | 0.59 | 0.71 |
| 1 | exists as physical | - | - |
| 1&2 | existed before | 0.74 | 0.79 |
| 1&2 | existed during | 0.79 | 0.86 |
| 1&2 | existed after | 0.68 | 0.76 |
| 1 | created | - | - |
| 1 | destroyed | - | - |
| 1&2 | change of possession | 0.66 | 0.80 |
| 1&2 | change of state | 0.59 | 0.66 |
| 1 | stationary | - | - |
| 1 | location of event | - | - |
| 1 | physical contact | - | - |
| 1&2 | was used | 0.59 | 0.66 |
| 1 | pred changed arg | - | - |
| 2 | was for benefit | 0.61 | 0.70 |
| 2 | partitive | 0.58 | 0.64 |
| 2 | change of state continuous | 0.65 | 0.67 |
| | Average | 0.68 | 0.75 |

Table 9: Per-property inter-annotator agreement measured with Cohen's kappa on the Likert and binarized annotations.
| Annotator ID | Likert κ | Binary κ | # Annotations |
|--------------|----------|----------|---------------|
| 0 | 0.43 | *0.42* | 14 |
| 1 | 0.72 | 0.81 | 5,418 |
| 2 | 0.56 | 0.74 | 28 |
| 3 | 0.70 | 0.76 | 1,932 |
| 7 | 0.62 | 0.73 | 9,184 |
| 8 | 0.67 | 0.78 | 14 |
| 10 | 0.75 | 0.81 | 1,078 |
| 11 | 0.61 | 0.78 | 42 |
| 13 | 0.69 | 0.73 | 14 |
| 15 | 0.71 | 0.80 | 7,224 |
| 16 | 0.68 | 0.73 | 2,814 |
| 20 | 0.64 | 0.76 | 3,640 |
| 25 | 0.70 | 0.75 | 3,248 |
| 26 | 0.70 | 0.78 | 19,250 |
| 29 | 0.70 | 0.81 | 7,504 |
| 30 | 0.65 | 0.78 | 1,652 |
| 32 | 0.65 | 0.74 | 8,708 |
| 35 | 0.68 | 0.74 | 1,204 |
| 37 | 0.60 | 0.68 | 1,092 |
| 40 | 0.81 | **0.85** | 14 |
| 43 | 0.67 | 0.75 | 14,854 |
| 45 | 0.67 | 0.77 | 126 |
| 46 | 0.65 | 0.71 | 1,498 |
| 48 | 0.69 | 0.78 | 1,358 |
| 50 | 0.63 | 0.72 | 140 |
| 51 | 0.70 | 0.80 | 308 |
| 56 | 0.71 | 0.79 | 6,496 |
| 62 | 0.68 | 0.71 | 3,850 |
| 64 | 0.68 | 0.76 | 518 |
| 65 | 0.71 | 0.81 | 1,512 |
| 66 | 0.69 | 0.76 | 4,186 |
| 68 | 0.72 | 0.80 | 588 |
| 69 | 0.75 | 0.79 | 14 |
| 70 | 0.65 | 0.76 | 4,746 |
| 71 | 0.70 | 0.77 | 4,144 |
| 73 | 0.67 | 0.67 | 2,744 |
| 74 | 0.51 | 0.50 | 546 |
| 75 | 0.66 | 0.80 | 4,942 |
| 76 | 0.67 | 0.77 | 4,228 |
| 78 | 0.70 | 0.79 | 896 |
| 81 | 0.68 | 0.75 | 854 |
| 87 | 0.69 | 0.77 | 23,002 |
| 92 | 0.69 | 0.75 | 6,580 |
| 93 | 0.67 | 0.76 | 4,746 |
| 94 | 0.73 | 0.82 | 3,682 |
| Average | 0.67 | **0.75** | |
Table 10: Pairwise inter-annotator agreement measured with Cohen's kappa. *Italics* show the lowest κ value.
Bold shows the highest κ value.
## ACL 2023 Responsible NLP Checklist

A. For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations (unnumbered, page 5)
✓ A2. Did you discuss any potential risks of your work?
Ethics (unnumbered, page 5)
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1 - Introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4 And Appendix A
✓ B1. Did you cite the creators of artifacts you used?
Section 4 (BERT) and Appendix A.3 (Pytorch Lightning)
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Appendix A.3

B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 3 - Data
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 3 - Data
## C ✓ **Did You Run Computational Experiments?** Section 5 - Experiments, Appendix A
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A.3
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix A.3
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Appendix A.3 - since we only report from a single run, we do not provide error bars around results
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix A.3

D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
vishnubhotla-etal-2023-improving | Improving Automatic Quotation Attribution in Literary Novels | https://aclanthology.org/2023.acl-short.64 | Current models for quotation attribution in literary novels assume varying levels of available information in their training and test data, which poses a challenge for in-the-wild inference. Here, we approach quotation attribution as a set of four interconnected sub-tasks: character identification, coreference resolution, quotation identification, and speaker attribution. We benchmark state-of-the-art models on each of these sub-tasks independently, using a large dataset of annotated coreferences and quotations in literary novels (the Project Dialogism Novel Corpus). We also train and evaluate models for the speaker attribution task in particular, showing that a simple sequential prediction model achieves accuracy scores on par with state-of-the-art models. | # Improving Automatic Quotation Attribution In Literary Novels
Krishnapriya Vishnubhotla1,4, Frank Rudzicz2,4,1, Graeme Hirst1 and Adam Hammond3
1Department of Computer Science, University of Toronto
2Faculty of Computer Science, Dalhousie University
3Department of English, University of Toronto
4Vector Institute for Artificial Intelligence
## Abstract
Current models for quotation attribution in literary novels assume varying levels of available information in their training and test data, which poses a challenge for in-the-wild inference. Here, we approach quotation attribution as a set of four interconnected sub-tasks:
character identification, coreference resolution, quotation identification, and speaker attribution. We benchmark state-of-the-art models on each of these sub-tasks independently, using a large dataset of annotated coreferences and quotations in literary novels (the Project Dialogism Novel Corpus). We also train and evaluate models for the speaker attribution task in particular, showing that a simple sequential prediction model achieves accuracy scores on par with state-of-the-art models1.
## 1 Introduction
We focus on the task of automatic *quotation attribution*, or *speaker identification*, in full-length English-language literary novels. The task involves attributing each quotation (dialogue) in the novel to the character who utters it. The task is complicated by several factors: characters in a novel are referred to by various names and aliases (*Elizabeth, Liz, Miss Bennet, her sister*); these aliases can change and be added over the course of the novel; and authors often employ differing patterns of dialogue in the text, whereby quotations are sometimes attached to the speaker explicitly via a speech verb, and at other times require keeping track of character turns over multiple paragraphs.
The development of automated methods has also been hindered by the paucity of annotated datasets on which models can be trained and evaluated.
Existing methods for quotation attribution fall into one of two groups: those that directly attribute the quotation to a named character entity and those that treat it as a two-step process in which quotations are first attached to the nearest relevant *mention* of a character and mentions are then resolved to a canonical character name via a coreference resolution model. We contend that most use-cases of a quotation attribution system involve resolving the speaker mention to one among a list of character entities. Thus, the usability of these systems is very much dependent on their ability to compile such a list of character entities and to resolve each attributed mention to an entity from this list.

1Code and data can be found at https://github.com/Priya22/speaker-attribution-acl2023
Here, we use the Project Dialogism Novel Corpus (Vishnubhotla et al., 2022), a large dataset of annotated coreferences and quotations in literary novels, to design and evaluate pipelines of quotation attribution. Our analysis shows that state-of-the-art models are still quite poor at character identification and coreference resolution in this domain, thus hindering functional quotation attribution.
## 2 Background And Prior Work
Elson and McKeown (2010) introduce the CQSA corpus, which contains quotations from excerpts from 4 novels and 7 short-stories that are annotated for the nearest speaker mention, which can be named (e.g., *Elizabeth*), or nominal (*her friend*).
On average, only 25% of the attributions in CQSA are to a named entity.
In contrast, He et al. (2013) link quotations directly to entities, and a list of characters and aliases is required for attribution. This list is generated with a named entity recognition (NER) model to obtain entity terms, which are then grouped together using Web resources such as Wikipedia.
The GutenTag package from Brooke et al. (2015)
contains modules for generating character lists and identifying speakers in literary texts. The former is based on the LitNER model (Brooke et al., 2016a), which bootstraps a classifier from a low-dimensional Brown clustering of named entities from Project Gutenberg texts. The speaker attribution model is a simple rule-based approach that identifies the nearest named entity.
Sims and Bamman (2020) annotate the first 2000 tokens of 100 novels from the LitBank dataset1.
Quotations are linked to a unique speaker from a predefined list of entities. LitBank also contains annotations for coreference for these tokens (Bamman et al., 2020). The BookNLP package2 from the same group contains pre-trained models for NER,
coreference resolution, and speaker attribution, although the latter is only at the mention-level.
Cuesta-Lazaro et al. (2022) attempt to reconcile the differences in pre-requisites and methodologies of prior attribution systems by proposing a modularization of the task into three sub-tasks: quotation identification, character identification, and speaker attribution. They evaluate baselines for each component, propose a new state-of-the-art method for speaker attribution, and quantify the relative importance of each module in an end-to-end pipeline.
Their speaker attribution module, however, considers only named mentions in the text as candidate speakers, leading to a lower performance on implicit and anaphoric quotations. Neither their dataset of 15 novels nor their model for speaker attribution have been made public, precluding comparison with our work below.
In our work, we follow this modular formulation, with some key differences: (a) we evaluate an additional sub-task of coreference resolution, allowing us to (b) test an attribution model that can work with both named and pronominal candidate mentions surrounding a quotation; and (c) we evaluate our models on a publicly available dataset.
## 3 Dataset: Pdnc
We briefly describe here the Project Dialogism Novel Corpus (Vishnubhotla et al., 2022). PDNC
consists of 22 full-length English novels, published in the 19th and 20th centuries, annotated with the following information:
Characters: A list of characters in the novel.
This includes characters who speak, are addressed to, or referred to multiple times in the novel. Each character is identified by a main name (e.g., *Elizabeth Bennet*), as well as a set of aliases (Liz, Lizzie, Eliza). We do not distinguish between the two, and treat each character entity as identifiable by a set of names (so that *Elizabeth Bennet, Liz, Lizzie, Eliza* forms one character entity).

1https://github.com/dbamman/litbank
2https://github.com/booknlp/booknlp
Quotations: Each uttered quotation in the novel is annotated with its speaker and addressee(s); with the referring expression, if any, that indicates who the speaker is; and with internal mentions, *i.e.,*
named or pronominal phrases within the quotation that refer to one or more character entities. The annotations in PDNC make it ideal for evaluating several aspects of quotation attribution in novels, including named entity recognition, coreference resolution, and speaker attribution.
## 4 Modularization Of The Task
Character identification: The goal of this subtask is to build a list of the unique character entities in a novel. Although NER models perform quite well at identifying spans of text that constitute a named entity (here, a character name), the task is complicated by the fact that characters can have multiple aliases in the text. Moreover, some characters may be introduced and referred to only by social titles (the policeman, the Grand Inquisitor, the little old man, the bystander).
Coreference resolution: The goals here are to identify text spans that refer to a character entity
(which we refer to as *mentions*) and to link each mention to the correct character entity or entities to which it refers. In addition to mentions that are personal pronouns such as *he, she,* and *them*, literary texts have an abundance of pronominal phrases that reflect relationships between characters, such as *her husband* and *their father*. Such phrases can also occur within quotations uttered by a character
(e.g., *my father*), requiring quotation attribution as a prerequisite for complete coreference resolution.
Quotation identification: Perhaps the most straightforward of our sub-tasks, here we identify all text spans in a novel that constitute dialogue, i.e., are uttered by a character entity or entities.
Speaker attribution: Finally, this sub-task links each identified quotation to a named character identity. While most models are designed to solve the more tractable and practical problem of linking quotations to the nearest relevant speaker mention, we subsume the mention–entity linking tasks under the coreference resolution module, equating the two tasks.
## 5 Models And Evaluation Metrics
We evaluate each of the modules of section 4 separately. In order not to confound the evaluation with cascading errors, at each step, we "correct" the outputs of the automated system from the previous step by using annotations from PDNC.
## 5.1 Character Identification
We evaluate two pipelines - GutenTag and BookNLP - on their ability to identify the set of characters in a novel, and potentially, the set of aliases for each character. In addition, we also test the NER system from the spaCy3 module as a proxy for the state-of-the-art in NER that is not trained explicitly for the literary domain.
Character recognition (CR): For each novel, we compute the proportion of annotated character entities that are identified as named entities of the category 'PERSON' (Doddington et al., 2004). We use a simple string-matching approach, where we try for either a direct match, or a unique match when common prefixes such as Mr. and Sir are removed. Thus, if a particular novel has N character entities annotated, the NER model outputs a list of K named 'PERSON' entities, and K′ of these entities are in turn matched with M out of the N
characters, the CR metric is calculated as M/N.
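The following is a simplified sketch of this matching procedure; the honorific list is an illustrative assumption, and the uniqueness condition on prefix-stripped matches is omitted for brevity:

```python
HONORIFICS = ("Mr. ", "Mrs. ", "Miss ", "Ms. ", "Sir ", "Lady ", "Dr. ")  # illustrative

def strip_honorific(name):
    for h in HONORIFICS:
        if name.startswith(h):
            return name[len(h):]
    return name

def character_recognition_score(annotated_characters, predicted_persons):
    """annotated_characters: list of alias sets; predicted_persons: NER 'PERSON' strings.

    Returns M / N, the fraction of annotated characters matched by some predicted entity.
    """
    matched = 0
    for aliases in annotated_characters:
        variants = set(aliases) | {strip_honorific(a) for a in aliases}
        if any(p in variants or strip_honorific(p) in variants for p in predicted_persons):
            matched += 1
    return matched / len(annotated_characters)
```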
Character clustering: We use the clustering evaluation metrics of *homogeneity* (C.Hom), *completeness* (C.Comp), and their harmonic mean, *v-score*, to evaluate named entity clusters. Homogeneity (between 0 and 1) is the fraction of named clusters that link to the same character entity; completeness is the number of homogeneous clusters a single entity is distributed over (ideal value of 1).
As an example, consider the case where we have three annotated characters for a novel: Elizabeth Bennet, *Mary Bennet*, and *The Queen*. The set of annotated aliases for the characters are {Elizabeth Bennet, Eliza, Lizzie, Liz}, *{Mary Bennet,*
Mary}, and *{The Queen}*. Say model M1 outputs the following entity clusters: *{Elizabeth Bennet,*
Eliza}, *{Liz, Lizzie}* and *{Mary Bennet, Mary}*;
model M2 outputs {Elizabeth Bennet, Mary Bennet, Eliza, Mary}, *{Liz, Lizzie}*. Each model has recognized two out of the three characters in our list; this evaluates to a CR score of 2/3. Each of the three clusters from model M1 refers solely to one character entity, resulting in a *homogeneity* score of 1.0. However, these three clusters are formed for only two unique character entities, resulting in a *completeness* score of 1.5 (*v-score* 0.6). Analogously, model M2 has a homogeneity score of 0.5.

3https://explosion.ai/blog/spacy-v3
## 5.2 Coreference Resolution
We consider two pipelines for coreference resolution: BookNLP (based on Ju et al. (2018)) and spaCy (based on Dobrovolskii (2021)). Given a text, these neural coreference resolution models output a set of clusters, each comprising a set of coreferent mention spans from the input.
Evaluating this module requires annotations that link each mention span in a novel to the character entity referred to. PDNC, unfortunately, contains these mention annotations only for text spans within quotations. We therefore evaluate coreference resolution only on a subset of the mention spans in a novel, extracted as follows: We first identify the set of mention clusters from our models that can be resolved to an annotated character entity, using the character lists from PDNC and the string-matching approach described above. We then prune this to only include those mention spans that are annotated in the PDNC dataset, i.e., mention spans that occur within quotations, and evaluate the accuracy of the resolution.
Mention clustering (M-Clus): We compute the fraction of mention clusters that can be matched to a *unique* (Uniq) annotated character entity rather than to multiple (Mult) or no (None) entities.
Mention resolution (M-Res): For those mention spans within PDNC that are identified by the model and are assigned to a cluster that can be uniquely matched to a character entity (\# Eval), we compute the accuracy of the linking (Acc.).
## 5.3 Quotation Identification
Most models, rule-based or neural, can identify quotation marks and thus quotations. We evaluate how many of such quoted text instances actually constitute *dialogue*, in that they are uttered by one or more characters. Our gold standard is the set of quotations that have been annotated in PDNC,
which includes quotations uttered by multiple characters and by unnamed characters such as *a crowd*.
## 5.4 Speaker Attribution
The speaker-attribution part of BookNLP's pipeline is a BERT-based model that uses contextual and positional information to score the BERT embedding for the quotation span against the embeddings of mention spans that occur within a 50-word context window around the quotation; the highest-scoring mention is selected as the speaker. We supplement this approach by limiting the set of candidates to resolved mention spans from the coreference resolution step, thereby directly performing quotationto-entity linking. As we see from our results, this method, which we refer to as BookNLP+, greatly improves the performance of the speaker attribution model by eliminating spurious candidate spans.
We also evaluate a *sequential prediction model* that predicts the speaker of a quotation simply by looking at the sequence of speakers and mentions that occur in some window around the quotation.
We implement this as a one-layer RNN that is fed a sequence of tokens representing the five characters mentioned most recently prior to the quotation text, one character mention that occurs right after, and, optionally, the set of characters mentioned within the quotation.
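A minimal sketch of such a model is given below; the embedding and hidden sizes, padding scheme, and use of a vanilla RNN cell are illustrative choices rather than our exact configuration:

```python
import torch.nn as nn

class SpeakerRNN(nn.Module):
    """Predict the speaker from a short sequence of character-mention tokens."""

    def __init__(self, num_characters, embed_dim=64, hidden_dim=128):
        super().__init__()
        # one token id per character entity, plus id 0 reserved for padding
        self.embed = nn.Embedding(num_characters + 1, embed_dim, padding_idx=0)
        self.rnn = nn.RNN(embed_dim, hidden_dim, num_layers=1, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_characters)

    def forward(self, mention_ids):
        # mention_ids: (batch, seq_len): the five characters mentioned before the
        # quotation, the one mentioned after, and optionally those mentioned inside it
        embedded = self.embed(mention_ids)
        _, hidden = self.rnn(embedded)
        return self.out(hidden[-1])  # (batch, num_characters) speaker logits
```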
## 6 Experimental Setup
We evaluate the models for character identification, coreference resolution, and quotation identification on the entire set of 22 novels in PDNC, since we are neither training nor fine-tuning these on this dataset. For the speaker attribution models, we define the training setup below.
We curate the set of mention candidates for each novel in the following manner: the mention clusters generated by BookNLP are used to extract the set of mention spans that could be successfully resolved to a character entity from the annotated PDNC
character lists for each novel. We append to this set the annotated mention spans (within quotations)
from PDNC, as well as explicit mention spans —
that is, text spans that directly match a named alias from the character list. Overlaps between the three sets are resolved with a priority ranking, whereby PDNC annotations are considered to be more accurate than explicit name matches, which in turn take precedence over the automated coreference resolution model.
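A sketch of this priority-based merge is shown below, assuming each source is a mapping from a mention span (e.g., character offsets) to a character entity; the data structures are illustrative:

```python
def merge_mention_sources(coref_mentions, explicit_mentions, pdnc_mentions):
    """Merge span -> entity maps; higher-priority sources overwrite lower-priority ones."""
    merged = {}
    # apply the lowest-priority source first so that later updates take precedence
    for source in (coref_mentions, explicit_mentions, pdnc_mentions):
        merged.update(source)
    return merged
```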
We test with 5-fold cross-validation in two ways:
splitting the annotated quotations in each novel 80/20 and splitting the set of entire novels 80/20.
## 7 Results
From Table 1, we see that the neural NER models of spaCy and BookNLP are better at recognizing character names than GutenTag's heuristic system (0.81 and 0.85 vs 0.60). However, the strengths of GutenTag's simpler Brown-clustering–
based NER system are evident when looking at
| Model | CR | C.Hom | C.Comp | v-score |
|----------|------|---------|----------|-----------|
| spaCy | 0.81 | 0.16 | 1.02 | 0.27 |
| GutenTag | 0.60 | 0.98 | 1.33 | 1.12 |
| BookNLP | 0.85 | 0.86 | 1.18 | 0.99 |
Table 1: Character identification: Average scores across all the novels in the dataset. Column headings are defined in the text. Scores for each individual novel are reported in Appendix B.
| Model | # Clus | M-Clus: Uniq | M-Clus: Mult | M-Clus: None | M-Res: # Eval | M-Res: Acc. |
|---------|--------|--------------|--------------|--------------|---------------|-------------|
| spaCy | 1503.1 | 0.093 | 0.061 | 0.846 | 499.0 | 0.746 |
| BookNLP | 1662.8 | 0.043 | 0.003 | 0.953 | 1126.6 | 0.774 |
Table 2: Coreference resolution: All scores are averaged over the 22 novels in PDNC. Column headings are defined in the text.
the homogeneity; when two named entities are assigned as aliases of each other, it is almost always correct. This shows the advantage of documentlevel named entity clustering as opposed to local span-level mention clustering for character entity recognition. The cluster quality metric, on the other hand, tells us that GutenTag still tends to be conservative with its clustering compared to BookNLP,
which nonetheless is a good strategy for the literary domain, where characters often share surnames.
Performance of these models on the coreference resolution task is significantly lower (Table 2). A majority of the mention clusters from both BookNLP and spaCy's coreference resolution modules end up as unresolved clusters, with no containing named identifier that could be linked to a PDNC character entity. However, when we evaluate mention-to-entity linking on the subset of clusters that can be resolved, both systems achieve accuracy scores of close to 0.78, although spaCy is able to resolve far fewer mentions (499 vs 1127).
The importance of the character identification and coreference resolution tasks can be quantified by looking at the performance of the speaker attribution models (Table 3). The end-to-end pretrained BookNLP pipeline, when evaluated on the set of PDNC quotations (which were identified with accuracy of 0.94), achieves an accuracy of 0.42. When we restrict the set of candidate mentions for each quotation to only those spans that can be resolved to a unique character entity, the attribution accuracy increases to 0.61. However, the RNN model still beats this performance with an accuracy of 0.72 on the random data split. When BookNLP's contextual model is trained on data from PDNC, its accuracy improves to 0.78. These scores drop to 0.63 and 0.68 for the entire-novel split, where we have the disadvantage of being restricted only to patterns of mention sequences, and not speakers.

| Model | Quotations | Novels |
|--------------------|------|------|
| BookNLP-OG | 0.40 | 0.40 |
| BookNLP+ (LitBank) | 0.62 | 0.61 |
| Seq-RNN | 0.72 | 0.64 |
| BookNLP+ (PDNC) | 0.78 | 0.68 |

Table 3: Speaker attribution accuracy under 5-fold cross-validation, splitting by quotations within each novel or by entire novels.
## 8 Analysis
We briefly go over some qualitative analyses of the errors made by models in the different subtasks, which serves to highlight the challenges presented by literary text and opportunities for future research.
Character Identification and Coreference Resolution: We manually examine the mention clusters identified by our coreference resolution modules that could not be matched to a unique character entity as annotated in PDNC.
We find that, by far, the most common error is conflating characters with the same surname or family name within a novel. For example, several of the women characters in these novels are often referred to by the names of their husbands or fathers, prefixed with a honorific such as *Mrs.* or Miss. Thus *Mrs. Archer* refers to *May Welland* in The Age of Innocence and *Miss Woodhouse* refers to Emma Woodhouse in *Emma*. However, a surname without a title, such as Archer or *Woodhouse*,
generally refers to the corresponding male character. This results in the formation of mention clusters that take the spans *Miss Woodhouse* and Woodhouse to be coreferent, despite being different character entities. We see similar issues with father–son character pairs, such as *George Emerson* and Mr. Emerson in *A Room With A View*, and with character pairs that are siblings.
Speaker Attribution: We first quantify the proportion of quotations attributed to a mention cluster that cannot be resolved to a named character entity with the end-to-end application of the BookNLP
| Quotations | Novels | | | |
|--------------------|----------|------|------|------|
| Model | Exp. | Rest | Exp. | Rest |
| BookNLP-OG | 0.64 | 0.28 | 0.63 | 0.28 |
| BookNLP+ (LitBank) | 0.93 | 0.47 | 0.95 | 0.43 |
| Seq-RNN | 0.85 | 0.65 | 0.76 | 0.57 |
| BookNLP+ (PDNC) | 0.98 | 0.70 | 0.97 | 0.53 |
## Pipeline.
On average, 47.7% of identified quotations are assigned to an unresolved mention cluster as the speaker. The range of this value varies from as low as 12.5% (*The Invisible Man*) to as high as 78.7% (*Northanger Abbey*). A majority of these unresolved attributions occur with implicit and anaphoric quotations (76.2%), where the speaker is not explicitly indicated by a referring expression such as *Elizabeth said*, as opposed to explicit quotations (23.8%).
In Table 4, we break down the performance of the speaker attribution models by quotation type.
We see that even our local context–based RNN
model is able to identify the speaker of explicit quotations with a relatively high accuracy, and that the speaker for non-explicit quotations can also generally be modeled using the sequence of 5–6 characters mentioned in the vicinity of the quotation. The transformer-based models are of course able to use this local context more effectively by making use of linguistic cues and non-linear patterns of mentions and speakers in the surrounding text. Still, our best performing model achieves an accuracy of only 0.53 on implicit and anaphoric quotations when applied to novels unseen in the training set (the Novels split).
## 9 Conclusions And Future Work
In this work, we quantitatively evaluated the key components of a functional quotation attribution system. We showed that the initial task of recognizing characters and their aliases in a novel remains quite a challenge, but doing so greatly improves the performance of speaker attribution by limiting the set of candidate speakers. However, with existing coreference resolution systems, a large portion of mention clusters (around 90%) remain unresolved, so this remains a problem for new research.
## Limitations
There is much variation in literary writing and narrative styles, and our work here deals with a small, curated subset of this domain. The novels we analyze are all in the English language, and were published between the early 19th and early 20th centuries. The authors and novels themselves are drawn from what is considered to be the established literary canon, and are not necessarily representative of all the works of that era, let alone literary works of other eras. The texts we analyze are largely uniform in narrative style. We limit ourselves to only those quotations that are explicitly indicated as such in the text by quotation marks, thereby eliminating more-complex styles such as free indirect discourse (Brooke et al., 2016b) and stream-of-consciousness novels. We do not deal with nuances such as letters and diary entries nor quotations within quotations. The models we analyze for named entity recognition and coreference resolution use a fixed, binary formulation of the gender information conveyed by pronominal terms.
Though the development of fairer, more representative models is constrained by current datasets, we note that there is encouraging progress being made in this area (Bamman et al., 2020; Yoder et al.,
2021).
## References
David Bamman, Olivia Lewke, and Anya Mansoor.
2020. An annotated dataset of coreference in English literature. In *Proceedings of the 12th Language* Resources and Evaluation Conference, pages 44–54.
Julian Brooke, Adam Hammond, and Timothy Baldwin.
2016a. Bootstrapped text-level named entity recognition for literature. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 344–350.
Julian Brooke, Adam Hammond, and Graeme Hirst.
2015. GutenTag: an NLP-driven tool for digital humanities research in the Project Gutenberg corpus.
In *Proceedings of the Fourth Workshop on Computational Linguistics for Literature*, pages 42–47.
Julian Brooke, Adam Hammond, and Graeme Hirst.
2016b. Using models of lexical style to quantify free indirect discourse in modernist fiction. Digital Scholarship in the Humanities, 32:234–250.
Carolina Cuesta-Lazaro, Animesh Prasad, and Trevor Wood. 2022. What does the sea say to the shore?
A BERT based DST style approach for speaker to dialogue attribution in novels. In Proceedings of the
60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5820–5829.
Vladimir Dobrovolskii. 2021. Word-level coreference resolution. In *Proceedings of the 2021 Conference* on Empirical Methods in Natural Language Processing, pages 7670–7675, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
George R Doddington, Alexis Mitchell, Mark A Przybocki, Lance A Ramshaw, Stephanie M Strassel, and Ralph M Weischedel. 2004. The Automatic Content Extraction (ACE) Program ––– tasks, data, and evaluation. In *Language Resources and Evaluation* Conference, volume 2, pages 837–840. Lisbon.
David K Elson and Kathleen R McKeown. 2010. Automatic attribution of quoted speech in literary narrative. In Twenty-Fourth AAAI Conference on Artificial Intelligence.
Hua He, Denilson Barbosa, and Grzegorz Kondrak.
2013. Identification of speakers in novels. In *Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long* Papers), pages 1312–1320.
Meizhi Ju, Makoto Miwa, and Sophia Ananiadou. 2018.
A neural layered model for nested named entity recognition. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1446–1459, New Orleans, Louisiana. Association for Computational Linguistics.
Matthew Sims and David Bamman. 2020. Measuring information propagation in literary social networks.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing (EMNLP),
pages 642–652.
Krishnapriya Vishnubhotla, Adam Hammond, and Graeme Hirst. 2022. The project dialogism novel corpus: A dataset for quotation attribution in literary texts. In *Proceedings of the Thirteenth Language Resources and Evaluation Conference*, pages 5838–5848, Marseille, France. European Language Resources Association.
Michael Yoder, Sopan Khosla, Qinlan Shen, Aakanksha Naik, Huiming Jin, Hariharan Muralidharan, and Carolyn Rosé. 2021. FanfictionNLP: A text processing pipeline for fanfiction. In *Proceedings of the Third* Workshop on Narrative Understanding, pages 13–23, Virtual. Association for Computational Linguistics.
## A Implementation Details
The BookNLP pipeline is available to use as a Python package, as is spaCy, with pretrained models for coreference resolution and speaker attribution. For the former, these models are trained on the LitBank corpus, which is a dataset from the literary domain. We use these pretrained models to evaluate performance on the character identification and coreference resolution tasks. GutenTag can be run either via a Web interface or a command-line executable (requiring Python 2). It was designed to interface with texts from the Project Gutenberg corpus. Some of the novels in PDNC were not found in GutenTag's predefined database of texts, so we exclude these when reporting average performance metrics.
## B Results By Novel
Tables 5 and 6 show for each novel in PDNC the per-model results for character identification that are summarized in Table 1.
| BookNLP | GutenTag | | | | | | | | | | |
|----------------------------------|------------|------|--------|-------|--------|---------|------|--------|-------|--------|---------|
| Novel | # Chars | CR | # Clus | C.Hom | C.Comp | v-score | CR | # Clus | C.Hom | C.Comp | v-score |
| A Room With A View | 63 | 0.83 | 60 | 0.95 | 1.19 | 1.06 | 0.48 | 35 | 1.00 | 1.17 | 1.08 |
| The Age of Innocence | 55 | 0.84 | 48 | 0.81 | 1.26 | 0.99 | 0.64 | 49 | 1.00 | 1.40 | 1.17 |
| Alice's Adventures in Wonderland | 51 | 0.67 | 34 | 0.97 | 1.03 | 1.00 | 0.25 | 14 | 1.00 | 1.08 | 1.04 |
| Anne of Green Gables | 113 | 0.87 | 102 | 0.92 | 1.08 | 0.99 | 0.19 | 25 | 1.00 | 1.14 | 1.06 |
| Daisy Miller | 10 | 1.00 | 13 | 1.00 | 1.30 | 1.13 | 0.80 | 12 | 1.00 | 1.50 | 1.20 |
| Emma | 18 | 0.89 | 17 | 0.71 | 1.09 | 0.86 | 0.89 | 27 | 1.00 | 1.69 | 1.26 |
| A Handful of Dust | 104 | 0.82 | 94 | 0.89 | 1.15 | 1.01 | − | − | − | − | − |
| Howards End | 55 | 0.95 | 64 | 0.89 | 1.27 | 1.05 | 0.49 | 33 | 0.97 | 1.23 | 1.08 |
| Night and Day | 50 | 0.94 | 53 | 0.77 | 1.17 | 0.93 | 0.62 | 40 | 0.97 | 1.30 | 1.11 |
| Northanger Abbey | 20 | 0.90 | 12 | 0.75 | 1.00 | 0.86 | 0.85 | 23 | 0.96 | 1.29 | 1.10 |
| Persuasion | 35 | 0.86 | 29 | 0.79 | 1.28 | 0.98 | 0.77 | 28 | 0.96 | 1.08 | 1.02 |
| Pride and Prejudice | 74 | 0.81 | 62 | 0.85 | 1.10 | 0.96 | 0.35 | 30 | 0.90 | 1.35 | 1.08 |
| Sense and Sensibility | 24 | 0.83 | 25 | 0.56 | 1.17 | 0.76 | 0.79 | 26 | 0.96 | 1.39 | 1.14 |
| The Sign of the Four | 35 | 0.94 | 32 | 0.72 | 1.05 | 0.85 | 0.60 | 28 | 1.00 | 1.33 | 1.14 |
| The Awakening | 22 | 0.82 | 17 | 0.88 | 1.07 | 0.97 | 0.77 | 21 | 0.95 | 1.25 | 1.08 |
| The Gambler | 27 | 0.70 | 22 | 0.91 | 1.18 | 1.03 | 0.59 | 22 | 1.00 | 1.38 | 1.16 |
| The Invisible Man | 31 | 0.94 | 40 | 0.95 | 1.36 | 1.12 | 0.61 | 32 | 1.00 | 1.68 | 1.25 |
| The Man Who Was Thursday | 30 | 0.80 | 35 | 0.97 | 1.55 | 1.19 | 0.53 | 23 | 1.00 | 1.44 | 1.18 |
| The Mysterious Affair at Styles | 30 | 0.80 | 25 | 0.88 | 1.05 | 0.96 | 0.70 | 28 | 0.96 | 1.35 | 1.12 |
| The Picture of Dorian Gray | 43 | 0.88 | 43 | 0.98 | 1.14 | 1.05 | 0.56 | 27 | 1.00 | 1.12 | 1.06 |
| The Sport of the Gods | 37 | 0.81 | 34 | 0.94 | 1.23 | 1.07 | 0.54 | 28 | 0.96 | 1.50 | 1.17 |
| The Sun Also Rises | 51 | 0.86 | 51 | 0.96 | 1.23 | 1.08 | − | − | − | − | − |
| Mean | 44.5 | 0.85 | 41.45 | 0.86 | 1.18 | 0.99 | 0.60 | 27.55 | 0.98 | 1.33 | 1.12 |
| Novel | # Chars | CR | # Clus | C.Hom | C.Comp | v-score |
|----------------------------------|-----------|------|----------|---------|----------|-----------|
| A Room With A View | 63 | 0.78 | 64 | 0.33 | 1.24 | 0.52 |
| The Age of Innocence | 55 | 0.85 | 90 | 0.04 | 1.00 | 0.09 |
| Alice's Adventures in Wonderland | 51 | 0.80 | 44 | 0.39 | 1.00 | 0.56 |
| Anne of Green Gables | 113 | 0.69 | 98 | 0.24 | 1.04 | 0.40 |
| Daisy Miller | 10 | 0.90 | 3 | 0.00 | 0.00 | 0.00 |
| Emma | 18 | 0.89 | 14 | 0.07 | 1.00 | 0.13 |
| A Handful of Dust | 104 | 0.71 | 85 | 0.26 | 1.00 | 0.41 |
| Howards End | 55 | 0.84 | 72 | 0.18 | 1.08 | 0.31 |
| Night and Day | 50 | 0.88 | 52 | 0.15 | 1.00 | 0.27 |
| Northanger Abbey | 20 | 0.90 | 15 | 0.07 | 1.00 | 0.12 |
| Persuasion | 35 | 0.89 | 36 | 0.06 | 1.00 | 0.11 |
| Pride and Prejudice | 74 | 0.68 | 78 | 0.17 | 1.00 | 0.29 |
| Sense and Sensibility | 24 | 0.83 | 21 | 0.10 | 1.00 | 0.17 |
| The Sign of the Four | 35 | 0.80 | 40 | 0.05 | 1.00 | 0.10 |
| The Awakening | 22 | 0.86 | 24 | 0.12 | 1.00 | 0.22 |
| The Gambler | 27 | 0.74 | 18 | 0.22 | 1.00 | 0.36 |
| The Invisible Man | 31 | 0.84 | 37 | 0.22 | 1.00 | 0.36 |
| The Man Who Was Thursday | 30 | 0.73 | 26 | 0.19 | 1.00 | 0.32 |
| The Mysterious Affair at Styles | 30 | 0.87 | 29 | 0.10 | 1.00 | 0.19 |
| The Picture of Dorian Gray | 43 | 0.86 | 32 | 0.19 | 1.00 | 0.32 |
| The Sport of the Gods | 37 | 0.81 | 43 | 0.12 | 1.00 | 0.21 |
| The Sun Also Rises | 51 | 0.82 | 56 | 0.32 | 1.12 | 0.50 |
| Mean | 44.5 | 0.81 | 44.40 | 0.16 | 1.02 | 0.27 |
## ACL 2023 Responsible NLP Checklist

A. For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations section (9)
✗ A2. Did you discuss any potential risks of your work?
The work is in the domain of literary texts and does not apply to any societal technologies that work with or interact with people.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 6, Appendix
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
6
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
6, Appendix

D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
subramanian-etal-2023-modular | Modular Visual Question Answering via Code Generation | https://aclanthology.org/2023.acl-short.65 | We present a framework that formulates visual question answering as modular code generation. In contrast to prior work on modular approaches to VQA, our approach requires no additional training and relies on pre-trained language models (LMs), visual models pre-trained on image-caption pairs, and fifty VQA examples used for in-context learning. The generated Python programs invoke and compose the outputs of the visual models using arithmetic and conditional logic. Our approach improves accuracy on the COVR dataset by at least 3{\%} and on the GQA dataset by 2{\%} compared to the few-shot baseline that does not employ code generation. | # Modular Visual Question Answering Via Code Generation
Sanjay Subramanian1 Medhini Narasimhan1 Kushal Khangaonkar1 Kevin Yang1 Arsha Nagrani2 Cordelia Schmid2 Andy Zeng2 Trevor Darrell1 Dan Klein1
1UC Berkeley 2Google Research
{sanjayss,medhini,kushaltk,yangk,trevordarrell,klein}@berkeley.edu, {anagrani,cordelias,andyzeng}@google.com
## Abstract
We present a framework that formulates visual question answering as modular code generation. In contrast to prior work on modular approaches to VQA, our approach requires no additional training and relies on pre-trained language models (LMs), visual models pre-trained on image-caption pairs, and fifty VQA examples used for in-context learning. The generated Python programs invoke and compose the outputs of the visual models using arithmetic and conditional logic.
Our approach improves accuracy on the COVR
dataset by at least 3% and on the GQA dataset by 2% compared to the few-shot baseline that does not employ code generation.
## 1 Introduction
The scope of reasoning needed for visual question answering (VQA) is vast, and demands the synthesis of many skills - from grounding language to pixels (Goyal et al., 2017; Radford et al., 2021; Zhai et al., 2022) and spatial reasoning (Hudson and Manning, 2019) to commonsense and knowledge-based reasoning (Marino et al., 2019). Consider the question *"Is the carriage to the right of a horse?"*.
To consistently answer such questions correctly, a system must recognize that the question is the conjunction of two subquestions: *"Is there a horse?"*
and *"Is the carriage to the right of the horse?"*
Scaling the typical finetuning paradigm to all possible combinations of reasoning skills is prohibitively expensive in annotation cost and makes it difficult to add skills to an already-trained system.
Modular approaches, on the other hand - from classic methods (Krishnamurthy and Kollar, 2013),
to differentiable neural module networks (NMNs)
(Andreas et al., 2016; Hu et al., 2017; Saqur and Narasimhan, 2020) - offer a potential route to leverage and scale to the compositional nature of visual reasoning as a means to generalize: i.e., *infinite use of finite means*. However, the modules of an NMN must still be trained jointly on a large dataset, and are also restricted in that they (i) require a parser, which must be modified if modules are added or removed from the system, and (ii)
require retraining if a module is replaced.
In this work, we investigate an alternative class of modular VQA approaches, whereby building on the recent advent of highly capable out-of-the-box language models (LMs) (Chen et al., 2021; Ouyang et al., 2022) and visual language models (VLMs)
(Li et al., 2022), we develop systems that formulate VQA as a program synthesis problem. Specifically, our method CodeVQA, illustrated in Figure 1, uses code-writing LMs to take questions as input, and outputs code to (i) orchestrate a series of visual primitive APIs that wrap around VLMs to probe the image for specific pieces of visual information (e.g., captions, pixel locations of entities, or image-text similarity scores), and (ii) reason about that information with the full expression of Python code (e.g. arithmetic, logic structures, feedback loops, etc.) to arrive at an answer. From a practical perspective, the modularity of CodeVQA combined with the few-shot prompting capabilities of LMs enable it to adapt to a broad range of desired VQA
label distributions without additional model training, and benefits from replacing individual modules with improved versions as they become available.
We evaluate CodeVQA in the few-shot VQA setting, which has seen a great deal of recent work
(Alayrac et al., 2022; Jin et al., 2021; Yang et al.,
2021; Tiong et al., 2022). Our method outperforms previous approaches by at least 3% on the COVR
dataset (Bogin et al., 2021), which requires reasoning over multiple images, and by 2% on the GQA dataset (Hudson and Manning, 2019). Our results suggest that the benefits of modularity with recent off-the-shelf models can be realized in VQA
without additional model training.1 1Our code and annotated programs will be available at https://github.com/sanjayss34/codevqa.
Figure 1: Overview of CodeVQA. A code-writing LM generates a Python program from the question, and the program composes visual primitives over the image(s) to produce an answer.
## 2 Related Work
Several recent approaches for reasoning tasks consist of an LM that writes programs and an interpreter for these programs. Liang et al. (2022) applies this approach to robotics. Cheng et al. (2023)
introduces a framework for reasoning jointly over tables, text, and images, where the images are represented by image captions. Subramanian et al.
(2022) used a syntactic parser and hard-coded rules rather than an LM to aggregate outputs from CLIP
(Radford et al., 2021) for zero-shot referring expression comprehension; their finding that CLIP is not useful for spatial keywords motivates our code generation approach to spatial reasoning.
Concurrent with our work, other papers have introduced similar frameworks for multi-hop VQA
(Gupta and Kembhavi, 2022; Surís et al., 2023).
These papers conflate the benefit of program synthesis with the benefits of the LM, in-context examples, and vision models used as primitives. By contrast, we analyze the effect of program synthesis by comparing CodeVQA against a strong LM-based few-shot baseline using the same in-context example selection method. Moreover, while these frameworks rely on supervised VQA or object detection models, we show that we can obtain comparable performance (on the GQA dataset) using only the LM and models pre-trained on image-text pairs.
## 3 Few-Shot Vqa Via Code Generation
In visual question answering (VQA), the inputs to the system are an image and a question and the output is a textual answer. We consider the fewshot VQA setting in which the system has access to only a small number (50) of human-annotated
VQA instances.
Overview. Fig 1 illustrates our approach. Given an image and a corresponding question, CodeVQA first generates a Python program using just the question. It then executes this program, using the image when necessary, to predict the answer. We first define the set of code primitives that our system uses (§ 3.1). Then we describe how we generate a program that composes these primitives based on the question (§ 3.2). Finally, we enumerate the pre-trained models that we employ (§ 3.3).
## 3.1 Code Primitives
Primitives define basic operations over the image or over text that are often useful for VQA. In CodeVQA,
we use three primitives, which are defined below. Each of these primitives is implemented using image-text matching (ITM), image-text contrastive (ITC), and image-captioning models, each of which can be trained with only image-caption pairs. The difference between ITM and ITC is that ITC computes separate image and text embeddings and takes a dot product, while ITM performs early fusion on the image and text features and is thus more computationally expensive. We note that our framework is not tied to this choice of primitives and can support other, more complex primitives that could draw on other aspects of the programming language and third-party libraries.
query(image, question) This function answers a question about the given image. Our implementation of this function is based on PnP-VQA
(Tiong et al., 2022) and PICa (Yang et al., 2021)
and is implemented with the following steps: (1)
using the ITM model, compute the GradCAM (Selvaraju et al., 2016) between the question and the image (averaged over question tokens), (2) sample K = 20 image patches based on their GradCAM score, (3) generate captions from the sampled patches using the captioning model, (4) repeat steps (2) and (3) until C unique captions have been generated, and (5) predict the answer by prompting an LM with the question, captions, and in-context examples. The in-context examples in step (5) are selected as described in § 3.2. When the dataset involves reasoning over multiple images, each in-context example has the captions for all images.
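A high-level sketch of this procedure is given below. The helpers `compute_gradcam`, `sample_patches`, `caption_patches`, `build_qa_prompt`, and `prompt_lm` stand in for the BLIP ITM/captioning models and the LM API call; they are assumptions of this sketch, not real library functions:

```python
def query(image, question, num_captions, k_patches=20):
    """Answer a question about an image via question-guided captions and an LM."""
    # (1) per-patch relevance: ITM GradCAM averaged over question tokens
    patch_scores = compute_gradcam(image, question)
    captions = set()
    # (4) repeat steps (2)-(3) until enough unique captions have been generated
    while len(captions) < num_captions:
        # (2) sample K patches according to their GradCAM scores
        patches = sample_patches(image, patch_scores, k=k_patches)
        # (3) caption the sampled patches with the captioning model
        captions.update(caption_patches(patches))
    # (5) prompt the question-answering LM with question, captions, and in-context examples
    return prompt_lm(build_qa_prompt(question, sorted(captions)))
```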
get_pos(image, text) This function computes the GradCAM between the given text tokens and the image using the ITM model and returns the (x, y) pair that maximizes the GradCAM value.
Note that this application of GradCAM is different from the one in query since we do not average over all question tokens. See Appendix B for more information on how we compute GradCAM maps.
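As a sketch, again relying on an assumed `compute_gradcam` helper that returns a 2D map of per-patch scores (the patch size used here is also an illustrative assumption):

```python
def get_pos(image, text, patch_size=16):
    """Return the (x, y) pixel coordinate with the highest GradCAM value for `text`."""
    gradcam = compute_gradcam(image, text)   # assumed helper; array of shape (rows, cols)
    rows, cols = gradcam.shape
    flat_index = int(gradcam.argmax())
    r, c = divmod(flat_index, cols)
    # centre of the highest-scoring patch, in pixel coordinates
    return (c * patch_size + patch_size // 2, r * patch_size + patch_size // 2)
```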
find_matching_image(images, text) In the
setting where multiple images are associated with each question, there are questions that refer specifically to one image (e.g. "What is the woman holding?"). This function can be used to select the most relevant image from the set. It is implemented by scoring each image with the text using the ITC
model and picking the image with the highest score.
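Below is a short sketch, assuming an `itc_score(image, text)` helper (not a real library function) that returns the ITC model's image-text similarity score:

```python
def find_matching_image(images, text):
    """Return the image with the highest ITC similarity to `text`."""
    return max(images, key=lambda image: itc_score(image, text))
```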
## 3.2 Code Generation
In the first stage of CodeVQA, we generate a Python program based on the question. Using Python over a domain-specific language is advantageous because (1) it supports arithmetic as well as control flow including loops and if statements (Liang et al.,
2022)–all of which we use in our programs–and (2)
large LMs for code generation (e.g. Codex (Chen et al., 2021)) have been trained on a large amount of Python code.
We construct a prompt that consists of an instruction, constants that define the dimensions of the image, and import statements and API
documentation (as a code comment) that specify the available functions. In addition to the prompt, the input to the LM also includes expert-annotated programs for several in-context examples. An in-context example for few-shot prompting on the COVR dataset is shown below (question in gray, the program is highlighted).
(In-context example: a COVR question followed by its annotated Python program.)
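As a hypothetical illustration of the format (this example is ours and is not one of the annotated programs), a COVR-style question and program composed from the primitives in § 3.1 might look like:

```python
# "Are there more images with a dog than images with a cat?"
dog_count = 0
cat_count = 0
for image in images:
    if query(image, "Is there a dog?") == "yes":
        dog_count += 1
    if query(image, "Is there a cat?") == "yes":
        cat_count += 1
if dog_count > cat_count:
    answer = "yes"
else:
    answer = "no"
```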
For an example of the rest of the prompt for the LM, see Appendix A. When executing the generated program results in a runtime error, we return the result of calling query on the image and the original question
(including captions for all images if the instance involves multiple images).
Since all of the annotated programs cannot fit into a single input to the model, we must select which programs to use as in-context examples for each test question. Following Wang et al. (2022), we use sentence embeddings2 to retrieve the most similar questions for each test question.
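A sketch of this retrieval step with the sentence-transformers model referenced above is shown below; the function and variable names are illustrative:

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

def retrieve_in_context_examples(test_question, annotated_examples, n_shots=12):
    """Return the annotated (question, program) pairs most similar to the test question."""
    corpus_embeddings = encoder.encode(
        [ex["question"] for ex in annotated_examples], convert_to_tensor=True)
    query_embedding = encoder.encode(test_question, convert_to_tensor=True)
    scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
    top_indices = scores.topk(min(n_shots, len(annotated_examples))).indices.tolist()
    return [annotated_examples[i] for i in top_indices]
```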
## 3.3 Component Models
Our approach relies on four pre-trained models: a code generation model, an ITM model, an ITC
model, an IC model, and a question-answering LM
for answering questions based on captions. We use the code-davinci-002 model (Chen et al., 2021)
via the OpenAI API for both generating programs and for question-answering. We use the BLIP models (Li et al., 2022) finetuned for ITM, ITC, and captioning.
## 4 Experiments 4.1 Implementation Details
See Appendix C for implementation details.
## 4.2 Datasets
The GQA dataset (Hudson and Manning, 2019)
contains multi-hop questions generated from human-annotated scene graphs of individual images in Visual Genome (Krishna et al., 2016). The COVR dataset (Bogin et al., 2021) contains multi-hop questions about *sets of images* in the Visual Genome and imSitu (Yatskar et al., 2016) datasets.
These questions are synthetically generated from templates and are then paraphrased by humans. Unless otherwise specified, we present results on the paraphrased questions.

2https://huggingface.co/sentence-transformers/all-mpnet-base-v2
| Model | GQA Acc. | COVR Acc. | NLVR2 Acc. |
|----------------------|--------|---------|------|
| Finetuned VisualBERT | - | 57.9 | 67.0 |
| VinVL-Base | 65.1 | - | 83.1 |
| Zero-shot FewVLM | 29.3 | - | - |
| PnP-VQA | 42.3 | - | - |
| Few-shot FewVLM | 35.7 | - | - |
| Few-shot PnP-VQA | 46.6 | 45.8 | 63.4 |
| CodeVQA (ours) | **49.0** | **50.7** | **64.0** |

Table 1: **Results on GQA (testdev), COVR (test), and NLVR2 (test-public) datasets** from CodeVQA, Few-shot PnP-VQA, and prior work: VisualBERT (Li et al., 2019), VinVL-Base (Zhang et al., 2021), FewVLM (Jin et al., 2021), PnP-VQA (Tiong et al., 2022). FewVLM randomly samples 16 few-shot examples. Our method outperforms all few-shot methods from prior work. Highest few-shot scores for each full dataset are in **bold**.
et al., 2019) contains statements about *pairs of images*, and the task is to determine whether each statement is true or false (we rephrase the statements as questions before feeding them to the methods that we evaluate). Appendix G has further details about the datasets. For each of the three datasets, we wrote programs for 50 questions randomly sampled from the corresponding training set. Unless stated otherwise, we put 12 in-context examples in a prompt for a single-image dataset and 6 in-context examples in a prompt for a multi-image dataset (since including captions for multiple images increases the necessary context size for each example). We report the exact-match accuracies of the lower-cased answers.
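For clarity, the exact-match metric we report corresponds to the simple computation below (a sketch; answer normalization beyond lower-casing is not assumed).

```python
def exact_match_accuracy(predicted_answers, gold_answers):
    """Fraction of examples whose lower-cased prediction equals the lower-cased gold answer."""
    correct = sum(p.lower() == g.lower() for p, g in zip(predicted_answers, gold_answers))
    return correct / len(gold_answers)
```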
## 4.3 Baseline
Our baseline is an adaptation of PnP-VQA (Tiong et al., 2022) to the few-shot setting. We refer to it as
"Few-shot PnP-VQA." This baseline is equivalent to running the five-step query procedure described in § 3.1 for every question. We also compare to zero-shot and few-shot methods from prior work.
## 4.4 Results
Table 1 shows the results on the three datasets.
CodeVQA has the highest accuracy among the few-shot techniques. It has markedly better performance on COVR, which makes sense because on this dataset the baseline approach must combine information across image captions for multiple images within a single prompt. On the other hand, our method loops over the images and queries a single image at a time, or selects the image most relevant to the question. Indeed, Table 3 shows that CodeVQA has the greatest advantage on instances involving 4 or 5 images.
Fig. 2 shows a qualitative comparison of CodeVQA and the baseline Few-shot PnP-VQA on the COVR dataset. CodeVQA answers the question correctly by answering a simpler question for each image and comparing the answers, while Few-shot PnP-VQA answers incorrectly despite producing captions with the necessary information.
## 4.5 Ablations
Table 2 compares embedding-based retrieval of in-context examples with random retrieval. CodeVQA's improvement over Few-shot PnP-VQA is greater when in-context examples are retrieved by embedding. Embedding-based retrieval offers a systematic way to collect relevant in-context examples, rather than curating a single set of examples as in Gupta and Kembhavi (2022).
In Appendix E, we include ablations for the question-answering LM and for the number of shots in the prompt, as well as results on validation sets. Table 4 shows that CodeVQA improves over Few-shot PnP-VQA when either code-davinci-002 or text-davinci-003 is used as the question-answering LM. Table 5 shows roughly constant accuracy as the number of in-context examples is varied.
| Retrieval Method | Few-shot PnP-VQA | CodeVQA |
|-------------------------|------------------|---------|
| *text-davinci-003* | | |
| Random | 48.15 | 49.9 |
| Embedding | 49.4 | 52.5 |
| *code-davinci-002* | | |
| Random | 49.5 | 50.7 |
| Embedding | 52.1 | 55.3 |

Table 2: **Comparing Example Retrieval Techniques** on 2000 GQA validation examples. The italicized GPT model name denotes the model used as the question-answering LM.
## 4.6 Analysis
Figure 3 breaks down accuracy by question type.
CodeVQA's greatest improvement (roughly 30%) is in the subset consisting of questions about left/right or top/bottom object positions. There is also an improvement in "and" and "or" questions. This improvement could be related to the recent finding that LMs benefit from converting multi-hop into
single-hop questions (Press et al., 2022).3

3 Accuracy on this kind of question can also be improved by improving the LM. For instance, using text-davinci-003 as the LM for QA closes the gap between Few-shot PnP-VQA and CodeVQA on "and" questions in GQA.

![4_image_0.png](4_image_0.png)

(Figure 2: qualitative comparison of CodeVQA and Few-shot PnP-VQA on COVR; Figure 3: accuracy by question type.)
| Number of images | 1 | 2 | 3 | 4 | 5 |
|--------------------|------|------|------|------|------|
| # of Instances | 12 | 915 | 828 | 696 | 4440 |
| Few-shot PnP-VQA | 91.7 | 51.5 | 48.3 | 47.0 | 46.9 |
| CodeVQA | 75.0 | 53.3 | 48.7 | 53.2 | 53.4 |
Table 3: **Accuracy by number of images per instance** on COVR validation set.
We analyzed sources of error in CodeVQA on 100 examples in the COVR validation set for which CodeVQA answered incorrectly: irrelevant captions (31%), mistakes in find_matching_image (12%), program generation errors (14%), question-answering errors (25%), predicted answer could be considered correct (14%), ground truth unclear/incorrect (16%), and numerical errors (1%). Note that these categories are not mutually exclusive, and 13 of the 100 examples were marked with multiple categories. Thus, more errors are due to the execution of the modules than to program generation.
## 5 Conclusion
In this paper, we have introduced a framework for modular few-shot VQA. Our approach prompts an LM to generate a Python program that invokes pre-trained visual modules and composes the outputs of these modules to predict the answer. Unlike previous modular VQA techniques, this framework does not require (re-)training modules or a parser. Also, obtaining interpretable module outputs from previous modular approaches is nontrivial (Subramanian et al., 2020), whereas in our approach the modules are frozen and thus interpretable. CodeVQA can also be viewed as expanding pipelined systems (Zeng et al., 2022) to the full expression of code.
## 6 Limitations
While the initial results are promising, the accuracy of our method remains lower than human VQA accuracy and than that of models finetuned on the VQA datasets, which suggests that substantial progress must still be made before few-shot VQA methods with code synthesis are useful for practical real-world applications. Also, further work is needed on extending the framework to additional primitives, as the results in Appendix F show that doing so does not always lead to improvements over the baseline method. Another limitation of our approach is that it relies on large, capable LMs, which may be restricted in use due to compute requirements or cost (e.g. via available APIs). We also focus in this work on benchmarking VQA capabilities with English as the primary language –
future work may extend this to other languages via multilingual LMs.
## 7 Acknowledgements
We thank the members of the Berkeley NLP group, Grace Luo, and the anonymous reviewers for feedback on earlier drafts of this paper. We are grateful to Ben Bogin and Shivanshu Gupta for their assistance in evaluating CodeVQA and Few-shot PnPVQA on the private COVR test set. SS, MN, and TD were supported in part by the DoD, including an NDSEG fellowship (for SS) and DARPA's LwLL, PTG, and/or SemaFor programs, by the NSF, and/or by the Berkeley Artificial Intelligence Research (BAIR) industrial alliance program.
## References
Sandhini Agarwal, Gretchen Krueger, Jack Clark, Alec Radford, Jong Wook Kim, and Miles Brundage. 2021.
Evaluating clip: Towards characterization of broader capabilities and downstream implications. *ArXiv*,
abs/2108.02818.
Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhitao Gong, Sina Samangooei, Marianne Monteiro, Jacob Menick, Sebastian Borgeaud, Andy Brock, Aida Nematzadeh, Sahand Sharifzadeh, Mikolaj Binkowski, Ricardo Barreira, Oriol Vinyals, Andrew Zisserman, and Karen Simonyan. 2022.
Flamingo: a visual language model for few-shot learning. *ArXiv*, abs/2204.14198.
Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Learning to compose neural networks for question answering. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1545–1554, San Diego, California. Association for Computational Linguistics.
Ben Bogin, Shivanshu Gupta, Matt Gardner, and Jonathan Berant. 2021. Covr: A test-bed for visually grounded compositional generalization with real images. In *Conference on Empirical Methods in Natural Language Processing*.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde, Jared Kaplan, Harrison Edwards, Yura Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, David W. Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William H. Guss, Alex Nichol, Igor Babuschkin, S. Arun Balaji, Shantanu Jain, Andrew
Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew M. Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating large language models trained on code. *ArXiv*,
abs/2107.03374.
Zhoujun Cheng, Tianbao Xie, Peng Shi, Chengzu Li, Rahul Nadkarni, Yushi Hu, Caiming Xiong, Dragomir Radev, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, and Tao Yu. 2023. Binding language models in symbolic languages. In *The Eleventh International Conference on Learning Representations*.
Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6904–6913.
Tanmay Gupta and Aniruddha Kembhavi. 2022. Visual programming: Compositional visual reasoning without training. *ArXiv*, abs/2211.11559.
Ronghang Hu, Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Kate Saenko. 2017. Learning to reason: End-to-end module networks for visual question answering. In *Proceedings of the IEEE* international conference on computer vision, pages 804–813.
Drew A. Hudson and Christopher D. Manning. 2019.
Gqa: A new dataset for real-world visual reasoning and compositional question answering. *2019* IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6693–6702.
Woojeong Jin, Yu Cheng, Yelong Shen, Weizhu Chen, and Xiang Ren. 2021. A good prompt is worth millions of parameters: Low-resource promptbased learning for vision-language models. *ArXiv*,
abs/2110.08484.
Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Li Fei-Fei. 2016. Visual genome: Connecting language and vision using crowdsourced dense image annotations. *International Journal of Computer Vision*, 123:32–73.
Jayant Krishnamurthy and Thomas Kollar. 2013. Jointly learning to parse and perceive: Connecting natural language to the physical world. Transactions of the Association for Computational Linguistics, 1:193–
206.
Junnan Li, Dongxu Li, Caiming Xiong, and Steven C. H.
Hoi. 2022. Blip: Bootstrapping language-image pretraining for unified vision-language understanding and generation. In *ICML*.
Junnan Li, Ramprasaath R. Selvaraju, Akhilesh Deepak Gotmare, Shafiq R. Joty, Caiming Xiong, and Steven C. H. Hoi. 2021. Align before fuse: Vision and language representation learning with momentum distillation. In *Neural Information Processing Systems*.
Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. Visualbert: A simple and performant baseline for vision and language.
ArXiv, abs/1908.03557.
J. Liang, Wenlong Huang, F. Xia, Peng Xu, Karol Hausman, Brian Ichter, Peter R. Florence, and Andy Zeng.
2022. Code as policies: Language model programs for embodied control. *ArXiv*, abs/2209.07753.
Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, et al. 2023. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. *arXiv preprint arXiv:2303.05499*.
Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. 2019. Ok-vqa: A visual question answering benchmark requiring external knowledge. *2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pages 3190–3199.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke E. Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Francis Christiano, Jan Leike, and Ryan J. Lowe. 2022. Training language models to follow instructions with human feedback. *ArXiv*,
abs/2203.02155.
Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, and Mike Lewis. 2022. Measuring and narrowing the compositionality gap in language models. *ArXiv*, abs/2210.03350.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In *International* Conference on Machine Learning, pages 8748–8763.
PMLR.
Candace Ross, Boris Katz, and Andrei Barbu. 2020.
Measuring social biases in grounded vision and language embeddings. In North American Chapter of the Association for Computational Linguistics.
Raeid Saqur and Karthik Narasimhan. 2020. Multimodal graph networks for compositional generalization in visual question answering. In *Neural Information Processing Systems*.
Ramprasaath R. Selvaraju, Abhishek Das, Ramakrishna Vedantam, Michael Cogswell, Devi Parikh, and Dhruv Batra. 2016. Grad-cam: Visual explanations from deep networks via gradient-based localization.
International Journal of Computer Vision, 128:336–
359.
Sanjay Subramanian, Ben Bogin, Nitish Gupta, Tomer Wolfson, Sameer Singh, Jonathan Berant, and Matt Gardner. 2020. Obtaining faithful interpretations from compositional neural networks. In *Annual* Meeting of the Association for Computational Linguistics.
Sanjay Subramanian, William Merrill, Trevor Darrell, Matt Gardner, Sameer Singh, and Anna Rohrbach.
2022. ReCLIP: A strong zero-shot baseline for referring expression comprehension. In *Proceedings* of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 5198–5215, Dublin, Ireland. Association for Computational Linguistics.
Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. 2019. A corpus for reasoning about natural language grounded in photographs. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*,
pages 6418–6428, Florence, Italy. Association for Computational Linguistics.
Dídac Surís, Sachit Menon, and Carl Vondrick. 2023.
Vipergpt: Visual inference via python execution for reasoning. *arXiv preprint arXiv:2303.08128*.
Anthony Meng Huat Tiong, Junnan Li, Boyang Li, Silvio Savarese, and Steven CH Hoi. 2022. Plug-andplay vqa: Zero-shot vqa by conjoining large pretrained models with zero training. Findings of ACL:
EMNLP.
Zhenhailong Wang, Manling Li, Ruochen Xu, Luowei Zhou, Jie Lei, Xudong Lin, Shuohang Wang, Ziyi Yang, Chenguang Zhu, Derek Hoiem, Shih-Fu Chang, Mohit Bansal, and Heng Ji. 2022. Language models with image descriptors are strong few-shot video-language learners. *ArXiv*, abs/2205.10747.
Zhengyuan Yang, Zhe Gan, Jianfeng Wang, Xiaowei Hu, Yumao Lu, Zicheng Liu, and Lijuan Wang. 2021.
An empirical study of gpt-3 for few-shot knowledgebased vqa. In *AAAI Conference on Artificial Intelligence*.
Mark Yatskar, Luke Zettlemoyer, and Ali Farhadi. 2016.
Situation recognition: Visual semantic role labeling for image understanding. *2016 IEEE Conference on* Computer Vision and Pattern Recognition (CVPR),
pages 5534–5542.
Andy Zeng, Adrian Wong, Stefan Welker, Krzysztof Choromanski, Federico Tombari, Aveek Purohit, Michael Ryoo, Vikas Sindhwani, Johnny Lee, Vincent Vanhoucke, et al. 2022. Socratic models: Composing zero-shot multimodal reasoning with language. *arXiv preprint arXiv:2204.00598*.
Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, and Lucas Beyer. 2022. Lit: Zero-shot transfer with
locked-image text tuning. In *Proceedings of the* IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18123–18133.
Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. 2021. Vinvl: Revisiting visual representations in vision-language models. *2021 IEEE/CVF*
Conference on Computer Vision and Pattern Recognition (CVPR), pages 5575–5584.
## A Code Generation Prompts

## A.1 GQA
The preamble of the prompt (gray), containing the instruction, constants, import statements, and API documentation, and a single in-context example are shown below (question in green, program highlighted). In our main GQA experiments, 12 in-context examples are used for each evaluation example.
"""Write Python code to answer the questions about each image."""
\# Global constants
\# min x coordinate LEFT = 0
\# min y coordinate BOTTOM = 0 \# max x coordinate RIGHT = 24 \# max y coordinate TOP = 24 from PIL import Image from utils import open_images, query, find_matching_image, get_pos """
API Reference:
open_image(path: str) -> Image - opens the image at the path and returns it as an Image object query(img: Image, question: str) -> str -
queries the image returns an answer to the question get_pos(img: Image, object: str) -> (float, float) - returns the position of the object in the image """
\# Image 1: Does the bench look silver and metallic?
img = open_image("Image1.jpg")
is_silver = query(img, "Does the bench look silver and metallic?")
is_metallic = query(img, "Does the bench look metallic?")
if is_silver == "yes" and is_metallic == "yes":
answer = "yes" else:
answer = "no"
## A.2 COVR
The preamble of the prompt (gray), containing the instruction, constants, import statements, and API documentation, and a single in-context example (question in green, program highlighted) are shown below. In our COVR experiments, 6 in-context examples are used for each evaluation example.
"""Write Python code to answer the questions about each image."""
\# Global constants
\# min x coordinate LEFT = 0 \# min y coordinate BOTTOM = 0 \# max x coordinate RIGHT = 24 \# max y coordinate TOP = 24 from PIL import Image from utils import open_images, query, find_matching_image, get_pos
"""
API Reference:
open_image(path: str) -> List[Image] - opens the images in the given directory and returns them in a list of Image objects query(img: Image, question: str) -> str -
queries the region of the image in the given coordinates and returns an answer find_matching_image(images: List[Image], text:
str) -> Image - returns the image that best matches the text get_pos(img: Image, object: str) -> (float, float) - returns the position of the object in the image """
\# Image Set 1: Is it true that there are more ladies that are wearing black shirt than men that are wearing black shirt?
images = open_images("ImageSet1.jpg") ladies_total = 0 men_total = 0 for image in images:
ladies_exist = query(image, "Is there a lady?")
if ladies_exist == "yes":
ladies_count = int(query(image, "How many ladies are wearing black shirt?"))
ladies_total += ladies_count man_exist = query(image, "Is there a man?")
if men_exist == "yes":
men_count = int(query(image, "How many men are wearing black shirt?"))
men_total += men_count
![7_image_0.png](7_image_0.png)
## B GradCAM
![7_Image_1.Png](7_Image_1.Png)
Our computation of GradCAM follows prior work that uses vision transformers (Tiong et al., 2022; Li et al., 2021). We are given a question with tokens q1, ..., qT and an image that is tokenized into K × K patches. We use layer L = 6 to compute GradCAM, following Tiong et al. (2022). We compute a GradCAM map for each token as follows. Let C ∈ R^(T×K²) be the cross-attention map from layer L, and let G ∈ R^(T×K²) be the gradient of the image-text matching score with respect to C. Then the GradCAM map for token i is given by the i-th row of C ⊙ ReLU(G), where ⊙ denotes elementwise multiplication. As stated in Section 3.1, for the query primitive, we take the average GradCAM map across all question tokens, whereas for the get_pos primitive, we take the average GradCAM map across the input text tokens (which are part of the question tokens).
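In code, this computation is a single elementwise product followed by an average over the relevant tokens; the sketch below assumes C and G have already been extracted from the ITM model as arrays of shape (T, K²).

```python
import numpy as np

def gradcam_maps(C, G):
    """Per-token GradCAM maps: the i-th row of C * ReLU(G); C and G have shape (T, K*K)."""
    return C * np.maximum(G, 0.0)

def query_relevance(C, G):
    # query primitive: average the maps over all question tokens
    return gradcam_maps(C, G).mean(axis=0)

def get_pos_relevance(C, G, text_token_indices):
    # get_pos primitive: average the maps over the tokens of the input text span
    return gradcam_maps(C, G)[text_token_indices].mean(axis=0)
```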
## C Implementation Details
To generate captions for in-context examples in each dataset, we run steps 1 − 4 for each of the 50 questions in the database of in-context examples.
For GQA experiments, we use C = 7 captions per image, and for COVR experiments, where each question is associated with multiple images, we use C = 3 captions per image.4 We use C = 7 captions for the NLVR2 dataset. Each reported accuracy result represents a single evaluation run over the corresponding evaluation set. For NLVR2 and some instances of COVR, the text input is a statement (to be classified as True/False). We convert each such statement to a question by adding the prefix "Is it true that" to it and converting the answer to "yes"/"no". We use question embeddings to select 12 examples for GQA and 6 examples for COVR and NLVR2.

4 We chose this number of captions to be the maximum possible subject to the number of shots and the context size of the davinci model, which we used as our question-answering LM in preliminary experiments.
## D Qualitative Comparisons
We include qualitative comparisons of our method CodeVQA to the baseline Few-shot PnP-VQA (text-davinci-003) in Fig. 5. In all these instances, we can see that PnP-VQA produces captions that are irrelevant to the question, resulting in incorrect answers. On the other hand, CodeVQA breaks down the question into a Python code block. CodeVQA uses if-else conditions along with the predefined visual modules get_pos(image, text) and query(image, text) to focus on the right regions of the image, arriving at the correct answer in an explainable fashion.
Fig. 6 shows two examples from the NLVR2 dataset where our method CodeVQA answers the questions correctly. In the first example, it queries each of the images for the count of the pandas and answers the question correctly based on that. In the second example, our method breaks the question down into three simpler queries and an if-else statement to arrive at the correct answer.

![8_image_0.png](8_image_0.png)
Fig. 7 shows the correct results of our method on complex multi-reference questions in the COVR dataset. CodeVQA is able to break down the logic to obtain the counts of images with a cake on a white plate and images with a lemon on a white plate, and then evaluates whether the two counts are the same. In the second, more complex example, our method uses for loops and if-else logic to first locate the images that satisfy the criteria "pillows on a couch near a table" and "pillows on a couch near a bed" and then count the individual occurrences.
## E Additional Quantitative Results
Table 4 shows results on validation sets and compares the accuracies of CodeVQA and Few-shot PnP-VQA when using code-davinci-002 and text-davinci-003 as the question-answering LM.
Table 5 shows how the accuracies of CodeVQA
and Few-shot PnP-VQA vary with the number of shots in the prompt. Figure 4 shows the breakdown of accuracy by question type for 2000 GQA
validation examples, which we used for initial experimentation (similar to Figure 3 but on validation examples). We note that on this sample, Few-shot PnP-VQA has an advantage on "and" questions.
| Model | GQA Shots | GQA Val Sample | GQA Testdev | COVR Shots | COVR Val Sample | COVR Val | COVR Test |
|---------------------------------------|-----------|----------------|-------------|------------|-----------------|----------|-----------|
| Few-shot PnP-VQA w/ text-davinci-003 | 12 | 49.4 | 44.9 | 6 | 51.4 | - | - |
| CodeVQA (ours) w/ text-davinci-003 | 12 | 52.5 | 46.8 | 6 | 54.4 | - | - |
| Few-shot PnP-VQA w/ code-davinci-002 | 12 | 52.1 | 46.6 | 6 | 49.0 | 47.8 | 45.8 |
| CodeVQA (ours) w/ code-davinci-002 | 12 | **55.3** | **49.0** | 6 | **54.5** | **52.9** | **50.7** |

Table 4: **Validation and test results on GQA and COVR.** The OpenAI model name (text-davinci-003 or code-davinci-002) denotes which model was used as the question-answering model. The GQA validation sample contains 2000 examples from the GQA validation set. The COVR validation sample contains 1000 examples from the COVR non-paraphrased validation set. Highest scores are in **bold**.
| Method | 8 shots | 12 shots | 16 shots |
|--------------------|---------|----------|----------|
| *text-davinci-003* | | | |
| Few-shot PnP-VQA | 48.3 | 49.4 | 49.5 |
| CodeVQA | 52.8 | 52.5 | 52.7 |
| *code-davinci-002* | | | |
| Few-shot PnP-VQA | 50.6 | 52.1 | 51.2 |
| CodeVQA | 55.1 | 55.3 | 55.4 |

Table 5: Accuracy with different numbers of shots on 2000 GQA validation examples.
## F **Experiments With Additional Primitives**
We also experiment with two other primitives, on datasets involving counting objects or knowledge retrieval:
find_object(image, object_description)
This function returns a set of references to objects in the image that match the given description, and we use it for counting objects. We implement this function using Grounding DINO (Liu et al., 2023),
which is an open-vocabulary object detector that is also trained on referring expression comprehension.
We evaluate this primitive on the VQAv2 dataset
(Goyal et al., 2017), for which we use only this primitive and query, as well as the COVR and NLVR2 datasets. We used 12 in-context examples for the VQAv2 dataset. Table 6 shows the results indicating that using this module for counting rather than query yields mixed results. Qualitatively, we observe that the object detector is not always accurate. In particular, the detector may not handle referring expressions with qualifiers correctly (e.g.
"boats holding people"; on the other hand, a caption may say that the boats are empty). We also observe that captions often contain the number of objects when the number is small, so query can be effective on counting.
knowledge_query(question) This function returns the answer to a question based on world knowledge (e.g. "Which football team has won the most Super Bowls?"). We implement this function using the same LM that is used for query. In order to better match the format of the OK-VQA dataset, we add a large negative bias to the logits of the following tokens to prevent the LM from generating them: hyphens, "to", and ◦. This choice was made based on preliminary experiments on the OK-VQA dataset.
We evaluate this primitive on the OK-VQA dataset
(Marino et al., 2019), for which we use only this primitive and query. We used 7 in-context examples to be consistent with the OK-VQA results in Surís et al. (2023). Table 7 provides the results, showing that for questions involving both visual information and general knowledge, breaking down the questions in this way does not lead to improved accuracy.
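A hypothetical generated program for an OK-VQA-style question, combining query for visual grounding with knowledge_query for world knowledge, might look as follows (the question is our own example, not taken from the prompts).

```python
from utils import open_image, query, knowledge_query

# Image 1: Which country is the airline of this plane headquartered in?
img = open_image("Image1.jpg")
airline = query(img, "Which airline does this plane belong to?")
answer = knowledge_query(f"Which country is the airline {airline} headquartered in?")
```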
For both VQAv2 and OK-VQA, we use the standard evaluation method associated with the VQAv2 dataset, which takes into account the set of ground-truth answers for each question.
## G Licenses And Other Dataset Details
GQA is licensed under the CC-BY-4.0 license (https://creativecommons.org/licenses/by/4.0/). The COVR repository (https://github.com/benbogin/covr-dataset) is licensed under an MIT license (though imSitu images may not be licensed).
![10_image_0.png](10_image_0.png)

(Figure 5: qualitative comparison of Few-shot PnP-VQA and CodeVQA (Ours).)
| Model | VQAv2 | COVR | NLVR2 |
|-------------------------|-------|------|-------|
| Few-shot PnP-VQA | 66.84 | 47.8 | 63.4 |
| CodeVQA | - | 52.9 | 64.0 |
| CodeVQA w/ find_object | 65.91 | 52.9 | 66.0 |

Table 6: Results with find_object used for counting objects.

| Model | OK-VQA |
|------------------------------|--------|
| Few-shot PnP-VQA | 54.1 |
| CodeVQA w/ knowledge_query | 53.5 |

Table 7: Results with knowledge_query on the OK-VQA validation set.
The text in both datasets is written in English. The annotations in NLVR2 are licensed under CC-BY-4.0, but the images in the dataset are not licensed. The annotations in VQAv2 are licensed under CC-BY-4.0.
The testdev set of GQA contains 12578 instances.
The test set of COVR contains 7024 instances. The validation set of COVR contains 6891 instances.
The public test set of NLVR2 contains 6967 instances. The validation set of OK-VQA contains 5046 instances. For VQAv2, we evaluate on a random sample of 4000 examples from the validation set.
During the development and intermediate evaluations of our method, we evaluated on a random sample of 200 training examples and a random sample of 2000 validation examples from GQA, a random sample of 200 training examples and the validation set from COVR, a random sample of 2000 training examples from NLVR2, a random sample of 1200 training examples from OK-VQA,
and a random sample of 2200 training examples from VQAv2.
![11_image_0.png](11_image_0.png)

Code (first example; the remainder of this program is shown in the figure image):

panda_count = 0
for image in images:

![11_image_1.png](11_image_1.png)

![11_image_2.png](11_image_2.png)

Code (second example):

images = open_images("ImageSet7.jpg")
rows_of_three = query(images[0], "Are the laptops in horizontal rows of three?") == "yes"
open_laptops = query(images[0], "Are there rows of open laptops?") == "yes"
closed_laptops = query(images[0], "Are there rows of closed laptops?") == "yes"
if rows_of_three and open_laptops and closed_laptops:
    answer = "yes"
else:
    answer = "no"

Answer: No

Figure 6: **NLVR2 Results**. We show example results from the NLVR2 dataset of our method CodeVQA.
## H Ethics And Impact Statement
One goal of our work is to decrease the need for
(re-)training VQA systems. Achieving this goal would mean a decrease in carbon emissions from training models. However, our approach also has a high inference cost, given the use of large language models. A decision to employ our approach should take into consideration this computational cost and the associated environmental impact.
Another potential positive impact of our approach is improved interpretability via the generated programs. These programs offer people familiar with Python a record of which visual tasks the system uses for a given question and of how the system combines the outputs of these tasks to predict the answer.
Our system relies on pre-trained vision-language models to predict answers to visual questions. Prior work (Ross et al., 2020; Agarwal et al., 2021) has found evidence of social biases in vision-language models trained on image-captions. Therefore, our system may exhibit these biases as well. Practitioners should be aware of this risk and ideally should take steps to mitigate this risk when they consider deploying this system in ways that can impact human lives.
![12_image_0.png](12_image_0.png)

Answer: 1 Question: Are there the same number of images that have a cake on a white plate as there are images that have a lemon on a white plate?

Figure 7: **COVR Results**. We show results on the COVR dataset where our method correctly answers the question by referencing all the images.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
5
✓ A2. Did you discuss any potential risks of your work?
5 and Appendix H
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 3.1
✓ B1. Did you cite the creators of artifacts you used?
3.1
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Appendix G
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
We used VQA datasets for evaluation of VQA approaches, which is clearly in line with intended use.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
The questions and answers in these datasets generally refer to simple properties of objects, not ones that would reveal the identify of people.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Appendix G
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix G
## C ✓ **Did You Run Computational Experiments?** 3
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Our approach does not involve training and much of the computation is done by models that we access via the OpenAI API.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Some of the implementation details, such as the number of captions are discussed/analyzed in Appendix C.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Appendix C
✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
We will provide code upon publication, as stated in a footnote in the Introduction.
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left Blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
zampieri-etal-2023-target | Target-Based Offensive Language Identification | https://aclanthology.org/2023.acl-short.66 | We present TBO, a new dataset for Target-based Offensive language identification. TBO contains post-level annotations regarding the harmfulness of an offensive post and token-level annotations comprising of the target and the offensive argument expression. Popular offensive language identification datasets for social media focus on annotation taxonomies only at the post level and more recently, some datasets have been released that feature only token-level annotations. TBO is an important resource that bridges the gap between post-level and token-level annotation datasets by introducing a single comprehensive unified annotation taxonomy. We use the TBO taxonomy to annotate post-level and token-level offensive language on English Twitter posts. We release an initial dataset of over 4,500 instances collected from Twitter and we carry out multiple experiments to compare the performance of different models trained and tested on TBO. | # Target-Based Offensive Language Identification
Marcos Zampieri1, Skye Morgan2, Kai North1, Tharindu Ranasinghe3, Austin Simmons2, Paridhi Khandelwal2, Sara Rosenthal4, Preslav Nakov5
1George Mason University, USA, 2Rochester Institute of Technology, USA
3Aston University, UK, 4IBM Research, USA
5Mohamed bin Zayed University of Artificial Intelligence, UAE
[email protected]
## Abstract
We present TBO, a new dataset for Target-based Offensive language identification. TBO contains post-level annotations regarding the harmfulness of an offensive post and token-level annotations comprising the target and the offensive argument expression. Popular offensive language identification datasets for social media focus on annotation taxonomies only at the post level and, more recently, some datasets have been released that feature only token-level annotations. TBO is an important resource that bridges the gap between post-level and token-level annotation datasets by introducing a single comprehensive unified annotation taxonomy. We use the TBO taxonomy to annotate post-level and token-level offensive language on English Twitter posts. We release an initial dataset of over 4,500 instances collected from Twitter and we carry out multiple experiments to compare the performance of different models trained and tested on TBO.
## 1 Introduction
Confrontational and often offensive behavior is pervasive in social media. Online communities, social media platforms, and tech companies are well aware of the problem and have been investigating ways to cope with the spread of offensive language. This has sparked growing interest in the AI and NLP communities in identifying offensive language, aggression, and hate speech in user-generated content (Davidson et al., 2017; Vidgen and Derczynski, 2020; Mubarak et al., 2020; Aggarwal et al., 2023).
The interest in this topic has motivated the study of offensive language online from different angles. Popular shared tasks organized in the past few years have created benchmark datasets, e.g.,
OLID (Zampieri et al., 2019a), which are widely used in research on this topic.
WARNING: This paper contains offensive examples.
Most of these shared tasks and datasets, e.g., HatEval (Basile et al., 2019) and OffensEval (Zampieri et al., 2020), have modeled offensive language at the post-level, where the goal is to predict the label of each post (e.g., offensive vs. not offensive, or hate speech vs. not hate speech). More recently, Pavlopoulos et al. (2021) developed the Toxic Spans Detection (TSD) dataset, which is annotated at the token level to focus on explainability by identifying the token spans that make a post offensive and toxic. One limitation of TSD is that it focuses exclusively on the toxic spans, while the target of the offensive expression is not annotated, for example:
(1) Canadians are very friendly, but their *politicians* are **shit**.
In the TSD dataset, *shit* would be labeled as toxic, but there would be no attempt to identify the actual target, which is *politicians*. Note that both *Canadians* and *politicians* could potentially be the target of the offensive expression. Knowing the target is important for understanding the nature of the offensive post (e.g., hate speech vs. general profanity),
an aspect that has been captured by a few annotation taxonomies (Basile et al., 2019; Zampieri et al.,
2019a). Another aspect not previously addressed is the issue of harmfulness, which is often related to polarity. All tasks so far have made the assumption that posts containing a curse word would be harmful; yet, consider the following example:
(2) This is one good looking **motherfucker**.
Even though the word *motherfucker* is used, this sentence has positive polarity, and thus annotating this word as offensive or toxic would likely yield incorrect predictions for offensive language detection. Curse words with positive polarity are a relatively common phenomenon, and thus systems should be able to recognize the use of such words in the context of harm.
To address these limitations, we introduce a novel annotation taxonomy called Target-Based Offensive language identification (TBO). We use TBO to annotate a new dataset containing over 4,500 posts from Twitter. Our task addresses two important gaps in previous research: detecting
(i) the offensive span along with its target, and (ii) its harmfulness. Furthermore, we derive two post-level labels from the token-level annotation as described in Section 3. We draw inspiration from the popular aspect-based sentiment analysis task
(Pontiki et al., 2016), which promoted explainability in sentiment analysis. Here, we apply a similar idea to offensive language identification. The main contributions of our work are as follows:
1. A new target-based taxonomy that will open new avenues for research in offensive language identification with a special focus on explainability.
2. Development and release of the TBO dataset containing 4,500 manually annotated posts from Twitter.
3. An evaluation of multiple models trained on this dataset. To the best of our knowledge, this is the first computational modeling of offensive expressions and targets in offensive language identification. The code and the pretrained models are made freely available to the community.1
## 2 Related Work
The interest in studying and modeling offensive language in social media continues to grow. This is evidenced by the creation of many widely-used datasets released in the past few years (Founta et al., 2018; Davidson et al., 2017; Zampieri et al.,
2019a; Rosenthal et al., 2021) and the organization of multiple popular shared tasks at SemEval and other venues. Along with the aforementioned HatEval (Basile et al., 2019), OffensEval (Zampieri et al., 2019b, 2020), and TSD (Pavlopoulos et al.,
2021), some related tasks have recently been organized also at SemEval, namely HaHackathon
(Meaney et al., 2021) on humor and offensiveness, and MAMI (Fersini et al., 2022) on multimodal (text and image) offensive content targeted at women.
1https://github.com/LanguageTechnologyLab/TBO
Popular related tasks organized in other venues include HASOC (Modha et al., 2021; Satapara et al., 2022) at the Forum for Information Retrieval
(FIRE) and TRAC (Kumar et al., 2018, 2020) at the TRAC workshop. As discussed in a survey by Poletto et al. (2021), all these competitions have provided participants with important benchmark datasets to evaluate the performance of systems trained to detect multiple different types of offensive content.
With the exception of the aforementioned TSD
(Pavlopoulos et al., 2021) and HateXplain (Mathew et al., 2021) datasets, which deal with token spans, all the datasets and competitions discussed in this section target post-level offensive language identification where systems are trained to attribute a label, such as offensive or not offensive, to an instance, typically a post or a comment. The identification of offensive spans has been, so far, mostly unexplored, and our TBO dataset fills this gap. Moreover, the unified taxonomy with target and harmfulness modeled as triples is a new way of reformulating the problem and it is the main new contribution of our work. We believe that our TBO taxonomy opens new avenues for future research.
## 3 Target-Based Offensive Language Identification

## 3.1 Annotation Taxonomy
Our TBO taxonomy builds on the annotation framework proposed in OLID (Zampieri et al., 2019a),
which was widely replicated in several datasets for English (Rosenthal et al., 2021) and other languages (Pitenis et al., 2020; Sigurbergsson and Derczynski, 2020; Çöltekin, 2020; Gaikwad et al.,
2021). As a result, the OLID taxonomy has become a *de facto* standard for general offensive language identification due to the flexibility it provides by representing multiple phenomena, which were treated in isolation in many other studies such as cyberbulling, hate speech, and general profanity
(Rosa et al., 2019; Poletto et al., 2021). OLID's hierarchical annotation model comprises of three levels: level A (offensive or not offensive), level B
(targeted or untargeted), and level C (group, person, or other). The assumption is that the type and the target of posts is paramount in determining the type of offensive content, e.g., offensive posts targeted at a group are often considered hate speech, while such targeted at an individual are often considered cyberbulling.
| Tweet | TARGET | ARGUMENT | HARMFUL |
|----------------------------------------------|----------------|------------|-----------|
| @USER Liberals are all Kookoo !!! | Liberals | Kookoo | YES |
| @USER He is a DUMBASS !!!!! | He | DUMBASS | YES |
| @USER @USER @USER Says the fat Antifa member | Antifa member | fat | YES |
| @USER Oh shit stay safe!! | NULL | shit | NO |
| @USER Master of None was so fucking good. | Master of None | fucking | NO |
Table 1: Examples of tweets from the TBO dataset with corresponding annotations of TARGET, ARGUMENT,
HARMFULNESS triples.
In TBO, we consider offensive posts as defined by OLID level A with multiple types and targets. The TBO annotation taxonomy models tokenlevel annotation of these offensive posts in triples:
(TARGET, ARGUMENT, HARMFULNESS).
TARGET The target of the offensive argument, such as a person or a group. This can also be NULL when the instance is untargeted, e.g., *Oh shit, stay safe!*
ARGUMENT The span containing the offensive tokens.
HARMFULNESS YES, if the argument is harmful to the target; otherwise, NO. Harmful expressions will often correlate with negative polarity as in the case of sentiment analysis.
Examples of triples are shown in Table 1. Note that the relationship between TARGET and ARGUMENT
can be 1:M, M:1, or even M:M. Here is an example of an M:M relationship:
(3) *Peter* is an **idiot** and an **asshole**, and so is John.
In this case, four triples can be formed: (Peter, idiot, YES), (Peter, *asshole*, YES), (John, *idiot*,
YES), and (John, *asshole*, YES).
Overall, our two TBO subtasks are substantially different from previous tasks on this topic: we address the identification of targets rather than spans, and we further focus on the harmfulness of the offensive arguments. To the best of our knowledge, this is the first work in which these two aspects of offensive language identification have been addressed.
## 3.2 The TBO Dataset
We sampled data from the SOLID dataset (Rosenthal et al., 2021), the largest English offensive language dataset with over 9 million tweets.
SOLID's semi-supervised annotation strategy follows OLID's three-layer annotation taxonomy, which enabled us to use a sampling strategy geared towards collecting complex targets and a wide variety of offensive arguments. In particular, we sampled social media posts with an aggregate offensive score (OLID/SOLID level A) in the range [0.6–
1.0] to ensure that our sampled data was rich in curse words and offensive arguments. We further filtered posts to have at least 11 tokens, ensuring we obtained longer posts, which tended to contain longer arguments and often times, several associated targets. Lastly, we used the SOLID level C
score to filter posts that target groups with the goal of obtaining posts that are more likely to contain multiple targets.
To measure the inter-annotator agreement (IAA),
we first performed a trial annotation experiment with 350 tweets annotated by seven trained annotators working on the project. Four of them were graduate students in computing based in the USA aged 22-30, while three were researchers in NLP aged 30-45 based in the USA and UK. We report 0.81 Kappa IAA for harmfulness and 0.78 for the target. After the trial experiment, we randomly selected samples for training and testing, which were then annotated by the same annotators.
The final dataset comprises over 4,500 tweets.
The training set has a total of 4,000 instances and includes 6,924 triples, 4,863 of which are harmful.
Table 2 provides some statistics about the number of tweets per set along with the number of harmful and harmless triples.
| Set | Instances | Triples | Harmful | Harmless |
|-------|-------------|-----------|-----------|------------|
| Train | 4,000 | 6,924 | 4,863 | 2,061 |
| Test | 673 | 1,096 | 640 | 456 |
| Total | 4,673 | 8,020 | 5,503 | 2,517 |
Table 2: Number of tweets and triples in the TBO
dataset, and their harmfulness.
| Set | Targeted | Harmful |
|-------|------------|-----------|
| Train | 3,167 | 2,890 |
| Test | 505 | 445 |
| Total | 3,672 | 3,335 |
Table 3: Number of targeted and harmful tweets.
We further compute the number of targeted and harmful posts in the dataset and we present this information in Table 3. We considered a post targeted if it contains at least one targeted triple, and harmful if it contains at least one harmful triple.
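The derivation of these post-level labels from the token-level triples is straightforward; a minimal sketch is given below (assuming each triple is a (target, argument, harmful) tuple with target set to "NULL" for untargeted arguments, as in Table 1).

```python
def post_level_labels(triples):
    """triples: list of (target, argument, harmful) tuples annotated for one tweet."""
    targeted = any(target != "NULL" for target, _, _ in triples)
    harmful = any(harmful == "YES" for _, _, harmful in triples)
    return {"targeted": targeted, "harmful": harmful}
```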
## 4 Methods
We experimented with three types of models:
Triple Prediction Models Since the goal of TBO
is to predict all elements of an offensive tweet (target, argument, and harmfulness), we are more interested in models that can output triples instead of individual elements. Therefore, we used the following models capable of predicting triples. **Sequence Labeling** (Barnes et al., 2022) where a BiLSTM is used to extract targets and arguments separately and then we train a relation prediction model to predict the harmfulness. **Dependency**
Graph, adapted from the head-final approach of Barnes et al. (2021), where the target, the arguments, and the harmfulness are modeled as a dependency graph parsing problem. Finally, two versions in RACL (Chen and Qian, 2020): **RACL-GloVe**
and **RACL-BERT**, which use GloVe 840B and BERT-large as input embeddings, respectively.
Token Classification Models We experimented with different token classification architectures, which we trained on two tasks separately: target identification and argument identification. These implementations are largely adopted from the toxic spans detection task (Pavlopoulos et al., 2021).
Our **BiLSTM** is a Bi-LSTM-CRF model (Panchendrarajan and Amaresan, 2018). We also experimented with a token classification architecture in transformers, based on BERT-large, to which we refer as **BERT-token**.
Binary Prediction Models Finally, we experimented with a sentence classification architecture in transformers based on BERT-large, referred to as **BERT-post**. The classifier is trained at the post level: if the tweet contained at least one harmful triple, we considered the entire tweet harmful.
## 4.1 Evaluation Measures
As we are interested in extracting full triples, we propose evaluation measures that capture the relationship between all predicted elements.
(1) Spans - Token-level F1 for TARGET and ARGUMENT This evaluates how well these models are able to identify the elements of a tuple.
(2) Targeted F1 A true positive example requires the combination of exact extraction of the target, and correct harmfulness label.
(3) Target Argument We used two evaluation measures that evaluate the model's capability to extract the target and the argument jointly. The first one is Non-polar Target Argument F1 (NTAF1),
where each prediction is considered as a pair of
(TARGET, ARGUMENT) and a true positive is defined as an exact match of all elements. We also used Target Argument F1 (TAF1), which uses the same measures as NTAF1, but includes harmfulness as well: (TARGET, ARGUMENT, HARMFUL-NESS).
(4) Harmful F1 Macro-F1 scores for harmfulness either at the tuple level or at the post level, depending on the model.
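To make the triple-level measures concrete, a sketch of how TAF1 and NTAF1 can be computed from gold and predicted triples is given below (exact string match on spans; the data layout and function names are our own, and duplicate triples are ignored for simplicity).

```python
def f1_score(n_pred, n_gold, n_correct):
    if n_pred == 0 or n_gold == 0 or n_correct == 0:
        return 0.0
    precision, recall = n_correct / n_pred, n_correct / n_gold
    return 2 * precision * recall / (precision + recall)

def target_argument_f1(pred_triples, gold_triples, include_harmfulness=True):
    """TAF1 when include_harmfulness=True, NTAF1 otherwise.

    Triples are (target, argument, harmfulness) tuples; a true positive requires an
    exact match of all compared elements.
    """
    project = (lambda t: t) if include_harmfulness else (lambda t: (t[0], t[1]))
    pred = {project(t) for t in pred_triples}
    gold = {project(t) for t in gold_triples}
    return f1_score(len(pred), len(gold), len(pred & gold))
```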
## 4.2 Experimental Setup
We used a GeForce RTX 3090 GPU to train the models. We divided the TBO dataset into a training set and a development set using an 80-20 split.
Transformers We used the configurations presented in Table 4 in all the experiments. We performed *early stopping* if the validation loss did not improve over ten evaluation steps.
| Parameter | Value |
|-----------------------------|---------|
| adam epsilon | 1e-8 |
| batch size | 64 |
| epochs | 3 |
| learning rate | 1e-5 |
| warmup ratio | 0.1 |
| warmup steps | 0 |
| max grad norm | 1.0 |
| max seq. length | 256 |
| gradient accumulation steps | 1 |
Table 4: Transformer parameter specification.
![4_image_0.png](4_image_0.png)
BiLSTM Model Configurations The configurations for the BiLSTM model are presented in Table 5. The training process was similar to that for the transformer models.
## 5 Results
Table 6 shows the results for all models from Section 4. We trained each model with five different random seeds, and we report the average evaluation scores. For all models and evaluation measures, the standard deviation was less than 0.0001.
All models performed reasonably well at extracting targets, scoring more than 0.3 in Target F1. The RACL-BERT model performed best, with a Target F1 score of 0.443, and it yielded the best overall result for Targeted F1. Comparatively, all models struggled with predicting the argument: none of the models we experimented with managed to reach an Argument F1 score of 0.3. RACL-BERT performed best for predicting arguments as well. All of the triple prediction models performed competitively with the token classification architectures. As all models struggled with predicting the arguments, the target-argument measures are low for all of them. Among the triple prediction models, RACL-BERT achieved the best NTAF1 and TAF1 scores. Both post-level models and triple-prediction models performed well on harmfulness prediction. RACL-BERT achieved the best result among the triple-prediction models, with a macro-F1 score of 0.693 at the triple level.
## 6 Conclusion And Future Work
We presented our novel target-based Offensive language identification (TBO) taxonomy, which we used to annotate a new English dataset with over 4,500 tweets. We further evaluated the performance of various models on this new dataset and we discussed the evaluation results in detail.
We release all data as well as our code publicly.
We believe that the TBO taxonomy and our dataset will be widely used in research on this topic as they have addressed important gaps in previous annotation taxonomies, most notably target identification.
In future work, we plan to annotate more data using the taxonomy proposed above, including other languages. This will allow us to take advantage of existing cross-lingual learning models for making predictions as well as for studying cross-language and cross-cultural aspects of offensive language online. We would also like to create comparable annotated TBO datasets for other languages, which will allow us to take advantage of existing cross-lingual models for offensive language identification (Ranasinghe and Zampieri, 2020; Nozza, 2021). We believe that this will get us closer to what online platforms need (Arora et al., 2023).
## Ethics Statement
The dataset we presented in this paper was collected from SOLID (Rosenthal et al., 2021), a freely-available large-scale dataset containing data from Twitter. No new data collection has been carried out as part of this work. We did not collect or process writers'/users' information, nor have we carried out any form of user profiling, thus protecting users' privacy and anonymity. Note also that in SOLID, all Twitter handles are replaced with
@USER as a de-identification process. We understand that every dataset is subject to intrinsic biases and that computational models will inevitably learn biased information from any dataset. That being said, we believe that the token-level annotation in TBO will help cope with biases found in models trained on tweet-level annotations by improving the model's interpretability.
Intended Use Our intended use is the same as for SOLID, the dataset we sampled our examples from (Rosenthal et al., 2021). We aim to encourage research in automatically detecting and limiting offensive content towards a target from being disseminated on the web. Using our dataset for its intended use can alleviate the psychological burden for social media moderators who are exposed to extremely offensive content. Improving the performance of offensive content detection systems can decrease the amount of work for human moderators, but some human supervision is still necessary to avoid harm and ensure transparency. We believe that content moderation should be a trustworthy and transparent process applied to clearly harmful content so it does not hinder individual freedom of expression rights. We distribute our dataset under a Creative Commons license, the same as for SOLID.
Any biases found in the dataset are unintentional.
| Model | Target F1 | Arg. F1 | Targeted F1 | NTAF1 | TAF1 | Harm. F1 |
|-------------------|-----------|---------|-------------|-------|-------|----------|
| Sequence labeling | 0.326 | 0.193 | 0.238 | 0.185 | 0.178 | 0.633 |
| Dependency graph | 0.368 | 0.213 | 0.282 | 0.206 | 0.201 | 0.657 |
| RACL-GloVe | 0.335 | 0.208 | 0.241 | 0.202 | 0.191 | 0.621 |
| RACL-BERT | 0.442 | 0.256 | 0.381 | 0.243 | 0.233 | 0.693 |
| BERT-token | 0.412 | 0.236 | - | - | - | - |
| BiLSTM | 0.315 | 0.182 | - | - | - | - |
| BERT-post | - | - | - | - | - | 0.745 |
Table 6: Experiments comparing the different models on the TBO dataset.
## Limitations
Biases Human data annotation for a sentiment-related task, e.g., aspect-based sentiment analysis, hate speech detection, etc., involves some degree of subjectivity. While we included important quality control steps in the TBO annotation process, this intrinsic subjectivity will inevitably be present in TBO and learned by the models (see also the Ethics Statement above). That being said, the hierarchical annotations presented in OLID, TBO, and other similar datasets aim to increase the annotation quality by breaking down the decision process, thus providing clearer guidelines to the annotators.
Dataset Collection Another factor that may be considered as a limitation is the dataset size: 4,500 instances and 8,000 triples. We would expect models to perform better when the dataset is expanded in the future. We are addressing this limitation by annotating more data that will be ready for release soon. Finally, another limitation is that this is currently an English-only dataset. We would like to expand TBO to other languages and to take advantage of cross-lingual models (XLM-R, mBERT,
etc.) for multilingual predictions.
Risks A dataset containing offensive content is at risk of misuse. The dataset can be maliciously used to build models that unfairly moderate text (e.g., a tweet) that may not be offensive based on biases that may or may not be related to demographic and/or other information present within the text. Due to the nature of the task, this dataset can be also used maliciously to display offensive content.
The dataset should not be used for this purpose; our intended use is discussed in the Ethics Statement. Intervention by human moderators would be required to ensure that malicious uses do not occur.
## References
Piush Aggarwal, Pranit Chawla, Mithun Das, Punyajoy Saha, Binny Mathew, Torsten Zesch, and Animesh Mukherjee. 2023. HateProof: Are hateful meme detection systems really robust? In Proceedings of TheWebConf.
Arnav Arora, Preslav Nakov, Momchil Hardalov, Sheikh Muhammad Sarwar, Vibha Nayak, Yoan Dinkov, Dimitrina Zlatkova, Kyle Dent, Ameya Bhatawdekar, Guillaume Bouchard, and Isabelle Augenstein. 2023. Detecting harmful content on online platforms: What platforms need vs. where research efforts go. *ACM Comput. Surv.*
Jeremy Barnes, Robin Kurtz, Stephan Oepen, Lilja Øvrelid, and Erik Velldal. 2021. Structured sentiment analysis as dependency graph parsing. In *Proceedings of ACL*.
Jeremy Barnes, Laura Ana Maria Oberländer, Enrica Troiano, Andrey Kutuzov, Jan Buchmann, Rodrigo Agerri, Lilja Øvrelid, and Erik Velldal. 2022.
SemEval-2022 task 10: Structured sentiment analysis. In *Proceedings of SemEval*.
Valerio Basile, Cristina Bosco, Elisabetta Fersini, Debora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, and Manuela Sanguinetti.
2019. SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter. In *Proceedings of SemEval*.
Çağrı Çöltekin. 2020. A corpus of Turkish offensive
language on social media. In *Proceedings of LREC*.
Zhuang Chen and Tieyun Qian. 2020. Relation-aware collaborative learning for unified aspect-based sentiment analysis. In *Proceedings of ACL*.
Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of ICWSM.
Elisabetta Fersini, Francesca Gasparini, Giulia Rizzi, Aurora Saibene, Berta Chulvi, Paolo Rosso, Alyssa Lees, and Jeffrey Sorensen. 2022. SemEval-2022
Task 5: multimedia automatic misogyny identification. In *Proceedings of SemEval*.
Antigoni-Maria Founta, Constantinos Djouvas, Despoina Chatzakou, Ilias Leontiadis, Jeremy Blackburn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large scale crowdsourcing and characterization of Twitter abusive behavior. In *Proceedings of ICWSM*.
Saurabh Gaikwad, Tharindu Ranasinghe, Marcos Zampieri, and Christopher M Homan. 2021. Crosslingual offensive language identification for low resource languages: The case of Marathi. In *Proceedings of RANLP*.
Ritesh Kumar, Atul Kr. Ojha, Shervin Malmasi, and Marcos Zampieri. 2018. Benchmarking aggression identification in social media. In Proceedings of TRAC.
Ritesh Kumar, Atul Kr. Ojha, Shervin Malmasi, and Marcos Zampieri. 2020. Evaluating aggression identification in social media. In *Proceedings of TRAC*.
Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. 2021. HateXplain: a benchmark dataset for explainable hate speech detection. In *Proceedings of* AAAI.
JA Meaney, Steven Wilson, Luis Chiruzzo, Adam Lopez, and Walid Magdy. 2021. SemEval 2021 task 7: Hahackathon, detecting and rating humor and offense. In *Proceedings of SemEval*.
Sandip Modha, Thomas Mandl, Gautam Kishore Shahi, Hiren Madhu, Shrey Satapara, Tharindu Ranasinghe, and Marcos Zampieri. 2021. Overview of the HASOC subtrack at FIRE 2021: Hate speech and offensive content identification in English and IndoAryan languages and conversational hate speech. In Proceedings of FIRE.
Hamdy Mubarak, Kareem Darwish, Walid Magdy, Tamer Elsayed, and Hend Al-Khalifa. 2020.
Overview of OSACT4 Arabic offensive language detection shared task. In *Proceedings of OSACT*.
Debora Nozza. 2021. Exposing the limits of zero-shot cross-lingual hate speech detection. In *Proceedings* of ACL.
Rrubaa Panchendrarajan and Aravindh Amaresan. 2018.
Bidirectional LSTM-CRF for named entity recognition. In *Proceedings of PACLIC*.
John Pavlopoulos, Jeffrey Sorensen, Léo Laugier, and Ion Androutsopoulos. 2021. SemEval-2021 task 5:
Toxic spans detection. In *Proceedings of SemEval*.
Zeses Pitenis, Marcos Zampieri, and Tharindu Ranasinghe. 2020. Offensive language identification in Greek. In *Proceedings of LREC*.
Fabio Poletto, Valerio Basile, Manuela Sanguinetti, Cristina Bosco, and Viviana Patti. 2021. Resources and benchmark corpora for hate speech detection: a systematic review. *Language Resources and Evaluation*, 55(2):477–523.
Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Mohammad AL-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orphée De Clercq, Véronique Hoste, Marianna Apidianaki, Xavier Tannier, Natalia Loukachevitch, Evgeniy Kotelnikov, Nuria Bel, Salud María Jiménez-Zafra, and Gülşen Eryiğit.
2016. SemEval-2016 task 5: Aspect based sentiment analysis. In *Proceedings of SemEval*.
Tharindu Ranasinghe and Marcos Zampieri. 2020. Multilingual offensive language identification with crosslingual embeddings. In *Proceedings of EMNLP*.
Hugo Rosa, N Pereira, Ricardo Ribeiro, Paula Costa Ferreira, Joao Paulo Carvalho, S Oliveira, Luísa Coheur, Paula Paulino, AM Veiga Simão, and Isabel Trancoso. 2019. Automatic cyberbullying detection:
A systematic review. *Computers in Human Behavior*,
93:333–345.
Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Marcos Zampieri, and Preslav Nakov. 2021. SOLID:
A large-scale semi-supervised dataset for offensive language identification. In *Findings of the ACL*.
Shrey Satapara, Prasenjit Majumder, Thomas Mandl, Sandip Modha, Hiren Madhu, Tharindu Ranasinghe, Marcos Zampieri, Kai North, and Damith Premasiri.
2022. Overview of the HASOC subtrack at FIRE
2022: Hate speech and offensive content identification in English and Indo-Aryan languages. In *Proceedings of FIRE*.
Gudbjartur Ingi Sigurbergsson and Leon Derczynski.
2020. Offensive language and hate speech detection for Danish. In *Proceedings of LREC*.
Bertie Vidgen and Leon Derczynski. 2020. Directions in abusive language training data, a systematic review: Garbage in, garbage out. *PLOS One*,
15(12):e0243300.
Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar.
2019a. Predicting the type and target of offensive posts in social media. In *Proceedings of NAACL*.
Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar.
2019b. SemEval-2019 task 6: Identifying and categorizing offensive language in social media (OffensEval). In *Proceedings of SemEval*.
Marcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, Zeses Pitenis, and Çağrı Çöltekin.
2020. SemEval-2020 task 12: Multilingual offensive language identification in social media (OffensEval 2020). In *Proceedings of SemEval*.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 6
✓ A2. Did you discuss any potential risks of your work?
Section 6.3
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract, Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2.2, Section 3
✓ B1. Did you cite the creators of artifacts you used?
Section 2.2
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section 6.4
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 6.4
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
The data comes from SOLID where the authors replace all twitter handles with @USER. No identifying information is present. See Section 5.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Information regarding the data can be found in the original dataset, SOLID, our dataset is derived from. All content is in English.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 2.2 and Tables 2 and 3
## C ✓ **Did You Run Computational Experiments?** Section 3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 3 and supplemental material
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 3 and Section 4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 3

## D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Section 2.2.
✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
We had internal documents outlining the annotation tasks. All participants were familiar with the annotation guidelines as they were working in the project. We have not relied on external annotators for this task.
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Section 2.2. Please note that participants were not paid for the annotation because they were collaborators in the project.
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Section 2.2. We have collected data from SOLID which is a dataset that adheres to the Twitter guidelines and the data is anonymized.
✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
The data comes from SOLID, we did not collect new data.
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Section 2.2. |
ponce-etal-2023-unsupervised | Unsupervised Subtitle Segmentation with Masked Language Models | https://aclanthology.org/2023.acl-short.67 | We describe a novel unsupervised approach to subtitle segmentation, based on pretrained masked language models, where line endings and subtitle breaks are predicted according to the likelihood of punctuation to occur at candidate segmentation points. Our approach obtained competitive results in terms of segmentation accuracy across metrics, while also fully preserving the original text and complying with length constraints. Although supervised models trained on in-domain data and with access to source audio information can provide better segmentation accuracy, our approach is highly portable across languages and domains and may constitute a robust off-the-shelf solution for subtitle segmentation. | # Unsupervised Subtitle Segmentation With Masked Language Models
David Ponce∗1,2 and **Thierry Etchegoyhen**∗1 and **Victor Ruiz**1
1 Vicomtech Foundation, Basque Research and Technology Alliance (BRTA)
2 University of the Basque Country UPV/EHU
{adponce,tetchegoyhen,vruiz}@vicomtech.org
## Abstract
We describe a novel unsupervised approach to subtitle segmentation, based on pretrained masked language models, where line endings and subtitle breaks are predicted according to the likelihood of punctuation to occur at candidate segmentation points. Our approach obtained competitive results in terms of segmentation accuracy across metrics, while also fully preserving the original text and complying with length constraints. Although supervised models trained on in-domain data and with access to source audio information can provide better segmentation accuracy, our approach is highly portable across languages and domains and may constitute a robust off-the-shelf solution for subtitle segmentation.
## 1 Introduction
Subtitling is one of the principal means of providing accessible audiovisual content. With the ever-increasing production of audiovisual content across domains and languages in the current digital era, subtitle provision can benefit from automation support via Automatic Speech Recognition and/or Machine Translation (Volk et al., 2010; Aliprandi et al., 2014; Etchegoyhen et al., 2014; Tardel, 2020; Bojar et al., 2021).
Subtitles are subject to specific constraints in order to achieve adequate readability, including layout, on-screen duration and text editing. Among these constraints, segmentation addresses the maximum number of characters per line, the number of lines per subtitle, and breaks at natural linguistic frontiers. Segmentation has been shown to be an important readability factor (Perego et al., 2010; Rajendran et al., 2013), with improperly segmented subtitles resulting in increased cognitive effort and reading times for users. Thus, automated subtitling systems need to generate properly segmented subtitles to achieve readability.
*These authors contributed equally to this work.
A typical baseline for subtitle segmentation, still used in some production systems, is simple character counting, whereby line breaks are inserted before reaching the maximum allowed number of characters per line. Although simple and fast, this approach does not address the need for linguistically correct segments and, therefore, falls short in terms of readability. Several approaches have been proposed to improve segmentation by automated means. Álvarez et al. (2014) proposed a machine learning method where subtitle breaks are predicted by Support Vector Machine and Linear Regression models trained on professionally-created subtitles.
A similar method based on Conditional Random Fields was then shown to improve over these results
(Alvarez et al., 2017). Approaches that directly generate subtitle breaks within Neural Machine Translation have also been proposed in recent years
(Matusov et al., 2019; Karakanta et al., 2020a). Recently, Papi et al. (2022) developed a multilingual segmenter which generates both text and breaks and may be trained on textual input only, or on joint text and audio data.
Although quality subtitle segmentation may be achieved with the aforementioned approaches, they require supervised training on segmented subtitle corpora. At present, the largest subtitle corpus is Open Subtitles (Lison et al., 2018), which mainly covers entertainment material, contains subtitles mostly created by non-professionals or automatically translated, and does not include line breaks. The MuST-Cinema corpus (Karakanta et al., 2020b), on the other hand, is a multilingual speech translation corpus that includes subtitle breaks, but is only available for 8 languages at the moment. Considering the vast number of languages and domains in audiovisual content, the lack of segmented training data hinders the development of robust automated subtitling systems.
In this work, we describe a novel unsupervised method based on pretrained masked language models (MLM), where line and subtitle breaks are inserted according to the likelihood of a segment acting as an isolated unit, as approximated by the probability of a punctuation mark occurring at a given segmentation point. In our experiments, this novel approach obtained competitive results on most metrics, while also fully preserving the original text and complying with length constraints. Our system may thus be used as a simple yet efficient subtitle segmenter with any pretrained masked language model, for any language covered by the model.
## 2 Approach
Our approach is based on the standard view that the more appropriate subtitle segments are those that may function as isolated grammatical chunks.
We further hypothesise that a relevant approximation for the identification of this type of unit is the likelihood of a punctuation mark being inserted at the end of a candidate segment, as punctuation may mark the closure of a syntactic unit and is often associated with discursive pauses. To test this hypothesis, we compute the likelihood of punctuation marks at different segmentation points, as predicted by a pretrained MLM, and select the insertion point with the highest likelihood.1 The segmentation candidates are determined under a sliding-window approach over the entire input text. We first generate the list of all pairs <α, β>
over the unprocessed portion of the text, where α is a segmentation candidate of length under a specified limit K, corresponding to the maximum number of characters per line, and β is the remaining portion of the text to be segmented.
We then score all segmentation candidates α with one of the LM scoring variants described below. A segmentation marker, either end-of-line
(<eol>), or end-of-block indicating the end of a subtitle (<eob>), is then appended to the best scoring candidate, and β becomes the input text to be segmented in a recursive iteration of the process.
Since our method does not rely on any additional information, such as an audio source, to determine the segmentation type, an <eob> tag is inserted every even segment or when β is empty; otherwise, an <eol> tag is inserted. We thus generate subtitles with a maximum of two lines, following a standard recommendation in subtitling. We also define a minimal number of characters (min) in α for the segmentation process to apply, and do not segment lines that are under the specified character limit.

1 Throughout our experiments, we used the following punctuation marks: '.', ',', '?', '!', ':' and ';'.
We evaluated three approaches to compute segmentation scores over each candidate pair <α, β>:
- **Substitution:** The last token of α is masked and the score is the highest MLM probability among punctuation marks on this mask.
- **Insertion:** A mask is appended to α and the score is the highest MLM probability among punctuation marks on this mask.
- **LM-Score:** The score is the average of the perplexity of α and β, as derived from the MLM probabilities for each token in the corresponding sequence.
The first two methods are variants of our core approach. The third method, while also based on the same pretrained MLM, relies instead on the pseudo-perplexity of the sequences according to the MLM,
computed following Salazar et al. (2020). We included this latter variant to measure the potential of using LM scoring directly, without resorting to the likelihood of punctuation marks.
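A minimal sketch of the overall procedure, using the Insertion scoring variant, is given below. The BERT checkpoint, the inclusion of β as right-hand context around the mask, and the fallback for overlong tokens are our own assumptions for illustration; this is not the authors' released implementation.

```python
# Minimal sketch of the sliding-window segmenter with the Insertion variant.
# The checkpoint, the use of beta as right context for the mask, and the
# fallback for overlong tokens are illustrative assumptions.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

PUNCT = [".", ",", "?", "!", ":", ";"]
MODEL = "bert-base-multilingual-cased"              # assumption
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL).eval()
punct_ids = tokenizer.convert_tokens_to_ids(PUNCT)

def insertion_score(alpha: str, beta: str, overt_clueing: bool = False) -> float:
    """Highest MLM probability of a punctuation mark on a mask appended to alpha."""
    text = f"{alpha} {tokenizer.mask_token} {beta}".strip()
    enc = tokenizer(text, return_tensors="pt", truncation=True)
    mask_pos = (enc.input_ids[0] == tokenizer.mask_token_id).nonzero()[0].item()
    with torch.no_grad():
        probs = model(**enc).logits[0, mask_pos].softmax(-1)
    score = max(probs[i].item() for i in punct_ids)
    if overt_clueing and alpha and alpha[-1] in PUNCT:  # OC variant (Section 3)
        score += 1.0
    return score

def segment(text: str, max_len: int = 42, min_len: int = 15) -> str:
    """Greedy recursive segmentation into <eol>/<eob>-tagged subtitle lines."""
    words, out, seg_idx = text.split(), [], 0
    while words:
        seg_idx += 1
        remainder = " ".join(words)
        if len(remainder) <= max_len:       # under the limit: do not segment
            out.append(remainder + " <eob>")
            break
        # Candidate pairs <alpha, beta> with min_len <= len(alpha) <= max_len.
        cands = []
        for i in range(1, len(words)):
            alpha = " ".join(words[:i])
            if len(alpha) > max_len:
                break
            if len(alpha) >= min_len:
                cands.append((alpha, " ".join(words[i:]), i))
        if not cands:                       # e.g. a single token over max_len
            cands = [(words[0], " ".join(words[1:]), 1)]
        alpha, beta, i = max(cands, key=lambda c: insertion_score(c[0], c[1]))
        # <eob> on every even segment or when beta is empty; <eol> otherwise.
        tag = "<eob>" if (seg_idx % 2 == 0 or not beta) else "<eol>"
        out.append(f"{alpha} {tag}")
        words = words[i:]
    return " ".join(out)

print(segment("Every row of data is a life whose story deserves to be told with dignity."))
```

The Substitution variant differs only in masking the last token of α instead of appending a new mask, and the LM-Score variant replaces the punctuation probability with the average pseudo-perplexity of α and β.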
## 3 Experimental Setup
Corpora. For all experiments, we used the MuST-Cinema corpus (Karakanta et al., 2020b), which is derived from TED talks and contains both line and subtitle break markers. In addition to being publicly available, it also allows for a direct comparison with the supervised models of Papi et al. (2022). We report results of our approach on the 6 MuST-Cinema datasets for which comparative results were available, directly predicting segmentation on the test sets without any training.2

Methods. For our approach, we tested the three variants described in Section 2. We used BERT (Devlin et al., 2019) as our MLM for all languages.3 Additionally, we included a variant called overt clueing (OC), where an overt punctuation mark at the end of a candidate segment increments the mask score by 1. We then compared the results of the best LM-based variant with those obtained by alternative approaches. In all cases, our results were computed with min = 15, as this value obtained the best results overall over the development sets, although the differences were minor with the other values we tested (1, 10 and 20).4
| Method | Sigma (EN) | EOL (EN) | EOB (EN) | Sigma (ES) | EOL (ES) | EOB (ES) | Sigma (DE) | EOL (DE) | EOB (DE) |
|--------------|------------|----------|----------|------------|----------|----------|------------|----------|----------|
| Substitution | 71.65 | +19.86 | -10.96 | 69.34 | +12.36 | -5.74 | 69.31 | +19.05 | -7.05 |
| Insertion | 76.77 | +19.18 | -9.91 | 73.47 | +12.98 | -4.91 | 70.85 | +18.53 | -7.96 |
| LM-Score | 69.97 | +21.40 | -8.66 | 67.70 | +13.29 | -5.37 | 64.07 | +16.45 | -6.51 |

Table 1: Results of the three segmentation variants (Sigma, EOL and EOB coverage) on the English, Spanish and German test sets.
We used the simple character counting approach
(hereafter, *CountChars*) as baseline, and, as representative supervised methods on the selected datasets, the models described by (Papi et al., 2022).
Their core supervised approach is based on a Transformer (Vaswani et al., 2017) architecture with 3 encoder layers and 3 decoder layers, trained on textual MuST-Cinema input only (*MC.Text*), or on complementary audio data as well via an additional speech encoder with 12 encoder layers
(*MC.Multi*). They trained each variant on either monolingual data alone (*mono*), or in a multilingual setting (*multi*). Finally, they also report results for a variant (*OS.Text*) trained on the Open Subtitles corpus (Lison et al., 2018) for their zero-shot experiments.
Evaluation. We use the subtitle-oriented metric Sigma (Karakanta et al., 2022), which computes the ratio of achieved BLEU (Papineni et al., 2002) over an approximated upper-bound BLEU score, on text that includes line and subtitle breaks. Sigma is meant to support the evaluation of imperfect texts, i.e. text that differs from the reference when breaks are omitted. Although our approach does not produce imperfect text, achieving perfect BLEU scores when breaks are ignored, we used this metric for comparison purposes. We also report break coverage results (Papi et al., 2022), defined as the ratio of predicted breaks over reference breaks, which we computed separately for the EOL and EOB
breaks. Finally, we include length conformity results (CPL), measured as the percentage of subtitle lines whose length is under the maximum number of characters defined by the subtitle guidelines (42 in the TED guidelines5).
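The coverage and CPL figures can be computed directly from tagged outputs. The sketch below reflects our reading of these definitions (signed deviation of predicted from reference break counts, and share of lines within the 42-character limit); it is not the official evaluation tooling used for Sigma.

```python
# Illustrative computation of break coverage and CPL conformity from texts
# tagged with <eol>/<eob>. This follows our reading of the definitions above
# and is not the official evaluation tooling.
import re

def break_coverage(pred: str, ref: str, tag: str) -> float:
    """Signed deviation (%) of predicted break count from the reference count."""
    n_pred, n_ref = pred.count(tag), ref.count(tag)
    return 100.0 * (n_pred - n_ref) / max(n_ref, 1)   # >0: over-generation

def cpl_conformity(pred: str, max_len: int = 42) -> float:
    """Share (%) of subtitle lines whose length respects the character limit."""
    lines = [l.strip() for l in re.split(r"<eol>|<eob>", pred) if l.strip()]
    return 100.0 * sum(len(l) <= max_len for l in lines) / max(len(lines), 1)

pred = "They're things you access <eol> through your computer. <eob>"
ref = "They're things you access through your <eol> computer. <eob>"
print(break_coverage(pred, ref, "<eol>"), cpl_conformity(pred))
```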
## 4 Lm-Based Segmentation Variants
We first compared the three methods described in Section 2 on the English, Spanish and German datasets, with the results described in Table 1. In terms of Sigma, the Insertion method obtained the best results in all cases. It also obtained the best scores in terms of coverage for the EOL marker, except in Spanish, although all three variants tend to overgenerate end-of-line markers to similar extents.
The LM-Score variant obtained the worst results in terms of Sigma, but outperformed the alternatives in terms of EOB coverage, a metric on which the three variants performed markedly better than on EOL coverage. Considering the overall results, we selected the Insertion variant as the most balanced one for all remaining experiments reported below.
## 5 Comparative Results
In Table 2, we present the results obtained by the selected approaches on the languages for which results were available with supervised models trained on in-domain data. Overall, our approach outperformed the *CountChars* baseline across the board, and was in turn outperformed by the supervised variants in terms of Sigma scores. Although it is clear from these results that training segmentation models on in-domain data, with or without audio data, provides clear advantages in terms of subtitle segmentation, it is worth noting that Sigma does not, by design, reflect the actual BLEU score without breaks, i.e. the generation of imperfect text, which is a by-product of the above supervised approaches and non-existent in ours.6In terms of CPL, all supervised models generate subtitle lines that overflow the limit, to a significant degree, whereas the selected unsupervised models trivially respect the length constraint.
6The results indicated in Table 3 on unseen data seem to indicate that their *MC.Multi* model can reach BLEU scores close to 100, thereby limiting the negative impact of imperfect text generation in these cases.
| Method | Training | Sigma (EN) | CPL (EN) | Sigma (FR) | CPL (FR) | Sigma (DE) | CPL (DE) | Sigma (IT) | CPL (IT) |
|------------|----------|------------|----------|------------|----------|------------|----------|------------|----------|
| CountChars | N/A | 63.71 | 100% | 62.87 | 100% | 62.34 | 100% | 61.49 | 100% |
| MC.Text | mono | 84.87 | 96.6% | 83.68 | 96.7% | 83.62 | 90.9% | 82.22 | 90.0% |
| MC.Text | multi | 85.98 | 88.5% | 84.56 | 94.3% | 84.02 | 90.9% | 83.04 | 91.2% |
| MC.Multi | mono | 85.76 | 94.8% | 84.25 | 93.9% | 84.22 | 91.4% | 82.62 | 89.9% |
| MC.Multi | multi | 87.44 | 95.0% | 86.49 | 94.1% | 86.40 | 89.9% | 85.33 | 90.0% |
| MLM | N/A | 76.77 | 100% | 73.78 | 100% | 70.85 | 100% | 71.38 | 100% |
| MLM+OC | N/A | 77.89 | 100% | 76.07 | 100% | 75.63 | 100% | 74.20 | 100% |
Table 2: Comparative results between unsupervised methods and supervised approaches trained on in-domain data
Dutch

| Method | BLEU | Sigma | CPL | EOL | EOB |
|------------|------|-------|-------|-------|-------|
| CountChars | 100 | 63.2 | 100% | -21.2 | -7.1 |
| OS.Text | 89.5 | 64.4 | 71.2% | -31.4 | -51.3 |
| MC.Text | 61.3 | 74.4 | 77.8% | -23.4 | -9.9 |
| MC.Multi | 99.9 | 80.3 | 91.4% | -27.2 | 0.4 |
| MLM | 100 | 68.7 | 100% | +20.4 | -10.0 |
| MLM+OC | 100 | 73.9 | 100% | +21.2 | -10.0 |

Spanish

| Method | BLEU | Sigma | CPL | EOL | EOB |
|------------|------|-------|-------|-------|-------|
| CountChars | 100 | 63.2 | 100% | -24.6 | -4.4 |
| OS.Text | 92.6 | 64.1 | 71.2% | -32.3 | -45.4 |
| MC.Text | 69.6 | 75.8 | 70.1% | -47.6 | -19.3 |
| MC.Multi | 99.6 | 78.7 | 91.8% | -22.4 | 4.7 |
| MLM | 100 | 73.5 | 100% | +13.0 | -4.9 |
| MLM+OC | 100 | 75.6 | 100% | +13.4 | -4.6 |

Table 3: Comparative results between unsupervised methods and supervised approaches in zero-shot settings (Dutch and Spanish).
In Table 3, we show the comparative results between the selected unsupervised methods and the supervised variants, in languages where zero-shot results were available for the latter approaches. In this scenario, in terms of Sigma our approach obtained results on a par with the supervised *MC.Text* models trained on in-domain MuST-Cinema data, outperformed the *OS.Text* models trained on Open Subtitles data, and was surpassed by the *MC.Multi* model, which exploits additional audio information, by 3.1 and 6.4 points. In terms of break coverage, in most cases our unsupervised method outperformed the supervised variants, to a significant degree compared to the text-based *OS.Text* and *MC.Text* models. Regarding BLEU scores without breaks, only the *MC.Multi* model reaches a score close to the perfect one achieved by the unsupervised models, whereas the *MC.Text* model is outperformed by 38.7 and 31.4 points in Dutch and Spanish, respectively. In all cases, the CPL scores indicate that none of the supervised approaches fully meet the length constraint, leading to overflowing lines in 8.2% of the cases at best and 29.9% at worst. In this scenario as well, the unsupervised approaches fully meet the length constraint, by design.
Overall, overt clueing improved over our core method by an average of 3.12 Sigma points, indicating that some likely punctuation configurations were not properly captured by our MLM approximation. In general, our approach tends to overgenerate EOL markers, whereas the opposite is true for the selected supervised models. Determining which of these tendencies leads to better subtitle readability would require a specific human evaluation which we leave for future research.
Although the zero-shot Sigma results obtained by the supervised *MC.Multi* method show the potential of this approach to provide pretrained models applicable to other languages, two important aspects are worth considering. First, the available zero-shot results were obtained on datasets in the same domain as the data seen to train the supervised models. A more complete assessment of the capabilities of these models in zero-shot settings, which would be the most frequent scenario considering the lack of training data across domains and languages, would require specific evaluations in other domains. Secondly, although segmentation is a key aspect for subtitle readability, length conformity is an equally important constraint, if not more so considering that subtitles with lines over the CPL limit are considered invalid in subtitling. Our proposed unsupervised method can thus be seen as a pragmatic approach which guarantees valid subtitles while also providing quality segmentation across the board.7
## 6 Conclusions
We described an unsupervised approach to subtitle segmentation, based on pretrained masked language models, where line or subtitle breaks are inserted according to the likelihood of punctuation occurring at candidate segmentation points.
Although supervised models, trained on in-domain data with audio support, were shown to perform better than this simple textual approach in terms of the Sigma metric, they tend to generate imperfect text to varying degrees, while also failing to fully meet length constraints that are essential for subtitling.
In contrast, our LM-based textual approach outperformed supervised models in most cases in terms of break generation coverage, while also fully preserving the original text, complying with length constraints, and obtaining competitive results in terms of Sigma. This simple approach may thus provide a highly portable complementary solution for subtitle segmentation across languages and domains.
## 7 Limitations
The first clear limitation of our approach is its text-based nature. This prevents important audio information, typically silences in speech patterns, from being exploited to generate subtitle breaks. A more complete system could be devised though, for instance by associating our text-based approach with the information provided by a forced alignment toolkit, whenever audio information is available.
A simple method along these lines could be the following: 1. Apply our MLM-based segmentation but only generating a unique segmentation tag SEG; 2. Insert EOB markers wherever the silence between two aligned words is above a specified threshold; 3. Traverse the text sequentially and replace SEG with EOL if there exists a previous marker of type EOB, otherwise replace with EOB. We left this use of our method in combination with audio information for future research, as audio alignment for subtitles typically involves additional factors such as non-literal transcriptions.

7 Examples of segmented subtitles can be found in Appendix A.
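A rough sketch of this combination is given below. The silence threshold, the forced-alignment input format, and our reading of step 3 (resolving each SEG tag with respect to the most recent preceding marker) are assumptions for illustration.

```python
# Rough sketch of the audio-aided variant outlined above. The 0.8 s silence
# threshold, the (token, start, end) alignment format, and the interpretation
# of step 3 as checking the most recent marker are assumptions.
from typing import List, Set, Tuple

def combine_with_audio(aligned: List[Tuple[str, float, float]],
                       seg_after: Set[int],
                       silence_thr: float = 0.8) -> str:
    """aligned: (token, start_sec, end_sec) from a forced aligner;
    seg_after: token indices after which the MLM segmenter placed a SEG tag."""
    tokens = []
    for i, (tok, _, end) in enumerate(aligned):
        tokens.append(tok)
        gap = aligned[i + 1][1] - end if i + 1 < len(aligned) else 0.0
        if gap >= silence_thr:       # step 2: a long pause closes a subtitle
            tokens.append("<eob>")
        elif i in seg_after:         # step 1: generic text-based candidate
            tokens.append("SEG")
    out, prev = [], None             # step 3: resolve SEG tags sequentially
    for tok in tokens:
        if tok == "SEG":
            tok = "<eol>" if prev == "<eob>" else "<eob>"
        if tok in ("<eol>", "<eob>"):
            prev = tok
        out.append(tok)
    return " ".join(out)
```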
Additionally, our method is limited in its adaptability to specific segmentation guidelines, which may be company-specific. The main adaptable parameters of our methods are the minimum and maximum parameters of the segmentation window, and the set of predefined punctuation marks over which masking is computed, neither of which could fully model idiosyncratic segmentation guidelines. However, in our experience at least, segmentation in real professional data tends to display varying degrees of consistency with respect to guidelines, and natural linguistic breaks seem to be the dominant factor for subtitle segmentation. A specific evaluation would be needed on data from varied professional datasets to determine the extent to which our method might deviate from specific guidelines.
Finally, other aspects of subtitling, such as the recommendation in some guidelines for subtitles to appear in a pyramidal view, i.e. with the first line shorter than the second line, have not been taken into consideration in this work. Our aim was to evaluate our core LM-based approach without additional variables that can vary across guidelines and may also have led to results that are more difficult to interpret overall. Our approach could nonetheless be easily augmented with constraints on relative line lengths within subtitles, by incrementing the scores of segmentation candidates that respect this surface-level constraint.
## 8 Ethical Considerations
Our approach involves the use of large pretrained language models, whose computational performance is typically higher when deployed in more powerful environments with GPUs. Under such usage, electric consumption and associated carbon footprint are likely to increase and users of our method under these conditions should be aware of this type of impact. However, subtitle segmentation is often performed offline, where efficient processing is less of a concern, and lower-cost CPU deployments are an entirely viable option. All our results were obtained with a single large LM deployed on CPU, with the aim of reducing energy consumption at inference time.
Additionally, our method requires no training for the task at hand and thus removes the cost of model training associated with the supervised methods with which we compare our results. For instance, Papi et al. (2022) indicate that they use four K80 GPUs to train their models, which we took as comparison points, with 1 day of training for their text-only models and 1 week for their multimodal segmenters. Therefore, given the large number of potential language pairs and domains in need of segmented subtitle content, our approach can provide competitive results with a comparatively lesser impact on energy resource consumption.
## Acknowledgements
We thank the anonymous reviewers for their helpful comments. This work was partially supported by the Department of Economic Development and Competitiveness of the Basque Government (Spri Group) through funding for the StreAmS project
(ZL-2021/00700).
## References
Carlo Aliprandi, Cristina Scudellari, Isabella Gallucci, Nicola Piccinini, Matteo Raffaelli, Arantza del Pozo, Aitor Álvarez, Haritz Arzelus, Renato Cassaca, Tiago Luis, et al. 2014. Automatic live subtitling: state of the art, expectations and current trends. In *Proceedings of NAB Broadcast Engineering Conference:*
Papers on Advanced Media Technologies, Las Vegas, volume 13.
Aitor Álvarez, Haritz Arzelus, and Thierry Etchegoyhen.
2014. Towards customized automatic segmentation of subtitles. In Advances in Speech and Language Technologies for Iberian Languages, pages 229–238.
Springer.
Aitor Alvarez, Carlos-D Martínez-Hinarejos, Haritz Arzelus, Marina Balenciaga, and Arantza del Pozo.
2017. Improving the automatic segmentation of subtitles through conditional random field. *Speech Communication*, 88:83–95.
Ondřej Bojar, Dominik Macháček, Sangeet Sagar,
Otakar Smrž, Jonáš Kratochvíl, Peter Polák, Ebrahim Ansari, Mohammad Mahmoudi, Rishu Kumar, Dario Franceschini, Chiara Canton, Ivan Simonini, ThaiSon Nguyen, Felix Schneider, Sebastian Stüker, Alex Waibel, Barry Haddow, Rico Sennrich, and Philip Williams. 2021. ELITR multilingual live subtitling:
Demo and strategy. In Proceedings of the 16th Conference of the European Chapter of the Association
for Computational Linguistics: System Demonstrations, pages 271–277, Online. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Thierry Etchegoyhen, Lindsay Bywood, Mark Fishel, Panayota Georgakopoulou, Jie Jiang, Gerard van Loenhout, Arantza del Pozo, Mirjam Sepesy Maučec,
Anja Turner, and Martin Volk. 2014. Machine translation for subtitling: A large-scale evaluation. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14),
pages 46–53, Reykjavik, Iceland. European Language Resources Association (ELRA).
Alina Karakanta, François Buet, Mauro Cettolo, and François Yvon. 2022. Evaluating subtitle segmentation for end-to-end generation systems. In *Proceedings of the Thirteenth Language Resources and* Evaluation Conference, pages 3069–3078, Marseille, France. European Language Resources Association.
Alina Karakanta, Matteo Negri, and Marco Turchi.
2020a. Is 42 the answer to everything in subtitlingoriented speech translation? In *Proceedings of the* 17th International Conference on Spoken Language Translation, pages 209–219, Online. Association for Computational Linguistics.
Alina Karakanta, Matteo Negri, and Marco Turchi.
2020b. MuST-cinema: a speech-to-subtitles corpus.
In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 3727–3734, Marseille, France. European Language Resources Association.
Pierre Lison, Jörg Tiedemann, and Milen Kouylekov.
2018. OpenSubtitles2018: Statistical rescoring of sentence alignments in large, noisy parallel corpora.
In *Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC*
2018), Miyazaki, Japan. European Language Resources Association (ELRA).
Evgeny Matusov, Patrick Wilken, and Yota Georgakopoulou. 2019. Customizing neural machine translation for subtitling. In *Proceedings of the* Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 82–93, Florence, Italy.
Association for Computational Linguistics.
Sara Papi, Alina Karakanta, Matteo Negri, and Marco Turchi. 2022. Dodging the data bottleneck: Automatic subtitling with automatically segmented ST
corpora. In *Proceedings of the 2nd Conference of the* Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 480–487, Online only.
Association for Computational Linguistics.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Elisa Perego, Fabio Del Missier, Marco Porta, and Mauro Mosconi. 2010. The cognitive effectiveness of subtitle processing. *Media psychology*, 13(3):243– 272.
Dhevi J Rajendran, Andrew T Duchowski, Pilar Orero, Juan Martínez, and Pablo Romero-Fresco. 2013. Effects of text chunking on subtitling: A quantitative and qualitative examination. *Perspectives*, 21(1):5–
21.
Julian Salazar, Davis Liang, Toan Q. Nguyen, and Katrin Kirchhoff. 2020. Masked language model scoring. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 2699–2712, Online. Association for Computational Linguistics.
Anke Tardel. 2020. Effort in semi-automatized subtitling processes: speech recognition and experience during transcription. *Journal of Audiovisual Translation*, 3(2):79–102.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30.
Martin Volk, Rico Sennrich, Christian Hardmeier, and Frida Tidström. 2010. Machine translation of TV subtitles for large scale production. In Proceedings of the Second Joint EM+/CNGL Workshop: Bringing MT to the User: Research on Integrating MT in the Translation Industry, pages 53–62, Denver, Colorado, USA. Association for Machine Translation in the Americas.
## A Segmentation Examples
Table 4 provides examples of subtitles in the MuST-Cinema test sets segmented with either the character counting baseline or our LM-based approach, in its insertion variant without resorting to overt punctuation clueing.
In these examples, the MLM approach generates end-of-line and end-of-subtitle breaks that are overall in line with natural linguistic breaks, contrary to the character counting baseline. As such, on either short, medium or longer input, the readability of the generated subtitles is significantly enhanced with our approach.
## B Extended Results
The results presented in Section 5 were limited to the subset of languages and metrics for which published comparative results were available on the MuST-Cinema datasets. In Table 5, we present the complete list of results obtained with our method, for all languages and metrics. The selected variant of our method is the insertion masking approach, which was selected for the main results in our paper, with a segmentation window starting at 15 characters and ending at 42. We do not include BLEU
scores computed over text that includes segmentation breaks, as the results are identical to those obtained with the Sigma metric for our approach, which does not generate imperfect text.
Across languages, the results are relatively uniform, with the best Sigma scores obtained in English and the lowest in Dutch, for a difference of 4.1 points between the two languages. In terms of break coverage, the best results were obtained for Spanish and the worst for Romanian, although results were also relatively uniform across languages.
In all cases, overt clueing, where overt punctuation marks raised the LM score by 1, improved Sigma scores, although it had less of an impact on break coverage results, where both variants performed similarly overall.
## C Results With Different Min **Parameters**
As noted in Section 3, considering preliminary results over the development set we selected a default value of 15 for the min parameter, which indicates the number of characters after which the segmentation process applies. In Table 6, we present comparative results on the test sets with different min values. In terms of Sigma, values of 15 and 20 led to rather similar results; values of 1 and 10 resulted in slightly lower results, with the lowest results achieved with the former.
In terms of <eol> and <eob> coverage, the former increases with larger min values, which is expected given the more restricted space to insert these end-of-line markers as the value increases; for <eob>, the restricted insertion space results in increased under-generation, which in turn results in better scores for lower values of the min parameter.
| CountChars | MLM |
|------------------------------------------------|---------------------------------------------------------------------|
| They're things you access through your <eol> | They're things you access <eol> |
| computer. <eob> | through your computer. <eob> |
| Every row of data is a life whose story <eol> | Every row of data is a life <eol> |
| deserves to be told with dignity. <eob> | whose story deserves to be told <eob> with dignity. <eob> |
| During the winter, struggling to get <eol> | During the winter, struggling to get warm, <eol> |
| warm, my neighbors would have no choice <eob> | my neighbors would have no choice <eob> |
| but to bypass the meter after their heat <eol> | but to bypass the meter <eol> |
| was shut off, just to keep their family <eob> | after their heat was shut off, <eob> |
| comfortable for one more day. <eob> | just to keep their family comfortable <eol> for one more day. <eob> |
Table 4: Examples of subtitles segmented via character counting and MLM-based mask insertion
| Language | Method | BLEU | Sigma | EOL | EOB | CPL |
|----------|--------|------|-------|-------|--------|------|
| DE | MLM | 100 | 70.85 | 18.53 | -7.96 | 100% |
| DE | MLM+OC | 100 | 75.63 | 19.81 | -7.78 | 100% |
| EN | MLM | 100 | 76.77 | 19.18 | -9.91 | 100% |
| EN | MLM+OC | 100 | 77.89 | 19.86 | -9.73 | 100% |
| ES | MLM | 100 | 73.47 | 12.98 | -4.91 | 100% |
| ES | MLM+OC | 100 | 75.59 | 13.45 | -4.63 | 100% |
| FR | MLM | 100 | 73.78 | 16.51 | -6.58 | 100% |
| FR | MLM+OC | 100 | 76.07 | 17.47 | -6.12 | 100% |
| IT | MLM | 100 | 71.38 | 18.49 | -9.55 | 100% |
| IT | MLM+OC | 100 | 74.20 | 20.34 | -8.57 | 100% |
| NL | MLM | 100 | 68.71 | 20.37 | -9.96 | 100% |
| NL | MLM+OC | 100 | 73.88 | 21.22 | -9.96 | 100% |
| PT | MLM | 100 | 71.59 | 20.03 | -10.81 | 100% |
| PT | MLM+OC | 100 | 75.50 | 19.87 | -10.02 | 100% |
| RO | MLM | 100 | 69.45 | 23.37 | -10.44 | 100% |
| RO | MLM+OC | 100 | 74.13 | 23.37 | -10.09 | 100% |
Table 5: Complete results with MLM mask insertion on the MuST-Cinema test sets (min=15)
| Language | min | BLEU | Sigma | EOL | EOB |
|----------|-----|------|-------|-------|--------|
| DE | 1 | 100 | 72.31 | 28.75 | -0.18 |
| DE | 10 | 100 | 73.96 | 22.68 | -4.43 |
| DE | 15 | 100 | 75.63 | 19.81 | -7.78 |
| DE | 20 | 100 | 75.28 | 14.54 | -11.21 |
| EN | 1 | 100 | 74.30 | 37.33 | -0.98 |
| EN | 10 | 100 | 77.14 | 24.49 | -7.77 |
| EN | 15 | 100 | 77.89 | 19.86 | -9.73 |
| EN | 20 | 100 | 77.16 | 15.24 | -12.68 |
| ES | 1 | 100 | 73.00 | 20.87 | 0.28 |
| ES | 10 | 100 | 74.32 | 18.24 | -2.04 |
| ES | 15 | 100 | 75.59 | 13.45 | -4.63 |
| ES | 20 | 100 | 75.83 | 8.66 | -7.87 |
| FR | 1 | 100 | 73.89 | 24.68 | -0.73 |
| FR | 10 | 100 | 75.26 | 20.83 | -3.93 |
| FR | 15 | 100 | 76.07 | 17.47 | -6.12 |
| FR | 20 | 100 | 76.75 | 12.5 | -10.05 |
| IT | 1 | 100 | 72.01 | 29.75 | -3.66 |
| IT | 10 | 100 | 73.75 | 24.71 | -6.61 |
| IT | 15 | 100 | 74.20 | 20.34 | -8.57 |
| IT | 20 | 100 | 73.66 | 14.62 | -11.61 |
| NL | 1 | 100 | 72.16 | 26.83 | -5.47 |
| NL | 10 | 100 | 73.56 | 23.26 | -8.47 |
| NL | 15 | 100 | 73.88 | 21.22 | -9.96 |
| NL | 20 | 100 | 74.40 | 16.81 | -12.43 |
| PT | 1 | 100 | 72.87 | 26.38 | -6.24 |
| PT | 10 | 100 | 74.53 | 22.15 | -8.08 |
| PT | 15 | 100 | 75.50 | 19.87 | -10.02 |
| PT | 20 | 100 | 74.98 | 14.17 | -13.36 |
| RO | 1 | 100 | 72.05 | 32.3 | -4.51 |
| RO | 10 | 100 | 73.76 | 26.98 | -7.52 |
| RO | 15 | 100 | 74.13 | 23.37 | -10.09 |
| RO | 20 | 100 | 74.89 | 17.53 | -12.83 |

Table 6: Results with different values of the min parameter on the MuST-Cinema test sets.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
7
✗ A2. Did you discuss any potential risks of your work?
8
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Not applicable. Left blank.
B1. Did you cite the creators of artifacts you used?
Not applicable. Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Not applicable. Left blank.
## C ✓ **Did You Run Computational Experiments?** 3
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
We didn't train any models for this paper, and inference was performed on CPU.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
3
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
3
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
yadav-etal-2023-exploring | Exploring Continual Learning for Code Generation Models | https://aclanthology.org/2023.acl-short.68 | Large-scale code generation models such as Copilot and CodeT5 have achieved impressive performance. However, libraries are upgraded or deprecated very frequently and re-training large-scale language models is computationally expensive. Therefore, Continual Learning (CL) is an important aspect that remains under-explored in the code domain. In this paper, we introduce a benchmark called CodeTask-CL that covers a wide range of tasks, including code generation, translation, summarization, and refinement, with different input and output programming languages. Next, on our CodeTask-CL benchmark, we compare popular CL techniques from NLP and Vision domains. We find that effective methods like Prompt Pooling (PP) suffer from catastrophic forgetting due to the unstable training of the prompt selection mechanism caused by stark distribution shifts in coding tasks. We address this issue with our proposed method, Prompt Pooling with Teacher Forcing (PP-TF), that stabilizes training by enforcing constraints on the prompt selection mechanism and leads to a 21.54{\%} improvement over Prompt Pooling. Along with the benchmark, we establish a training pipeline that can be used for CL on code models, which we believe can motivate further development of CL methods for code models. |
## Exploring Continual Learning For Code Generation Models
Prateek Yadav1∗
, Qing Sun2 †
, Hantian Ding2, Xiaopeng Li2**, Dejiao Zhang**2, Ming Tan2, Xiaofei Ma2, Parminder Bhatia2**, Ramesh Nallapati**2, Murali Krishna Ramanathan2, Mohit Bansal1,3**, Bing Xiang**2 University of North Carolina, Chapel Hill1, AWS AI Labs2, Amazon Alexa AI3
{praty, mbansal}@cs.unc.edu
{qinsun, dhantian, xiaopel, dejiaoz, mingtan, xiaofeim, parmib, rnallapa, mkraman, mobansal, bxiang}@amazon.com
## Abstract
Large-scale code generation models such as Codex and CodeT5 have achieved impressive performance. However, libraries are upgraded or deprecated very frequently and re-training large-scale language models is computationally expensive. Therefore, Continual Learning
(CL) is an important aspect that remains underexplored in the code domain. In this paper, we introduce a benchmark called CODETASKCL that covers a wide range of tasks, including code generation, translation, summarization, and refinement, with different input and output programming languages. Next, on our CODETASK-CL benchmark, we compare popular CL techniques from NLP and Vision domains. We find that effective methods like Prompt Pooling (PP) suffer from catastrophic forgetting due to the unstable training of the prompt selection mechanism caused by stark distribution shifts in coding tasks. We address this issue with our proposed method, Prompt Pooling with Teacher Forcing (PP-TF),
that stabilizes training by enforcing constraints on the prompt selection mechanism and leads to a 21.54% improvement over Prompt Pooling. Along with the benchmark, we establish a training pipeline that can be used for CL on code models, which we believe can motivate further development of CL methods for code models. Our code is available at https://github.com/amazon-science/codetaskcl-pptf.
## 1 Introduction
Code generation models (Nijkamp et al., 2022b; Wang et al., 2021b; Le et al., 2022; Fried et al.,
2022) can increase the productivity of programmers by reducing their cognitive load. These models require significant computation to train as they have billions of parameters trained on terabytes of data. Hence, they are trained once and are
∗Work conducted during an internship at Amazon †Corresponding author [email protected]
then used repeatedly for several downstream applications. However, as software development constantly evolves with new packages, languages, and techniques (Ivers and Ozkaya, 2020), it is expensive to retrain these models. Therefore, it is essential to continually improve these models to avoid errors, generate optimized code, and adapt to new domains and applications.
We explore continual learning (CL) (Ring, 1998; Thrun, 1998) abilities of code-generation models and aim to improve them. Specifically, we present a CODETASK-CL benchmark for code-based CL and aim to train a model on sequentially presented tasks with different data distributions without suffering from catastrophic forgetting (CF) (McCloskey and Cohen, 1989). This occurs when the model overfits the current task, resulting in a decline in performance on previously learned tasks.
Given the lack of CL benchmarks for the code domain, we create a benchmark called CODETASK-CL using existing datasets. It consists of tasks like code completion (Iyer et al., 2018, 2019; Clement et al., 2020), code translation (Chen et al., 2018; Lachaux et al., 2020), code summarization (Wang et al., 2020a,b), and code refinement (Tufano et al.,
2019). This benchmark presents a new and challenging scenario as it necessitates the adaptation of the model to varying input and output programming languages. Along with this benchmark, we also present a training framework to easily apply CL methods to code generation models.
Next, we evaluate the effectiveness of popular CL methods from NLP and Vision domains in the context of code generation models. We consider prompting methods (Wang et al., 2022b; Li and Liang, 2021a) and experience-replay (De Lange et al., 2019) due to their good performance for pre-trained models (Wu et al., 2022a). We also experiment with Prompt Pooling (PP) (Wang et al.,
2022c), an effective prompting-based method for CL in the vision domain. Our results show that Prompt Pooling suffers from catastrophic forgetting on our proposed CODETASK-CL benchmark because of the complex distribution shift from varying input and output programming languages across tasks. With further investigation, we find that the unconstrained prompt selection mechanism leads to unstable training. To address this, we propose our method *Prompt Pooling with Teacher Forcing* (PP-TF), which imposes constraints on prompt selection by assigning certain prompts to fixed tasks during training (see Figure 1). This results in stable training and better performance. Interestingly, we find that when a replay buffer is available, the simple experience-replay (De Lange et al., 2019) method outperforms other CL methods and achieves performance similar to a multitask baseline (Crawshaw, 2020) where all tasks are provided at once.
In summary, our contributions include: (1) being the first study on CL for code generation tasks, (2) establishing a benchmark and a novel pipeline that supports CL for code generation to motivate future work, (3) identifying and addressing the unstable training issue of Prompt Pooling through our proposed method PP-TF, and (4) discussion on the best CL methods to use in different use cases.
## 2 Related Work
Code Generation Models. Code generation and language modeling for source code is an emerging research field experiencing active growth. Several model architectures have been examined recently, including encoder-only models (Feng et al., 2020; Guo et al., 2020), encoder-decoder models (Ahmad et al., 2021; Wang et al., 2021b), and decoder-only models (Nijkamp et al., 2022b; Chen et al., 2021; Nijkamp et al., 2022a). However, none of these models have been studied in the context of continual learning.
Continual Learning. There are various methods for Continual Learning (CL) and they fall into three categories: Regularization, *Replay*, and *parameter isolation* methods. **Regularization methods** (Kirkpatrick et al., 2017; Zenke et al., 2017; Schwarz et al., 2018) assign importance to model components and add regularization terms to the loss function. **Replay methods** (De Lange et al.,
2019; Rebuffi et al., 2017; Lopez-Paz and Ranzato, 2017; Chaudhry et al., 2018) retain a small memory buffer of data samples and retrain them later to avoid catastrophic forgetting (CF). **Parameter isolation methods**, such as prompting-based methods
(Wang et al., 2022b,a; Li and Liang, 2021a; Liu et al., 2021; Qin and Eisner, 2021), introduce or isolate network parameters for different tasks. For a more comprehensive overview of all CL methods, we refer the reader to Delange et al. (2021);
Biesialska et al. (2020).
To the best of our knowledge, there are currently no studies or benchmarks for CL on code generation models. Therefore, we evaluate the effectiveness of prompting (Wang et al., 2022b; Li and Liang, 2021a) and experience replay (Chaudhry et al., 2018; Buzzega et al., 2020) based methods, which have demonstrated strong performance in CL on large pretrained models (Raffel et al., 2020).
We do not consider regularization methods as they are not effective in continually learning large-scale pretrained models (Wu et al., 2022b). Next, we discuss our proposed benchmark and methods.
## 3 CODETASK-CL Benchmark
We present the CODETASK-CL benchmark to assess the CL abilities of code generation models. We also provide a novel training pipeline that can be used to continually train and evaluate code generation models. All of the datasets used to create the CODETASK-CL benchmark are available under the MIT license and more details on the dataset splits and input-output domains are in Table 2.
## 3.1 Coding Tasks
Code Generation aims to generate a code snippet from a natural language description. We use the CONCODE dataset (Iyer et al., 2018), which is a collection of tuples that consist of natural language descriptions, code environments, and code snippets, obtained from approximately 33,000 Java projects on GitHub. The objective of the task is to generate class member functions utilizing the natural language descriptions and the class environment.
Code Summarization aims to generate a summary for a piece of code. We use the CodeSearchNet dataset (Husain et al., 2019), which consists of six programming languages (Python, Java, JavaScript, PHP, Ruby, and Go). The data for this task consists of the first paragraph of each documentation.
Code translation refers to the transformation of a program written in a particular programming language into another language while maintaining its functionality. We use the Java → C\# dataset compiled by Lu et al. (2021) that provides pairs of code that perform the same tasks.
Code Refinement aims to improve the code by fixing bugs within the code automatically. We use the dataset provided by Tufano et al. (2019) consisting of pairs of faulty and corrected Java functions.
## 3.2 Evaluation
Next, we define the metrics used to evaluate a model continually on these datasets. We follow Lu et al. (2021) and evaluate each task using BLEU (Papineni et al., 2002), and we follow Chaudhry et al. (2018) to continually evaluate the model's performance. We measure the *average BLEU* after learning all the tasks as

$$\langle\text{BLEU}\rangle=\frac{1}{N}\sum_{k=1}^{N}b_{N,k},$$

where $N$ is the total number of tasks and $b_{i,j}$ represents the BLEU score on task $j$ after learning task $i$. Additionally, we report the average forgetting metric, denoted by $\langle\text{Forget}\rangle$, to assess the model's ability to retain performance on previously learned tasks. This metric is calculated as the average difference between the maximum performance obtained for each task $t$ and its final performance, given by

$$\langle\text{Forget}\rangle=\frac{1}{N-1}\sum_{t=1}^{N-1}\Big(\max_{k\in\{1,\dots,N-1\}}b_{k,t}-b_{N,t}\Big).$$
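To make the bookkeeping concrete, the following minimal sketch computes both metrics from a matrix of per-task BLEU scores; the matrix layout and the toy numbers are illustrative assumptions, not values from our experiments.

```python
import numpy as np

def average_bleu(b):
    """<BLEU>: mean BLEU over all tasks after the last task is learned.
    b[i][j] = BLEU on task j after training on task i (row i = training stage i)."""
    b = np.asarray(b, dtype=float)
    N = b.shape[0]
    return b[N - 1].mean()

def average_forgetting(b):
    """<Forget>: mean drop from the best BLEU ever reached on each earlier task
    to its BLEU after the final task."""
    b = np.asarray(b, dtype=float)
    N = b.shape[0]
    drops = [b[: N - 1, t].max() - b[N - 1, t] for t in range(N - 1)]
    return float(np.mean(drops))

# Toy example with N = 3 tasks: rows are training stages, columns are tasks.
b = [[50.0, 0.0, 0.0],
     [40.0, 60.0, 0.0],
     [35.0, 55.0, 70.0]]
print(average_bleu(b))        # (35 + 55 + 70) / 3
print(average_forgetting(b))  # ((50 - 35) + (60 - 55)) / 2
```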
## 4 Prompt Pooling With Teacher Forcing
Prompt Pooling (Wang et al., 2022c) is a highly effective technique that possesses two key benefits.
Firstly, the number of prompts required does not increase linearly with the number of tasks. Secondly, the prompts within the pool can be utilized across multiple tasks, thereby enabling the reuse of previously acquired knowledge. These abilities are advantageous in real-world scenarios, particularly when a model needs to be continually adjusted to accommodate a large number of users/tasks.
In Prompt Pooling (PP), a set of learnable prompts $P = \{P_i\}_{i=1}^{M}$ is defined and shared by multiple tasks. We follow Wang et al. (2022c) and utilize a query and key-matching process to select the prompts for each task. This process has four steps: (1) a learnable key, represented as $k_i \in \mathbb{R}^d$, is defined for each prompt, resulting in a prompt pool of the form $\{(k_i, P_i)\}_{i=1}^{M}$; (2) a query function $q(x)$ is defined, which takes an input $x$ from a given task and produces a query vector $q_x \in \mathbb{R}^d$; (3) the top-$k$ keys are selected based on the cosine similarity between the query $q_x$ and all the key vectors $\{k_i\}_{i=1}^{M}$; (4) we obtain the final input vector $x_p$ by pre-pending the example $x$ with the prompts corresponding to the selected keys. Then $x_p$ is fed into the pre-trained model $f$ and we minimize the following loss function to *only* optimize the selected prompts and the corresponding keys while keeping the pre-trained model fixed.
$$\mathcal{L}=\mathcal{L}_{LM}(x_{p},y)+\lambda\sum_{k_{s_{i}}\in K_{s}}sim(q(x),k_{s_{i}})\tag{1}$$

where $\mathcal{L}_{LM}$ is the language modeling loss, $y$ is the target sequence given the input $x$, $K_s$ is the set of selected keys from Step (3) above.
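For concreteness, a small PyTorch-style sketch of steps (1)-(4) is given below. The pool size, key dimension, prompt length, and top-k value are illustrative assumptions, and the sign of the key-matching term is chosen so that minimizing the loss pulls the selected keys toward the query, matching the description of the M-Step that follows.

```python
import torch
import torch.nn.functional as F

M, d, top_k = 500, 512, 100                             # pool size, key dim, prompts per example (illustrative)
keys = torch.nn.Parameter(torch.randn(M, d))            # learnable keys k_i
prompts = torch.nn.Parameter(torch.randn(M, 10, 768))   # learnable prompts P_i (10 soft tokens each)

def select_prompts(q_x):
    """Rank all keys by cosine similarity to the query q_x, take the top-k,
    and return the selected prompts plus the key-matching loss term."""
    sim = F.cosine_similarity(q_x.unsqueeze(0), keys, dim=-1)   # (M,)
    top = sim.topk(top_k).indices                               # indices of K_s
    selected = prompts[top]                                     # prompts prepended to x
    key_loss = -sim[top].sum()                                  # pulls selected keys toward q_x
    return selected, key_loss

selected, key_loss = select_prompts(torch.randn(d))
# total loss = LM loss on the prompt-prepended input + lambda * key_loss;
# gradients only reach the selected prompts and keys while the pre-trained model stays frozen.
```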
The query-key mechanism described above is an Expectation-Maximization (EM) (Moon, 1996)
procedure. Given an example, we first select the top-k keys based on the cosine similarity (E-Step)
and then train these selected keys to pull them closer to the query (M-Step). The training is stable when all tasks are jointly learned. However, in the CL context, tasks are sequentially trained which makes training unstable. Hence, we propose Prompt Pooling with Teacher Forcing (PP-TF) that removes the E-Step by assigning each $(k_i, P_i)$ pair to fixed tasks and only performs the M-Step of optimizing the keys. To encourage knowledge sharing, we allow a few $(k_i, P_i)$ pairs to be shared across tasks (see Figure 1). With these assignments/constraints in place, when training on task $t$, we use teacher forcing to select top-$k$ prompts that are assigned to the task. Thus, for learning task $t$, our loss function becomes,
$$\mathcal{L}=\mathcal{L}_{LM}(x_{p},y)+\lambda\sum_{k_{s_{i}}\in K_{s}\cap K_{t}}sim(q(x),k_{s_{i}})\tag{2}$$
where $K_t$ denotes the prompts assigned to task $t$ for teacher forcing. As training progresses, the queries and keys learn to align in a stable manner, while also allowing for information sharing among tasks through the shared prompts.
| Method (↓) | Replay [5k] | Code Gen. | Code Trans. | Code Summ. | Code Ref. | <BLEU_Test> | <BLEU_Val> | <Forget_Val> |
|---|---|---|---|---|---|---|---|---|
| Sequential FT | ✗ | 6.42 | 2.76 | 3.13 | 77.75 | 22.52 | 22.44 | 39.64 |
| MTL | ✗ | 32.24 | 74.87 | 14.69 | 79.23 | 50.26 | 49.25 | - |
| Individual FT | ✗ | 38.61 | 83.34 | 14.32 | 77.73 | 53.50 | 52.68 | - |
| Shared Prompts | ✗ | 0.63 | 6.75 | 0.37 | 78.5 | 21.56 | 21.71 | 30.33 |
| Shared Prompts + ER | ✓ | 13.82 | 45.87 | 14.36 | 78.64 | 38.17 | 36.93 | 8.46 |
| Task Specific Prompts | ✗ | 22.93 | 65.37 | 14.57 | 78.81 | 45.42 | 44.56 | 0.00 |
| Prompt Pooling (PP) | ✗ | 2.41 | 7.47 | 2.62 | 78.67 | 22.79 | 23.10 | 27.43 |
| Prompt Pooling (PP) + ER | ✓ | 16.33 | 50.96 | 13.13 | 78.71 | 39.78 | 38.47 | 6.41 |
| PP + Teacher Forcing | ✗ | 24.28 | 59.37 | 14.15 | 79.50 | 44.33 | 43.10 | 1.68 |
| CodeT5 + ER | ✓ | 32.92 | 77.94 | 11.74 | 78.43 | 50.26 | 49.03 | 2.22 |

Table 1: BLEU scores on the test set for the individual tasks and average BLEU (↑) and Forgetting (↓) metrics after sequentially learning Code Generation → Code Translation → Code Summarization → Code Refinement tasks.
During inference, we discard the assignment of (key, prompt) pairs and use cosine similarity to select the top-$k$ pairs across the whole pool.
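A minimal sketch of this teacher-forced selection is shown below; the task-to-index assignment, pool size, and other constants are illustrative assumptions rather than the configuration used in our experiments.

```python
import torch
import torch.nn.functional as F

M, d, top_k = 500, 512, 100
keys = torch.nn.Parameter(torch.randn(M, d))
prompts = torch.nn.Parameter(torch.randn(M, 10, 768))

# Hypothetical assignment of (key, prompt) indices to tasks; the last 20 are shared.
task_keys = {
    "codegen":   list(range(0, 120))   + list(range(480, 500)),
    "codetrans": list(range(120, 240)) + list(range(480, 500)),
}

def select_prompts_pptf(q_x, task=None):
    """Training (task given): only keys assigned to the task compete for the
    top-k slots (teacher forcing). Inference (task=None): search the whole pool."""
    sim = F.cosine_similarity(q_x.unsqueeze(0), keys, dim=-1)       # (M,)
    if task is not None:
        cand = torch.tensor(task_keys[task])
        top = cand[sim[cand].topk(min(top_k, len(cand))).indices]   # K_s restricted to K_t
    else:
        top = sim.topk(top_k).indices
    return prompts[top], -sim[top].sum()   # prompts to prepend, key-alignment term

train_prompts, _ = select_prompts_pptf(torch.randn(d), task="codegen")
infer_prompts, _ = select_prompts_pptf(torch.randn(d))
```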
## 5 Experiments
We focus on the scenario of known task identities for continual learning. This is commonly the case in code-related domains and task identities can also be determined through input and output analysis in certain situations. In the field of NLP and Vision, methods utilizing experience replay and prompting have been highly effective for CL on large pretrained models (Wang et al., 2022c, 2021a; Wu et al., 2022a). Moreover, regularization methods are shown to not work well in conjunction with pre-trained models (Wu et al., 2022a), and hence, we skip them from our study. Next, we present these methods along with some baseline methods.
## 5.1 Baselines
Sequential Finetuning (Yogatama et al., 2019)
updates all model parameters for every incoming task in a sequential manner. This approach has been shown to suffer from catastrophic forgetting and serves as a lower bound for CL methods.
Individual Models (Howard and Ruder, 2018) finetune a separate model for each new task. This is considered an upper bound for CL methods.
Multitask Learning (Crawshaw, 2020) simultaneously learns multiple tasks at once, without experiencing distribution shift, resulting in a strong performance. For multitask learning, we prepend the task descriptors to the input and follow Wang et al. (2021b) to ensure balanced sampling across tasks with varying dataset sizes.
Shared Prompt Tuning (SP) defines M soft continuous prompts (Li and Liang, 2021b) which are added and fine-tuned for each example from all tasks. They are trained via gradient descent while keeping the pretrained model's parameters fixed.
Task Specific Prompt Tuning (TSPT) defines a total of $M$ soft continuous prompts (Li and Liang, 2021b) that are divided across $N$ tasks, resulting in $\lfloor M/N \rfloor$ task-specific prompts.
Experience Replay (ER) (Riemer et al., 2019)
involves maintaining a memory buffer B of examples from the previous task. The buffer randomly stores an equal number of samples from each past task and is used to retrain the model at later stages. Moreover, as several of the other methods outlined in this study can benefit from ER, we also include results with and without the utilization of ER.
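A minimal sketch of this kind of balanced replay buffer is given below; the interface is an illustrative assumption, and the 5k capacity simply mirrors the Replay [5k] setting reported in Table 1.

```python
import random

class ReplayBuffer:
    """Fixed-capacity buffer that keeps an (approximately) equal number of
    examples per past task and mixes them into the current task's batches."""
    def __init__(self, capacity=5000):
        self.capacity = capacity
        self.per_task = {}                       # task name -> list of examples

    def add_task(self, task, examples):
        self.per_task[task] = list(examples)
        quota = self.capacity // len(self.per_task)
        for t in self.per_task:                  # re-balance: equal share per seen task
            if len(self.per_task[t]) > quota:
                self.per_task[t] = random.sample(self.per_task[t], quota)

    def sample(self, n):
        pool = [ex for exs in self.per_task.values() for ex in exs]
        return random.sample(pool, min(n, len(pool)))

# Usage while training the next task: interleave replayed examples with current batches.
# buffer = ReplayBuffer(); buffer.add_task("codegen", codegen_train)
# replay_batch = buffer.sample(32)
```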
## 5.2 Main Results
## 5.2.1 Task-CL Experiments
We use the CodeT5 model (Wang et al., 2021b) as our pre-trained model when learning the CODETASK-CL benchmark. In Table 1, we report results for a single run of the methods described above and their ER variants. For more implementation details and the hyperparameters used, please refer to Appendix A.1. First, we find that the popular Prompt Pooling demonstrates catastrophic forgetting, with a test BLEU score of 22.79%. Even when using ER with PP, the performance is 39.78%, which is still much worse than other methods. In contrast, PP-TF, even without ER, outperforms PP and PP + ER by 21.54% and 4.55%, respectively. Moreover, our results show that the *CodeT5 + ER* method, which finetunes the full CodeT5 model with ER, performs best, with an average test BLEU score of 49.21%. Please refer to Appendix A.3 for experiments on the effect of buffer size on performance.
Discussion: We find that task-specific prompts are more effective than other prompting-based CL methods. However, due to their high storage requirements, which scale linearly with the number of tasks, this approach is not feasible for large-scale applications where the model needs to be adapted for a large number of users or tasks. In contrast, in many situations a memory buffer might not be available due to privacy concerns (Yoon et al., 2021). In such cases, *PP-TF* is the recommended method.
Given these findings, we believe that the current Prompt Pooling based methods can be further improved in order to reuse knowledge across tasks.
## 5.2.2 Training Instability Of Prompt Pooling
To show the root of catastrophic forgetting in prompt pooling, we evaluate how queries and keys align in the representation space after learning each task. To do so, we first select a subset of 5k training samples from each of the four tasks, resulting in 20k examples.
We utilize a fixed CodeT5 encoder as our query function to encode the provided examples and obtain queries. These queries remain unchanged during training, and the keys are initialized using the data.
We then use principal component analysis (PCA)
(Pearson, 1901) on the queries and keys to obtain the first three principal components and plot them.
After learning each task, we repeat the PCA step on the fixed queries and the updated prompt keys.
From Figure 2, we observe that before training starts, the keys (represented by red crosses) are evenly distributed among the queries of the different tasks. However, after completing training on the first task (CodeGen), most of the keys move toward the queries associated with CodeGen (denoted by orange stars). This indicates that the prompts corresponding to these keys were primarily used for the CodeGen task and were trained by it. As a large portion of the prompts from the pool are utilized during the training of the CodeGen task, there are no key vectors available for allocation to the second task (CodeTrans). As a result, when learning CodeTrans, some keys used for the previous task are pulled toward CodeTrans's queries and the corresponding prompts are updated. As each subsequent task is introduced, the key vectors are dynamically adjusted to align with the current task's queries, leading to an unstable matching process in which updates to the key-prompt pairs frequently conflict with the previous tasks. This leads to catastrophic forgetting of the previous tasks.
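This diagnostic only requires projecting the fixed queries and the current keys after each task; the sketch below shows the projection step, with array shapes chosen purely for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

def project_queries_and_keys(queries, keys, n_components=3):
    """Fit PCA on the fixed queries and the current prompt keys together and
    return their low-dimensional coordinates for plotting (one call per task)."""
    X = np.vstack([queries, keys])          # (n_queries + n_keys, d)
    coords = PCA(n_components=n_components).fit_transform(X)
    return coords[: len(queries)], coords[len(queries):]

# queries: encoder outputs for a fixed probe set; keys: the current pool keys.
queries = np.random.randn(2000, 512)        # illustrative shapes
keys = np.random.randn(500, 512)
q3d, k3d = project_queries_and_keys(queries, keys)
```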
## 6 Conclusion
In conclusion, we have introduced a novel benchmark, CODETASK-CL, tailored to cover a broad spectrum of tasks in the code domain, aiming to fuel advancements in Continual Learning (CL) for large-scale code generation models. Our study underscores the shortfalls of popular CL methods like Prompt Pooling when applied to coding tasks, predominantly due to catastrophic forgetting. However, we demonstrate that our proposed method, Prompt Pooling with Teacher Forcing (PP-TF), can effectively mitigate this issue, leading to a significant improvement of 21.54% over the baseline. Furthermore, we establish a comprehensive training pipeline catering to CL on code models. We believe that our contributions, both in the form of the CODETASK-CL benchmark and the PP-TF
method, will ignite further exploration and innovation in CL techniques specifically designed for the dynamic and evolving realm of code generation.
## Limitations
This work primarily focuses on evaluating the efficacy of existing continual learning (CL) methods for code generation models. It is important to note that many of these methods were specifically designed for natural language processing or computer vision domains and may not directly transfer to the code generation domain. Nevertheless, we have made efforts to identify and address any issues encountered during our analysis. It should be acknowledged, however, that the scope of our work is limited by the selection of methods and the benchmark used. While we have utilized the most popular CL methods from various categories, there may be methods that have not been included in this study due to their inefficacy in natural language processing or computer vision tasks but may be effective in code generation. As such, we encourage further research within the community to explore the potential of CL methods for code-generation models.
## Acknowledgment
We thank Amazon for the Amazon Post-Internship Fellowship award that supported Prateek during this work. We also thank all the reviewers for their feedback on the paper.
## References
Wasi Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2021. Unified pre-training for program understanding and generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 2655–2668, Online. Association for Computational Linguistics.
Magdalena Biesialska, Katarzyna Biesialska, and Marta R. Costa-jussà. 2020. Continual lifelong learning in natural language processing: A survey. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6523–6541, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Pietro Buzzega, Matteo Boschini, Angelo Porrello, Davide Abati, and Simone Calderara. 2020. Dark experience for general continual learning: a strong, simple baseline. In *Advances in Neural Information Processing Systems*, volume 33, pages 15920–15930. Curran Associates, Inc.
Arslan Chaudhry, Marc'Aurelio Ranzato, Marcus Rohrbach, and Mohamed Elhoseiny. 2018. Efficient lifelong learning with a-gem. arXiv preprint arXiv:1812.00420.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374.
Xinyun Chen, Chang Liu, and Dawn Song. 2018. Treeto-tree neural networks for program translation. In Advances in neural information processing systems, pages 2547–2557.
Colin B Clement, Dawn Drain, Jonathan Timcheck, Alexey Svyatkovskiy, and Neel Sundaresan. 2020.
Pymt5: multi-mode translation of natural language and python code with transformers. arXiv preprint arXiv:2010.03150.
Michael Crawshaw. 2020. Multi-task learning with deep neural networks: A survey. *ArXiv*,
abs/2009.09796.
Matthias De Lange, Rahaf Aljundi, Marc Masana, Sarah Parisot, Xu Jia, Ales Leonardis, Gregory Slabaugh, and Tinne Tuytelaars. 2019. Continual learning: A
comparative study on how to defy forgetting in classification tasks. *arXiv preprint arXiv:1909.08383*,
2(6).
M. Delange, R. Aljundi, M. Masana, S. Parisot, X. Jia, A. Leonardis, G. Slabaugh, and T. Tuytelaars. 2021.
A continual learning survey: Defying forgetting in classification tasks. *IEEE Transactions on Pattern* Analysis and Machine Intelligence, pages 1–1.
Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, et al. 2020. Codebert: A
pre-trained model for programming and natural languages. *arXiv preprint arXiv:2002.08155*.
Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, and Mike Lewis. 2022. Incoder:
A generative model for code infilling and synthesis.
arXiv preprint arXiv:2204.05999.
Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Jian Yin, Daxin Jiang, et al. 2020. Graphcodebert: Pre-training code representations with data flow. *arXiv preprint* arXiv:2009.08366.
Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification.
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328–339.
Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. 2019. Codesearchnet challenge: Evaluating the state of semantic code search. *arXiv preprint arXiv:1909.09436*.
James Ivers and Ipek Ozkaya. 2020. Untangling the knot: Enabling rapid software evolution. Technical report, CARNEGIE-MELLON UNIV PITTSBURGH PA.
Srinivasan Iyer, Alvin Cheung, and Luke Zettlemoyer.
2019. Learning programmatic idioms for scalable semantic parsing. *arXiv preprint arXiv:1904.09086*.
Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2018. Mapping language to code in programmatic context. *arXiv preprint* arXiv:1808.09588.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A
method for stochastic optimization. arXiv preprint arXiv:1412.6980.
James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell.
2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–3526.
Marie-Anne Lachaux, Baptiste Roziere, Lowik Chanussot, and Guillaume Lample. 2020. Unsupervised translation of programming languages. *arXiv* preprint arXiv:2006.03511.
Hung Le, Yue Wang, Akhilesh Deepak Gotmare, Silvio Savarese, and Steven CH Hoi. 2022. Coderl:
Mastering code generation through pretrained models and deep reinforcement learning. arXiv preprint arXiv:2207.01780.
Xiang Lisa Li and Percy Liang. 2021a. Prefix-tuning:
Optimizing continuous prompts for generation.
Xiang Lisa Li and Percy Liang. 2021b. Prefix-tuning:
Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582–
4597, Online. Association for Computational Linguistics.
Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021. Gpt understands, too. *arXiv:2103.10385*.
David Lopez-Paz and Marc'Aurelio Ranzato. 2017.
Gradient episodic memory for continual learning. In Advances in Neural Information Processing Systems, pages 6467–6476.
Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin Clement, Dawn Drain, Daxin Jiang, Duyu Tang, Ge Li, Lidong Zhou, Linjun Shou, Long Zhou, Michele Tufano, MING GONG, Ming Zhou, Nan Duan, Neel Sundaresan, Shao Kun Deng, Shengyu Fu, and Shujie LIU. 2021. CodeXGLUE: A machine learning benchmark dataset for code understanding and generation. In *Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks* Track (Round 1).
Michael McCloskey and Neal J Cohen. 1989. Catastrophic interference in connectionist networks: The sequential learning problem. In *Psychology of learning and motivation*, volume 24, pages 109–165. Elsevier.
T.K. Moon. 1996. The expectation-maximization algorithm. *IEEE Signal Processing Magazine*, 13(6):47–
60.
Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Haiquan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. 2022a. A conversational paradigm for program synthesis. *ArXiv*, abs/2203.13474.
Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. 2022b. Codegen: An open large language model for code with multi-turn program synthesis.
ArXiv preprint, abs/2203.13474.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311–318.
Karl Pearson. 1901. Liii. on lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 2(11):559–572.
Guanghui Qin and Jason Eisner. 2021. Learning how to ask: Querying LMs with mixtures of soft prompts.
In *Proceedings of the 2021 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5203–5212, Online. Association for Computational Linguistics.
Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *ArXiv*, abs/1910.10683.
Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H Lampert. 2017. icarl: Incremental classifier and representation learning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 2001–2010.
Matthew Riemer, Ignacio Cases, Robert Ajemian, Miao Liu, Irina Rish, Yuhai Tu, , and Gerald Tesauro. 2019.
Learning to learn without forgetting by maximizing transfer and minimizing interference. In *International Conference on Learning Representations*.
Mark B Ring. 1998. Child: A first step towards continual learning. In *Learning to learn*, pages 261–292.
Springer.
Jonathan Schwarz, Jelena Luketina, Wojciech M Czarnecki, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu, and Raia Hadsell. 2018.
Progress & compress: A scalable framework for continual learning. *arXiv preprint arXiv:1805.06370*.
Thomas Scialom, Tuhin Chakrabarty, and Smaranda Muresan. 2022. Continual-t0: Progressively instructing 50+ tasks to language models without forgetting.
arXiv preprint arXiv:2205.12393.
Sebastian Thrun. 1998. Lifelong learning algorithms.
In *Learning to learn*, pages 181–209. Springer.
Michele Tufano, Cody Watson, Gabriele Bavota, Massimiliano Di Penta, Martin White, and Denys Poshyvanyk. 2019. An empirical study on learning bugfixing patches in the wild via neural machine translation. ACM Transactions on Software Engineering and Methodology (TOSEM), 28(4):1–29.
Chengyu Wang, Jianing Wang, Minghui Qiu, Jun Huang, and Ming Gao. 2021a. Transprompt: Towards an automatic transferable prompting framework for few-shot text classification. In *Proceedings* of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2792–2802.
Wenhua Wang, Yuqun Zhang, Zhengran Zeng, and Guandong Xu. 2020a. Transˆ 3: A transformer-based framework for unifying code summarization and code search. *arXiv preprint arXiv:2003.03238*.
Yanlin Wang, Ensheng Shi, Lun Du, Xiaodi Yang, Yuxuan Hu, Shi Han, Hongyu Zhang, and Dongmei Zhang. 2020b. Cocosum: Contextual code summarization with multi-relational graph neural network.
arXiv preprint arXiv:2107.01933.
Yue Wang, Weishi Wang, Shafiq Joty, and Steven CH
Hoi. 2021b. Codet5: Identifier-aware unified pretrained encoder-decoder models for code understanding and generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8696–8708.
Zifeng Wang, Zizhao Zhang, Sayna Ebrahimi, Ruoxi Sun, Han Zhang, Chen-Yu Lee, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, et al. 2022a. Dualprompt: Complementary prompting for rehearsalfree continual learning. European Conference on Computer Vision.
Zifeng Wang, Zizhao Zhang, Chen-Yu Lee, Han Zhang, Ruoxi Sun, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, and Tomas Pfister. 2022b. Learning to prompt for continual learning. In *Proceedings of* the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 139–149.
Zifeng Wang, Zizhao Zhang, Chen-Yu Lee, Han Zhang, Ruoxi Sun, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, and Tomas Pfister. 2022c. Learning to prompt for continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 139–149.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-ofthe-art natural language processing. *arXiv preprint* arXiv:1910.03771.
Tongtong Wu, Massimo Caccia, Zhuang Li, Yuan-Fang Li, Guilin Qi, and Gholamreza Haffari. 2022a. Pretrained language model in continual learning: A comparative study. In *International Conference on Learning Representations*.
Tongtong Wu, Massimo Caccia, Zhuang Li, Yuan-Fang Li, Guilin Qi, and Gholamreza Haffari. 2022b. Pretrained language model in continual learning: A comparative study. In *International Conference on Learning Representations*.
Dani Yogatama, Cyprien de Masson d'Autume, Jerome Connor, Tomas Kocisky, Mike Chrzanowski, Lingpeng Kong, Angeliki Lazaridou, Wang Ling, Lei Yu, Chris Dyer, et al. 2019. Learning and evaluating general linguistic intelligence. arXiv preprint arXiv:1901.11373.
Jaehong Yoon, Wonyong Jeong, Giwoong Lee, Eunho Yang, and Sung Ju Hwang. 2021. Federated continual learning with weighted inter-client transfer. In *International Conference on Machine Learning*, pages 12073–12086. PMLR.
Friedemann Zenke, Ben Poole, and Surya Ganguli.
2017. Continual learning through synaptic intelligence. In *Proceedings of the 34th International* Conference on Machine Learning-Volume 70, pages 3987–3995. JMLR. org.
## A Appendix
## A.1 Implementation Details
In our experiments, we report the results of a single run. We used the *CodeT5-small* model (Wang et al.,
2021b) with 60M parameters from Huggingface
(Wolf et al., 2019), which is an encoder-decoder model pre-trained on CodeSearchNet (Husain et al.,
2019). We use a separate and fixed CodeT5 encoder model as the query function to encode the input examples for prompt pooling. For all prompting-related experiments, the CodeT5 model remains frozen and only the prompts are finetuned. In cases where we combine ER with prompting methods, ER is also applied while finetuning the prompts. Our prompt pool consisted of 500 prompts, with 100 prompts being selected to prepend to examples for each task. For the Shared Prompts method, we utilized 100 prompts that are used for all the tasks. For the Task-Specific Prompt method, we utilized a different set of 100 prompts for each task. Unless otherwise specified, we used a buffer size of 5000 examples for all methods employing ER. The Adam (Kingma and Ba, 2014) optimizer was utilized, along with early stopping. The hyperparameters for our experiments were taken from Wang et al. (2021b), and the tasks from the CODETASK-CL benchmark were learned in the random order specified in Table 1. The results of our experiments include the average validation and test BLEU scores, as well as the forgetting metric on the validation set. The implementation of BLEU was taken from the CodeT5 paper (Wang et al., 2021b). We ran experiments on a single A6000 GPU with 48 GB of memory, with a total computation of 14 GPU days.
| Scenario | Task | Dataset Name | Input | Output | Train | Validation | Test |
|---|---|---|---|---|---|---|---|
| Task-CL | Generation | CONCODE | English | Java | 100k | 2k | 2k |
| Task-CL | Translation | CodeTrans | Java | C# | 10k | 0.5k | 1k |
| Task-CL | Summarization | CodeSearchNet | Ruby | English | 25k | 1.4k | 1.2k |
| Task-CL | Refinement | BFP | Java | Java | 46k | 5.8k | 5.8k |

Table 2: Dataset statistics for the tasks used in the CODETASK-CL benchmark. We specify the input and output domains along with the split sizes for the train, validation, and test sets.
| Method (↓) | Buffer Size | Code Gen. | Code Trans. | Code Summ. | Code Ref. | <BLEU_Test> | <BLEU_Val> | <Forget_Val> |
|---|---|---|---|---|---|---|---|---|
| CodeT5 + ER | 100 | 24.11 | 61.87 | 10.72 | 77.82 | 43.63 | 41.25 | 14.18 |
| CodeT5 + ER | 500 | 29.39 | 57.56 | 11.33 | 78.70 | 44.25 | 40.1 | 11.42 |
| CodeT5 + ER | 1000 | 28.23 | 73.33 | 12.06 | 78.03 | 47.91 | 46.74 | 6.98 |
| CodeT5 + ER | 2000 | 31.10 | 75.52 | 11.85 | 77.58 | 49.01 | 47.59 | 5.99 |
| CodeT5 + ER | 5000 | 32.92 | 77.94 | 11.74 | 78.43 | 50.26 | 49.03 | 2.22 |
| MTL | - | 32.24 | 74.87 | 14.69 | 79.23 | 50.26 | 49.25 | - |
| Individual FT | - | 38.61 | 83.34 | 14.32 | 77.73 | 53.50 | 52.68 | - |
Table 3: Performance on each task as we vary the buffer size when sequentially learning Code Generation → Code Translation → Code Summarization → Code Refinement tasks.
## A.2 Data Statistics For Codetask-Cl Benchmark
Table 2 shows the train, validation, and test data sizes for all the tasks used in the CODETASK-CL
benchmark. We also present the input and output domains for each of the individual tasks. Given that the input and output domains for these tasks are starkly different, this benchmark is challenging, as the distribution shift is large. Please refer to Section 3 in the main paper for more details about the benchmark. All of the datasets used to create the CODETASK-CL benchmark are available under the MIT license.
## A.3 Impact Of Buffer Size On Er Performance.
If ER replay is possible, we find that *CodeT5 + ER*
is the most performant method. We go on to further assess the impact of buffer size on the performance.
In Table 3, we present the aggregated results for a total buffer size of 100, 500, 1000, 2000, and 5000.
Our findings suggest that there is an increase in performance as the buffer size increases. We observe that CodeT5 + ER with a small buffer size of 100 examples outperforms PP + ER (5k examples) by 3.85%. Moreover, CodeT5 + ER with a buffer size of 1000 outperforms the best method without ER. Our findings are in line with those of Scialom et al. (2022) and demonstrate that, whenever possible, we should use ER with pretrained models. However, in cases with no buffer and a large number of tasks, *PP + TF* is the best method to use.
## ACL 2023 Responsible NLP Checklist
## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 5
✗ A2. Did you discuss any potential risks of your work?
Our work introduces no additional risks on top of the risk associated with the underlying technologies.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2
✓ B1. Did you cite the creators of artifacts you used?
Section 2
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section2 and Appendix A2
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 2 and Appendix A2
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No the datasets used to create the benchmark are already anonymized and are not offensive as it is mostly code.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 2, Appendix A2, and Table 3
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix A2 and Table 3
## C ✓ **Did You Run Computational Experiments?** Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix A1
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4 and Appendix A1
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix A1 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
mirbostani-etal-2023-deep | Deep Active Learning for Morphophonological Processing | https://aclanthology.org/2023.acl-short.69 | Building a system for morphological processing is a challenging task in morphologically complex languages like Arabic. Although there are some deep learning based models that achieve successful results, these models rely on a large amount of annotated data. Building such datasets, specially for some of the lower-resource Arabic dialects, is very difficult, time-consuming, and expensive. In addition, some parts of the annotated data do not contain useful information for training machine learning models. Active learning strategies allow the learner algorithm to select the most informative samples for annotation. There has been little research that focuses on applying active learning for morphological inflection and morphophonological processing. In this paper, we have proposed a deep active learning method for this task. Our experiments on Egyptian Arabic show that with only about 30{\%} of annotated data, we achieve the same results as does the state-of-the-art model on the whole dataset. |
## Deep Active Learning For Morphophonological Processing
Seyed Morteza Mirbostani, Yasaman Boreshban Dpt. of Computer Engineering, University of Guilan, Rasht, Iran
{m.mirbostani, boreshban}@msc.guilan.ac.ir Salam Khalifa, Seyed Abolghasem Mirroshandel, and **Owen Rambow**
Dpt. of Linguistics and Institute for Advanced Computational Science (IACS)
Stony Brook University, Stony Brook, USA
{first.last}@stonybrook.edu
## Abstract
Building a system for morphological processing is a challenging task in morphologically complex languages like Arabic. Although there are some deep learning based models that achieve successful results, these models rely on a large amount of annotated data. Building such datasets, specially for some of the lower-resource Arabic dialects, is very difficult, time-consuming, and expensive. In addition, some parts of the annotated data do not contain useful information for training machine learning models. Active learning strategies allow the learner algorithm to select the most informative samples for annotation. There has been little research that focuses on applying active learning for morphological inflection and morphophonological processing. In this paper, we have proposed a deep active learning method for this task. Our experiments on Egyptian Arabic show that with only about 30% of annotated data, we achieve the same results as does the state-of-the-art model on the whole dataset.
## 1 Introduction
Recently, there has been much interest in morphological (re-)inflection processing
(Narasimhan et al., 2015; Kirov and Cotterell, 2018; Belth et al., 2021). Having an acceptable model for morphological processing will help improve the performance of different natural language processing (NLP) tasks like speech synthesis
(Halabi, 2016), morphological disambiguation
(Khalifa et al., 2020; Inoue et al., 2021), and machine translation (Sennrich and Haddow, 2016; Erdmann et al., 2019; Alhafni et al., 2020). Despite recent progress in this field of study, there are many challenges for low-resource languages.
The need for annotated data is especially acute when utilizing successful but data-hungry deep learning (DL) models. However, data annotation is a hard, expensive, and time-consuming task. In addition, much annotated data does not contain useful information for improving the quality of a learning algorithm. In this paper, we propose a deep active learning (DAL) algorithm for morphophonological processing that is able to decrease the need for annotated data by using only informative samples.
In our experiments, we have chosen Arabic, a morphologically rich language. In addition, many Arabic dialects are very low-resource, and the results from this study can help in building the required datasets in a smarter way. Among Arabic dialects, Cairene Egyptian Arabic has been selected because it is well studied, has many resources, and is appropriate for our DAL
simulation experiments. It should be noted that the proposed method is not specific to Arabic and it can be utilized on other languages or dialects.
As our baseline, we have chosen a very successful transformer model for character-level transduction tasks (Wu et al., 2021). We propose a pool-based DAL method in this study. To find the most uncertain (informative) samples, we combine an entropy strategy with a clustering method to keep an acceptable balance between the uncertainty and diversity of the chosen samples. The results of our experiments on the selected dataset show the success of the proposed DAL method.
## 2 Previous Work
In this section, we give a brief review of morphological inflection processing methods.
Some recent DAL methods will also be reviewed.
There are several approaches for applying DL models for the morphological inflection problem (Yang et al., 2022; Wehrli et al.,
2022; Batsuren et al., 2022; Wu et al., 2021; Dankers et al., 2021), which achieve successful results on different languages. Most of these models use character-level neural transducers based on transformers, data augmentation, and recurrent neural networks (RNNs).
In Arabic, there is also much non-neural research on morphological modeling, including finite state technology (Habash and Rambow, 2006), precompiled tabular morphological analyzers
(Buckwalter, 2002, 2004; Graff et al., 2009; Taji et al., 2018), and allomorphy modeling through linguistically descriptive rules (Habash et al.,
2022).
In recent years, DAL has been used in some sub-fields of NLP. Zhang et al. (2017) and Ru et al. (2020) achieved satisfactory results in text classification using active learning (AL) with convolutional neural networks and adversarial uncertainty sampling, respectively. Different acquisition functions using the conditional random field model have been applied in named entity recognition (Shen et al., 2017; Prabhu et al.,
2019; Liu et al., 2020). In neural machine translation, Peris and Casacuberta (2018) used an attention-based function, and Liu et al. (2018)
applied a reinforcement learning method. In another study, Zhao et al. (2020) proposed a word frequency-based acquisition function to train neural machine translation actively. More recently, Muradoglu and Hulden (2022) introduced a DAL
method for lemma inflection in written modern standard Arabic and some other languages. Their method uses entropy at the word level, while we find that the max of character-level entropy for a word performs best.
## 3 Background And Problem Definition
Morphophonology involves studying the relationship between morphology and phonology.
The goal is to analyze data in order to discover the underlying forms and ordered rules that explain the observed data (Hayes, 2008). Arabic morphophonology is particularly interesting due to its complex templatic and concatenative morphology. Morphophonological changes can affect the stem pattern as well as the stem and word boundaries. In addition, phonological alterations can be triggered by the addition of morphemes in concatenative morphology. The main problem addressed in this paper is morphophonological generation, in which an underlying representation (UR) is transformed into a surface form (SF). We also investigate analysis, i.e., learning to transform an SF into a UR.
| UR | SF | SF Arabic |
|----------------|------------|-------------|
| #$Ayil=In=uh# | #$aylInu# | شايلينه |
| #HAfiZ=In=hA# | #HafZinha# | حافظينها |
| #bi-ti-SAdf=U# | #bitSadfu# | بتصادفوا |
Table 2: The sizes of different splits of the used dataset.
## 4 Dataset
We use the Arabic morphophonology dataset created by Khalifa et al. (2022). It uses a broad phonemic transcription. They generated URs from the CALIMAEGY morphological analyzer (Habash et al., 2012) for every SF extracted from the ECAL
dataset (Kilany et al., 2002). They also added the analyzer's segmentation to the UR part, delimiting word boundaries with \#, prefixes with −, and suffixes with =. The dataset contains pairs of
(UR, SF) as shown in Table 1. The split of this dataset is based on ECAL's split, which contains TRAIN, DEV, and EVAL subsets. Due to the fact that ECAL's splits are based on running texts, some words can occur in more than one split.
Therefore Khalifa et al. (2022) also created subsets of the DEV and EVAL sets, called DEV-OOV and EVAL-OOV, which only contain non-overlapping words with the TRAIN split. The sizes of these splits are given in Table 2.
## 5 Proposed Method
In this section, we give a brief description of the baseline network. Then, the proposed DAL method will be explained in more detail.
## 5.1 Baseline Network
We ran several experiments to choose the most suitable model for our AL experiments. Among the existing successful approaches (e.g., Wehrli et al. (2022) and Wu et al. (2021)), we chose Wu et al. (2021)'s system as our baseline for the DAL experiments because of its strong results on the utilized dataset. This is a transformer-based model that outperformed existing RNN-based sequence-to-sequence models on character-level transduction tasks, such as morphological inflection generation, and achieved state-of-the-art results. Because dataset sizes in character-level transduction tasks are significantly smaller than in other tasks like machine translation, the authors proposed a smaller transformer. In the next subsection, we describe our proposed algorithm.
## 5.2 Active Learning Method
In this research, we have used the pool-based AL method in combination with the entropy strategy and clustering to determine uncertain samples based on the model's predictions while preserving the diversity of the chosen samples. The AL method treats all of the available data in the pool, U, which contains 13,170 samples, as unannotated.
Initially, about 10% of the samples (i.e., 1,400 samples) are chosen randomly from U for the data annotation process. In the first cycle of training the model, 500 samples from these 1,400 annotated samples are used for tuning, T , and the rest of the labeled samples (i.e., 900 samples) are used as the initial training dataset, L. The tune dataset is fixed throughout the procedure and determines the model with the highest accuracy during each AL
training cycle. However, the training dataset, L,
is increased by δ samples (i.e., 250 samples) per training cycle.
After training the model on the L dataset for the first time, all the pool samples are passed to the model for prediction. For each UR, the probability values are determined by computing the softmax of the model's output *logits*. Most data sampling strategies in the AL method are based on some uncertainty criteria. In the case of sequence-to-sequence models focusing on character-level tasks, the sampling method based on entropy criteria is a suitable choice for uncertainty detection.
The output *logits* for each character of the predicted SF word, wSF, corresponds to the elements of a character vocabulary generated according to the predicted SF words. Using Equation (1), each set of *logits*, ch, is used to calculate the probability vector of a character in wSF. Here, Pi(ch) is the probability value of the i th element in the probability vector, chiis the logit of the i th element, and N is the vocabulary size.
$$P_{i}(\mathbf{ch})=\frac{e^{ch_{i}}}{\sum_{j=1}^{N}e^{ch_{j}}}\tag{1}$$
The entropy of a character ch, E′(ch), is calculated by Equation (2) based on the probability values of all possible generated characters vocabulary.
$$E^{\prime}(\mathbf{ch})=-\sum_{i=1}^{N}P_{i}(\mathbf{ch})\log P_{i}(\mathbf{ch})\tag{2}$$
Equation (3) determines the entropy of the word wSF by choosing the maximum value among all its characters' entropy values. That is, predicted labels with the lowest confidence have the highest entropy.
$$E(w_{\mathrm{SF}})=\max_{\mathbf{ch}\in w_{\mathrm{SF}}}E^{\prime}(\mathbf{ch})\tag{3}$$
In each AL cycle, the trained model selects the next cycle's additional (informative) samples. The δ UR words with the highest wSF entropy are sampled without replacement. According to Equation (4), these most informative samples, w∗,
are annotated and combined with the current L
dataset to be utilized by the baseline system in the next AL cycle for training.
$$\mathbf{w}^{*}=\arg\max_{\mathbf{w}\in\mathcal{U}}E(\mathbf{w})\tag{4}$$
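The following minimal sketch illustrates this character-level acquisition step; the toy tensors and vocabulary size are illustrative assumptions, and in practice the logits come from the character-level transducer's decoder.

```python
import torch
import torch.nn.functional as F

def word_entropy(logits):
    """Character-level entropies for one predicted word.
    logits: (word_len, vocab_size) decoder outputs; the word's entropy is the
    max over its characters (Equations 1-3)."""
    probs = F.softmax(logits, dim=-1)
    char_entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    return char_entropy.max().item()

def select_most_informative(pool_logits, delta=250):
    """Return indices of the delta pool items with the highest word entropy (Equation 4)."""
    scores = [word_entropy(lg) for lg in pool_logits]
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return order[:delta]

# pool_logits: one (word_len, vocab_size) tensor per unannotated UR in the pool.
pool_logits = [torch.randn(7, 40), torch.randn(5, 40), torch.randn(9, 40)]
print(select_most_informative(pool_logits, delta=2))
```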
We augment this approach with a clustering technique to maintain diversity during the sampling process. In this approach, the α UR words (α > δ) with the highest wSF entropy are selected for clustering. The optimal number of clusters is determined by computing the sum of the squared error (SSE) for various cluster counts, k. For each cluster Si, the centroid is obtained by taking the mean value, µi, of all the points, w, in that cluster. The sum of the deviations of the points from the centroids over all clusters determines the SSE. A candidate cluster count, k, is considered optimal only if it yields the **minimal** SSE among the candidates.
$$\mathrm{SSE}=\sum_{i=1}^{k}\sum_{w\in S_{i}}||w-\mu_{i}||^{2}\tag{5}$$
The best cluster count in each AL cycle is used for clustering. The ultimate goal is to find the δ most informative UR words, with acceptable diversity, out of the α (i.e., 1,000) selected samples.
A character-based RNN word embedding model is trained on the datasets L and T to extract features for efficient clustering. Because the data points in L and T are labeled, the embedding model can capture UR words and their relations to the corresponding SF words.
This word embedding model is used to vectorize the α samples selected from U. Two sequences of one-hot vectors, representing a pair of UR and SF words, pass through two long short-term memory (LSTM) networks, and the norm of the difference between the output embeddings is fed to a sigmoid activation function. A (UR, SF) pair from the dataset is treated as related, while pairs of words with the highest Levenshtein distance from each other are treated as non-related. These samples are combined and used to train the network. The model converts a word to a vector based on a character vocabulary generated from the unique characters of the training set.
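A minimal PyTorch sketch of such a pair embedder is shown below; the hidden size, the learnable scaling before the sigmoid, and the toy batch shapes are assumptions for illustration, not the exact architecture used in our experiments.

```python
import torch
import torch.nn as nn

class PairEmbedder(nn.Module):
    """Character-level Siamese-style embedder: one LSTM per side (UR and SF);
    the norm of the difference of the final hidden states, passed through a
    sigmoid, scores how related the two words are."""
    def __init__(self, vocab_size, hidden=128):
        super().__init__()
        self.ur_lstm = nn.LSTM(vocab_size, hidden, batch_first=True)
        self.sf_lstm = nn.LSTM(vocab_size, hidden, batch_first=True)
        self.scale = nn.Linear(1, 1)  # learnable scaling before the sigmoid (assumed)

    def forward(self, ur_one_hot, sf_one_hot):      # (batch, seq_len, vocab_size) each
        _, (h_ur, _) = self.ur_lstm(ur_one_hot)
        _, (h_sf, _) = self.sf_lstm(sf_one_hot)
        dist = (h_ur[-1] - h_sf[-1]).norm(dim=-1, keepdim=True)
        return torch.sigmoid(self.scale(dist))      # relatedness score in (0, 1)

model = PairEmbedder(vocab_size=40)
ur = torch.zeros(2, 12, 40); sf = torch.zeros(2, 10, 40)   # toy one-hot batches
print(model(ur, sf).shape)                                  # torch.Size([2, 1])
```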
After standardizing the vectors, they are fed to principal component analysis (PCA) in order to retain the most significant features and minimize the computational costs by reducing dimensions.
The feature dimension is reduced to 3 components for clustering. The α samples are then divided into clusters using the k-means method, and δ samples in total are selected, drawn from each cluster in proportion to its size. These samples are annotated and moved to L for the next AL training cycle.
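The sketch below puts the standardization, PCA, k-means, and proportional sampling steps together; the candidate range for k, the proportional rounding, and the toy feature matrix are illustrative assumptions, and the k-selection rule simply follows the minimal-SSE criterion stated above.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def diverse_sample(features, delta=250, k_candidates=range(2, 11), seed=0):
    """Cluster the alpha most-uncertain samples and draw delta of them
    proportionally to cluster sizes."""
    x = StandardScaler().fit_transform(features)
    x = PCA(n_components=3).fit_transform(x)
    # pick the cluster count with the lowest SSE (inertia) among the candidates
    best = min((KMeans(n_clusters=k, n_init=10, random_state=seed).fit(x)
                for k in k_candidates), key=lambda km: km.inertia_)
    labels = best.labels_
    rng = np.random.default_rng(seed)
    chosen = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        quota = max(1, round(delta * len(members) / len(x)))
        chosen.extend(rng.choice(members, size=min(quota, len(members)), replace=False))
    return chosen[:delta]

features = np.random.randn(1000, 256)      # embeddings of the alpha = 1,000 samples
print(len(diverse_sample(features)))
```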
## 6 Experimental Results
The details of the model's parameters and other variables in our experiments can be seen in Appendix A. To evaluate our proposed DAL
method, we have run all experiments 5 times and the average and standard deviation of accuracy are visualized in Figure 1. We have reported random training (passive learning), AL with entropy, and AL with combined entropy and clustering method for each cycle of training.
As can be seen in Figure 1, the proposed DAL method (with and without clustering) grows much faster than the random curve and presents an asymptotical shape which shows that it has extracted all the useful information present in U when it reaches the asymptote using only 4,000 samples (i.e., about 30% of the training set). In contrast, the random (passive) learner requires the entire training set to achieve maximum performance. This is true for all evaluation sets, except for EVAL-OOV, where the random (passive) learner reaches maximum performance after 6,000 samples. We have no explanation for this unusual behavior of EVAL-OOV, but we do observe that the overall top performance on EVAL-OOV is 2 percentage points lower than on DEV-OOV, while EVAL and DEV have similar results. This supports the hypothesis that EVAL-OOV contains some very difficult cases which the underlying system (with active or passive learning) cannot learn, so that active and passive learning converge earlier.
Unfortunately, we could not obtain evidence from this dataset that the proposed hybrid method (combining entropy and clustering) is effective.
Our method is designed to be language- and model-independent: it can be applied to different languages and used on top of different character-based DL models. However, the proposed method should still be tested on new languages and dialects in further studies. We expect that such studies will show that clustering can be effective, especially when many diverse phenomena are present in the data.
## 7 Error Analysis
We performed an error analysis on 100 randomly chosen errors from the DEV-OOV set, generated by models trained on 1,900 samples using AL
with entropy and random selection methods. On this sample size, the random selection approach achieves an error rate of 12.9%, which reduces to 7.8% through AL (a 40% error reduction). We distinguish three basic types of errors: the system deletes a letter (i.e., phoneme) it should not; the system changes a letter it should not; the system adds a letter it should not, or does not delete a letter it should. We summarize results in Table 3.
Starting at the top, we see that letter deletion is greatly reduced by moving from random selection to AL; affix deletion is a special case, where an entire affix is not realized, and this problem is eliminated. Letter change is also greatly reduced.
![4_image_0.png](4_image_0.png)

However, a special case of letter change becomes more prominent: vowel length changes. This is a common effect in Arabic phonology due to changes in syllabification resulting from adding affixes. Finally, we see that letter addition remains a problem, with the special case of the system failing to delete a letter in fact increasing. The only case which is reduced is the special case of i-deletion in the active participle, which the AL setting appears to learn much better. We then have a fairly small category with multiple errors, which remains about the same. As we expect when the error rate goes down, the proportion of problem cases in the corpus goes up. We distinguish three cases. First, foreign words have lexically idiosyncratic rules which cannot be learned. Second, almost all Arabic dialects replace the /l/ of the definite determiner
/Al/ with the first letter of the following noun if it is coronal ("sun-letter assimilation"). However, Egyptian optionally also does this for /j/ and /k/,
and the optionality is reflected in the training data which makes consistent learning impossible. Third, the corpus has a number of actual errors in the gold standard, usually in the UR. So in summary, the AL system has improved in all error types except for the letter addition categories.
| Error Type | AL | Random |
|--------------------|------|----------|
| letter deletion | 11 | 25 |
| affix deletion | 0 | 2 |
| letter change | 11 | 17 |
| no v shortening | 5 | 2 |
| v shortening | 7 | 4 |
| letter addition | 8 | 6 |
| no letter deletion | 6 | 2 |
| no AP i deletion | 2 | 5 |
| multiple errors | 14 | 13 |
| foreign | 1 | 1 |
| sun letter | 10 | 8 |
| gold error | 25 | 15 |
| Sum | 100 | 100 |

Table 3: Distribution of error types over 100 sampled DEV-OOV errors for the AL and random selection models.
## 8 Conclusion
In this paper, we have proposed a deep active learning method for the morphological inflection processing task. The proposed method can be used on different languages; however, as a case study, we have focused on the Egyptian Arabic dialect. The results of our experiment demonstrate the outstanding efficiency of the proposed method:
With only 30% of the total training dataset, we achieve the same accuracy as the state-of-the-art model trained on the whole dataset.
Future research includes applying this method to different low-resource Arabic dialects and other languages for building datasets, using other baseline algorithms, working on new uncertainty measures, and exploring for which datasets the clustering method can be helpful. We also intend to investigate how we can exploit our insight from the error analysis that the letter addition cases remain high (or even increase).
## Acknowledgements
We would like to thank three anonymous reviewers for their comments. Experiments were performed on the SeaWulf HPC cluster maintained by RCC and the Institute for Advanced Computational Science (IACS) at Stony Brook University and made possible by National Science Foundation
(NSF) grant No. 1531492.
## Limitations
Like many deep learning approaches, our work requires GPU resources. In typical learning settings, a model is trained once on the existing training data, with a dev set used for tuning, and the trained model is then ready for use. In contrast, active learning requires training the model several times: whenever newly annotated samples are added to the current training set, the model must be re-trained, which increases the need for GPU resources. However, this need is not specific to our proposed method; it is due to the nature of active learning. In addition, one can run the active learning method once (rather than iteratively) to build an acceptable dataset.
It should be noted that we have designed the algorithm to be independent of the target language and the model used. However, we have only tested our method on the Egyptian Arabic dialect, and its accuracy should be investigated on other languages and dialects, using different learning models, in further studies.
## Ethics Statement
The current work is fundamental research and is not tied to a particular application.
We do not foresee any ethical concerns arising from the algorithms and technologies proposed in this work.
We have utilized publicly available datasets and open-source libraries that have been published previously.
## References
Bashar Alhafni, Nizar Habash, and Houda Bouamor.
2020. Gender-aware reinflection using linguistically enhanced neural models. In Proceedings of the Second Workshop on Gender Bias in Natural Language Processing, pages 139–150.
Khuyagbaatar Batsuren, Gábor Bella, Aryaman Arora, Viktor Martinovic, Kyle Gorman, Zdeněk Žabokrtský, Amarsanaa Ganbold, Šárka Dohnalová, Magda Ševčíková, Kateřina Pelegrinová, Fausto Giunchiglia, Ryan Cotterell, and Ekaterina Vylomova. 2022. The SIGMORPHON 2022 shared task on morpheme segmentation. In Proceedings of the 19th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 103–116, Seattle, Washington. Association for Computational Linguistics.
Caleb Belth, Sarah Payne, Deniz Beser, Jordan Kodner, and Charles Yang. 2021. The greedy and recursive search for morphological productivity. arXiv preprint arXiv:2105.05790.
Tim Buckwalter. 2002. Buckwalter Arabic morphological analyzer version 1.0. Linguistic Data Consortium (LDC) catalog number LDC2002L49, ISBN 1-58563-257-0.
Tim Buckwalter. 2004. Buckwalter Arabic Morphological Analyzer Version 2.0. LDC
catalog number LDC2004L02, ISBN 1-58563-324-0.
Verna Dankers, Anna Langedijk, Kate McCurdy, Adina Williams, and Dieuwke Hupkes. 2021. Generalising to German plural noun classes, from the perspective of a recurrent neural network. In *Proceedings* of the 25th Conference on Computational Natural Language Learning, pages 94–108.
Alexander Erdmann, Salam Khalifa, Mai Oudah, Nizar Habash, and Houda Bouamor. 2019. A little linguistics goes a long way: Unsupervised segmentation with limited language specific guidance. In *Proceedings of the 16th Workshop on* Computational Research in Phonetics, Phonology, and Morphology, pages 113–124.
David Graff, Mohamed Maamouri, Basma Bouziri, Sondos Krouna, Seth Kulick, and Tim Buckwalter.
2009. Standard Arabic Morphological Analyzer
(SAMA) Version 3.1. Linguistic Data Consortium LDC2009E73.
Nizar Habash, Ramy Eskander, and Abdelati Hawwari.
2012. A morphological analyzer for Egyptian Arabic.
In *Proceedings of the twelfth meeting of the special* interest group on computational morphology and phonology, pages 1–9.
Nizar Habash, Reham Marzouk, Christian Khairallah, and Salam Khalifa. 2022. Morphotactic modeling in an open-source multi-dialectal Arabic morphological analyzer and generator. In Proceedings of the 19th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 92–102, Seattle, Washington. Association for Computational Linguistics.
Nizar Habash and Owen Rambow. 2006. MAGEAD:
A morphological analyzer and generator for the Arabic dialects. In Proceedings of the International Conference on Computational Linguistics and the Conference of the Association for Computational Linguistics (COLING-ACL), pages 681–688, Sydney, Australia.
Nawar Halabi. 2016. Modern standard Arabic phonetics for speech synthesis. Ph.D. thesis, University of Southampton.
Bruce Hayes. 2008. *Introductory phonology*, volume 7.
John Wiley & Sons.
Go Inoue, Salam Khalifa, and Nizar Habash. 2021.
Morphosyntactic tagging with pre-trained language models for Arabic and its dialects. *arXiv preprint* arXiv:2110.06852.
Salam Khalifa, Jordan Kodner, and Owen Rambow.
2022. Towards learning Arabic morphophonology. In *Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP) at EMNLP 2022*, pages 295–301.
Salam Khalifa, Nasser Zalmout, and Nizar Habash.
2020. Morphological analysis and disambiguation for Gulf Arabic: The interplay between resources and methods. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 3895–3904.
H Kilany, H Gadalla, H Arram, A Yacoub, A El-Habashi, and C McLemore. 2002. Egyptian colloquial Arabic lexicon. LDC catalog number LDC99L22.
Christo Kirov and Ryan Cotterell. 2018. Recurrent neural networks in linguistic theory: Revisiting pinker and prince (1988) and the past tense debate.
Transactions of the Association for Computational Linguistics, 6:651–665.
Ming Liu, Wray Buntine, and Gholamreza Haffari.
2018. Learning to actively learn neural machine translation. In *Proceedings of the 22nd Conference* on Computational Natural Language Learning, pages 334–344.
Mingyi Liu, Zhiying Tu, Zhongjie Wang, and Xiaofei Xu. 2020. Ltp: A new active learning strategy for bert-crf based named entity recognition. *ArXiv*,
abs/2001.02524.
Saliha Muradoglu and Mans Hulden. 2022. Eeny, meeny, miny, moe. how to choose data for morphological inflection. arXiv preprint arXiv:2210.14465.
Karthik Narasimhan, Regina Barzilay, and Tommi Jaakkola. 2015. An unsupervised method for uncovering morphological chains. *Transactions* of the Association for Computational Linguistics, 3:157–167.
Álvaro Peris and Francisco Casacuberta. 2018. Active learning for interactive neural machine translation of data streams. *arXiv preprint arXiv:1807.11243*.
Ameya Prabhu, Charles Dognin, and Maneesh Singh.
2019. Sampling bias in deep active classification: An empirical study. *arXiv preprint arXiv:1909.09389*.
Dongyu Ru, Jiangtao Feng, Lin Qiu, Hao Zhou, Mingxuan Wang, Weinan Zhang, Yong Yu, and Lei Li. 2020. Active sentence learning by adversarial uncertainty sampling in discrete space. *arXiv* preprint arXiv:2004.08046.
Rico Sennrich and Barry Haddow. 2016. Linguistic input features improve neural machine translation.
arXiv preprint arXiv:1606.02892.
Yanyao Shen, Hyokun Yun, Zachary C Lipton, Yakov Kronrod, and Animashree Anandkumar. 2017. Deep active learning for named entity recognition. arXiv preprint arXiv:1707.05928.
Dima Taji, Jamila El Gizuli, and Nizar Habash. 2018.
An Arabic dependency treebank in the travel domain.
In *Proceedings of the Workshop on Open-Source* Arabic Corpora and Processing Tools (OSACT),
Miyazaki, Japan.
Silvan Wehrli, Simon Clematide, and Peter Makarov.
2022. Cluzh at sigmorphon 2022 shared tasks on morpheme segmentation and inflection generation.
In Proceedings of the 19th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 212–219.
Shijie Wu, Ryan Cotterell, and Mans Hulden.
2021. Applying the transformer to character-level transduction. In *Proceedings of the 16th Conference* of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1901–1907, Online. Association for Computational Linguistics.
Changbing Yang, Garrett Nicolai, Miikka Silfverberg, et al. 2022. Generalizing morphological inflection systems to unseen lemmas. In Proceedings of the 19th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 226–235.
Ye Zhang, Matthew Lease, and Byron Wallace. 2017.
Active discriminative text representation learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31.
Yuekai Zhao, Haoran Zhang, Shuchang Zhou, and Zhihua Zhang. 2020. Active learning approaches to enhancing neural machine translation. In *Findings* of the Association for Computational Linguistics:
EMNLP 2020, pages 1796–1806.
## A Implementation Details
We used a character-level transducer based on transformers (Wu et al., 2021) as the baseline in our computational experiments. The transformer has 4 encoder and 4 decoder layers, 4 self-attention heads, an embedding dimension of 256, and a feed-forward hidden size of 1024.
With these specifications, the model has 7.37M parameters, excluding embeddings and the pre-softmax linear layer.
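As a rough sanity check of the reported size, the same dimensions plugged into a generic PyTorch transformer (not the exact transducer implementation of Wu et al. (2021)) yield a comparable parameter count:

```python
import torch.nn as nn

model = nn.Transformer(d_model=256, nhead=4,
                       num_encoder_layers=4, num_decoder_layers=4,
                       dim_feedforward=1024, dropout=0.3)

n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.2f}M parameters")
# prints roughly 7.37M, in line with the figure reported above
# (embeddings and the pre-softmax projection are not included here either)
```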
Experiments were performed on a system with an Intel Core i7-8700K CPU (3.70GHz, 6 cores), a GeForce GTX 1080 with 8GB of memory, and 64GB of RAM.
The minimum resources required for each AL training cycle are 3.38GB of GPU memory and 3.5GB of RAM. A training cycle completes in less than 60 minutes.
The best hyper-parameter values of the experiment are given in Table A.1. We conducted multiple experiments with different values. For AL cycle sampling, we tried several methods, including maximum entropy, maximum entropy limited to vowel letters, and mean entropy; maximum entropy outperformed the others. Moreover, we performed all experiments 5 times and report the average and standard deviation of the results in Figure 1.
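A small sketch of these aggregation variants, assuming access to the per-character softmax outputs of the decoder (the variable names are ours):

```python
import numpy as np

def sequence_uncertainty(step_probs: np.ndarray, agg: str = "max") -> float:
    """step_probs: (seq_len, vocab) softmax outputs for one decoded word.
    Returns an uncertainty score by aggregating per-character entropies,
    either by taking the maximum ('max') or the mean ('mean')."""
    eps = 1e-12
    entropies = -(step_probs * np.log(step_probs + eps)).sum(axis=-1)
    return float(entropies.max() if agg == "max" else entropies.mean())
```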
Regarding the implementation of the proposed algorithm, we have used PyTorch, NumPy, Pandas, Matplotlib, scikit-learn, and chars2vec software packages.
| Parameter | Value |
|------------------------------------|---------|
| AL Initial Sampling Method | random |
| AL Cycle Sampling Method | entropy |
| AL Cycle Clustering Method | k-means |
| AL Initial Training Samples Counts | 900 |
| AL Tuning Samples Counts | 500 |
| AL Pre-clustering Samples Counts | 1000 |
| AL Cycle Samples Counts | 250 |
| Training Batch Size | 400 |
| Evaluation Batch Size | 16 |
| Dropout | 0.3 |
| Character Embedding Dimension | 50 |
| PCA Components | 3 |
| Max Cluster Counts | 8 |
Table A.1: The best hyper-parameter values of the experiment
## B Additional Results
Employing the proposed method, we also conducted experiments in the reversed direction (SF to UR), i.e., morphophonological analysis. As demonstrated in Figure 2, the DAL methods outperform random training, extracting all the informative samples of the pool set U by the time they reach the asymptote, using 8,000 samples. Since the current baseline is designed for morphophonological generation tasks, its performance is diminished for SF to UR. As our proposed method is model-agnostic, a more suitable baseline model for this task would achieve higher accuracies in morphophonological analysis for both passive and active learning.
![8_image_0.png](8_image_0.png)
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
In the Limitation section after Paper's conclusion.
✓ A2. Did you discuss any potential risks of your work?
In Ethics Statement section after Limitation section.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Please refer to the abstract and introduction (Section 1) of the paper.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Sections 3 And 4.
✓ B1. Did you cite the creators of artifacts you used?
Sections 3, 4, and appendix A.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Sections 3 and 4.
## C ✓ **Did You Run Computational Experiments?** Appendix A.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix A.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 5, Appendix A, and Appendix B.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix A.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
li-etal-2023-counterfactual | Counterfactual reasoning: Testing language models{'} understanding of hypothetical scenarios | https://aclanthology.org/2023.acl-short.70 | Current pre-trained language models have enabled remarkable improvements in downstream tasks, but it remains difficult to distinguish effects of statistical correlation from more systematic logical reasoning grounded on the understanding of real world. We tease these factors apart by leveraging counterfactual conditionals, which force language models to predict unusual consequences based on hypothetical propositions. We introduce a set of tests from psycholinguistic experiments, as well as larger-scale controlled datasets, to probe counterfactual predictions from five pre-trained language models. We find that models are consistently able to override real-world knowledge in counterfactual scenarios, and that this effect is more robust in case of stronger baseline world knowledge{---}however, we also find that for most models this effect appears largely to be driven by simple lexical cues. When we mitigate effects of both world knowledge and lexical cues to test knowledge of linguistic nuances of counterfactuals, we find that only GPT-3 shows sensitivity to these nuances, though this sensitivity is also non-trivially impacted by lexical associative factors. | # Counterfactual Reasoning: Testing Language Models' Understanding Of Hypothetical Scenarios
Jiaxuan Li, University of California Irvine, Irvine, CA 92617, [email protected]
Lang Yu, Meta, Seattle, WA 98109, [email protected]
Allyson Ettinger, University of Chicago, Chicago, IL 60637, [email protected]
## Abstract
Current pre-trained language models have enabled remarkable improvements in downstream tasks, but it remains difficult to distinguish effects of statistical correlation from more systematic logical reasoning grounded on the understanding of real world. We tease these factors apart by leveraging *counterfactual conditionals*, which force language models to predict unusual consequences based on hypothetical propositions. We introduce a set of tests from psycholinguistic experiments, as well as larger-scale controlled datasets, to probe counterfactual predictions from five pre-trained language models. We find that models are consistently able to override real-world knowledge in counterfactual scenarios, and that this effect is more robust in case of stronger baseline world knowledge—however, we also find that for most models this effect appears largely to be driven by simple lexical cues. When we mitigate effects of both world knowledge and lexical cues to test knowledge of linguistic nuances of counterfactuals, we find that only GPT3 shows sensitivity to these nuances, though this sensitivity is also non-trivially impacted by lexical associative factors.1
## 1 Introduction
Reasoning plays a central role in human communication (Frank and Goodman, 2012). While language models have demonstrated remarkable capacity on downstream tasks (Devlin et al., 2019; Radford et al., 2019; Liu et al., 2019), it remains unclear to what extent predictions generated by language models are consequences of correlation with linguistic heuristics in the context, versus robust reasoning about causal relations grounded on understanding of world knowledge.
In this paper we leverage *counterfactual conditionals* to investigate the capacity of pre-trained LMs (PLMs) to distinguish hypothetical scenarios from reality, and to examine how this interacts with models' use of existing real world knowledge as well as shallower associative cues. Counterfactuals consist of a premise which is false in the real world but true in the hypothetical world (e.g., "If cats were vegetarians"), and an imaginary consequence of this premise ("cats would love cabbages"). Testing language models with counterfactuals allows us to use language to manipulate what is true and what is hypothetical, and to test models' ability to separate and use this information for predictions.
Previous work has established the use of counterfactual scenarios to probe inference ability (Qin et al., 2019; Zellers et al., 2019; Mostafazadeh et al., 2016; Meng et al., 2022; Rajani et al., 2019; Saparov and He, 2022; Frohberg and Binder, 2021; Elazar et al., 2021; Rudinger et al., 2020), but the datasets lack systematic control of lexical cues and world knowledge, which makes it likely that the performance could be attributable to spurious cues in the datasets (Niven and Kao, 2019).
For our tests we draw on and adapt inputs from existing psycholinguistic experiments. We begin by testing models' ability to override existing world knowledge when the context indicates that the correct completion involves a hypothetical world (e.g., "if cats were vegetarian, cats would love *cabbages/fish*"). We test five popular PLMs, and find that models can increase their preference for counterfactual completions given counterfactual context—however, most models rely strongly on simple lexical cues. Next we control the effect of real world knowledge and lexical triggers, to test models' understanding of what counterfactual language implies about the world state. We find that most models fail to understand real-world implications of counterfactuals and largely rely on lexical triggers—with the exception of GPT3, which shows greater sophistication, but continues to show non-trivial susceptibility to interference from lexical-associative cues. We discuss the implications and possible interpretations of these findings with respect to linguistic competence and predictive strategies of these models.

¹ Data and code available at https://github.com/goldengua/Counterfactual_Inference_LM.
## 2 Exp1: Overriding World Knowledge
Our first experiment investigates whether LMs are able to take a counterfactual scenario and predict a counterfactual-consistent completion that contradicts general world knowledge.
Items We draw directly on counterfactual stimuli from the psycholinguistic study of Ferguson and Sanford (2008). There are 128 items from the original psycholinguistic experiments, and we synthetically generate 10,720 additional items (see Appendix A.2 for illustration of data generation process). We match target nouns and syntactic constructions across conditions in order to control lexical properties that influence language models' predictions. Table 1 shows example items from the synthetic large-scale dataset (see Section A.1 for example items from the small-scale dataset).
| Cond | Sentence |
|------|----------|
| CW | If cats were vegetarians, people would love them. Families would feed cats with fish/cabbages. |
| RW | Because cats are carnivores, people love them. Families would feed cats with fish/cabbages. |
| BB | Families would feed cats with fish/cabbages. |
Table 1: Exp1 items (logical completion underlined).
![1_image_0.png](1_image_0.png)
The experiment includes two key conditions:
Counterfactual-World (CW) and Real-World (RW)
(Fig. 1). The CW condition presents a counterfactual scenario, e.g., in which cats are vegetarians.
The logical target completion in this example is
"cabbages", but because in reality cats are more likely to eat fish, this contradicts world knowledge.
By contrast, in the RW condition the logical completion is consistent with the real world ("feed cats with fish"). We also include one Baseline Bias (BB)
condition, for a more direct test of the strength of models' baseline preference for each completion.
Experiments We test counterfactual reasoning in five pre-trained language models. We include autoregressive transformers in the GPT family (GPT2 (Radford et al., 2019) and GPT-3 (Brown et al.,
2020)) and masked language models in the BERT
family (BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019) and MPNet (Song et al., 2020)) 2.
We test models by comparing the log-probability that each model assigns to the CW-congruent ("cabbages") and RW-congruent ("fish") completions given the contexts. For all conditions, we compute the percentage of items in which the CW-congruent continuation has a higher probability than the RW-congruent continuation. This means that in RW
and BB conditions, *lower* values reflect better predictions, since the CW-congruent completion is the less logical completion in these conditions.
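A minimal sketch of this scoring procedure for an autoregressive model, shown here with GPT-2 via the Hugging Face transformers library; the paper's own scoring code may differ in details such as tokenization handling:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def completion_logprob(context: str, completion: str) -> float:
    """Sum of log-probabilities of the completion tokens given the context.
    Assumes the context tokenization is a prefix of the full tokenization,
    which typically holds when the completion starts with a space."""
    ctx_ids = tok(context, return_tensors="pt").input_ids
    full_ids = tok(context + completion, return_tensors="pt").input_ids
    with torch.no_grad():
        logprobs = model(full_ids).logits.log_softmax(dim=-1)
    n_ctx = ctx_ids.shape[1]
    targets = full_ids[0, n_ctx:]                       # completion tokens
    # the token at position i is predicted from the logits at position i-1
    scores = logprobs[0, n_ctx - 1:-1].gather(1, targets.unsqueeze(1))
    return scores.sum().item()

ctx = "If cats were vegetarians, people would love them. Families would feed cats with"
print(completion_logprob(ctx, " cabbages"), completion_logprob(ctx, " fish"))
```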
| Model | CW (small) | RW (small) | BB (small) | CW (large) | RW (large) | BB (large) |
|---------|------|------|------|------|------|------|
| GPT2 | 53.1 | 34.4 | 40.6 | 53.7 | 29.5 | 31.5 |
| GPT3 | **68.8** | **18.8** | **18.7** | **71.3** | **2.5** | **14.7** |
| BERT | 46.9 | 43.8 | 31.2 | 34.2 | 14.3 | 35.2 |
| RoBERTa | 53.1 | 21.9 | 21.9 | 61.4 | 26.5 | 47.2 |
| MPNet | 50.0 | 21.9 | 21.9 | 66.9 | 15.6 | 36.6 |
Table 2: Percentage of preference for CW-congruent completion (e.g., "cabbages") in Exp1. In the CW condition, *higher* values reflect better predictions. In RW
and BB conditions, *lower* values reflect better predictions.
Results Table 2 shows the preferences for CW-congruent completions across all models and conditions, for the small-scale hand-designed items from the psycholinguistic experiment, and for the large-scale synthetic items.³ We see that all models show stronger preference for CW-congruent continuations in the counterfactual (CW) context than in the other conditions (though in the case of BERT on the small-scale data, this difference is negligible). All models show below-chance preference for CW-congruent continuations in the RW condition, which means above-chance preference for the correct RW-congruent continuations. However, though all model preferences for the correct CW-congruent continuation are higher in the CW condition than in the RW condition, even in the CW condition the preference for CW-congruent continuations is at best slightly above chance for most models. The exception is GPT-3, which is the only model to prefer the CW-congruent continuation in greater than 70% of items.

² We used the smallest uncased variants of GPT-2, BERT, RoBERTa, and MPNet, and we used the text-davinci-003 variant of GPT-3 via API request. Experiments were conducted from April to August 2022.

³ A potential concern with the aggregated percentages shown in Table 2 and Table 6 is that, given a specific instance, a model may assign a higher probability to a CW-congruent continuation in the CW condition because it incorrectly predicts the corresponding BB/RW item. This concern is mitigated by the fact that we focus our conclusions on the difference between the CW and RW conditions, rather than on the accuracies in the individual conditions.
We also see that GPT-3 shows exceptionally strong performance on both BB and CW conditions. This suggests, slightly counterintuitively, that stronger grasp of relevant world knowledge may in fact be associated with models *more* effectively overriding that knowledge in a counterfactual. To investigate this effect further, we examine the impact of world knowledge at the item level. We quantify strength of world knowledge as the difference between models' log-probability of CW- and RW-congruent continuations for a given item in the BB condition, and the strength of counterfactual preference as the difference between log-probability of CW- and RW-congruent continuations for a given item in the CW condition. We then compute the Pearson correlation between these strength measures. We find a significant correlation between the robustness of world knowledge encoding and strength of counterfactual preference in the CW condition across all language models (see Appendix A.3), further supporting a relationship between strength of world knowledge and counterfactual sensitivity. While previous work has suggested that large language models may have difficulty avoiding memorized texts when explicitly prompted to end famous quotes differently (McKenzie et al., 2022), our results suggest that world knowledge may in fact facilitate reasoning when accompanied with clear structural cues
(e.g. "if"). To better understand how world knowledge informs language models' predictions and inference, it will be important to continue expanding the scale of tests and more carefully operationalize definitions of world knowledge in future work.
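The item-level correlation described above can be computed along these lines (a sketch with illustrative numbers; the actual scores come from the models' log-probabilities per item):

```python
from scipy.stats import pearsonr

# Per item: (logP of CW-congruent, logP of RW-congruent) completion.
# The numbers below are placeholders; in practice they come from the model.
bb_scores = [(-9.1, -7.2), (-8.4, -8.0), (-10.3, -6.9)]   # Baseline Bias condition
cw_scores = [(-6.8, -7.5), (-7.9, -8.1), (-8.8, -7.4)]    # Counterfactual condition

knowledge_strength = [cw - rw for cw, rw in bb_scores]       # BB-condition difference
counterfactual_strength = [cw - rw for cw, rw in cw_scores]  # CW-condition difference

r, p = pearsonr(knowledge_strength, counterfactual_strength)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```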
## 3 Exp2: Impact Of Cue Words In Context
The first experiment suggests that models can to an extent override world knowledge given a counterfactual, particularly in cases when models have a strong handle on the relevant world knowledge.
However, it is possible that in these tests the models were not relying on sophisticated understanding of counterfactuals, but rather on simple lexical triggers in context. Consider, for instance, that models could perform well in Exp1 if they simply increase their preference for "cabbages" in the proximity of
"vegetarians", etc. To test the impact of these lexical triggers, we incorporate an additional condition.
Items Table 3 and Fig. 2 show a sample item and illustration of experimental set-up with the new added condition. In this Counterfactual-to-Reality
(CR) condition, models see the same counterfactual context, but the subsequent sentence references actual reality. So the correct completion is consistent with reality, but inconsistent with the lexical trigger
("vegetarians"). We generate sentences in the CR
condition by modifying CW sentences to include the discourse connective "In reality" and to include present tense in the second sentence.
| Cond | Sentence |
|------|----------|
| CR | If cats were vegetarians, people would love them. In reality, families feed cats with *fish/cabbages*. |
Table 3: Exp2 items (logical completion underlined).
![2_image_0.png](2_image_0.png)
Experiments As above, we calculate percentage of items in which models prefer the CW-congruent continuations. Models relying on information beyond simple lexical triggers should show a sharp drop in preference for the CW-congruent completion in the CR condition, where the correct completion should align with real world information.
| Model | CW (small) | CR (small) | CW (large) | CR (large) |
|---------|------|------|------|------|
| GPT2 | 53.1 | 50.0 | 53.7 | 51.9 |
| GPT3 | 68.8 | 56.2 | 71.3 | 28.0 |
| BERT | 46.9 | 46.9 | 34.2 | 39.4 |
| RoBERTa | 53.1 | 37.5 | 61.4 | 57.3 |
| MPNet | 50.0 | 46.9 | 66.9 | 58.1 |

Table 4: Percentage of preference for CW-congruent completion (e.g., "cabbages") in Exp2.
Results Table 4 shows the results. We see that most models show a non-zero drop between the CW and CR conditions; however, for most models this reduction is minor. It is only GPT-3 that shows a truly substantial drop in CW-congruent preference, and only in the large-scale synthetic dataset. This suggests that most models are largely following simpler lexical triggers, while GPT-3 has somewhat greater sensitivity to more detailed linguistic cues. Note, however, that GPT-3's relative success on the synthetic data over the small-scale data may rely on the larger distance between lexical triggers and target positions: see Appendix A.4 for evidence on GPT-3's sensitivity to linear distance.
## 4 Exp3: Inferring Real World State With Counterfactual Cues
The previous experiments indicate that models can override world knowledge in the face of counterfactual evidence, and that the ability to do this improves with stronger world knowledge—but for most models this performance appears to be driven largely by simple lexical triggers in the context, with the possible exception of GPT-3. In this section we remove the influence of pre-existing world knowledge, and hold constant lexical triggers across conditions, for a more direct test of models' sensitivity to linguistic indicators of counterfactuals, and what they say about the true state of the world. This task is particularly challenging because language models must infer the true state of the world based on the presence of counterfactuals, with lexical cues often being misleading.
Items We adapt stimuli from a psycholinguistic study with 96 controlled sentences (Ferguson, 2012). We additionally create a larger-scale synthetic dataset with 12,960 sentences, using the same events as the generated dataset from Section 2.
We modify the subject noun phrases such that there is no influence of existing world knowledge. For example, we modify the subject "cat" to "pet", so that there is no prior knowledge about the subject's preference for "cabbages" or "fish". As a result, existing world knowledge cannot inform the correct completion—instead, models need to infer based on the counterfactual language that the true state of the world is different from what the counterfactual states. Further, we control the lexical items used across different conditions to minimize effects of lexical cues on condition differences (see Table 5).
![3_image_0.png](3_image_0.png)

| Cond | Sentence |
|------|----------|
| CWC | If the pets were vegetarians, people would love them. In fact, people feed the pets with fish/cabbages. |
| RWCA | Because the pets are vegetarians, people love them. In fact, people feed the pets with fish/cabbages. |
| BBC | In fact, people feed the pets with fish/cabbages. |

Table 5: Exp3 items (logical completion underlined).
Fig. 3 shows the set-up of conditions. In the Counterfactual-World Context (CWC) condition, the scenario described in the first sentence is neutral with respect to real world knowledge—it is the use of the counterfactual ("if...were") that tips us off that this scenario is not true in reality. The correct completion, then, cannot be informed by world knowledge, and is also misaligned with the lexical trigger (e.g., "vegetarians"), so models must rely specifically on this implication from the counterfactual in order to perform well.
In the Real-World Context Alternative (RWCA)
condition, the context uses the same lexical triggers ("vegetarians") as the CWC condition. However, because there is no counterfactual language, the logical completion is now the word associated with the lexical trigger (e.g., "cabbages", associated with "vegetarians").
Given that the logical completions in CWC and RWCA differ, we also compare against a Baseline Bias Context (BBC) condition, to establish default model preference for the target factual completion in the presence of the new subject noun phrase.
Experiments We compare the proportion of CWC-congruent completions across conditions. Good performance will assign high values in the CWC
condition and low values in the RWCA condition.
| Model | CWC (small) | RWCA (small) | BBC (small) | CWC (large) | RWCA (large) | BBC (large) |
|---------|------|------|------|------|------|------|
| GPT2 | 66.7 | 66.7 | 33.3 | 35.8 | 32.2 | 72.6 |
| GPT3 | 62.5 | 33.3 | 50.0 | 47.6 | 32.2 | 73.8 |
| BERT | 45.8 | 33.3 | 50.0 | 53.0 | 53.0 | 71.5 |
| RoBERTa | 50.0 | 50.0 | 50.0 | 35.7 | 31.3 | 72.5 |
| MPNet | 37.5 | 33.3 | 62.5 | 41.4 | 32.3 | 68.5 |

Table 6: Percentage of preference for CWC-congruent completion (e.g., "fish") in Exp3.
Results Table 6 shows the results. In the small-scale dataset, most models show a similar preference in CWC and RWCA, suggesting again that their predictions are largely driven by lexical triggers. Only GPT-3 shows substantial difference between CWC and RWCA, indicating finer-grained sensitivity to counterfactual structures. This sensitivity is, however, less pronounced in the large-scale dataset. Closer inspection suggests that GPT-3's specific success on the small-scale data may in fact be attributable to canceling out of lexical triggers: in the small-scale dataset, there are lexical triggers supporting both continuations (see A.1 for more illustration of the characteristics of the small-scale dataset), which may cause lexical cues to cancel out, enabling more influence from other linguistic cues. To take one example, the small-scale dataset contains the item "If Helen had received her student loan, her bank balance would now be in credit. When she checked her bank balance she was **worried/happy** about her finance." In this item, among the lexical triggers ("student loan",
"in credit", "bank balance") there are potential associations with both the CWC-congruent completion
"worried" and the CWC-incongruent completion
"happy". By contrast, in the large-scale dataset, the major lexical trigger ("vegetarians") always favors the CWC-incongruent continuation ("cabbages"), causing strong lexical bias against the CWC-congruent continuation (see Appendix A.4 for further analysis on the role of conflicting lexical triggers and other linguistic factors). This suggests that GPT-3 does show real sensitivity to linguistic indicators of counterfactuals, but the effect of superficial lexical cues remains strong.
## 5 Conclusion
The experiments above have shown that when presented with counterfactual situations, PLMs are able to prefer completions that conflict with world knowledge—and counterintuitively, this sensitivity appears better in cases where that world knowledge is stronger. Our results also indicate, however, that models are in large part relying on simple lexical cues to inform these preferences. The only model that shows more sophisticated sensitivity to finegrained linguistic cues separating counterfactuals from reality is GPT-3—which successfully distinguishes conditions based on counterfactual cues, but nonetheless still shows strong influences from lexical associative cues. Why might world knowledge aid counterfactual sensitivity? Does GPT-3 truly understand counterfactuals? One possibility worth considering is that explanations in both of these cases involve volume of exposure. First, models' stronger world knowledge for a given fact suggests that models have encountered that fact more often in training—and this may in turn translate to more exposure to that type of knowledge in counterfactual contexts, enabling more straightforward memorization-based performance. Similarly, while GPT-3 may robustly understand counterfactuals, the massive data exposure for that model may enable a simpler path to success: GPT-3 could simply have developed lower-level knowledge of how linguistic cues like "If/had" versus "Because" mediate levels of association between nearby lexical cues and later words. We leave investigation of these hypotheses for future work.
## Limitations
The datasets in this paper systematically control lexical cues and world knowledge between critical conditions, allowing us to tease apart the effects of statistical heuristics versus reasoning about causal relations. However, the manipulation brings unnaturalness to sentences when scaling up into large-scale synthetic datasets, and constrains the level of linguistic complexity. As we have seen in Exp3, the small-scale dataset has more complex combinations of conflicting lexical triggers than the large-scale dataset, causing language models to behave differently across datasets. Though we further address the effects of conflicting lexical cues in Appendix A.4, it will be valuable to carry out additional investigation of effects of sentence naturalness, and to consider designing large-scale datasets using naturally-occurring data.
The study raises and leaves open a number of interesting questions: How exactly might counterfactual reasoning benefit from world knowledge?
To what extent does GPT-3's stronger performance reflect robust logical and counterfactual reasoning?
While we lay out some possible explanations in the Conclusion and investigate the role of other linguistic and non-linguistic factors in the above experiments and in the Appendix, we leave additional systematic analysis for future work.
Finally, the experiments use English, in which counterfactual conditionals have distinct and systematic linguistic markers relative to other types of conditionals. It would be interesting to investigate other languages in which counterfactual conditionals are not marked linguistically, and require world knowledge to disambiguate. For example, a Chinese conditional could be ambiguous between "if it had rained today" and "if it rains today".
## Ethics Statement
The datasets were either created and published by researchers in psycholinguistics, or synthetically generated by the authors without use of harmful information. No experiments involving human subjects were included in the paper. The authors do not foresee any ethical concerns in this paper.
## References
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Yanai Elazar, Shauli Ravfogel, Alon Jacovi, and Yoav Goldberg. 2021. Amnesic probing: Behavioral explanation with amnesic counterfactuals. *Transactions of* the Association for Computational Linguistics, 9:160–
175.
Heather J Ferguson. 2012. Eye movements reveal rapid concurrent access to factual and counterfactual interpretations of the world. *Quarterly Journal of Experimental Psychology*, 65(5):939–961.
Heather J Ferguson and Anthony J Sanford. 2008.
Anomalies in real and counterfactual worlds: An eye-movement investigation. Journal of Memory and Language, 58(3):609–626.
Michael C Frank and Noah D Goodman. 2012. Predicting pragmatic reasoning in language games. *Science*,
336(6084):998–998.
Jörg Frohberg and Frank Binder. 2021. Crass: A novel data set and benchmark to test counterfactual reasoning of large language models. *arXiv preprint* arXiv:2112.11941.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Ian McKenzie, Sam Bowman, and Ethan Perez. 2022.
Inverse scaling prize: Second round winners.
Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. 2022. Locating and editing factual knowledge in gpt. *arXiv preprint arXiv:2202.05262*.
Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and evaluation framework for deeper understanding of commonsense stories. arXiv preprint arXiv:1604.01696.
Timothy Niven and Hung-Yu Kao. 2019. Probing neural network comprehension of natural language arguments. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4658–4664.
Lianhui Qin, Antoine Bosselut, Ari Holtzman, Chandra Bhagavatula, Elizabeth Clark, and Yejin Choi. 2019.
Counterfactual story reasoning and generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5043–5053.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI* blog, 1(8):9.
Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! leveraging language models for commonsense reasoning. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*,
pages 4932–4942.
Rachel Rudinger, Vered Shwartz, Jena D Hwang, Chandra Bhagavatula, Maxwell Forbes, Ronan Le Bras, Noah A Smith, and Yejin Choi. 2020. Thinking like a skeptic: Defeasible inference in natural language.
In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4661–4675.
Abulhair Saparov and He He. 2022. Language models are greedy reasoners: A systematic formal analysis of chain-of-thought. *arXiv preprint arXiv:2210.01240*.
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and TieYan Liu. 2020. Mpnet: Masked and permuted pretraining for language understanding. Advances in Neural Information Processing Systems, 33:16857–
16867.
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. Hellaswag: Can a machine really finish your sentence? In ACL, pages 4791–4800.
## A Appendix

## A.1 Example Items In Small-Scale Dataset
Table 7 shows the example items from Exp1 and Exp2 in the small-scale psycholinguistic datasets, and Table 8 shows the example items from Exp3 in the small-scale dataset. Semantic association between the target word and key lexical items in the context is less salient (e.g. "language skills" and
"talk") in the small-scale dataset as compared to the association in the large synthetic dataset (e.g. "vegetarian" and "carrots"). In particular, sentences in Exp3 contain lexical triggers that could support both CWC-congruent and RWCA-congruent continuations. For instance, the key lexical items
("student loan", "bank balance", "in credit") could be logically associated with either of the feelings
("happy" or "worried").
| Cond | Sentence |
|------|----------|
| CW | If cats had developed language skills like humans it would be interesting to hear what they have to say. Judith would listen to her cat *meow/talk* and throw balls of wool for it to play with. |
| RW | If cats are bored and want something to do they are usually very good at letting their owners know. Judith would listen to her cat *meow/talk* and throw balls of wool for it to play with. |
| BB | Judith would listen to her cat *meow/talk* and throw balls of wool for it to play with. |
| CR | If cats had developed language skills like humans it would be interesting to hear what they have to say. **In reality**, Judith listens to her cat *meow/talk* and throws balls of wool for it to play with. |
Table 7: Example Exp1 and Exp2 items in small-scale dataset (logical completion underlined).
| Cond | Sentence |
|------|----------|
| CWC | If Helen had received her first student loan, her bank balance would now be in credit. When she checked her bank balance today she was *worried/happy* with her financial situation. |
| RWCA | Because Helen had received her first student loan, her bank balance was now in credit. When she checked her bank balance today she was *worried/happy* with her financial situation. |
| BBC | When she checked her bank balance today she was *worried/happy* with her financial situation. |
Table 8: Example Exp3 items in small-scale dataset
(logical completion underlined).
## A.2 Generation Process Of Dataset
We design our synthetic dataset to parallel the psycholinguistic stimuli. We design a series of causal event pairs (e.g. "like"/"feed"), and situate these pairs within counterfactual conditional templates
(e.g. "if subject1 liked object1, subject2 would feed subject1 with object2"). For each subject/object slot, we define a class of nouns satisfying both selection restriction of the verb and world knowledge.
For example, in the template sentence "if subject1 liked vegetables, families would feed them with cabbages/chicken", subject1 can be carnivores (e.g.
"cats/lions/tigers"). We then vary lexical items in each subject/object slot, and other linguistic markers (e.g. modal, tense) in the template. Table 9 shows examples illustrating the data generation from a sample event in the CW-condition in Exp1.
Exp2 and Exp3 use the same template and we manipulate the syntactic structure or informativity of the subject as described in Section 3 and Section 4.
| Condition | Sentence |
|-----------|----------|
| Original | If cats were vegetarians, families would feed them with cabbages. |
| Subject1 | If dogs were vegetarians, families would feed them with cabbages. |
| Object1 | If cats were greens, families would feed them with cabbages. |
| Subject2 | If cats were vegetarians, breeders would feed them with cabbages. |
| Object2 | If cats were vegetarians, breeders would feed them with cabbages. |
| Modal | If cats were vegetarians, families might feed them with cabbages. |
| Tense | If cats had been vegetarians, families would have fed them with cabbages. |

Table 9: Examples illustrating data generation from a sample event in the CW condition of Exp1.
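A sketch of how such items can be generated programmatically from a template; the slot values below are illustrative examples rather than the full inventory used to produce the 10,720 items:

```python
from itertools import product

TEMPLATE = "If {subj1} were {obj1}, {subj2} would feed them with {obj2}."

SLOTS = {
    "subj1": ["cats", "dogs", "lions"],      # carnivores (world-knowledge class)
    "obj1":  ["vegetarians", "greens"],
    "subj2": ["families", "breeders"],
    "obj2":  ["cabbages", "carrots"],        # CW-congruent targets
}

items = [TEMPLATE.format(subj1=s1, obj1=o1, subj2=s2, obj2=o2)
         for s1, o1, s2, o2 in product(*SLOTS.values())]

print(len(items), "|", items[0])
# 24 | If cats were vegetarians, families would feed them with cabbages.
```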
## A.3 Correlation With World Knowledge
Table 10 shows the correlation between the robustness of world knowledge representation and the strength of counterfactual preference in CW condition. Across all language models there is a significant correlation, with all correlation coefficients at or above 0.69, indicating that language models benefit from a good representation of world knowledge for this counterfactual task.
| Model | coef (small) | p (small) | coef (large) | p (large) |
|---------|------|------|------|------|
| GPT2 | .86 | <.001*** | .82 | <.001*** |
| GPT3 | .70 | .004** | .74 | <.001*** |
| BERT | .91 | .001** | .69 | <.001*** |
| RoBERTa | .86 | .001*** | .77 | <.001*** |
| MPNet | .88 | <.001*** | .61 | <.001*** |

Table 10: Correlation between the robustness of world knowledge representation and the strength of counterfactual preference in the CW condition.
## A.4 Follow-Up Analysis On GPT-3's Success
The previous experiments indicate that GPT-3 has the best performance in counterfactual tasks. We also find that GPT-3's success differs non-trivially between small-scale and large-scale datasets. In Exp2, GPT-3 is more successful on the large-scale dataset. By contrast, in Exp3, GPT-3 is more successful on the small-scale dataset. What kind of linguistic factors are driving the success of GPT3? Why is there an asymmetry between GPT-3's performance on the small-scale and large-scale datasets? We speculate that there are two possible reasons related to the design and characteristics of the small-scale and large-scale datasets. First, the linear distance between lexical triggers and target positions in the large-scale dataset is not controlled as carefully as in the small-scale dataset.
Second, lexical triggers in the large-scale dataset always favor a specific continuation, whereas in the small-scale dataset the cue can support both continuations.
In this experiment, we further explore these questions by investigating to what extent GPT-3's success relies on other linguistic factors. We first design a *Baseline* dataset by selecting a subset of the large-scale dataset from Exp3, with the criterion that the selected items have no strong bias toward either completion in the Baseline Bias Context (BBC) condition (see examples in Table 11).
Next, we test GPT-3's sensitivity to three classes of linguistics features: conflicting lexical triggers, linear distance to target position, and several other linguistic markers. We manipulate these linguistic features in the items of the CWC and RWCA conditions, to form three new datasets. The Cue dataset introduces a conflicting cue via a discourse connective "rather than" (see examples in Table 12). The Distance dataset changes the position of the conflicting lexical cues by using the discourse connective "instead of" (see examples in Table 13). The Marker dataset manipulates other fine-grained linguistic markers including sentence boundary, tense, discourse connective (see examples in Table 14).
There are 10,000 items in total. We again calculate percentage of items in which the model prefers CWC-congruent continuations.
Baseline We test GPT-3's preference for CWCcongruent continuations in the *Baseline* dataset to establish a baseline comparison for subsequent analysis. The results are shown in the right-hand column of Table 11. Similar to the results in Section 4, GPT-3 shows a greater preference for CWCcongruent continuations in the CWC condition than in the RWCA condition, even when there is not a strong preference in the BBC condition, which indicates GPT-3's sensitivity to counterfactual structure.
| Condition | Sentence | GPT-3 |
|-----------|----------|-------|
| CWC | If the pet had loved vegetables, it would be very surprising. In fact, people feed the pet with fish/cabbages. | 34.8 |
| RWCA | Because the pet loved vegetables, it was very surprising. In fact, people feed the pet with fish/cabbages. | 27.3 |
| BBC | In fact, people feed the pet with fish/cabbages. | 42.5 |
Table 11: *Baseline* dataset: Example items and percentage of preference for CWC-congruent completion (e.g.,
"fish").
Conflicting lexical cue Next, in the Cue dataset we test to what extent GPT-3's performance reflects canceling out of lexical cues, by adding a conflicting lexical cue to the context, licensed by the discourse connective "rather than". Though a new conflicting lexical cue appears, the logical completion should remain the same. Table 12 (right-hand column) shows that GPT-3 is greatly affected by the presence of conflicting lexical cues. After inserting the conflicting cue (e.g., "meat") into context, the percentage of CWC-congruent continuations (e.g.,
"fish") increased in both CWC and RWCA conditions, indicating a strong effect from the presence of a conflicting lexical cue.
| Condition | Sentence | GPT-3 |
|---|---|---|
| CWC (Rather) | If the pet had loved vegetables rather than meat, it would be very surprising. In fact, people feed the pet with fish/cabbages. | 48.5 |
| RWCA (Rather) | Because the pet loved vegetables rather than meat, it was very surprising. In fact, people feed the pet with fish/cabbages. | 47.0 |

Table 12: *Cue* dataset: Example items and percentage of preference for CWC-congruent completion (e.g., "fish").

Linear distance to target Next, we use the *Distance* dataset to test the extent to which the salience of lexical cues is affected by distance from the target word. To do this, we move the conflicting lexical cues to the beginning of the sentence, using the discourse connective "instead of". As a result, the conflicting cue (e.g., "meat") is moved farther away from the target than in the *Cue* dataset. Table 13 (right-hand column) shows the results. The model is less likely to predict the CWC-congruent continuation (e.g., "fish") in both conditions. The result suggests that linear distance from lexical cues to the prediction target has a strong impact.

| Condition | Sentence | GPT-3 |
|---|---|---|
| CWC (Instead) | If instead of meat, the pet had loved vegetables, it would be very surprising. In fact, people feed the pet with fish/cabbages. | 28.5 |
| RWCA (Instead) | Because instead of meat, the pet loved vegetables, it was very surprising. In fact, people feed the pet with fish/cabbages. | 33.8 |

Table 13: *Distance* dataset: Example items and percentage of preference for CWC-congruent completion (e.g., "fish").

Other linguistic markers Finally, we use the *Marker* dataset to probe how other fine-grained linguistic markers affect the accuracy of predictions in counterfactual sentences. We test the effect of sentence boundaries (indicated by a period), discourse connectives (indicated by "In fact") and tense. All three manipulations make CWC-congruent continuations less coherent relative to the CWC condition in the *Baseline* dataset, while the tense and sentence boundary manipulations additionally cause the RWCA-congruent continuation to become more logical. Table 14 (right-hand column) shows the results. GPT-3 shows a fair amount of sensitivity to these linguistic markers. For the linguistic markers (tense marker, sentence boundary marker) that shift the logical completion from CWC-congruent (e.g., "fish") to RWCA-congruent (e.g., "cabbages"), GPT-3 is less likely to prefer the CWC-congruent completion, with tense generating the strongest effect. For the discourse connective manipulation, which deletes the connective "in fact" and should decrease the preference for the CWC-congruent completion, GPT-3 instead shows a slightly stronger preference for those CWC-congruent completions.

Table 14: *Marker* dataset: Example items and percentage of preference for CWC-congruent completion (e.g., "fish").

## A.5 Additional Metrics On Small-Scale Dataset

To further evaluate whether models' success on counterfactual inference is disentangled from a preference toward a specific continuation, we also conduct a by-item analysis on the small-scale datasets, and calculate the proportion of trials in which the model demonstrates a preference for the logical completion in both CW and RW conditions for Exp1, and in both CWC and RWCA conditions for Exp3. Table 15 shows the percentage of preference for logical completions in both counterfactual and factual conditions in Exp1 and Exp3. The results are consistent with the findings we report in Section 2 and Section 4. In Exp1, GPT-3, RoBERTa and MPNet show above-chance preference (25%) for logical continuations in both conditions. In Exp3, only GPT-3 shows substantial preference for logical continuations.

| Model | GPT-2 | GPT-3 | BERT | RoBERTa | MPNet |
|---|---|---|---|---|---|
| Exp1 (CW + RW) | 18.8 | **50.0** | 9.4 | 31.3 | 28.1 |
| Exp3 (CWC + RWCA) | 0 | **29.2** | 12.5 | 4.1 | 4.2 |

Table 15: Percentage of items in which both counterfactual (CW/CWC) and real scenarios (RW/RWCA) are predicted correctly in Exp1 and Exp3.
## ACL 2023 Responsible NLP Checklist

A For Every Submission:
✓ A1. Did you describe the limitations of your work?
In page 5 Section Limitations
✓ A2. Did you discuss any potential risks of your work?
Yes, in page 5, Section Ethics statement
✓ A3. Do the abstract and introduction summarize the paper's main claims?
In Section 1 Introduction and in abstract
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** In Section 2, 3, 4.
✓ B1. Did you cite the creators of artifacts you used?
Yes, in Section 2 and Section 4.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
The dataset is publicly available without a license
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
In Section 2, 3, 4.
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
In Section Ethics statement
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Yes, in Section 2 and Section 4.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C ✓ **Did You Run Computational Experiments?** In Section 2, 3, 4.
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
We are using pre-trained language models and it takes about two hours to run the experiment on google colab platform without a GPU
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
We discuss experiment set up in subsection Experiments
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
We report the percentage of preference for one context-congruent continuation over the context-incongruent continuation.

C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)?
Not applicable. Left blank.
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left Blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
madhani-etal-2023-bhasa | Bhasa-Abhijnaanam: Native-script and romanized Language Identification for 22 {I}ndic languages | https://aclanthology.org/2023.acl-short.71 | We create publicly available language identification (LID) datasets and models in all 22 Indian languages listed in the Indian constitution in both native-script and romanized text. First, we create Bhasha-Abhijnaanam, a language identification test set for native-script as well as romanized text which spans all 22 Indic languages. We also train IndicLID, a language identifier for all the above-mentioned languages in both native and romanized script. For native-script text, it has better language coverage than existing LIDs and is competitive or better than other LIDs. IndicLID is the first LID for romanized text in Indian languages. Two major challenges for romanized text LID are the lack of training data and low-LID performance when languages are similar. We provide simple and effective solutions to these problems. In general, there has been limited work on romanized text in any language, and our findings are relevant to other languages that need romanized language identification. Our models are publicly available at \url{https://github.com/AI4Bharat/IndicLID} under open-source licenses. Our training and test sets are also publicly available at \url{https://huggingface.co/datasets/ai4bharat/Bhasha-Abhijnaanam} under open-source licenses. | # Bhasha-Abhijnaanam: Native-Script And Romanized Language Identification For 22 Indic Languages
Yash Madhani1 Mitesh M. Khapra2 **Anoop Kunchukuttan**3 AI4Bharat1,2,3IIT Madras1,2,3 Microsoft3 [email protected] [email protected] [email protected]
## Abstract
We create publicly available language identification (LID) datasets and models in all 22 Indian languages listed in the Indian constitution in both native-script and romanized text. First, we create *Bhasha-Abhijnaanam*, a language identification test set for native-script as well as romanized text which spans all 22 Indic languages. We also train *IndicLID*, a language identifier for all the above-mentioned languages in both native and romanized script.
For native-script text, it has better language coverage than existing LIDs and is competitive or better than other LIDs. IndicLID is the first LID for romanized text in Indian languages.
Two major challenges for romanized text LID
are the lack of training data and low-LID
performance when languages are similar. We provide simple and effective solutions to these problems. In general, there has been limited work on romanized text in any language, and our findings are relevant to other languages that need romanized language identification.
Our models are publicly available at https://github.com/AI4Bharat/IndicLID under open-source licenses. Our training and test sets are also publicly available at https://huggingface.co/datasets/ai4bharat/Bhasha-Abhijnaanam under open-source licenses.
## 1 Introduction
In this work, we focus on building a language identifier for the 22 languages listed in the Indian constitution. With increasing digitization, there is a push to make NLP technologies like translation, ASR, conversational technologies, etc. (Bose, 2022) available as a public good at population scale
(Chandorkar, 2022). A good language identifier is required to help build corpora in low-resource languages. For such languages, language identification is far from a solved problem due to noisy web crawls, small existing datasets, and similarity to high-resource languages (Caswell et al., 2020).
Existing publicly available LID tools like CLD31, LangID2(Lui and Baldwin, 2011), FastText3(Joulin et al., 2016) and NLLB4(NLLB Team et al., 2022) have some shortcomings with respect to Indian languages. They do not cover all the above-mentioned 22 languages. In social media and chats, it is also common to use the roman script for most Indian languages leading to substantial user-generated content in roman script. However, none of the LIDs have any support for the detection of romanized Indian language text (except cld3 support for Latin Hindi). The widespread use of romanization implies that accurate romanized Language Identification models are a critical component in the NLP stack for Indian languages, given that this affects over 735 million internet users
(KPMG and Google, 2017). Therefore, our work on developing accurate and effective romanized Language Identification models has the potential to make a significant impact in the NLP space for Indian languages, particularly in the social media and chat application domains. Hence, we undertake the task of creating a LID for these 22 Indian languages. The main contributions of our work are as follows:
- We create *Bhasha-Abhijnaanam*5, a language identification test set for native-script as well as romanized text which spans 22 Indic languages.
Previous benchmarks for native script do not cover all these languages (NLLB Team et al., 2022; Roark et al., 2020). The Dakshina test set for romanized text covers only 11 languages and there are ambiguous instances in the test set like named entities that cannot be assigned to a particular language (Roark et al., 2020).
- We also train *IndicLID*, an LID for all the above-mentioned languages in both native and romanized script. For native-script training data, we sample sentences from diverse sources and oversample low-resource languages. The IndicLID native-script model has better language coverage than existing LIDs and is competitive with or better than other LIDs, with 98% accuracy and at least 6 times better throughput.

1 https://github.com/google/cld3 2 https://github.com/saffsd/langid.py 3 https://fasttext.cc/docs/en/language-identification.html 4 https://github.com/facebookresearch/fairseq/tree/nllb#lidmodel 5 The word means language-identification in Sanskrit.
- To the best of our knowledge, ours is one of the first large-scale efforts for romanized LID in any language, a task that has not received much attention. A major challenge for romanized text LID is the lack of romanized training data. We show that synthetic romanized training data created via transliteration can help train a reasonably good LID for romanized text. A simple linear classifier does not perform well for romanized text. Hence, we combine a simple but fast text classifier with a slower but more accurate classifier based on a pretrained language model to achieve a good trade-off between accuracy and speed.
Our findings are relevant to other languages that need LID for romanized text. We require native script data and a transliteration model to create the synthetic romanized data for the target language.
This romanized data serves as training data for the romanized LID.
## 2 Bhasha-Abhijnaanam Benchmark
We describe the creation of the Bhasha-Abhijnaanam LID benchmark for 22 Indian languages in native and roman script. Table 1 describes the statistics of the *Bhasha-Abhijnaanam* benchmark. We build upon existing benchmarks to fill in the coverage and quality gaps and cost-efficiently cover all languages.
## 2.1 Native Script Test Set.
We compile a native script test set comprising 19 Indian languages and 11 scripts from the FLORES-200 devtest (NLLB Team et al., 2022)
and Dakshina sentence test set (Roark et al., 2020).
We create native text test sets for the remaining three languages (*Bodo, Konkani, Dogri*) and one script (Manipuri in *Meetei Mayek* script)
not covered in these datasets. For these new languages we first sample the English sentences from Wikipedia and ask in-house, professional translators to translate the sentences to respective languages. This method ensured the quality and accuracy of our test samples, as well as minimizing
any potential noise in the data.

| Language | Script | Native | Roman |
|---|---|---|---|
| Assamese | Bengali | 1012 | 512 |
| Bangla | Bengali | 5606 | 4595 |
| Bodo | Devanagari | 1500 | 433 |
| Dogri | Devanagari | 1498 | 512 |
| Gujarati | Gujarati | 5797 | 4785 |
| Hindi | Devanagari | 5617 | 4606 |
| Kannada | Kannada | 5859 | 4848 |
| Kashmiri | Perso-Arabic | 2511 | 450 |
| Kashmiri | Devanagari | 1012 | |
| Konkani | Devanagari | 1500 | 444 |
| Maithili | Devanagari | 2512 | 439 |
| Malayalam | Malayalam | 5628 | 4617 |
| Manipuri | Bengali | 1012 | 442 |
| Manipuri | Meetei Mayek | 1500 | |
| Marathi | Devanagari | 5611 | 4603 |
| Nepali | Devanagari | 2512 | 423 |
| Oriya | Oriya | 1012 | 512 |
| Punjabi | Gurmukhi | 5776 | 4765 |
| Sanskrit | Devanagari | 2510 | 448 |
| Santali | Ol Chiki | 2512 | 0 |
| Sindhi | Perso-Arabic | 5893 | 4881 |
| Tamil | Tamil | 5779 | 4767 |
| Telugu | Telugu | 5751 | 4741 |
| Urdu | Perso-Arabic | 6883 | 4371 |

Table 1: Statistics of the *Bhasha-Abhijnaanam* benchmark: number of native-script and romanized test sentences per language and script.
## 2.2 Roman Script Test Set.
We propose a new benchmark test set to evaluate roman-script language identification for 21 Indian languages. Out of these, 11 languages are represented in the Dakshina romanized sentence test set (Roark et al., 2020), which comprises native script sentences from Wikipedia along with their romanization. However, this test set includes short sentences which are just named entities and English loan words which are not useful for romanized text LID evaluation. To address this issue, we manually validate the Dakshina test sets for the languages we are interested in and filter out about 7% of the sentences. Section 2.3 describes the details of the filtering process. To create a benchmark test set for the remaining 10 Indian languages, we sampled sentences from IndicCorp (Doddapaneni et al.,
2022) and asked annotators to write the same in roman script. We did not specify any transliteration guidelines and annotators were free to transliterate in the most natural way they deemed fit. We additionally asked annotators to skip the sentence if they find it invalid (wrong language, offensive, truncated, etc.).

| Language | Total samples | Valid samples | % filtered |
|---|---|---|---|
| Bengali | 5001 | 4600 | 8.0183 |
| Gujarati | 5001 | 4789 | 4.2391 |
| Hindi | 5001 | 4616 | 7.6984 |
| Kannada | 5001 | 4849 | 3.0393 |
| Malayalam | 5001 | 4627 | 7.4785 |
| Marathi | 5001 | 4617 | 7.6784 |
| Punjabi | 5001 | 4782 | 4.3791 |
| Sindhi | 5001 | 4889 | 2.2395 |
| Tamil | 5001 | 4802 | 3.9792 |
| Telugu | 5001 | 4754 | 4.9390 |
| Urdu | 4881 | 4395 | 9.9569 |

Table 2: Filtering statistics for the romanized Dakshina test sets.
## 2.3 Romanized Dakshina Testset Filtering
The Dakshina romanized sentence test set includes short sentences which are just named entities and English loan words which are not useful for romanized text LID evaluation. To address this issue, we manually validate the Dakshina test sets for the languages we are interested in. We first identified potentially problematic sentences from the romanized Dakshina test set by applying two constraints: (i)
sentences shorter than 5 words, and (ii) the native LID model is less confident about the native language sentence (prediction score less than 0.8). These sentences were then validated by native language annotators. The annotators were asked to read the roman sentences and determine whether they were named entities or sentences where they could not determine the language. Such entries were filtered out. About 7% of the sentences were filtered. Table 2 describes the filtering statistics.
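The two-constraint flagging step can be sketched as below. The function and argument names are illustrative (not the actual filtering code), `native_lid` stands for any native-script LID model returning a label and a probability, and the sketch assumes an item is flagged when either constraint holds.

```python
def flag_for_manual_validation(roman_sentence: str,
                               native_sentence: str,
                               native_lid,
                               min_words: int = 5,
                               min_confidence: float = 0.8) -> bool:
    """Return True if the item should be sent to a human annotator."""
    too_short = len(roman_sentence.split()) < min_words
    _, prob = native_lid(native_sentence)   # e.g., ("hin_Deva", 0.62)
    low_confidence = prob < min_confidence
    return too_short or low_confidence

# Flagged items were then checked by native-language annotators, and named
# entities or undecidable sentences were filtered out (about 7% of the data).
```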
## 3 IndicLID Model
IndicLID is a classifier specifically for Indic languages that can predict 47 classes (24 native-script classes and 21 roman-script classes plus English and Others). We create three classifier variants: a fast linear classifier, a slower classifier finetuned from a pre-trained LM, and an ensemble of the two models which trades off speed v/s accuracy.
## 3.1 Training Dataset Creation
Native-script training data. We compiled the training data sentences from various sources viz. IndicCorp (Doddapaneni et al., 2022), NLLB (NLLB Team et al., 2022), Wikipedia, Vikaspedia6 and internal sources. To ensure a diverse and representative training dataset, we sampled 100k sentences per language-script combination in a balanced way across all these sources. We used oversampling for languages with less than 100k sentences. We tokenized and normalized the sentences using the IndicNLP library7 (Kunchukuttan, 2020) with default settings.
Romanized training data. There are hardly any romanized corpora for Indian languages in the public domain8. Hence, we explored the use of transliteration for creating synthetic romanized data. We create romanized training data by transliterating the native script training data into roman script using the multilingual IndicXlit9 transliteration model (Indic-to-En version) (Madhani et al., 2022). The authors have provided results on the transliteration quality of the IndicXlit model. We rely on this analysis to ensure the quality of generated training data.
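A minimal sketch of this synthetic-data step is given below, assuming a generic `transliterate_to_roman` function as a stand-in for the IndicXlit Indic-to-English transliteration model (the real IndicXlit interface may differ):

```python
def build_romanized_training_file(native_sentences, language_code,
                                  transliterate_to_roman, out_path):
    """Write fastText-style romanized training lines from native-script text."""
    with open(out_path, "a", encoding="utf-8") as f:
        for sentence in native_sentences:
            # Placeholder call: replace with the actual transliteration model.
            roman = transliterate_to_roman(sentence, language_code)
            # fastText expects one "__label__<class> <text>" line per example.
            f.write(f"__label__{language_code}_roman {roman}\n")
```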
## 3.2 Linear Classifier
Linear classifiers using character n-gram features are widely used for LIDs (Jauhiainen et al., 2021).
We use FastText (Joulin et al., 2016) to train our fast, linear classifier. It is a lightweight and efficient linear classifier that is well-suited for handling large-scale text data. It utilizes character n-gram features which enables it to utilize subword information. This makes it particularly useful for dealing with rare words and allows it to discriminate between similar languages having similar spellings. We trained separate classifiers for native script (**IndicLID-FTN**) and roman script
(**IndicLID-FTR**). We chose 8-dimension word-vector models after experimentation as they maintain small model sizes without losing model accuracy (refer to Appendix A for results).
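A training call along these lines can be written with the fastText Python bindings. Only the 8-dimensional word vectors are taken from the setup above; the file path and the remaining hyperparameters shown here are illustrative assumptions.

```python
import fasttext

# Training file: one "__label__<lang> <sentence>" line per example.
model = fasttext.train_supervised(
    input="indiclid_roman_train.txt",  # assumed file name
    dim=8,              # 8-dimensional word vectors, as chosen above
    minn=2, maxn=5,     # character n-gram range (assumed values)
    epoch=25, lr=0.5,   # assumed training hyperparameters
    loss="softmax",
)

labels, probs = model.predict("aap kaise ho", k=1)
print(labels[0], probs[0])  # e.g., ('__label__hin_roman', 0.93)
```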
## 3.3 Pretrained LM-Based Classifier
For romanized text, we observed that linear classifiers do not perform very well. Hence, we also experimented with models having larger capacity. Particularly, we finetuned a pretrained LM on the romanized training dataset. We evaluated the following LMs: XLM-R (Conneau et al., 2020),
IndicBERT-v2 (Doddapaneni et al., 2022) and MuRIL (Khanuja et al., 2021). The last two LMs are specifically trained for Indian languages and MuRIL also incorporates synthetic romanized data in pre-training. Hyperparameters for finetuning are described in Appendix B. We used the IndicBERT-based classifier as the LM-based classifier (henceforth referred to as **IndicLID-BERT**) since it was amongst the best-performing romanized text classifiers and had maximum language coverage.
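The finetuning setup can be sketched with Hugging Face Transformers as below; the checkpoint name, the number of labels, and the layer-access path (shown for a BERT-style encoder) are assumptions rather than the exact configuration used.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "ai4bharat/IndicBERTv2-MLM-only"  # assumed IndicBERT v2 checkpoint name
num_roman_classes = 23  # assumption: 21 roman-script languages + English + Others

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=num_roman_classes
)

# Freeze the full encoder; only the classification head stays trainable.
for param in model.base_model.parameters():
    param.requires_grad = False

# "Unfreeze-layer-1" variant: additionally unfreeze the top encoder layer.
for param in model.base_model.encoder.layer[-1].parameters():
    param.requires_grad = True
```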
## 3.4 Final Ensemble Classifier
Our final IndicLID classifier is a pipeline of multiple classifiers. Figure 1 shows the overall workflow of the IndicLID classifier. The pipeline works as follows: (1) Depending on the amount of roman script in the input text, we invoke either the native-text or romanized linear classifier; IndicLID-FTR is invoked for text containing >50% roman characters. (2) For roman text, if IndicLID-FTR is not confident about its prediction, we redirect the request to IndicLID-BERT. We resort to this two-stage approach for romanized input to achieve a good trade-off between classifier accuracy and inference speed. The fast IndicLID-FTR prediction is used if the model is confident about its prediction (probability of predicted class > 0.6); otherwise, the slower but more accurate IndicLID-BERT is invoked. This threshold provides a good trade-off (see Appendix C for more details).
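The routing logic can be summarized in a short sketch; `ftn`, `ftr` and `bert` are placeholders for the three component classifiers described above (each returning a label and a probability), not actual IndicLID package APIs.

```python
def roman_fraction(text: str) -> float:
    """Fraction of alphabetic characters that are basic Latin letters."""
    letters = [c for c in text if c.isalpha()]
    if not letters:
        return 0.0
    return sum("a" <= c.lower() <= "z" for c in letters) / len(letters)

def indiclid_predict(text, ftn, ftr, bert,
                     roman_threshold=0.5, conf_threshold=0.6):
    """Two-stage IndicLID-style pipeline (sketch)."""
    if roman_fraction(text) <= roman_threshold:
        return ftn(text)          # native-script linear classifier
    label, prob = ftr(text)       # fast romanized linear classifier
    if prob > conf_threshold:
        return label, prob        # fast path: linear model is confident
    return bert(text)             # fall back to the finetuned LM classifier
```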
## 4 Results And Discussion
We discuss the performance of various models on the benchmark and analyze the results. To prevent any overlap between the test/valid and train sets, we excluded the Flores-200 test set (NLLB Team et al., 2022), Dakshina test set (Roark et al., 2020)
while sampling native train samples from various sources. Additionally, we removed the training samples from the benchmark samples when collecting sentences for the benchmark test set. We also made sure that there was no overlap between the test and valid sets. To create the romanized training set, we simply transliterated the native training set. As the Dakshina test set (Roark et al., 2020) provided parallel sentences for the native and roman test sets, there was no overlap between the roman train and test sets.

| Model | P | R | F1 | Acc | Throughput | Size |
|---|---|---|---|---|---|---|
| IndicLID-FTN-8-dim (24) | 98.11 | 98.56 | 98.31 | 98.55 | 30,303 | 318M |
| *Comparing our IndicLID-FTN model with the CLD3 model (12)* | | | | | | |
| IndicLID-FTN-4-dim | 99.43 | 98.40 | 98.89 | 98.33 | 47,619 | 208M |
| IndicLID-FTN-8-dim | 99.73 | 98.67 | 99.18 | 98.62 | 33,333 | 318M |
| CLD3 | 98.52 | 98.14 | 98.31 | 98.03 | 4,861 | - |
| *Comparing our IndicLID-FTN model with the NLLB model (20)* | | | | | | |
| IndicLID-FTN-4-dim | 97.78 | 98.10 | 97.92 | 98.19 | 41,666 | 208M |
| IndicLID-FTN-8-dim | 98.13 | 98.59 | 98.34 | 98.56 | 29,411 | 318M |
| NLLB | 99.28 | 98.65 | 98.95 | 98.78 | 4,970 | 1.1G |

Table 3: Native-script test set results. Throughput is the number of sentences per second.
## 4.1 Native Script LID
We compare IndicLID-FTN with the NLLB model
(NLLB Team et al., 2022) and the CLD3 model.
As we can see in Table 3, the LID performance of IndicLID-FTN is comparable to or better than that of the other models. Our model is 10 times faster and 4 times smaller than the NLLB model. The model's footprint can be further reduced by model quantization
(Joulin et al., 2016) which we leave for future work.
## 4.2 Roman Script LID
Table 4 presents the results of different model variants on the romanized test set (see Appendix D for language-wise results). IndicLID-BERT is significantly better than IndicLID-FTR, but the throughput decreases significantly. The ensemble model
(IndicLID) maintains the same LID performance as IndicLID-BERT with a 3x increase in the throughput over IndicLID-BERT. Further speedups in the model throughput can be achieved by creating distilled versions, which we leave for future work.
| Model | P | R | F1 | Acc | Throughput | Size |
|---|---|---|---|---|---|---|
| IndicLID-FTR (dim-8) | 63.12 | 78.01 | 63.28 | 71.49 | 37,037 | 357M |
| IndicLID-BERT (unfreeze-layer-1) | 72.70 | 84.01 | 74.52 | 80.04 | 3 | 1.1GB |
| IndicLID (threshold-0.6) | 72.74 | 84.50 | 74.72 | 80.40 | 10 | 1.4GB |

Table 4: Romanized test set results. Throughput is the number of sentences per second.

LID confusion analysis The confusion matrix for IndicLID is shown in Figure 2. We see that major confusions are between similar languages. Some examples of such language clusters that can be observed are (1) Hindi and very close languages like Maithili, Urdu and Punjabi, (2) Konkani and Marathi, (3) Sindhi and Kashmiri. Improving romanized LID between very similar languages is thus an important direction of improvement.
Impact of synthetic training data To understand the impact of synthetic training data, we generate a machine-transliterated version of the romanized test set using IndicXlit. We compare the LID accuracy on the original and synthetically generated test sets. Table 5 shows that the results on the synthetic test set are significantly better than the original test set (approaching accuracy levels in the 90s). The data characteristics of the synthetic test set are much closer to the training data than the original test set. Closing the training-test distribution gap (by representing original romanized data in the training data and/or improved generation of synthetic romanized data to reflect the true data distribution) is critical to improving model performance.

Table 5: Comparison of results on synthetic vs. original romanized test sets for the IndicLID model.
The confusion matrix gives further insights into the impact of synthetic training data. Hindi is confused with languages like Nepali, Sanskrit, Marathi and Konkani, which use the same native script as Hindi
(Devanagari). Since a multilingual transliteration model with significant Hindi data was used to create the synthetic romanized training data, it may result in the synthetic romanized forms of these languages being more similar to Hindi than would be the case with original romanized data.
Impact of input length Figure 3 plots the LID
accuracy for various input length buckets. The LID
is most confused for short inputs (<10 words) after which the performance is relatively stable.
## 5 Conclusion
We introduce an LID benchmark and models for native-script and romanized text in 22 Indian languages. These tools will serve as a basis for building NLP resources for Indian languages, particularly extremely low-resource ones that are "left behind" in the NLP world today (Joshi et al., 2020).
Our work takes first steps towards LID of romanized text, and our analysis reveals directions for future work.
## Acknowledgements
We would like to thank the Ministry of Electronics and Information Technology of the Government of India for their generous grant through the Digital India Bhashini project. We also thank the Centre for Development of Advanced Computing for providing compute time on the Param Siddhi Supercomputer. We also thank Nilekani Philanthropies for their generous grant towards building datasets, models, tools and resources for Indic languages.
We also thank Microsoft for their grant to support research on Indic languages. We would like to thank Jay Gala and Ishvinder Sethi for their help in coordinating the annotation work. Most importantly we would like to thank all the annotators who helped create the Bhasha-Abhijnaanam benchmark.
## Limitations
The benchmark for language identification for the most part contains clean sentences (grammatically correct, single script, etc.). Data from the real world might be noisy (ungrammatical, mixed scripts, code-mixed, invalid characters, etc.). A better representative benchmark might be useful for such use cases. However, the use cases captured by this benchmark should suffice for the collection of clean monolingual corpora. This also represents a first step for many languages where no LID benchmark exists.
The use of synthetic training data seems to create a gap in performance due to divergence in train/test data distributions. Acquisition of original native romanized text and methods to generate better romanized text are needed.
Note that the romanized LID model does not support Dogri since the IndicXlit transliteration model does not support Dogri. However, since Dogri is written in the Devanagari script, using the transliterator for Hindi (which uses the same script) might be a good approximation for generating synthetic training data. We will explore this in the future.
This work is limited to the 22 languages listed in the 8th schedule of the Indian constitution. Further work is needed to extend the benchmark to many more widely used languages in India (which has about 30 languages with more than a million speakers).
## Ethics Statement
For the human annotations on the dataset, the language experts are native speakers of the languages and from the Indian subcontinent. They were paid a competitive monthly salary to help with the task.
The salary was determined based on the skill set and experience of the expert and adhered to the norms of the government of our country. The dataset has no harmful content. The annotators were made aware of the fact that the annotations would be released publicly and the annotations contain no private information. The proposed benchmark builds upon existing datasets. These datasets and related works have been cited.
The annotations are collected on a publicly available dataset and will be released publicly for future use. The IndicCorp dataset which we annotated has already been checked for offensive content.
All the datasets created as part of this work will be released under a CC-0 license10 and all the code and models will be released under an MIT
license.11
## References
Arghanshu Bose. 2022. Explained: What is Bhashini and how it can bridge the gap between Indian languages. In The Times of India, 2 Sep 2022.
Isaac Caswell, Theresa Breiner, Daan van Esch, and Ankur Bapna. 2020. Language ID in the wild: Unexpected challenges on the path to a thousand-language web text corpus. In *Proceedings of the 28th International Conference on Computational Linguistics*,
pages 6588–6608, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Aashish Chandorkar. 2022. UPI, CoWIN, ONDC: Public Digital Infrastructure Has Put India on the Fast Lane of Tech-led Growth. In News18, 28 May 2022.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440–
8451, Online. Association for Computational Linguistics.
Sumanth Doddapaneni, Rahul Aralikatte, Gowtham Ramesh, Shreya Goyal, Mitesh M Khapra, Anoop Kunchukuttan, and Pratyush Kumar. 2022. Towards Leaving No Indic Language Behind: Building Monolingual Corpora, Benchmark and Models for Indic Languages. *arXiv preprint arXiv:2212.05409*.
Tommi Jauhiainen, Tharindu Ranasinghe, and Marcos Zampieri. 2021. Comparing approaches to dravidian language identification. *arXiv preprint* arXiv:2103.05552.
Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP
world. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 6282–6293, Online. Association for Computational Linguistics.
10 https://creativecommons.org/publicdomain/zero/1.0
11 https://opensource.org/licenses/MIT
Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, and Tomas Mikolov. 2016. FastText.zip: Compressing text classification models. *arXiv preprint arXiv:1612.03651*.
Simran Khanuja, Diksha Bansal, Sarvesh Mehtani, Savya Khosla, Atreyee Dey, Balaji Gopalan, Dilip Kumar Margam, Pooja Aggarwal, Rajiv Teja Nagipogu, Shachi Dave, et al. 2021. Muril: Multilingual representations for indian languages. arXiv preprint arXiv:2103.10730.
KPMG and Google. 2017. Indian Languages - Defining India's Internet. https://assets.kpmg/content/dam/kpmg/in/pdf/2017/04/Indian-languages-Defining-Indias-Internet.pdf.

Anoop Kunchukuttan. 2020. The IndicNLP Library. https://github.com/anoopkunchukuttan/indic_nlp_library/blob/master/docs/indicnlp.pdf.
Marco Lui and Timothy Baldwin. 2011. Cross-domain feature selection for language identification. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 553–561, Chiang Mai, Thailand. Asian Federation of Natural Language Processing.
Yash Madhani, Sushane Parthan, Priyanka Bedekar, Ruchi Khapra, Vivek Seshadri, Anoop Kunchukuttan, Pratyush Kumar, and Mitesh M Khapra. 2022.
Aksharantar: Towards building open transliteration tools for the next billion users. *arXiv preprint* arXiv:2205.03018.
NLLB Team, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. 2022. No Language Left Behind: Scaling Human-Centered Machine Translation. *arXiv* preprint arXiv:2207.04672.
Brian Roark, Lawrence Wolf-Sonkin, Christo Kirov, Sabrina J. Mielke, Cibu Johny, Isin Demirsahin, and Keith B. Hall. 2020. Processing South Asian Languages Written in the Latin Script: the Dakshina Dataset. In *Proceedings of The 12th Language Resources and Evaluation Conference, LREC 2020,*
Marseille, France, May 11-16, 2020, pages 2413–
2423. European Language Resources Association.
## A Hyperparameter Tuning For Roman Script Linear Classifier

We train the IndicLID-FTR model using 100k samples. While deciding the configuration of the IndicLID-FTR model, we experimented with fixing the dimension of the IndicLID-FTR model and tuning the rest of the hyperparameters. As we can see from Table 6, the model size increases with the IndicLID-FTR dimension. However, beyond 8 dimensions, there is not much improvement observed. Therefore, we chose the model with 8 dimensions, taking into account the model size.

| Dimension | Precision | Recall | F1-Score | Accuracy | Throughput | Model Size |
|---|---|---|---|---|---|---|
| 4 | 60.01 | 74.56 | 61.09 | 67.52 | 50000 | 171M |
| 8 | 63.13 | 78.02 | 63.29 | 71.49 | 37037 | 357M |
| 16 | 63.67 | 78.33 | 64.32 | 71.58 | 30303 | 578M |
| 32 | 64.62 | 78.67 | 65.16 | 71.95 | 15625 | 1.6G |
| 64 | 64.54 | 78.58 | 65.10 | 71.93 | 14085 | 1.9G |
| 128 | 64.55 | 78.45 | 65.03 | 71.77 | 9901 | 3.3G |
| 256 | 64.60 | 78.54 | 65.13 | 71.89 | 7463 | 7.3G |
| 512 | 63.89 | 78.29 | 64.58 | 71.49 | 4608 | 11G |
| 768 | 64.37 | 78.63 | 65.07 | 72.04 | 3876 | 22G |
| 1024 | 64.30 | 78.53 | 65.07 | 71.94 | 3322 | 29G |

Table 6: IndicLID-FTR performance on the Bhasha-Abhijnaanam roman script test set. IndicLID-FTR models are hyper-tuned by fixing different dimensions. Throughput is the number of sentences per second.

## B Model Selection For Roman Script LM-Based Classifier

We experimented with three different pre-trained language models: IndicBERT (Doddapaneni et al., 2022), XLM-R (Conneau et al., 2020), and MuRIL (Khanuja et al., 2021). In the initial experiment, we froze all the layers except for the last softmax layer and finetuned the model with our training data. To fine-tune the language model, we added one softmax layer to the end of the model and used our roman script training data to finetune the model. The results for these experiments are shown in Table 7. We found that IndicBERT and MuRIL performed similarly among these three models for our roman LID task. MuRIL leverages the advantage of roman text training data, while IndicBERT was trained only on native script but performed similarly. However, IndicBERT supports 24 Indian languages, while MuRIL only supports 17 Indian languages. Therefore, we selected IndicBERT due to its superior coverage and performance.

| Model | Precision | Recall | F1-Score | Accuracy |
|---|---|---|---|---|
| XLMR (Conneau et al., 2020) | 63.19 | 70.92 | 59.49 | 65.15 |
| MuRIL (Khanuja et al., 2021) | 66.70 | 79.08 | 67.77 | 73.70 |
| IndicBERT (Doddapaneni et al., 2022) | 68.07 | 80.52 | 68.91 | 75.81 |

Table 7: Bhasha-Abhijnaanam roman script test set results for roman script language models finetuned by freezing all the layers.

We then further experimented with IndicBERT by unfreezing 1, 2, 4, 6, 8, and 11 layers. The results and comparison of all the experiments are described in Table 8. We found that unfreezing 1 layer was enough for our task and that unfreezing more layers did not provide any additional benefit.

| Model | Precision | Recall | F1-Score | Accuracy |
|---|---|---|---|---|
| unfreezed-layer-1 | 72.70 | 84.01 | 74.53 | 80.04 |
| unfreezed-layer-2 | 69.84 | 83.84 | 72.44 | 79.55 |
| unfreezed-layer-4 | 69.53 | 83.44 | 72.12 | 79.47 |
| unfreezed-layer-6 | 68.41 | 81.89 | 70.02 | 77.08 |
| unfreezed-layer-8 | 67.46 | 81.88 | 68.42 | 76.04 |
| unfreezed-layer-11 | 70.55 | 83.73 | 72.63 | 79.88 |

Table 8: Bhasha-Abhijnaanam roman script test set results on IndicLID-BERT finetuned with unfreezing different numbers of layers.

| Thresholds | P | R | F1 | Acc | Throughput |
|---|---|---|---|---|---|
| threshold 0.1 | 63.13 | 78.02 | 63.29 | 71.49 | 50000 |
| threshold 0.2 | 63.43 | 78.18 | 63.63 | 71.77 | 379 |
| threshold 0.3 | 65.50 | 79.64 | 66.15 | 73.84 | 54 |
| threshold 0.4 | 68.39 | 81.84 | 69.77 | 76.84 | 22 |
| threshold 0.5 | 70.99 | 83.60 | 72.87 | 79.15 | 14 |
| threshold 0.6 | 72.74 | 84.51 | 74.72 | 80.4 | 10 |
| threshold 0.7 | 73.60 | 84.80 | 75.54 | 80.93 | 9 |
| threshold 0.8 | 73.88 | 84.81 | 75.77 | 80.96 | 8 |
| threshold 0.9 | 73.51 | 84.50 | 75.35 | 80.62 | 6 |

Table 9: IndicLID speed/accuracy trade-off on the Bhasha-Abhijnaanam roman script test set for different IndicLID-FTR confidence thresholds. Throughput is the number of sentences per second.
## C Analysis Of Speed/Accuracy Tradeoff
We experimented with different thresholds for IndicLID. If the probability score is below a certain threshold, we invoke the more powerful IndicLID-BERT model; otherwise, we go with the IndicLID-FTR model prediction. The IndicLID-FTR model is quite fast compared to the IndicLID-BERT model. We can see a good trade-off between throughput and accuracy in Table 9 as we increase the threshold. As the threshold increases, the input is more likely to go towards the IndicLID-BERT model, as we are making the system less reliant on the IndicLID-FTR model.
## D Language-Wise Analysis For Roman Script Classifiers
Table 10 illustrates the language-specific performance of the IndicLID-FTR, IndicLID-BERT and IndicLID models in detail. As we can see, IndicLID-BERT has better representation than IndicLID-FTR for almost all the languages, which leads to a better F1 score for IndicLID. However, for Sanskrit and Manipuri, the IndicLID-FTR model has a better representation than the IndicLID-BERT model, which is an interesting finding that warrants further investigation in future studies.
| Language | IndicLID-FTR (8 dim) Precision | Recall | F1 | IndicLID-BERT (unfreeze 1) Precision | Recall | F1 | IndicLID (threshold 0.6) Precision | Recall | F1 |
|---|---|---|---|---|---|---|---|---|---|
| Assamese | 37.72 | 93.55 | 53.76 | 66.81 | 91.21 | 77.13 | 72.41 | 92.77 | 81.34 |
| Bangla | 76.63 | 94.10 | 84.47 | 97.12 | 88.14 | 92.41 | 94.94 | 93.95 | 94.44 |
| Bodo | 70.88 | 98.38 | 82.40 | 84.78 | 99.08 | 91.37 | 85.66 | 99.31 | 91.98 |
| Konkani | 24.62 | 95.72 | 39.17 | 38.35 | 99.32 | 55.33 | 40.90 | 97.75 | 57.67 |
| Gujarati | 89.52 | 78.70 | 83.76 | 95.88 | 85.20 | 90.23 | 95.16 | 86.69 | 90.73 |
| Hindi | 65.46 | 15.68 | 25.29 | 76.32 | 60.40 | 67.43 | 77.16 | 53.32 | 63.06 |
| Kannada | 89.66 | 96.41 | 92.91 | 95.79 | 95.71 | 95.75 | 95.29 | 96.78 | 96.03 |
| Kashmiri | 18.74 | 91.56 | 31.12 | 39.45 | 93.11 | 55.42 | 34.80 | 94.67 | 50.90 |
| Maithili | 07.81 | 38.95 | 13.01 | 29.00 | 41.69 | 34.21 | 21.97 | 43.74 | 29.25 |
| Malayalam | 89.75 | 94.46 | 92.04 | 92.19 | 95.32 | 93.73 | 91.33 | 95.36 | 93.30 |
| Manipuri | 64.84 | 98.87 | 78.32 | 50.06 | 98.42 | 66.36 | 58.85 | 99.32 | 73.91 |
| Marathi | 87.21 | 79.58 | 83.22 | 96.35 | 80.80 | 87.89 | 95.86 | 82.92 | 88.92 |
| Nepali | 19.55 | 82.51 | 31.61 | 43.25 | 93.85 | 59.21 | 36.94 | 93.62 | 52.98 |
| Oriya | 41.88 | 95.70 | 58.26 | 64.09 | 95.51 | 76.71 | 62.96 | 97.27 | 76.44 |
| Punjabi | 78.52 | 37.21 | 50.49 | 84.71 | 64.64 | 73.32 | 85.62 | 62.62 | 72.34 |
| Sanskrit | 49.32 | 96.43 | 65.26 | 32.55 | 99.33 | 49.04 | 36.88 | 99.11 | 53.75 |
| Sindhi | 80.00 | 61.05 | 69.25 | 86.39 | 71.91 | 78.49 | 87.88 | 72.51 | 79.46 |
| Tamil | 97.32 | 90.56 | 93.82 | 97.15 | 93.06 | 95.06 | 97.50 | 92.64 | 95.01 |
| Telugu | 94.24 | 87.68 | 90.84 | 95.25 | 88.76 | 91.89 | 95.89 | 89.50 | 92.58 |
| Urdu | 78.88 | 33.24 | 46.77 | 88.53 | 44.84 | 59.53 | 86.87 | 46.31 | 60.41 |
| Avg | 63.13 | 78.02 | 63.29 | 72.70 | 84.01 | 74.53 | 72.74 | 84.51 | 74.72 |
## ACL 2023 Responsible NLP Checklist

A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations (after the conclusion)
✓ A2. Did you discuss any potential risks of your work?
Limitations (after the conclusion)
✓ A3. Do the abstract and introduction summarize the paper's main claims?
We discussed this in Abstract and Section 1.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
B ✓ **Did you use or create scientific artifacts?**
We discussed this in Section 2 (data) and Section 3 (models).
✓ B1. Did you cite the creators of artifacts you used?
We cited them in Section 2 and 3.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
We discuss this in the Ethics Statement (after Limitations)
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
We discussed this in Sections 2 and 3.
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
The data we used comes from public sources, so no PII is involved. The IndicCorp data we use has already been checked for offensive content. We mention this in the Ethics Statement (after Limitations).
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
We discussed this in Section 2.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
We discussed this in Section 2.
C ✓ **Did you run computational experiments?**
We discussed this in Section 3 and Section 4.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
We discussed this in Section 3.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
We discussed this in Section 3, Appendix B, Appendix C and Appendix D.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
We discussed this in Section 4.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 2 and 3
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** We Discussed This In Section 2 And Appendix A.
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
We discussed this in Appendix A.
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
We discussed this in Ethics Statement.
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
We discussed this in Ethics Statement.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
In Ethics Statement |
fortier-dubois-rosati-2023-using | Using contradictions improves question answering systems | https://aclanthology.org/2023.acl-short.72 | This work examines the use of contradiction in natural language inference (NLI) for question answering (QA). Typically, NLI systems help answer questions by determining if a potential answer is entailed (supported) by some background context. But is it useful to also determine if an answer contradicts the context? We test this in two settings, multiple choice and extractive QA, and find that systems that incorporate contradiction can do slightly better than entailment-only systems on certain datasets. However, the best performances come from using contradiction, entailment, and QA model confidence scores together. This has implications for the deployment of QA systems in domains such as medicine and science where safety is an issue. | # Using Contradictions Improves Question Answering Systems
Étienne Fortier-Dubois Domenic Rosati scite.ai Dalhousie University
## Abstract
This work examines the use of *contradiction* in natural language inference (NLI) for question answering (QA). Typically, NLI systems help answer questions by determining if a potential answer is *entailed* (supported) by some background context. But is it useful to also determine if an answer contradicts the context? We test this in two settings, multiple choice and extractive QA, and find that systems that incorporate contradiction can do slightly better than entailment-only systems on certain datasets. However, the best performances come from using contradiction, entailment, and QA
model confidence scores together. This has implications for the deployment of QA systems in domains such as medicine and science where safety is an issue.
## 1 Introduction
Safety in NLP systems is unresolved, particularly in biomedical and scientific contexts where hallucination, overconfidence, and other problems are major obstacles to deployment (Ji et al., 2022; Kell et al., 2021). One active area of research to solve these issues is natural language inference (NLI)
(Li et al., 2022). NLI is the task of determining whether a hypothesis is true (*entailed*), false (*contradicted*), or undetermined (*neutral*) given some premise.
Current NLI systems typically focus only on entailment to verify hypotheses—they calculate the degree to which a hypothesis is supported by the premise. But the premise can provide another signal: contradiction. Regardless of how well a hypothesis is entailed by the context, it can also be more or less contradicted, which could affect whether it is accepted or rejected. Contradictions are an important signal indicating whether some statement might be unacceptable given a premise.
In some cases where we might not know if a statement is supported, we should still ensure we are rejecting statements that are outright contradicted.
We wondered if adding this signal to a question answering (QA) system might improve performance and safety. To this end, we propose a method that reformulates answers from the QA
system as hypotheses for NLI, calculates the entailment, contradiction, and neutrality of each hypothesis, and then selects the best one based on a combination of these results (Figure 1). We show that across 16 QA datasets (9 multiple choice and 7 extractive), the best approach is to use entailment, contradiction, and confidence scores together. Using only contradiction is roughly on par with, and sometimes better than, using only entailment.
## 1.1 Related Work
NLI for question answering has been explored by several authors in various settings; see Paramasivam and Nirmala (2021) for an overview.
One of these settings is **selective question answering for extractive QA**, where *selective* refers to abstention when the system is not confident enough in its answer (Kamath et al., 2020). Chen et al. (2021) have found that NLI systems are able to verify the predictions made by a QA system in this setting, but their result is limited to only selecting a top k% of answers. Moreover, they
In the related setting of **multiple choice QA and**
fact checking, Mishra et al. (2021) have explored the use of entailment, finding that NLI models do well at these tasks by themselves, but can perform even better when they are adapted to in-domain data and longer premises. Yet their method uses only a two-class NLI set up (entailed or not entailed), which doesn't tell us much about directly using the contradiction signal. Pujari and Goldwasser (2019) does incorporate the contradiction signal showing the power of contradiction to improve machine comprehension but does not analyze its effects separately from entailment.
Other QA settings in which NLI has been used include open domain (Harabagiu and Hickl, 2006)
and multi-hop (Trivedi et al., 2019). Thus far, approaches tend to focus on entailment. To our knowledge, our work is the first to directly assess using contradictions for QA isolated from entailment.
Outside of question answering, a domain that uses contradictions is **factual consistency**—the task of ensuring that a collection of utterances is faithful to a source document. Li et al. (2022) provide an overview. Typically, entailment is still the main focus, but Laban et al. (2022) propose an NLI-based method to ensure the consistency of a summary with a source document using contradiction and neutral scores in addition to entailment, beating out previous systems.
Other researchers have used contradictions to identify consistency errors across Wikipedia
(Schuster et al., 2022; Hsu et al., 2021) or generate credible character dialogue (Nie et al., 2021; Song et al., 2020).
## 2 Methods
We tested the effect of contradictions in two QA
settings and a total of sixteen question-answer datasets. Our approach is broadly similar to both Chen et al. (2021) and Mishra et al. (2021) in that we use most of the same datasets for evaluating NLI reranking for multiple choice QA and extractive QA. Unlike both, we incorporate contradiction directly as a signal for reranking answers.
Briefly, for each dataset, we used pretrained QA
models to produce answers and confidence scores for the dataset's questions. We refer to the confidence scores below as QA. We then trained QA2D
models (where QA2D stands for "question-answer to declarative") to turn the answers into the declarative hypothesis format required for NLI. For example, the question-answer pair "What is the most abundant metal in the Earth crust? Copper." might be rephrased as "The most abundant metal in the Earth crust is copper" (see Appendix D for more details).
With the question contexts as premises, we then used NLI models to classify every premisehypothesis pair into three classes, each with an associated score: entailed (E), contradicted (C),
and neutral (N). After that, we trained logistic regression calibration models to find which linear combination of the four scores—QA, E, C, and N—was best able to pick the answers accurately.
When evaluating performance, we applied the selective QA approach from Kamath et al. (2020)
to rank answers using combinations of the four scores, and then consider only those that the model was most confident in answering. We compared selecting the top 20% and 50%. In the multiple choice setting, it was also possible to rank all potential answers according to the four scores, unlike in the extractive QA setting where the QA model produced only one answer per question, so we evaluated performance with that approach as well (see appendix A for details).
## 3 Experimental Setting
In the multiple choice setting, we tested 9 datasets.
Two of them are in-domain, since the pretrained QA models we used were finetuned on them.
Specifically, we used a RoBERTa large model (Liu et al., 2019) finetuned on the RACE dataset (Lai et al., 2017), as well as two DeBERTa v3 variants, base and xsmall (He et al., 2021a), finetuned on the SciQ dataset (Welbl et al., 2017).
In the extractive QA setting, we used 7 datasets:
five from the MRQA 2019 task (Fisch et al., 2019), as well as SQuAD 2.0 (Rajpurkar et al., 2018) and SQuAD adversarial (Jia and Liang, 2017). The SQuAD model is the in-domain dataset: it was used to pretrain (Rajpurkar et al., 2016) the two QA
models we used, DistillBERT (Sanh et al., 2020)
and BERT-Large (Devlin et al., 2019). Like Chen et al. (2021), we used the Natural Questions dataset for calibration.
In both settings, all datasets contain the relevant context that can be used by the QA models to select answers. More detail on the datasets and QA models is available in appendices B and C respectively.
See appendices D, E, and F for details on the QA2D, NLI, and calibration models. Our models follow the setups described in Kamath et al. (2020),
Chen et al. (2021), and Mishra et al. (2021). The main interesting detail is that the calibration models were trained on a holdout set of 100 samples from a single domain, using logistic regression, as in Chen et al. (2021).
## 4 Results

## 4.1 Multiple Choice Setting
For most multiple choice datasets, the best accuracy—when ranking all potential answers—is attained when using a calibrated model combining QA confidence, entailment, and contradiction
(**QA+E+C** in Table 1). Only for the in-domain case
(RACE-C) does the uncalibrated RoBERTa-RACE
model perform on par with that. Using QA scores combined with either entailment (**QA+E**) or contradiction (**QA+C**) achieves similar performance, with contradiction winning by a small margin:
84.33% average accuracy compared to 84.31%.
To inspect these trends further, we performed a correlation analysis of the NLI classes and QA confidence scores with the correct answer (appendix G). We found that besides QA confidence, it is the contradiction score that has the strongest correlation with the correct answer. The analysis also showed that the neutral class score (N) had almost no effect, which is why it is omitted in all results.
When using the selective QA approach and evaluating only the 20% or 50% most confident answers, the best performance is attained with the QA+C combination (Table 2). This model is the only one that beats just using the QA confidence score on average. It is stronger than **QA+E+C** and **QA+E** for both coverage percentages.
Contradiction alone, without QA confidence scores (C), also beats both entailment alone (E)
and entailment with contradiction (E+C) for both coverages. These results match our intuition that the less contradicted an answer, the more likely it is correct, even in cases where there is uncertainty about its entailment.
## 4.2 Extractive QA Setting
Similar results occur when evaluating the extractive QA datasets with 20% and 50% selective coverage
(Table 3). The **QA+C** model does better than QA
alone, and C alone does better than E+C or E alone, indicating the importance of the contradiction signal here too. However, entailment seems to matter more for extractive QA, as the best F1 score overall was from **QA+E** in the 20% coverage case, and QA+E+C in the 50% case.
## 5 Discussion
Contradiction with background context is a useful signal that NLP systems can use to infer answers to questions. This is not necessarily a superior strategy to using entailment, but our results show that combining these two signals can improve performance beyond what QA models can achieve on their own. These results are interesting because using contradictions comes with potential benefits for the safety of NLP systems and, as a result, their deployment in domains such as medicine or science.
Namely, there are many potential cases where we are not sure whether a statement is entailed directly by a background context, but we may still be sure that the statement is not refuted by that context. In two-class NLI settings, where we focus only on entailment, neutral and contradiction are collapsed together and we do not have this guarantee.
## 6 Limitations
Our work comes with some limitations. It is uncertain whether our results in two specific settings, multiple choice and extractive QA, would extend to more general settings for NLI, although the use of contradictions for factual consistency by Laban et al. (2022) suggests that they could. Additionally, 3-class NLI is not sufficient to capture all the natural language relations that might be needed to verify an answer. As such, more challenging datasets in other settings and more granular NLI schemes should be explored.
Another limitation involves answer ranking and the associated computational cost. The main reason we did not test answer ranking in extractive QA is that we did not generate diverse outputs, but another reason is that such a procedure grows prohibitively expensive as the domain becomes more open. In a fully open domain, ranking would require a quadratic evaluation for each context passage against each reformulated answer candidate (Schuster et al., 2022). Future work should look at comparison approaches that amortize this cost, such as NLI-based dense passage retrieval
(Reimers and Gurevych, 2019).
| QA Model | Cosmos | DREAM | MCS | MCS2 | MCT | QASC | RACE | RACE-C | SciQ | *Average* |
|---|---|---|---|---|---|---|---|---|---|---|
| SciQ-base | 18.46 | 43.80 | 61.99 | 63.71 | 44.76 | 93.41 | 30.97 | 27.39 | 95.28 | 53.30 |
| SciQ-small | 25.46 | 48.26 | 60.28 | 66.04 | 59.76 | 90.60 | 35.56 | 30.62 | 98.09 | 57.18 |
| QA | 64.22 | 82.56 | 89.70 | 86.98 | 90.48 | 98.16 | 76.93 | **69.80** | 97.96 | 84.08 |
| QA+E+C | 64.72* | 83.19* | 90.06* | 87.59* | 91.43* | 98.60 | 77.53* | 69.80* | **98.21** | **84.57** |
| QA+E | 64.32 | 82.85* | 89.92* | 87.29* | 91.07 | 98.49* | 77.18 | 69.66 | 98.09 | 84.31 |
| QA+C | **64.82** | 82.75* | 89.88* | 87.29* | 90.83 | 98.38 | 77.16 | **69.80** | 98.09 | 84.33 |
Table 1: *Multiple choice setting*. Accuracy scores (best per column in **bold**, second best underlined; statistical significance (pairwise Student's t-test) is indicated by an asterisk) after answer ranking with the mnli-large NLI model.
The top three rows show the accuracy of using only the QA models' confidence score; "QA" refers to the scores of the RoBERTa-RACE model, which was used for calibration. The bottom rows add the entailment and/or contradiction scores to the RoBERTa-RACE score. For other NLI models, and for just E, C, and E+C without calibration with RoBERTa-RACE, see Table 8 in the appendix.
| Cov. | Dataset | QA+E+C | QA+C | QA+E | E+C | E | C | QA |
|---|---|---|---|---|---|---|---|---|
| 20% | CosmosQA | 77.55 | **91.12** | 76.88 | 69.18 | 68.34 | 83.25 | 88.61 |
| 20% | DREAM | 98.28 | **98.77** | 98.28 | 96.32 | 96.32 | 96.81 | 98.28 |
| 20% | MCScript | **99.82** | 99.46 | **99.82** | 99.64 | 99.64 | 99.46 | **99.82** |
| 20% | MCScript-2.0 | 99.58 | **99.72** | 99.45 | 99.17 | 99.03 | 97.37 | 99.58 |
| 20% | MCTest | 100 | 99.40 | **100** | **100** | **100** | 99.40 | 98.81 |
| 20% | QASC | **100** | **100** | **100** | **100** | **100** | **100** | **100** |
| 20% | RACE | 94.93 | 96.69 | 94.72 | 92.44 | 92.24 | 90.17 | **98.24** |
| 20% | RACE-C | 88.73 | 92.96 | 89.44 | 85.21 | 85.92 | 86.62 | **93.66** |
| 20% | SciQ | **100** | **100** | **100** | **100** | **100** | **100** | **100** |
| 20% | Average | 95.43 | **97.57** | 95.40 | 93.55 | 93.50 | 94.79 | 97.45 |
| 50% | CosmosQA | 80.29 | **81.70** | 76.94 | 75.80 | 70.64 | 80.63 | 76.47 |
| 50% | DREAM | 95.10 | **96.86** | 94.90 | 93.63 | 93.63 | 93.63 | 96.67 |
| 50% | MCScript | 98.57 | 98.64 | 98.28 | 98.00 | 97.93 | 97.14 | **98.78** |
| 50% | MCScript-2.0 | 96.40 | **98.23** | 95.84 | 94.68 | 94.40 | 96.01 | 98.01 |
| 50% | MCTest | 99.52 | **99.76** | 99.52 | 99.05 | 99.05 | 99.76 | 99.52 |
| 50% | QASC | **100** | **100** | **100** | 99.78 | 99.78 | 99.78 | 100 |
| 50% | RACE | 90.11 | 92.68 | 89.99 | 87.71 | 87.38 | 85.23 | **93.88** |
| 50% | RACE-C | 85.11 | 84.83 | 85.39 | 78.37 | 78.37 | 77.25 | **87.36** |
| 50% | SciQ | **100** | **100** | **100** | **100** | **100** | 99.74 | 100 |
| 50% | Average | 93.90 | **94.74** | 93.43 | 91.89 | 91.24 | 92.13 | 94.52 |

Table 2: *Multiple choice setting*. Accuracy with selective QA at 20% and 50% coverage.
| Cov. | Dataset | QA+E+C | QA+C | QA+E | E+C | E | C | QA |
|---|---|---|---|---|---|---|---|---|
| 20% | BioASQ | 85.04 | 83.10 | **85.06** | 74.22 | 74.22 | 75.47 | 82.99 |
| 20% | HotpotQA | 86.62 | 85.89 | **86.69** | 80.60 | 80.60 | 79.82 | 85.33 |
| 20% | Natural Questions | 91.84 | **92.18** | 91.68 | 79.89 | 79.87 | 82.09 | 90.98 |
| 20% | SQuAD | 98.26 | 98.76 | 92.37 | 98.17 | 92.48 | 90.88 | **99.04** |
| 20% | SQuAD-adv | **43.99** | 43.57 | 43.98 | 43.74 | 43.60 | 42.81 | 39.83 |
| 20% | SQuAD2 | 37.64 | 36.07 | 37.56 | 37.43 | 37.31 | **37.68** | 30.52 |
| 20% | TriviaQA | **81.33** | 80.36 | 81.21 | 65.53 | 65.25 | 69.13 | 80.68 |
| 20% | Average | 74.96 | 74.19 | **74.99** | 67.68 | 67.62 | 68.27 | 72.77 |
| 50% | BioASQ | **76.13** | 75.51 | 76.04 | 71.49 | 71.49 | 72.97 | 75.49 |
| 50% | HotpotQA | **79.37** | 78.95 | 79.30 | 77.43 | 77.43 | 77.31 | 78.74 |
| 50% | Natural Questions | **84.53** | 83.24 | 84.48 | 74.96 | 74.93 | 78.62 | 82.47 |
| 50% | SQuAD | 96.98 | 97.01 | 96.97 | 91.58 | 91.52 | 91.19 | **97.00** |
| 50% | SQuAD-adv | 41.80 | 41.49 | 41.16 | 42.76 | **42.79** | 42.03 | 40.26 |
| 50% | SQuAD2 | 29.41 | 28.77 | 28.45 | **34.43** | 34.14 | 34.39 | 26.18 |
| 50% | TriviaQA | 74.30 | 74.23 | **74.37** | 65.05 | 64.93 | 68.08 | 74.21 |
| 50% | Average | **68.93** | 68.46 | 68.68 | 65.39 | 65.32 | 66.37 | 67.76 |

Table 3: *Extractive QA setting*. F1 scores with selective QA at 20% and 50% coverage.
## References
Jifan Chen, Eunsol Choi, and Greg Durrett. 2021. Can NLI Models Verify QA Systems' Predictions? In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3841–3854, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Dorottya Demszky, Kelvin Guu, and Percy Liang. 2018. Transforming Question Answering Datasets Into Natural Language Inference Datasets. arXiv preprint arXiv:1809.02922.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, and Danqi Chen. 2019. MRQA 2019 Shared Task: Evaluating Generalization in Reading Comprehension. In *Proceedings of the 2nd Workshop* on Machine Reading for Question Answering, pages 1–13, Hong Kong, China. Association for Computational Linguistics.
Sanda Harabagiu and Andrew Hickl. 2006. Methods for Using Textual Entailment in Open-Domain Question Answering. In *Proceedings of the 21st International* Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 905–912, Sydney, Australia. Association for Computational Linguistics.
Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2021a. DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing. arXiv preprint arXiv:2111.09543.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021b. DeBERTa: Decoding-enhanced BERT with Disentangled Attention. arXiv preprint arXiv:2006.03654.
Cheng Hsu, Cheng-Te Li, Diego Saez-Trumper, and Yi-Zhan Hsu. 2021. WikiContradiction: Detecting Self-Contradiction Articles on Wikipedia. arXiv preprint arXiv:2111.08543.
Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Cosmos QA: Machine Reading Comprehension with Contextual Commonsense Reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages
2391–2401, Hong Kong, China. Association for Computational Linguistics.
Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Yejin Bang, Andrea Madotto, and Pascale Fung. 2022. Survey of Hallucination in Natural Language Generation. arXiv preprint arXiv:2202.03629.
Robin Jia and Percy Liang. 2017. Adversarial Examples for Evaluating Reading Comprehension Systems.
In *Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing*, pages 2021–2031, Copenhagen, Denmark. Association for Computational Linguistics.
Amita Kamath, Robin Jia, and Percy Liang. 2020. Selective Question Answering under Domain Shift. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5684–
5696, Online. Association for Computational Linguistics.
Gregory Kell, Iain Marshall, Byron Wallace, and Andre Jaun. 2021. What Would it Take to get Biomedical QA Systems into Practice? In *Proceedings of the 3rd* Workshop on Machine Reading for Question Answering, pages 28–41, Punta Cana, Dominican Republic.
Association for Computational Linguistics.
Tushar Khot, Peter Clark, Michal Guerquin, Peter Jansen, and Ashish Sabharwal. 2020. QASC: A Dataset for Question Answering via Sentence Composition. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):8082–8090.
Philippe Laban, Tobias Schnabel, Paul N. Bennett, and Marti A. Hearst. 2022. SummaC: Re-Visiting NLIbased Models for Inconsistency Detection in Summarization. *Transactions of the Association for Computational Linguistics*, 10:163–177.
Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale ReAding Comprehension Dataset From Examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 785–
794, Copenhagen, Denmark. Association for Computational Linguistics.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations.
Wei Li, Wenhao Wu, Moye Chen, Jiachen Liu, Xinyan Xiao, and Hua Wu. 2022. Faithfulness in Natural Language Generation: A Systematic Survey of Analysis, Evaluation and Optimization Methods. arXiv preprint arXiv:2203.05227.
Yichan Liang, Jianheng Li, and Jian Yin. 2019. A New Multi-choice Reading Comprehension Dataset for Curriculum Learning. In Proceedings of The Eleventh Asian Conference on Machine Learning, pages 742–757. PMLR.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692.
Anshuman Mishra, Dhruvesh Patel, Aparna Vijayakumar, Xiang Lorraine Li, Pavan Kapanipathi, and Kartik Talamadupula. 2021. Looking Beyond SentenceLevel Natural Language Inference for Question Answering and Text Summarization. In *Proceedings of* the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 1322–1336, Online. Association for Computational Linguistics.
Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A New Benchmark for Natural Language Understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4885–4901, Online. Association for Computational Linguistics.
Yixin Nie, Mary Williamson, Mohit Bansal, Douwe Kiela, and Jason Weston. 2021. I like fish, especially dolphins: Addressing Contradictions in Dialogue Modeling. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1699–1713, Online. Association for Computational Linguistics.
Simon Ostermann, Ashutosh Modi, Michael Roth, Stefan Thater, and Manfred Pinkal. 2018. MCScript: A
Novel Dataset for Assessing Machine Comprehension Using Script Knowledge. In *Proceedings of* the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association
(ELRA).
Simon Ostermann, Michael Roth, and Manfred Pinkal.
2019. MCScript2.0: A Machine Comprehension Corpus Focused on Script Events and Participants.
In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019),
pages 103–117, Minneapolis, Minnesota. Association for Computational Linguistics.
Aarthi Paramasivam and S. Jaya Nirmala. 2021. A survey on textual entailment based question answering.
Journal of King Saud University - Computer and Information Sciences.
Rajkumar Pujari and Dan Goldwasser. 2019. Using natural language relations between answer choices for machine comprehension. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4010–4015, Minneapolis, Minnesota.
Association for Computational Linguistics.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. *Journal of Machine Learning Research*,
21(140):1–67.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018.
Know What You Don't Know: Unanswerable Questions for SQuAD. In *Proceedings of the 56th Annual* Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784–789, Melbourne, Australia. Association for Computational Linguistics.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics.
Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence Embeddings using Siamese BERTNetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.
Matthew Richardson, Christopher J.C. Burges, and Erin Renshaw. 2013. MCTest: A Challenge Dataset for the Open-Domain Machine Comprehension of Text.
In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pages 193–203, Seattle, Washington, USA. Association for Computational Linguistics.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2020. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
Tal Schuster, Sihao Chen, Senaka Buthpitiya, Alex Fabrikant, and Donald Metzler. 2022. Stretching Sentence-pair NLI Models to Reason over Long Documents and Clusters. arXiv preprint arXiv:2204.07447.
Haoyu Song, Wei-Nan Zhang, Jingwen Hu, and Ting Liu. 2020. Generating Persona Consistent Dialogues by Exploiting Natural Language Inference. *Proceedings of the AAAI Conference on Artificial Intelligence*, 34(05):8878–8885.
Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, and Claire Cardie. 2019. DREAM: A Challenge Data Set and Models for Dialogue-Based Reading Comprehension. *Transactions of the Association for Computational Linguistics*, 7:217–231.
Harsh Trivedi, Heeyoung Kwon, Tushar Khot, Ashish Sabharwal, and Niranjan Balasubramanian. 2019.
Repurposing Entailment for Multi-Hop Question Answering Tasks. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pages 2948–2958, Minneapolis, Minnesota.
Association for Computational Linguistics.
Johannes Welbl, Nelson F. Liu, and Matt Gardner. 2017.
Crowdsourcing Multiple Choice Science Questions. In *NUT@EMNLP*.
Adina Williams, Nikita Nangia, and Samuel Bowman.
2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. HuggingFace's Transformers: State-of-the-art Natural Language Processing. arXiv preprint arXiv:1910.03771.
## A Answer Ranking Procedure
In the multiple choice setting, we performed an answer ranking procedure to pick the answer to a given question among a set of alternative answers N, using both NLI class scores and QA confidence scores. (This is distinct from the selection procedure for the top 20% or 50% of answers we used in both settings.)
Similar to Harabagiu and Hickl (2006), answers are ranked based on the highest probability from the calibration model σ, given a linear combination of the QA or NLI scores for an answer n in the answer set N. When a single feature is used, such as an NLI class or the QA score, no calibration is made and σ is simply the identity function applied to the confidence score. In the case of contradiction only, σ is the inverse of the contradiction confidence score, so that the least contradicted answer is selected. Formally, our approach can be described as:
$$\operatorname*{argmax}_{n \in N} \; \sigma(\mathrm{QA}_{n}; \mathrm{NLI}_{n})$$

where $\mathrm{QA}_{n}$ is the QA model confidence score for answer n, and $\mathrm{NLI}_{n}$ represents the various NLI class scores for n.
We did not use this approach in extractive QA, because we found that asking the model for the top K = 4 answers produced almost the same four answer alternatives with slightly different spans each time.
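A minimal sketch of this ranking procedure, with illustrative names and a fitted scikit-learn calibrator standing in for σ, is given below.

```python
import numpy as np

def rank_answers(features: np.ndarray, calibrator=None, invert: bool = False) -> int:
    """Return the index of the best candidate answer.

    `features` has shape (num_candidates, num_features), e.g. columns [QA, E, C].
    With no calibrator (single-feature case), sigma is the identity on that score;
    `invert=True` covers the contradiction-only case (least contradicted wins).
    """
    if calibrator is None:
        scores = features[:, 0]
        if invert:
            scores = -scores
    else:
        scores = calibrator.predict_proba(features)[:, 1]  # P(answer is correct)
    return int(np.argmax(scores))
```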
## B Datasets
Tables 4 (multiple choice) and 5 (extractive QA)
outline the datasets we used. Additional details such as train size and preprocessing steps are available in the references provided. When space doesn't allow CosmosQA is aliased to Cosmos, MCScript to MCS, MCScript-2.0 to MCS2, and MCTest to MCT. The only preprocessing step we performed was to filter out questions where no context passage is provided. Validation splits (as opposed to test splits) are used in the CosmosQA and QASC cases, since context passages or gold standard answers are not available for these datasets.
## C QA Models
Table 6 outlines the pretrained QA models that we used and the datasets they are trained on. All these models are publicly available on the Hugging Face hub under the locations listed. Where space doesn't allow, RoBERTa-RACE is aliased as RACE.
We trained the two DeBERTa-v3 models (xsmall and base) as shown in Table 7. They were trained using the Hugging Face Trainer API (Wolf et al., 2020) with an Adam optimizer at a learning rate of 5.60e-05 and a weight decay of 0.01. All model training and inference were performed on a single Tesla P100 GPU. Full instructions on reproducibility as well as trained models are provided in the publicly available code, including links to the Weights & Biases runs to inspect training, the full parameter set, and the evaluation suites.
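A minimal sketch of this fine-tuning setup with the Hugging Face Trainer is shown below; the batch size and output directory are illustrative, and the datasets are assumed to be already tokenized for multiple choice.

```python
from transformers import AutoModelForMultipleChoice, Trainer, TrainingArguments

def finetune_qa_model(train_dataset, eval_dataset,
                      model_name: str = "microsoft/deberta-v3-base") -> Trainer:
    """Fine-tune a DeBERTa-v3 multiple choice model with the reported hyperparameters."""
    model = AutoModelForMultipleChoice.from_pretrained(model_name)
    args = TrainingArguments(
        output_dir="qa-model",              # illustrative
        learning_rate=5.6e-5,               # reported learning rate
        weight_decay=0.01,                  # reported weight decay
        num_train_epochs=6,                 # epochs from Table 7
        per_device_train_batch_size=8,      # illustrative; not reported
    )
    trainer = Trainer(model=model, args=args,
                      train_dataset=train_dataset, eval_dataset=eval_dataset)
    trainer.train()
    return trainer
```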
## D QA2D Models
A QA2D model reformulates a question-answer pair to a declarative statement (Demszky et al.,
2018). As noted in Chen et al. (2021) and Mishra et al. (2021), the QA2D reformulation is critical to using NLI models in QA since the proposed answer needs to match the format of NLI. We trained a T5-small model (Raffel et al., 2020) on the dataset proposed by Demszky et al. (2018) for QA2D since we found almost no noticeable differences in performance in larger models. This used the same setup as the DeBERTa-v3 models xsmall and base
(see Table 7).
| Dataset | Split | Size | Reference |
|---|---|---|---|
| CosmosQA | validation | 2985 | Huang et al. (2019) |
| DREAM | test | 2041 | Sun et al. (2019) |
| MCScript | test | 2797 | Ostermann et al. (2018) |
| MCScript-2.0 | test | 3610 | Ostermann et al. (2019) |
| MCTest | test | 840 | Richardson et al. (2013) |
| QASC | validation | 926 | Khot et al. (2020) |
| RACE | test | 4934 | Lai et al. (2017) |
| RACE-C | test | 712 | Liang et al. (2019) |
| SciQ | test | 884 | Welbl et al. (2017) |

Table 4: Multiple choice datasets used.
| Dataset | Size | Reference |
|---|---|---|
| BioASQ | 1504 | Fisch et al. (2019) |
| TriviaQA | 7785 | |
| HotpotQA | 5901 | |
| SQuAD | 10506 | |
| Natural Questions | 12836 | |
| SQuAD2 | 11871 | Rajpurkar et al. (2018) |
| SQuAD-adv | 5347 | Jia and Liang (2017) |
Table 5: Extractive QA datasets used. Validation sets are used on the SQuAD2.0 and SQuAD adversarial datasets.
MRQA 2019 dev sets are used for the other five datasets.
Unlike Chen et al. (2021), we found that regardless of size, these QA2D models struggled with long questions or questions with complex syntax and would often leave the answer out of the statement. To solve this, we tried constrained decoding that required the answer to be in the statement; however, this often produced ungrammatical or nonsensical statements. We settled on the following heuristic to postprocess QA2D outputs: if less than 50% of the tokens in the answer were in the statement, then we appended the answer to the end of the statement. The 50% threshold was used to account for rephrasing the answer or swapping pronouns. While some statements resulted in answer redundancy, this was better than having hypotheses which left out the answer.
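The heuristic can be implemented in a few lines; the whitespace tokenisation below is a simplification.

```python
def ensure_answer_in_statement(statement: str, answer: str, min_overlap: float = 0.5) -> str:
    """Append the answer when too few of its tokens made it into the QA2D statement."""
    statement_tokens = set(statement.lower().split())
    answer_tokens = [t for t in answer.lower().split() if t]
    if not answer_tokens:
        return statement
    overlap = sum(t in statement_tokens for t in answer_tokens) / len(answer_tokens)
    if overlap < min_overlap:               # less than 50% of answer tokens found
        return f"{statement.rstrip('.')} {answer}"
    return statement
```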
Future work on QA2D should focus on how these models can be used outside of the domains in the dataset provided by Demszky et al. (2018).
Finally, it is important to note that erroneous QA2D outputs could affect the quality of the whole pipeline; see Chen et al. (2021) for a more detailed analysis of this.
## E NLI Models
NLI is used to classify whether the reformulated answer is contradicted, entailed, or neutral with respect to a context passage. We used the whole context, as Schuster et al. (2022) and Mishra et al. (2021) demonstrated that long premises still perform adequately, though not as well as sentence-length premises. Using the whole context avoids the decontextualization step required in Chen et al. (2021).
We used two DeBERTa-based models (He et al.,
2021b) trained on the MNLI dataset (Williams et al., 2018) (called mnli-base and mnli-large) and an ALBERT model (Lan et al., 2019) trained on the ANLI dataset in addition to various other NLI
datasets (called albert-anli) (Nie et al., 2020). Table 6 contains the Hugging Face references to the NLI models. After inference, the confidence scores are used for answer selection and performance evaluation.
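Scoring a premise-hypothesis pair with one of these models follows the standard sequence classification recipe; a minimal sketch with the mnli-base checkpoint from Table 6 is given below (the class order is read from the model config rather than hard-coded).

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "microsoft/deberta-base-mnli"   # mnli-base in Table 6
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)

def nli_scores(premise: str, hypothesis: str) -> dict:
    """Return the per-class NLI probabilities for one premise-hypothesis pair."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
    return {model.config.id2label[i].lower(): float(p) for i, p in enumerate(probs)}
```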
## E.1 Model Size And Approach Performance Analysis
Table 8 mirrors Table 1 in the main text, but shows the accuracy results for uncalibrated E, C, and E+C
| Hugging Face | Name |
|----------------------------------------------------------|--------------|
| LIAMF-USP/roberta-large-finetuned-RACE | RoBERTa-RACE |
| bert-large-uncased-whole-word-masking-finetuned-squad | BERT-Large |
| distilbert-base-uncased-distilled-squad | DistillBERT |
| ynie/albert-xxlarge-v2-snli_mnli_fever_anli_R1_R2_R3-nli | albert-anli |
| microsoft/deberta-base-mnli | mnli-base |
| microsoft/deberta-v2-xxlarge-mnli | mnli-large |
Table 6: Pretrained QA and NLI models used.
| Model | Dataset | Epochs | Score |
|---|---|---|---|
| t5-small | Demszky et al. (2018) | 20 | ROUGE-1 90.73 |
| deberta-v3-xsmall | Welbl et al. (2017) | 6 | Accuracy 93.99 |
| deberta-v3-base | Welbl et al. (2017) | 6 | Accuracy 91.79 |

Table 7: The models we trained for our setups, with evaluation scores and the number of epochs trained.
| Model | Combination | Cosmos | DREAM | MCS | MCS2 | MCT | QASC | RACE | RACE-C | SciQ | *Average* |
|---|---|---|---|---|---|---|---|---|---|---|---|
| SciQ-base | - | 18.46 | 43.80 | 61.99 | 63.71 | 44.76 | 93.41 | 30.97 | 27.39 | 95.28 | 53.31 |
| SciQ-small | - | 25.46 | 48.26 | 60.28 | 66.04 | 59.76 | 90.60 | 35.56 | 30.62 | 98.09 | 57.19 |
| RACE | - | 64.22 | 82.56 | 89.70 | 86.98 | 90.48 | 98.16 | 76.93 | 69.80 | 97.96 | 84.09 |
| mnli-large | E+C | 44.36 | 80.94 | 85.52 | 84.99 | 90.60 | 96.44 | 64.29 | 51.40 | 92.47 | 76.77 |
| mnli-large | E | 36.18 | 79.03 | 86.02 | 79.72 | 89.88 | 95.90 | 62.14 | 49.72 | 91.96 | 74.50 |
| mnli-large | C | 59.26 | 78.98 | 83.12 | 84.43 | 89.29 | 92.76 | 62.74 | 47.05 | 91.58 | 76.58 |
| mnli-base | QA + E + C | 64.32 | 82.66 | 89.63 | 87.01 | 90.71 | 98.27 | 76.95 | 69.80 | 98.09 | 84.16 |
| mnli-base | QA + E | 64.25 | 82.66 | 89.63 | 86.98 | 90.71 | 98.27 | 76.95 | 69.80 | 97.96 | 84.14 |
| mnli-base | QA + C | 64.29 | 82.56 | 89.63 | 87.01 | 90.60 | 98.16 | 76.93 | 69.80 | 97.96 | 84.1 |
| mnli-base | E + C | 33.03 | 62.27 | 76.76 | 72.11 | 68.57 | 92.66 | 45.16 | 34.41 | 88.01 | 63.66 |
| mnli-base | E | 27.81 | 62.47 | 79.37 | 71.94 | 68.81 | 92.66 | 43.48 | 34.41 | 88.01 | 63.22 |
| mnli-base | C | 43.45 | 59.19 | 70.18 | 69.97 | 67.50 | 81.86 | 41.81 | 32.58 | 87.37 | 61.55 |
| albert-anli | QA + E + C | 64.19 | 82.56 | 89.70 | 87.06 | 90.48 | 98.16 | 76.93 | 69.80 | 97.96 | 84.09 |
| albert-anli | QA + E | 64.19 | 82.56 | 89.70 | 87.06 | 90.60 | 98.16 | 76.93 | 69.80 | 97.96 | 84.11 |
| albert-anli | QA + C | 64.22 | 82.56 | 89.70 | 86.98 | 90.48 | 98.16 | 76.93 | 69.80 | 97.96 | 84.09 |
| albert-anli | E + C | 35.71 | 68.20 | 79.55 | 73.88 | 77.50 | 91.79 | 49.05 | 39.47 | 90.82 | 67.33 |
| albert-anli | E | 33.67 | 68.35 | 79.91 | 73.19 | 77.38 | 91.90 | 49.07 | 39.19 | 90.94 | 67.07 |
| albert-anli | C | 45.16 | 63.74 | 73.58 | 72.71 | 73.33 | 77.86 | 46.34 | 38.20 | 87.24 | 64.24 |
Table 8: Accuracy scores in the multiple choice setting for the various NLI models used. Calibration was with the RoBERTa-RACE model.
in the main mnli-large model, as well as the results with the other NLI models, mnli-base and albert-anli. Table 9 shows selective QA accuracy in the multiple choice setting where answer selection is done through ranking before we select the most confident answers. Selective QA on extractive QA using DistillBERT (Table 10) shows that **QA+E+C** does best in all cases and contradiction alone does second best at 50% coverage.
## F Calibration Models
Like Kamath et al. (2020) and Chen et al. (2021)
we developed a set of calibration models in order to perform answer ranking. A calibration model is trained on a set of posterior probabilities from downstream models to predict whether an answer is correct.
To compare the effect of using different combinations of NLI class confidence scores, we trained a logistic regression model on linear combinations of the following features: QA indicates that the QA model confidence score is being used, E indicates the entailment score, C indicates the contradiction score, and N indicates the neutral score.
Like in Chen et al. (2021), all calibration models are trained on a holdout set of 100 samples from a single domain using logistic regression which predicts, given the confidence scores of the downstream models, whether the answer is correct. A
multi-domain calibration approach like in Kamath et al. (2020) was not used since the focus was a minimum experiment to test the viability of leveraging different NLI classifications.
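A minimal sketch of such a calibrator, using scikit-learn and illustrative variable names, is shown below.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_calibrator(features: np.ndarray, is_correct: np.ndarray) -> LogisticRegression:
    """Fit a logistic regression calibrator on a small holdout set.

    `features` has shape (n_samples, n_features), e.g. columns [QA, E, C];
    `is_correct` is a binary label indicating whether each proposed answer was correct.
    """
    return LogisticRegression().fit(features, is_correct)

# At test time, calibrator.predict_proba(features)[:, 1] gives the probability
# that an answer is correct, which is then used for ranking or selective QA.
```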
## F.1 Regression Analysis
To illustrate the characteristics of the calibration models, we present a regression analysis for the multiple choice setting (Table 11). The results indicate that as the mnli model gets larger, the calibration model uses its NLI confidence scores more.
Importantly, entailment coefficients are stronger than contradiction coefficients in all cases.
## G Correlation Analysis
Since we are using the NLI and QA model scores to construct the setups above, it is useful to know how these factors correlate with the correct answer. Table 13 shows how each NLI class correlates both by score and by actual classification
(score > 50%) as compared against QA model confidence score. The multiple choice analysis shows answers from the RoBERTa-RACE model and the extractive QA analysis shows answers from the BERT-large model trained on SQuAD. The correlation analysis presents Spearman rank correlations.
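The correlations can be computed directly with SciPy; a small sketch with toy values is shown below.

```python
import numpy as np
from scipy.stats import spearmanr

def correlation_with_correctness(signal: np.ndarray, is_correct: np.ndarray) -> float:
    """Spearman rank correlation between a score (or 0/1 classification) and correctness."""
    rho, _p_value = spearmanr(signal, is_correct)
    return float(rho)

contradiction = np.array([0.80, 0.10, 0.40, 0.05])   # toy contradiction scores
correct = np.array([0, 1, 0, 1])                     # toy correctness labels
print(correlation_with_correctness(contradiction, correct))                       # score-based
print(correlation_with_correctness((contradiction > 0.5).astype(int), correct))   # class-based
```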
What we see is that in the multiple choice setting, the confidence score has a strong correlation with the correct answer, which makes sense given the confidence score is a softmax over the multiple choice classes. Extractive QA confidence scores have a much weaker correlation and tend to have less correlation than entailment has with the correct answer. Despite the results presented above, contradiction only has a notable correlation with the correct answer when the score is used rather than the classification. This is a point in favor of our approach of using confidence scores for NLI
rather than classifications.
Interestingly, in the extractive QA case, the neutral class is more negatively correlated when selecting for contradiction when using classification.
Our conjecture would be that in the extractive QA case, we don't have much to compare against.
When looking at the per dataset correlations for the multiple choice setting (Table 12), we see that in most cases, other than the QA confidence scores, the contradiction scores have the strongest correlations with the correct answer out of any NLI
class and neutral, as we would expect, tends to have very weak correlations. We do not present the per dataset correlation for extractive QA as they are very weak, which we again hypothesize comes from having no answers to compare with.
| Cov. | Dataset | QA+E+C | QA+E | QA+C | E+C | E | C | QA |
|---|---|---|---|---|---|---|---|---|
| 20% | CosmosQA | 77.55 | 67.17 | 83.25 | 20.10 | 27.47 | 67.50 | **88.61** |
| 20% | DREAM | **98.28** | 96.32 | 96.81 | 81.13 | 91.91 | 93.87 | **98.28** |
| 20% | MCScript | **99.82** | **99.64** | 99.46 | 93.02 | 98.93 | 96.96 | **99.82** |
| 20% | MCScript-2.0 | **99.58** | 99.03 | 97.37 | 92.24 | 97.37 | 95.01 | **99.58** |
| 20% | MCTest | **100** | **100** | 99.40 | 85.12 | 97.02 | 97.02 | 98.81 |
| 20% | QASC | **100** | **100** | **100** | 97.30 | 100 | 99.46 | 100 |
| 20% | RACE | 94.93 | 92.13 | 90.17 | 62.73 | 76.71 | 75.05 | **98.24** |
| 20% | RACE-C | 88.73 | 85.21 | 86.62 | 71.13 | 74.65 | 69.01 | **93.66** |
| 20% | SciQ | **100** | **100** | **100** | 82.05 | 100 | 96.15 | 100 |
| 20% | Avg | 95.43 | 93.28 | 94.79 | 76.09 | 84.90 | 87.78 | **97.45** |
| 50% | CosmosQA | 80.29 | 70.78 | **80.70** | 32.17 | 34.72 | 64.88 | 76.47 |
| 50% | DREAM | 95.10 | 93.63 | 93.63 | 85.20 | 89.41 | 88.33 | **96.67** |
| 50% | MCScript | **98.57** | 97.85 | 97.14 | 94.71 | 95.99 | 92.70 | **98.78** |
| 50% | MCScript-2.0 | 96.40 | 94.46 | 96.07 | 91.02 | 91.75 | 91.69 | **98.01** |
| 50% | MCTest | **99.52** | 98.81 | **99.76** | 91.43 | 95.24 | 96.19 | **99.52** |
| 50% | QASC | **100** | **99.78** | **99.78** | 98.27 | 98.70 | 98.49 | 100 |
| 50% | RACE | 90.11 | 87.22 | 85.23 | 67.89 | 71.70 | 68.18 | **93.88** |
| 50% | RACE-C | 85.11 | 78.09 | 77.25 | 66.57 | 66.85 | 55.06 | **87.36** |
| 50% | SciQ | **100** | **100** | **99.74** | 89.03 | 96.43 | 96.43 | 100 |
| 50% | Avg | 93.90 | 91.18 | 92.14 | 79.59 | 82.31 | 83.55 | **94.52** |

Table 9: *Multiple choice setting*. Selective QA accuracy at 20% and 50% coverage with answer ranking.
| Cov. | Dataset | QA+E+C | QA+E | QA+C | E+C | E | C | QA |
|---|---|---|---|---|---|---|---|---|
| 20% | BioASQ | 70.97 | 70.41 | 71.55 | 74.07 | 74.07 | **74.34** | 68.99 |
| 20% | HotpotQA | **73.44** | 73.08 | 70.88 | 71.59 | 71.51 | 70.41 | 69.41 |
| 20% | Natural Questions | **85.59** | 85.29 | 85.45 | 78.46 | 78.46 | 80.53 | 83.27 |
| 20% | SQuAD | 96.22 | 96.45 | 95.77 | 83.15 | 83.09 | 81.37 | **97.15** |
| 20% | SQuAD-adv | 40.39 | 39.75 | 39.49 | 40.07 | 39.56 | **40.59** | 31.98 |
| 20% | SQuAD2 | 35.46 | 35.24 | 33.64 | 36.36 | 36.13 | **36.66** | 25.95 |
| 20% | TriviaQA | **64.96** | 64.68 | 64.55 | 52.67 | 52.09 | 52.56 | 63.98 |
| 20% | Avg | **66.72** | 66.41 | 65.90 | 62.34 | 62.13 | 62.35 | 62.96 |
| 50% | BioASQ | 65.96 | 65.92 | 64.37 | 63.53 | 63.53 | **66.95** | 64.79 |
| 50% | HotpotQA | 64.42 | 64.21 | 63.65 | 65.88 | 65.85 | **66.91** | 62.81 |
| 50% | Natural Questions | 72.28 | 71.99 | 70.82 | 67.54 | 67.51 | **74.18** | 69.95 |
| 50% | SQuAD | 92.56 | **92.57** | 92.34 | 81.86 | 82.21 | 80.95 | 92.54 |
| 50% | SQuAD-adv | 33.69 | 32.90 | 33.45 | **38.74** | 38.22 | 38.52 | 31.89 |
| 50% | SQuAD2 | 26.68 | 25.70 | 26.00 | **32.95** | 32.61 | 32.83 | 23.52 |
| 50% | TriviaQA | 58.40 | **58.41** | 58.25 | 51.43 | 51.18 | 52.99 | 58.25 |
| 50% | Avg | **59.14** | 58.81 | 58.41 | 57.42 | 57.30 | 59.05 | 57.68 |

Table 10: *Extractive QA setting*. Selective QA performance at 20% and 50% coverage using DistillBERT.
| QA Model | NLI Model | Combination | Confidence | Entailment | Contradiction | Acc |
|---|---|---|---|---|---|---|
| SciQ | mnli-base | QA + C | 4.13 | | -1.06 | 0.99 |
| SciQ | mnli-base | QA + E | 3.90 | 1.37 | | 0.99 |
| SciQ | mnli-base | QA + E + C | 3.83 | 1.22 | -0.76 | 0.99 |
| SciQ | mnli-base | E + C | | 2.56 | -1.47 | 0.86 |
| SciQ | mnli-large | QA + C | 3.98 | | -1.32 | 0.99 |
| SciQ | mnli-large | QA + E | 3.78 | 1.55 | | 0.99 |
| SciQ | mnli-large | QA + E + C | 3.65 | 1.31 | -0.97 | 0.99 |
| SciQ | mnli-large | E + C | | 2.63 | -1.72 | 0.91 |
| RACE | mnli-base | QA + C | 3.04 | | -0.15 | 0.89 |
| RACE | mnli-base | QA + E | 3.03 | 0.27 | | 0.89 |
| RACE | mnli-base | QA + E + C | 3.02 | 0.26 | -0.14 | 0.89 |
| RACE | mnli-base | E + C | | 0.73 | -0.46 | 0.75 |
| RACE | mnli-large | QA + C | 2.97 | 0.00 | -0.81 | 0.89 |
| RACE | mnli-large | QA + E | 2.91 | 0.98 | | 0.89 |
| RACE | mnli-large | QA + E + C | 2.85 | 0.92 | -0.75 | 0.89 |
| RACE | mnli-large | E + C | | 1.76 | -1.12 | 0.78 |
| Dataset | QA | Contradiction Score | Contradiction Class | Entailment Score | Entailment Class | Neutral Score | Neutral Class |
|---|---|---|---|---|---|---|---|
| CosmosQA | 0.53 | -0.34 | -0.17 | 0.05 | -0.01 | 0.21 | 0.16 |
| DREAM | 0.72 | -0.57 | -0.35 | 0.54 | 0.50 | -0.11 | -0.13 |
| MCScript | 0.80 | -0.59 | -0.42 | 0.59 | 0.50 | -0.04 | -0.08 |
| MCScript2 | 0.77 | -0.50 | -0.32 | 0.41 | 0.37 | -0.04 | -0.05 |
| MCTest | 0.73 | -0.65 | -0.47 | 0.64 | 0.69 | -0.20 | -0.15 |
| QASC | 0.57 | -0.54 | -0.28 | 0.55 | 0.67 | -0.50 | -0.26 |
| RACE | 0.65 | -0.37 | -0.20 | 0.35 | 0.34 | -0.11 | -0.11 |
| RACE-C | 0.59 | -0.24 | -0.13 | 0.18 | 0.25 | -0.09 | -0.11 |
| SciQ | 0.75 | -0.69 | -0.47 | 0.68 | 0.67 | -0.42 | -0.19 |
Table 13: Correlation analysis (Spearman rank correlation) in the multiple choice and extractive QA settings.
RoBERTa-RACE is the QA model used for multiple choice QA scores and BERT-large is used for the extractive QA
scores.
| Setting | | Contradiction | Entailment | Neutral | QA |
|---|---|---|---|---|---|
| multiple choice | Score | -0.47 | 0.37 | -0.06 | 0.71 |
| multiple choice | Class | -0.28 | 0.38 | -0.06 | |
| extractive QA | Score | -0.16 | 0.31 | -0.12 | 0.19 |
| extractive QA | Class | -0.15 | 0.39 | -0.29 | |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Left blank.
A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Left blank. |
peng-etal-2023-token | Token-Level Self-Evolution Training for Sequence-to-Sequence Learning | https://aclanthology.org/2023.acl-short.73 | Adaptive training approaches, widely used in sequence-to-sequence models, commonly reweigh the losses of different target tokens based on priors, e.g. word frequency. However, most of them do not consider the variation of learning difficulty in different training steps, and overly emphasize the learning of difficult one-hot labels, making the learning deterministic and sub-optimal. In response, we present Token-Level Self-Evolution Training (SE), a simple and effective dynamic training method to fully and wisely exploit the knowledge from data. SE focuses on dynamically learning the under-explored tokens for each forward pass and adaptively regularizes the training by introducing a novel token-specific label smoothing approach. Empirically, SE yields consistent and significant improvements in three tasks, i.e. machine translation, summarization, and grammatical error correction. Encouragingly, we achieve averaging +0.93 BLEU improvement on three machine translation tasks. Analyses confirm that, besides improving lexical accuracy, SE enhances generation diversity and model generalization. | # Token-Level Self-Evolution Training For Sequence-To-Sequence Learning
Keqin Peng1∗, Liang Ding2∗, Qihuang Zhong3, Yuanxin Ouyang1†, Wenge Rong1, Zhang Xiong1, Dacheng Tao4
1Beihang University 2Zhejiang University 3Wuhan University 4The University of Sydney
{keqin.peng,oyyx,w.rong,xiongz}@buaa.edu.cn
[email protected], {liangding.liam,dacheng.tao}@gmail.com
## Abstract
Adaptive training approaches, widely used in sequence-to-sequence models, commonly reweigh the losses of different target tokens based on priors, e.g. word frequency. However, most of them do not consider the variation of learning difficulty in different training steps, and overly emphasize the learning of difficult one-hot labels, making the learning deterministic and sub-optimal. In response, we present Token-Level Self-Evolution Training (SE), a simple and effective dynamic training method to fully and wisely exploit the knowledge from data. SE focuses on dynamically learning the under-explored tokens for each forward pass and adaptively regularizes the training by introducing a novel token-specific label smoothing approach. Empirically, SE yields consistent and significant improvements in three tasks, i.e. machine translation, summarization, and grammatical error correction. Encouragingly, we achieve averaging +0.93 BLEU improvement on three machine translation tasks. Analyses confirm that, besides improving lexical accuracy, SE enhances generation diversity and model generalization.
## 1 Introduction
Sequence-to-sequence learning (Seq2Seq) with neural networks (Sutskever et al., 2014) has advanced the state-of-the-art in various NLP tasks, e.g. translation (Bahdanau et al., 2015; Vaswani et al., 2017), summarization (Cheng and Lapata, 2016), and grammatical error correction (Yuan and Briscoe, 2016). Generally, Seq2Seq models are trained with the cross-entropy loss, which equally weighs the training losses of different target tokens.
However, due to the imbalanced nature of token frequencies (Piantadosi, 2014) and the fact that different tokens contribute differently to the sentence meaning (Church and Hanks, 1990; Chen et al., 2020),
![0_image_0.png](0_image_0.png)

Figure 1: An example to illustrate the **changing token difficulties** in different training steps on WMT'14 En-De. The token "abschließen"/"Sache" is hard/easy to learn at 50K steps, while the trend is totally reversed at 100K steps.
several works have been developed to reweigh the token-level training loss according to explicit (e.g. frequency) or implicit (uncertainty estimated by off-the-shelf language models) priors (Gu et al., 2020; Xu et al., 2021; Zhang et al., 2022a). For example, Gu et al. (2020) proposed two heuristic criteria based on word frequency to encourage the model to learn from larger-weight low-frequency tokens. Zhang et al. (2022a) introduce a target-context-aware metric based on an additional target-side language model to adjust the weight of each target token.
Despite some success, there are still limitations in these adaptive training approaches. First, most of them predetermine the difficult tokens and fix such priors to guide the training. However, in our preliminary study, we find the hard-to-learn tokens are dynamically changing during training, rather than statically fixed. As shown in Figure 1, as training progresses, although the sentence-level loss converges nicely, the difficult token changes from "*abschließen*" to "*Sache*" in terms of the token-level loss. Second, these adaptive training methods overly emphasize fitting the difficult tokens' one-hot labels by reweighing the loss, which empirically may cause overfitting and limit generalization (Norouzi et al., 2016; Szegedy et al.,
2016; Xiao et al., 2019; Miao et al., 2021). Also, a more recent study (Zhai et al., 2023) provides
∗Keqin and Liang contributed equally. †Corresponding Author.
theoretical evidence that reweighting is not that effective at improving generalization.
Correspondingly, we design a simple and effective *Token-Level Self-Evolution Training* (SE) strategy to encourage Seq2Seq models to learn from difficult words that are dynamically selected by the model itself. Specifically, SE contains two stages:
❶ *self-questioning* and ❷ *self-evolution training*. In the first stage, the Seq2Seq model dynamically selects the hard-to-learn tokens based on the token-level losses; then we encourage the Seq2Seq model to learn from them in the second stage, where, rather than adopting reweighting, we introduce a novel *token-specific label smoothing* approach to generate easily digestible soft labels, which consider both the ground truth and the model's prediction.
Experiments across tasks, language pairs, data scales, and model sizes show that SE consistently and significantly outperforms both the vanilla Seq2Seq model and the re-implemented advanced baselines. Analyses confirm that besides improved lexical accuracy, SE generates diverse and humanlike generations with better model generalization.
## 2 Methodology
**Preliminary.** Sequence-to-sequence (Seq2Seq) learning is typically trained by minimizing the cross-entropy (CE) loss, i.e. the negative log-likelihood of each target word in y = {y1, . . . , yN}, conditioned on the source x, where the optimization treats all tokens equally:
$${\mathcal{L}}_{\mathrm{CE}}(\theta)=-\sum_{j=1}^{N}\log p(y_{j}|\mathbf{y}_{<j},\mathbf{x};\theta)\quad\quad(1)$$
However, due to the different learning difficulties of each token, it is sub-optimal to treat all tokens equally (Gu et al., 2020). To address this limitation, a series of token-level adaptive training objectives were adopted to re-weight the losses of different target tokens (Xu et al., 2021; Zhang et al., 2022a).
The common goal of these methods is to facilitate the model training by fully exploiting the informative but underexplored tokens.
However, our preliminary study shows that the hard tokens are dynamically changing (see Figure 1) in different training steps (or model structures), thus it is sub-optimal to employ static token priors (e.g. frequency) during training. Also, recent studies (Zhai et al., 2023) in the ML community theoretically show that reweighting is not that effective at improving generalization. Based on the above evidence, we present the self-evolution learning (SE) mechanism to encourage the model to adaptively and wisely learn from the informative yet under-explored tokens dynamically determined by the model itself (Stage ❶ in §2.1), with an easy-to-learn label distribution (Stage ❷ in §2.1). A similar work to ours is Hahn and Choi (2019); however, their method mainly considers the situation where the predicted answer is incorrect but close to the golden answer, while our method focuses on all dynamic hard tokens.
## 2.1 Token-Level Self-Evolution Learning
❶ **Self-questioning Stage.** The goal is to select the hard-to-learn tokens that are questioned by the Seq2Seq model itself during training. Previously, such difficult tokens were predetermined by external models or specific statistical metrics. However, inspired by the finding that the difficult tokens change dynamically during training, as shown in Figure 1, and the finding that the trained model contains useful information (Li and Lu, 2021), e.g. synonyms, we propose to straightforwardly leverage the behavior of the model itself to dynamically select the target tokens. In practice, we first calculate the token-level CE losses, denoted as $\{l_1, l_2, ..., l_n\}$, for every token in each forward pass. Then we set a loss threshold Γ and select the tokens whose losses exceed Γ as the target tokens, i.e., $D = \{t_i \mid l_i > \Gamma\}$ where $i \in N = \{1, 2, ..., n\}$.
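A minimal PyTorch sketch of this selection step is given below; the masking of padding tokens and the value of Γ are implementation details not specified above.

```python
import torch
import torch.nn.functional as F

def select_hard_tokens(logits: torch.Tensor, targets: torch.Tensor,
                       gamma: float, pad_id: int = 0) -> torch.Tensor:
    """Boolean mask of hard-to-learn target tokens (token-level CE loss above gamma).

    logits: (batch, seq_len, vocab); targets: (batch, seq_len).
    """
    token_loss = F.cross_entropy(
        logits.transpose(1, 2), targets, reduction="none", ignore_index=pad_id
    )                                   # per-token cross-entropy, shape (batch, seq_len)
    return (token_loss > gamma) & (targets != pad_id)
```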
❷ **Self-evolution Training Stage.** After selecting the difficult tokens, we encourage the model to carefully learn from them. Given the theoretical shortcomings of reweighting and of deliberately learning from difficult tokens (Zhai et al., 2023), and the overfitting or overconfidence problems they may cause (Miao et al., 2021), we propose to strengthen the learning of these tokens with a newly designed *Token-specific Label Smoothing* (TLS) approach. Specifically, motivated by the effect of label smoothing (LS) regularization (Szegedy et al., 2016), we combine the ground truth $p_i$ and the model's prediction $\hat{p}_i$ to form a new soft label $\widetilde{p}_i$ for the i-th token. Then we use $\widetilde{p}_i$ to guide the difficult tokens in D, while keeping the label-smoothed CE loss for the other tokens. It is worth noting that we also apply the traditional label smoothing technique to $\hat{p}_i$ to activate the information in the predicted distribution. Analogous to human learning, it is often easier for humans to grasp new things described by their familiar knowledge (Reder et al., 2016),
| Model | WMT16 En→Ro | WMT14 En→De | WMT14 En→Fr |
|---------------------------------------|----------------|----------------|----------------|
| Transformer (Vaswani et al., 2017) | 35.11 | 27.08 | 40.65 |
| + Freq-Exponential (Gu et al., 2020) | 35.86 (+0.75) | 27.60 (+0.52) | 41.05 (+0.40) |
| + Freq-Chi-Square (Gu et al., 2020) | 35.74 (+0.63) | 27.51 (+0.43) | 40.99 (+0.34) |
| + D2GPo (Li et al., 2020) | 35.89 (+0.78) | 27.66 (+0.58) | 41.05 (+0.40) |
| + BMI-adaptive (Xu et al., 2021) | 35.89 (+0.78) | 27.65 (+0.57) | 41.10 (+0.45) |
| + MixCrossEntropy (Li and Lu, 2021) | 35.88 (+0.74) | 27.61 (+0.53) | 41.07 (+0.42) |
| + CBMI-adaptive (Zhang et al., 2022a) | 35.90 (+0.79) | 27.69 (+0.61) | 41.13 (+0.48) |
| + SPL (Wan et al., 2020) | 35.92 (+0.81) | 27.88 (+0.80) | 41.30 (+0.65) |
| + Self-Evolution (ours) | 36.02 (+0.91)† | 28.02 (+0.94)† | 41.60 (+0.95)† |
Table 1: **BLEU scores (%) on three translation tasks spanning different data scales**, i.e. 0.6M, 4.5M, 36M. "†"
indicates a statistically significant difference from the powerful Transformer baseline (p < 0.05).
| Model | Ro-En BLEU | XSUM RG-1 | XSUM RG-2 | XSUM RG-L | GEC Prec. | GEC Recall | GEC F0.5 |
|---|---|---|---|---|---|---|---|
| Baseline | 37.3 | 43.2 | 19.8 | 34.0 | 59.1 | 39.8 | 53.9 |
| + SE | 37.7† | 43.8 | 20.4 | 34.7† | 58.9 | 46.2 | 55.8† |
Table 2: **Performance on more tasks** including translation, summarization, and grammar error correction, upon larger model BART (Lewis et al., 2020).
therefore the new soft label, which fuses both the accurate ground truth and the model's self-distribution, is easily digestible. Mathematically, for a difficult token $t_i$, $\widetilde{p}_i$ is formulated as:
$$\widetilde{p}_{i}=(p_{i}+\hat{p}_{i})/2.\tag{2}$$
Then we calculate the losses of difficult tokens and the others, and combine the two losses:
$$L=-(\sum_{i}\tilde{p_{i}}\cdot log(\hat{p_{i}})+\sum_{j}p_{j}\cdot log(\hat{p_{j}})),\tag{3}$$ where $i\in D$ and $j\in N\setminus D$.
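A minimal PyTorch sketch of the resulting objective is shown below; the threshold and smoothing values are illustrative, and treating the smoothed ground truth and the detached, smoothed prediction as the two halves of Eq. (2) is one possible reading of the description above rather than the exact reference implementation.

```python
import torch
import torch.nn.functional as F

def se_loss(logits, targets, gamma=4.0, eps=0.1, pad_id=0):
    """Token-specific label smoothing loss: smoothed CE for easy tokens,
    the fused soft label of Eq. (2) for hard tokens."""
    log_probs = F.log_softmax(logits, dim=-1)                 # (B, T, V)
    vocab = logits.size(-1)
    one_hot = F.one_hot(targets, vocab).float()
    smoothed = one_hot * (1 - eps) + eps / vocab              # label-smoothed ground truth
    with torch.no_grad():
        pred = torch.softmax(logits, dim=-1)
        pred = pred * (1 - eps) + eps / vocab                 # smoothed model prediction
        token_ce = -(one_hot * log_probs).sum(-1)             # per-token CE loss
        hard = (token_ce > gamma) & (targets != pad_id)       # self-questioning mask
    fused = 0.5 * (smoothed + pred)                           # Eq. (2)
    target_dist = torch.where(hard.unsqueeze(-1), fused, smoothed)
    mask = (targets != pad_id).float()
    return -((target_dist * log_probs).sum(-1) * mask).sum() / mask.sum()
```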
## 3 Evaluation
**Machine Translation** on three widely-used benchmarks (Ding et al., 2020, 2021c, 2022): small-scale WMT16 English-Romanian (En-Ro; 0.6M),
medium-scale WMT14 English-German (En-De; 4.5M), and large-scale WMT14 English-French
(En-Fr; 36.0M). We implement the baselines and our approach under Transformer-base settings. We follow the previous adaptive training approach (Gu et al., 2020) to pretrain with the cross-entropy loss for N steps, and then finetune for the same number of steps with different adaptive training objectives, including **Freq-Exponential** (Gu et al., 2020), **Freq-Chi-Square** (Gu et al., 2020), **D2GPo** (Li et al., 2020), **BMI-adaptive** (Xu et al., 2021), **MixCrossEntropy** (Li and Lu, 2021), **CBMI-adaptive** (Zhang et al., 2022a), and **SPL** (Wan et al., 2020). For N, we adopt 100K for the larger datasets (En-De and En-Fr) and 30K for the small dataset (En-Ro). We empirically adopt 32K tokens per batch for the large datasets, where the learning rate warms up to 1e-7 for 10K steps and then decays for 90K steps, while for the small dataset En-Ro, the learning rate warms up to 1e-7 for 4K steps and then decays for 26K steps. All experiments are conducted on 4 NVIDIA Tesla A100 GPUs. SacreBLEU (Post, 2018) was used for evaluation. Besides translation, we also follow previous works (Liu et al., 2021b; Zhong et al., 2022; Zhang et al., 2022b) to validate the universality of our method on more sequence-to-sequence learning tasks, e.g., summarization and grammatical error correction.
**Text Summarization** on the XSUM corpus (0.2M). We follow fairseq (Ott et al., 2019) to preprocess the data and train the model, then finetune it for the same number of steps. We evaluate with ROUGE (Lin, 2004), i.e. R-1, R-2, and R-L.
**Grammatical Error Correction** on CoNLL14 (1.4M). We follow Chollampatt and Ng (2018) to preprocess the data and train the model, then finetune it for the same number of steps. The MaxMatch (M2) scores (Dahlmeier and Ng, 2012) were used for evaluation, with precision, recall, and F0.5 values.
## 3.1 Main Results
**SE brings gains across language pairs and scales.** Results on machine translation across different data sizes ranging from 0.6M to 36M in Table 1 show that our SE-equipped Transformer "+ Self-Evolution (ours)" 1) considerably improves the performance by +0.92 BLEU points on average; 2) out-
| Model | 0-1 | 1-2 | 2-3 | >3 |
|---|---|---|---|---|
| Transformer | 63.3 | 10.5 | 6.7 | 19.5 |
| + SE | 65.6 | 9.5 | 5.8 | 19.1 |

Table 3: Ratios (%) of validation tokens falling into different cross-entropy loss scales on WMT14 En-De.
| Method | BLEU | ∆ | COMET | ∆ |
|---|---|---|---|---|
| Transformer | 29.98 | - | 45.1 | - |
| +SE | 30.38 | +0.4 | 46.3 | +1.2 |

Table 4: **Performance on extremely large dataset** WMT22 De⇒En (236M).
performs the previous competitive method "+ CBMI-adaptive" by up to +0.47 BLEU points on the large dataset WMT14 En-Fr. These results demonstrate the effectiveness and universality of our SE.

**SE brings gains across tasks and backbone sizes.** Table 2 lists the performance on more tasks, including translation, summarization, and grammar error correction, upon a large pretrained backbone, BART (Lewis et al., 2020), which has over 600M parameters. Compared to a stronger baseline, our SE significantly and incrementally improves the generation quality in all tasks, i.e. +0.4 BLEU, +0.7 RG-L, and +1.9 F0.5, respectively, showing that our SE is robustly applicable to general scenarios.
**SE works well on extremely large datasets.** To further verify the effectiveness of SE on extremely large datasets, we conducted an experiment on WMT22 De-En processed by Zan et al. (2022b),
which contains 236M training examples. The results in Table 4 show that our method can achieve
+0.4 and +1.2 improvement in BLEU and COMET
respectively, which proves that our SE also works on extremely large datasets.
## 3.2 Analysis
We provide some insights to better understand the effectiveness of our approach. The ablation of important modules and parameters is in Appendix A.

**SE learns better token representations.** To verify whether our method helps learn better token representations, we conduct analyses on WMT14 En-De from the learning loss and fine-grained generation
![3_image_0.png](3_image_0.png)
perspectives, respectively.
First, we count the ratios of tokens falling into different cross-entropy loss scales in Table 3, following Zan et al. (2022a). Cross-entropy is a good indicator to quantify the distance between the predicted distribution and the ground truth on the validation set, and a lower value means a more similar distribution. As shown, our method improves the low-loss token ratio by +2.3%, indicating SE helps the model **learn better token representations by** reducing the token uncertainty. In addition, we follow Ding et al. (2021a); Liu et al. (2021a) to break the translation down into different granularities and measure the fine-grained performance. In particular, we calculate1 the F-measure of words in different frequency buckets and the BLEU scores of buckets of different lengths in Figure 2. We see SE achieves better performance in all frequency and sentence-length buckets, demonstrating our method can *improve the performance at different granularities*.
**SE encourages diverse generations.** Lacking generation diversity is a notorious problem for Seq2Seq learning tasks (Sun et al., 2020; Lin et al., 2022). Benefiting from better exploring the model's prediction with corrected soft labels, SE is expected to improve generation diversity. We follow Wang et al. (2022) to examine this by analyzing the performance on an additional multiple-reference test set of WMT'14 En-De (Ott et al., 2018).
We choose additional references for each of the 500 test sentences taken from the original test. Table 5 shows SE consistently outperforms the baseline with the average improvement being 0.9/1.0 BLEU, which indicates that **our SE can effectively**
generate diverse results.
**SE enhances model generalization.** Benefiting from better hard-token exploration, SE-equipped Transformers are expected to generalize better. We examine this by testing on domain shift
1Using compare-mt (Neubig et al., 2019).
| Ref. | Avg. (Transformer) | Avg. (+SE) | Top (Transformer) | Top (+SE) |
|------|--------------------|-------------|-------------------|-------------|
| #1   | 42.5 | 43.7 (+1.2) | 44.9 | 45.7 (+0.8) |
| #2   | 28.6 | 29.3 (+0.7) | 30.2 | 31.2 (+1.0) |
| #3   | 31.2 | 32.1 (+0.9) | 33.2 | 34.4 (+1.2) |
| #4   | 28.1 | 28.8 (+0.7) | 29.6 | 30.5 (+0.9) |
| Mean | 32.6 | 33.5 (+0.9) | 34.5 | 35.5 (+1.0) |

Table 5: **Multi-reference** performance. "Avg."/"Top" denote the averaging/most-matching performance.
| Model | Law | Med. | Kor. | Sub. | Avg. |
|-------------|-------|--------|--------|--------|--------|
| Transformer | 41.2 | 30.9 | 7.4 | 14.5 | 23.5 |
| +SE | 42.6† | 32.3† | 7.8† | 15.0† | 24.4 |

Table 6: Performance of WMT14 En-De models on the four out-of-domain test sets (Müller et al., 2020).
scenarios following Ding et al. (2021b). In particular, we evaluate the WMT14 En-De models over four out-of-domain test sets (Müller et al., 2020) in Table 6 and find that SE improves translation by +0.9 BLEU points on average, showing a **better lexical generalization ability**.
**SE encourages human-like generations.** We design two types of evaluation on WMT14 En-Fr: 1) AUTOMATIC EVALUATION with **COMET** (Rei et al., 2020) and **BLEURT** (Sellam et al., 2020), which correlate highly with human judgments; and 2) HUMAN EVALUATION with three near-native French annotators who hold the DALF C2 certificate2. Specifically, for the human evaluation, we randomly sample 50 sentences from the test set and rate translation **adequacy** and **fluency** on a 1–5 scale. For adequacy, 1 means the output is irrelevant to the source, while 5 means it is semantically equivalent. For fluency, 1 means unintelligible, while 5 means fluent and native-sounding. Table 7 shows the automatic and human evaluation results, where we find that our SE indeed produces more human-like translations.

2http://www.delfdalf.fr/dalf-c2-en.html
## 4 Conclusion
In this paper, we propose a self-evolution learning mechanism to improve seq2seq learning by dynamically exploiting the informative-yet-underexplored tokens. SE follows two stages, i.e., self-questioning and self-evolution training, and can be used to evolve any pretrained model with a simple recipe: continued training with SE. We empirically demonstrated the effectiveness and universality of SE on a series of widely used benchmarks, covering low, medium, high, and extremely high data volumes.

In the future, besides generation tasks, we would like to verify the effectiveness of SE on language understanding tasks (Wu et al., 2020; Zhong et al., 2023). Also, it will be interesting to design SE-inspired instruction tuning or prompting strategies, like Lu et al. (2023), to enhance the performance of large language models, e.g., ChatGPT3, which have already been extensively validated on many conditional generation tasks (Hendy et al., 2023; Jiao et al., 2023; Peng et al., 2023; Wu et al., 2023).
| Model | COMET | BLEURT | Adequacy | Fluency |
|-------------|-------|--------|----------|---------|
| Transformer | 61.6  | 68.6   | 4.32     | 4.58    |
| + SE        | 63.7  | 69.5   | 4.50     | 4.68    |

Table 7: Automatic (COMET, BLEURT) and human (adequacy, fluency) evaluation results on WMT14 En-Fr.

## Limitations
Our work has several potential limitations. First, we determine the threshold Γ by manual selection, which may limit the performance of Seq2Seq models; dynamically selecting the threshold would make our approach more effective and elegant. Second, besides the improvements on three widely used tasks, we believe that other abilities of Seq2Seq models, such as code generation, could also be improved by our method but are not fully explored in this work.
## Ethics Statement
We take ethical considerations very seriously and strictly adhere to the ACL Ethics Policy. This paper focuses on effective training for sequence-to-sequence learning. The datasets used in this paper are publicly available and have been widely adopted by researchers. We ensure that the findings and conclusions of this paper are reported accurately and objectively.
## Acknowledgement
We are grateful to the anonymous reviewers and the area chair for their insightful comments and suggestions.
3https://chat.openai.com/
## References
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In *ICLR*.
Kehai Chen, Rui Wang, Masao Utiyama, and Eiichiro Sumita. 2020. Content word aware neural machine translation. In ACL.
Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In ACL.
Shamil Chollampatt and Hwee Tou Ng. 2018. A multilayer convolutional encoder-decoder neural network for grammatical error correction. In *AAAI*.
Kenneth Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography.
CL.
Daniel Dahlmeier and Hwee Tou Ng. 2012. Better evaluation for grammatical error correction. In *NAACL*.
Liang Ding, Longyue Wang, Xuebo Liu, Derek F Wong, Dacheng Tao, and Zhaopeng Tu. 2021a. Progressive multi-granularity training for non-autoregressive translation. In *Findings of ACL*.
Liang Ding, Longyue Wang, Xuebo Liu, Derek F Wong, Dacheng Tao, and Zhaopeng Tu. 2021b. Rejuvenating low-frequency words: Making the most of parallel data in non-autoregressive translation. In ACL.
Liang Ding, Longyue Wang, Xuebo Liu, Derek F
Wong, Dacheng Tao, and Zhaopeng Tu. 2021c. Understanding and improving lexical choice in nonautoregressive translation. In *ICLR*.
Liang Ding, Longyue Wang, Shuming Shi, Dacheng Tao, and Zhaopeng Tu. 2022. Redistributing lowfrequency words: Making the most of monolingual data in non-autoregressive translation. In ACL.
Liang Ding, Longyue Wang, and Dacheng Tao. 2020.
Self-attention with cross-lingual position representation. In ACL.
Shuhao Gu, Jinchao Zhang, Fandong Meng, Yang Feng, Wanying Xie, Jie Zhou, and Dong Yu. 2020. Tokenlevel adaptive training for neural machine translation.
In *EMNLP*.
Sangchul Hahn and Heeyoul Choi. 2019. Selfknowledge distillation in natural language processing.
In *RANLP*.
Amr Hendy, Mohamed Abdelrehim, et al. 2023. How good are gpt models at machine translation? a comprehensive evaluation. *arXiv preprint*.
Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Xing Wang, and Zhaopeng Tu. 2023. Is chatgpt a good translator? a preliminary study. *arXiv preprint*.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In ACL.
Haoran Li and Wei Lu. 2021. Mixed cross entropy loss for neural machine translation. In *ICML*.
Zuchao Li, Rui Wang, et al. 2020. Data-dependent gaussian prior objective for language generation. In ICLR.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out.
Huan Lin, Baosong Yang, Liang Yao, Dayiheng Liu, Haibo Zhang, Jun Xie, Min Zhang, and Jinsong Su.
2022. Bridging the gap between training and inference: Multi-candidate optimization for diverse neural machine translation. In *Findings of NAACL*.
Xuebo Liu, Longyue Wang, Derek F Wong, Liang Ding, Lidia S Chao, Shuming Shi, and Zhaopeng Tu. 2021a.
On the copying behaviors of pre-training for neural machine translation. In *Findings of ACL*.
Xuebo Liu, Longyue Wang, Derek F Wong, Liang Ding, Lidia S Chao, and Zhaopeng Tu. 2021b. Understanding and improving encoder layer fusion in sequenceto-sequence learning. In *ICLR*.
Qingyu Lu, Baopu Qiu, Liang Ding, Liping Xie, and Dacheng Tao. 2023. Error analysis prompting enables human-like translation evaluation in large language models: A case study on chatgpt. arXiv preprint.
Mengqi Miao, Fandong Meng, Yijin Liu, Xiao-Hua Zhou, and Jie Zhou. 2021. Prevent the language model from being overconfident in neural machine translation. In ACL.
Mathias Müller, Annette Rios, and Rico Sennrich. 2020.
Domain robustness in neural machine translation. In AMTA, Virtual.
Graham Neubig, Zi-Yi Dou, Junjie Hu, Paul Michel, Danish Pruthi, and Xinyi Wang. 2019. compare-mt:
A tool for holistic comparison of language generation systems. In *NAACL*.
Mohammad Norouzi, Samy Bengio, Zhifeng Chen, et al.
2016. Reward augmented maximum likelihood for neural structured prediction. In *NeurIPS*.
Myle Ott, Michael Auli, David Grangier, and Marc'Aurelio Ranzato. 2018. Analyzing uncertainty in neural machine translation. In *ICML*.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In *NAACL Demonstration*.
Keqin Peng, Liang Ding, Qihuang Zhong, Li Shen, Xuebo Liu, Min Zhang, Yuanxin Ouyang, and Dacheng Tao. 2023. Towards making the most of chatgpt for machine translation. *arxiv preprint*.
Steven T Piantadosi. 2014. Zipf's word frequency law in natural language: A critical review and future directions. *Psychonomic bulletin & review*.
Matt Post. 2018. A call for clarity in reporting BLEU
scores. In WMT.
Lynne M Reder, Xiaonan L Liu, Alexander Keinath, and Vencislav Popov. 2016. Building knowledge requires bricks, not sand: The critical role of familiar constituents in learning. Psychonomic bulletin &
review.
Ricardo Rei, Craig Stewart, Ana C. Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT
evaluation. In *EMNLP*.
Thibault Sellam, Dipanjan Das, and Ankur P. Parikh.
2020. BLEURT: learning robust metrics for text generation. In ACL.
Zewei Sun, Shujian Huang, Hao-Ran Wei, Xinyu Dai, and Jiajun Chen. 2020. Generating diverse translation by manipulating multi-head attention. In *AAAI*.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014.
Sequence to sequence learning with neural networks. In *NeurIPS*.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In *CVPR*.
Ashish Vaswani, Noam Shazeer, et al. 2017. Attention is all you need. In *NeurIPS*.
Yu Wan, Baosong Yang, et al. 2020. Self-paced learning for neural machine translation. In *EMNLP*.
Wenxuan Wang, Wenxiang Jiao, Yongchang Hao, Xing Wang, Shuming Shi, Zhaopeng Tu, and Michael R.
Lyu. 2022. Understanding and improving sequenceto-sequence pretraining for neural machine translation. In ACL.
Di Wu, Liang Ding, Fan Lu, and Jian Xie. 2020. Slotrefine: A fast non-autoregressive model for joint intent detection and slot filling. In *EMNLP*.
Haoran Wu, Wenxuan Wang, Yuxuan Wan, Wenxiang Jiao, and Michael Lyu. 2023. Chatgpt or grammarly?
evaluating chatgpt on grammatical error correction benchmark. *arXiv preprint*.
Fengshun Xiao, Yingting Wu, Hai Zhao, Rui Wang, and Shu Jiang. 2019. Dual skew divergence loss for neural machine translation. *CoRR*.
Yangyifan Xu, Yijin Liu, Fandong Meng, Jiajun Zhang, Jinan Xu, and Jie Zhou. 2021. Bilingual mutual information based adaptive training for neural machine translation. In ACL.
Zheng Yuan and Ted Briscoe. 2016. Grammatical error correction using neural machine translation. In NAACL.
Changtong Zan, Liang Ding, Li Shen, Yu Cao, Weifeng Liu, and Dacheng Tao. 2022a. On the complementarity between pre-training and random-initialization for resource-rich machine translation. In *COLING*.
Changtong Zan, Keqin Peng, Liang Ding, et al. 2022b.
Vega-mt: The jd explore academy machine translation system for wmt22. In WMT.
Runtian Zhai, Chen Dan, J Zico Kolter, and Pradeep Kumar Ravikumar. 2023. Understanding why generalized reweighting does not improve over ERM. In ICLR.
Songming Zhang, Yijin Liu, Fandong Meng, Yufeng Chen, Jinan Xu, Jian Liu, and Jie Zhou. 2022a. Conditional bilingual mutual information based adaptive training for neural machine translation. In ACL.
Zheng Zhang, Liang Ding, Dazhao Cheng, Xuebo Liu, Min Zhang, and Dacheng Tao. 2022b. Bliss: Robust sequence-to-sequence learning via self-supervised input representation. *arXiv preprint*.
Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, and Dacheng Tao. 2022. E2s2: Encoding-enhanced sequence-to-sequence pretraining for language understanding and generation. *arXiv preprint*.
Qihuang Zhong, Liang Ding, Keqin Peng, Juhua Liu, Bo Du, Li Shen, Yibing Zhan, and Dacheng Tao.
2023. Bag of tricks for effective language model pretraining and downstream adaptation: A case study on glue. *arXiv preprint*.
## A Appendix
**Parameter Analysis on Γ** As stated in §2.1, we use the loss threshold Γ to dynamically select the hard-to-learn tokens. Here, we analyze the influence of different Γ in detail. In practice, we train Transformer models with different Γ (in {3, 4, 5, 6}) and evaluate their performance on the WMT14 En-De test set. Table 8 lists the performance for different Γ. The results show that SE is stable and insensitive to Γ *within a certain range*. Note that we select Γ = 5 for all experimental settings based on the results in Table 8.
![6_image_0.png](6_image_0.png)
Table 8: Parameter analysis of Γ on WMT14 En-De.
## Ablation Study
**Metric.** In this work, we use a loss-based metric to dynamically select the hard-to-learn tokens. To validate the effectiveness of this metric, we use a simple adaptive training method ("+ ADD") that adds 1 to the loss weight of the hard-to-learn tokens. The results on WMT16 En-Ro are shown in Table 9: the simple ADD method achieves a +0.3 BLEU improvement over the baseline model, which shows that *our proposed self-questioning stage indeed mines informative difficult tokens*. Also, we observe that learning these dynamically selected difficult tokens with our SE framework ("+ SE") outperforms "+ ADD" by +0.6 BLEU points, demonstrating *the superiority of our token-specific label smoothing approach*.
| Metric | Baseline | + ADD | + SE |
|--------|----------|-------|------|
| BLEU   | 35.1     | 35.4  | 36.0 |

Table 9: Ablation of our SE on the token-selection metric (WMT16 En-Ro).
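As a rough illustration of the "+ ADD" baseline described above, the following sketch assumes a tensor of per-token cross-entropy losses is already available (with padding masked out) and simply raises the loss weight of tokens whose loss exceeds the threshold Γ (Γ = 5 in our settings); the function name, shapes, and normalization are our own simplifications, not the exact implementation.

```python
import torch

def add_weighted_loss(token_ce: torch.Tensor, gamma: float = 5.0) -> torch.Tensor:
    """token_ce: per-token cross-entropy values, shape (batch, seq_len)."""
    # Tokens whose loss exceeds gamma are treated as hard-to-learn;
    # "+ ADD" adds 1 to their weight, so they contribute twice as much to the loss.
    weights = 1.0 + (token_ce.detach() > gamma).float()
    return (weights * token_ce).mean()
```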
**Learning objective.** As stated in §2.1, our learning objective is a combination of the ground truth and the model's prediction. To validate the effectiveness of the predicted distribution, we conduct ablation experiments on WMT16 En-Ro and WMT14 En-De. The results in Table 10 show that adding the predicted distribution consistently improves the model's performance, which confirms the effectiveness of the predicted distribution.
| Method                   | BLEU (EN⇒DE) | BLEU (EN⇒RO) |
|--------------------------|--------------|--------------|
| Transformer              | 27.08        | 35.11        |
| SE                       | 28.02        | 36.02        |
| - w/o predicted results  | 27.89        | 35.71        |

Table 10: Ablation of our SE on the learning objective.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
The last section of the paper.
✗ A2. Did you discuss any potential risks of your work?
Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
The abstract and the introduction section.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✗ **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? No response.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
No response.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
No response.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
No response.
D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Section 3.2
✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Left blank.
✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Left blank.
✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Left blank.
✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Left blank.
✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Left blank. |
yoon-etal-2023-gradient | Gradient Ascent Post-training Enhances Language Model Generalization | https://aclanthology.org/2023.acl-short.74 | In this work, we empirically show that updating pretrained LMs (350M, 1.3B, 2.7B) with just a few steps of Gradient Ascent Post-training (GAP) on random, unlabeled text corpora enhances its zero-shot generalization capabilities across diverse NLP tasks. Specifically, we show that GAP can allow LMs to become comparable to 2-3x times larger LMs across 12 different NLP tasks. We also show that applying GAP on out-of-distribution corpora leads to the most reliable performance improvements. Our findings indicate that GAP can be a promising method for improving the generalization capability of LMs without any task-specific fine-tuning. | # Gradient Ascent Post-Training Enhances Language Model Generalization
Dongkeun Yoon1∗ Joel Jang1∗ Sungdong Kim1,2 **Minjoon Seo**1
1KAIST 2NAVER AI Lab
[email protected], {joeljang,minjoon}@kaist.ac.kr, [email protected]
## Abstract
In this work, we empirically show that updating pretrained LMs (350M, 1.3B, 2.7B) with just a few steps of Gradient Ascent Post-training (GAP) on random, unlabeled text corpora enhances their zero-shot generalization capabilities across diverse NLP tasks. Specifically, we show that GAP can allow LMs to become comparable to 2–3× larger LMs across 12 different NLP tasks. We also show that applying GAP on out-of-distribution corpora leads to the most reliable performance improvements. Our findings indicate that GAP can be a promising method for improving the generalization capability of LMs without any task-specific fine-tuning1.
## 1 Introduction
Recently, Language Models (LMs) pretrained on a vast amount of text corpora have been shown to be capable of performing diverse downstream NLP tasks in a zero-shot manner (Brown et al., 2020; Rae et al., 2021; Chowdhery et al., 2022; Zhang et al., 2022) or through in-context learning (Brown et al., 2020; Min et al., 2022) without any gradient updates. This paradigm has been preferred over task-specific fine-tuning (Devlin et al., 2019), which requires a considerable amount of labeled data for the given target task.

Motivated by the positive effect of gradient ascent during fine-tuning (Foret et al., 2021), in this work, we explore whether adapting pretrained LMs with Gradient Ascent Post-training (GAP) on random, unlabeled text corpora can bring any benefits in terms of enhancing their generalization capabilities for performing diverse downstream NLP tasks in a zero-shot or few-shot manner *without* the need for task-specific training data.
∗ Equal Contribution
1Code and full results for individual GAP runs are available at https://github.com/kaist-lklab/GAP

![0_image_0.png](0_image_0.png)

Specifically, we apply just a few steps of gradient ascent to OPT LMs (Zhang et al., 2022) using randomly sampled text sequences from 3 different corpora from the Pile (Gao et al., 2021) with varying degrees of familiarity between the LM and the corpus. Experimental results show that this simple approach achieves performance gains across 12 downstream NLP tasks: 4 dialogue tasks and 8 classification tasks. We observe that applying GAP with out-of-distribution data, specifically code data that OPT was not explicitly trained on, results in the most reliable performance gains.
Our main contributions can be summarized as follows:

- We empirically show that GAP is a promising generalization enhancement technique as it is (1) effective, as evidenced by multiple benchmark results; (2) simple & efficient, requiring at most 15 steps of parameter updates; and (3) versatile, as it can be applied easily to any pretrained LM and does not necessitate task-specific fine-tuning.

- We provide an analysis of what makes GAP work by splitting the corpora into three groups according to the LMs' degree of familiarity with the data. We observe that performing GAP with the most unfamiliar (out-of-distribution) data results in the most reliable performance gains.
## 2 Related Works
**Task-Specific Gradient Ascent** Deep neural network models exhibiting poor generalization due to convergence at sharp local minima is a well-known phenomenon in the literature (Keskar et al., 2017; Izmailov et al., 2018; Cha et al., 2021; Chen et al., 2022). To address this issue, Foret et al. (2021) introduce Sharpness-Aware Minimization (SAM), an algorithm that performs both gradient ascent and gradient descent during task-specific fine-tuning to avoid sharp local minima, improving performance. The effectiveness of SAM has motivated several studies to apply it to LMs, reporting meaningful improvements in performance. Bahri et al. (2022) have shown that applying SAM when fine-tuning various scales of T5 LMs (Raffel et al., 2020) on multiple downstream tasks results in substantial performance gains. Similarly, Kaddour et al. (2022) also explore SAM across computer vision, natural language processing, and graph representation learning tasks, further bolstering its effectiveness.
While SAM was proposed as a robust fine-tuning methodology that targets convergence on supervised datasets, we instead explore the benefits that gradient ascent can bring to generic LMs *without* task-specific labeled data.
**Task-Agnostic Gradient Ascent** In a recent study, Jang et al. (2022) investigate the use of gradient ascent for addressing privacy risks in LMs. The main objective of that work is to utilize gradient ascent to *unlearn* specific token sequences; surprisingly, they report unexpected performance gains in some cases. Our work can be seen as a direct extension of this phenomenon, where our main objective is to enhance generalization capabilities instead of forgetting specific data to ensure privacy.
## 3 Gradient Ascent Post-Training (GAP)
In this section, we give a formal definition of GAP.
Specifically, given an LM with parameters $w$ and a sequence of tokens $\mathbf{x} = (x_1, \ldots, x_N)$, GAP is defined as:

$$w_{t+1}=w_{t}+\alpha\nabla f_{w_{t}}(\mathbf{x})\qquad\qquad(1)$$

$$f_{w_{t}}(\mathbf{x})=-\sum_{n=1}^{N}\log(p_{w_{t}}(x_{n}|x_{<n}))\qquad(2)$$

where $t$ denotes the gradient ascent iteration, $\alpha$ denotes the learning rate, $x_{<n}$ denotes the token sequence $(x_1, \ldots, x_{n-1})$, and $p_{w_t}(x_n|x_{<n})$ is the likelihood of predicting the next token $x_n$ given the previous token sequence as input to an LM with parameters $w_t$.

Markedly, GAP solely utilizes gradient ascent and does not actively facilitate convergence: the parameter update in Eq. (1) *maximizes* the language modeling loss in Eq. (2). We propose GAP as an unsupervised methodology that can bring significant performance gains even without curated fine-tuning data.
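To make the update rule above concrete, here is a minimal sketch of a single GAP run, assuming the Hugging Face `transformers` and PyTorch APIs; the model, optimizer, learning rate, sample length, and step budget mirror the setup described in Section 4.1 and Appendix B, but the helper name `gap_run` and its exact signature are our own illustration rather than released code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def gap_run(model, tokenizer, text, steps=15, lr=5e-5):
    """Apply a few steps of gradient ascent on a single unlabeled text sample."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    batch = tokenizer(text, return_tensors="pt", truncation=True, max_length=200)
    model.train()
    for _ in range(steps):
        optimizer.zero_grad()
        out = model(**batch, labels=batch["input_ids"])
        # out.loss is the mean negative log-likelihood, i.e. f_w(x) in Eq. (2);
        # negating it before backpropagation turns the usual descent update into ascent, cf. Eq. (1).
        (-out.loss).backward()
        optimizer.step()
    return model

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
model = gap_run(model, tokenizer, "a randomly sampled, unlabeled text snippet ...")
```

In practice, a validation score is computed after every ascent step and the best-scoring checkpoint is kept, as described in Section 4.1.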
## 4 Experiments

## 4.1 Experimental Setup
**Baseline Models and Evaluation Datasets** We use OPT (350M, 1.3B, 2.7B, 6.7B) LMs (Zhang et al., 2022) as the baseline LMs. We observe the effect GAP has on their generalization capabilities, which is measured via evaluation on 12 different downstream NLP tasks: we use Wizard of Wikipedia (Dinan et al., 2019), Empathetic Dialogues (Rashkin et al., 2019), Blended Skill Talk (Smith et al., 2020) and WizInt (Komeili et al., 2022) to evaluate generative capabilities, Hellaswag (Zellers et al., 2019) to assess linguistic reasoning abilities, Winogrande (Sakaguchi et al., 2021) and COPA (Brassard et al., 2022) to measure commonsense reasoning abilities, and ARC-Easy (Clark et al., 2018), ARC-Challenge (Clark et al., 2018), PIQA (Bisk et al., 2020), MathQA (Amini et al., 2019) and PubmedQA (Jin et al., 2019) to measure scientific reasoning abilities. The exact prompts used for each task are provided in Appendix A.
**Random Unlabeled Data** We apply GAP on text snippets from three different corpora, which all originate from the Pile (Gao et al., 2021) training set: (1) the Training Data Extraction Challenge (TDEC)2, (2) Common Crawl (CC), and (3) Github (Git.). We choose these corpora in order to observe the effect of the LMs' degree of familiarity with the data. The Training Data Extraction Challenge includes examples from the Pile that are identified to be easy to extract from GPT-Neo LMs (Black et al., 2022), mainly due to high levels of duplication. We assume these examples are also relatively easier to extract from OPT LMs, as they were also pretrained on a subset of the Pile, indicating the highest level of familiarity / memorization. We consider the OPT LMs to be familiar (in-domain) with Common Crawl, as it was included in their pretraining corpora. As OPT LMs were not explicitly trained on the Github corpus, we consider OPT to be unfamiliar (out-of-distribution) with Github. Examples of the random unlabeled data are provided in Appendix D.
**Configurations** For each of the 3 LM sizes [350M, 1.3B, 2.7B], we sample a total of 300 text samples (each 200 tokens long) for applying GAP, with 100 samples taken from each of the three corpora. For each run, a single text sample is used, ultimately resulting in 300 runs of GAP per LM size. Therefore, a single epoch of a GAP run comprises a single gradient ascent step with the batch size set to 1. The maximum number of epochs is set to 15, and we report the validation score from the best-performing epoch, as preliminary experiments showed that gradient ascent past 15 steps mostly resulted in performance degradation. Due to computational constraints, we subsample the validation data to a maximum of 320 samples per dataset for all of the 12 evaluation datasets. For further exploration of GAP as a methodology, we use the checkpoints with the best validation scores and evaluate the LMs on the test sets of the 4 dialogue tasks. We do not separately report test evaluation results for the classification datasets since most of them require direct submission to the task website. For a single run, we use one Nvidia 40GB A100 GPU. Further details regarding the experimental configuration (e.g., optimizer, learning rate) are provided in Appendix B.
## 4.2 Dialogue Tasks
**Main Results** As shown in Figure 1 in Section 1, GAP substantially enhances the average validation performance on the 4 dialogue tasks, with the median F1-score of the 1.3B LMs outperforming the

2https://github.com/google-research/lm-extraction-benchmark
| Model | F1 | MAUVE | Diversity | Length |
|---------|------|---------|-------------|----------|
| 350M | 11.4 | 44.3 | 74.0 | 11.8 |
| + GAP | 12.5 | 67.2 | 87.3 | 14.4 |
| 1.3B | 13.5 | 48.2 | 82.8 | 11.4 |
| + GAP | 14.0 | 69.5 | 86.7 | 13.8 |
| 2.7B | 13.8 | 51.3 | 86.9 | 11.3 |
| + GAP | 14.7 | 73.0 | 93.1 | 14.5 |
| 6.7B | 14.5 | 51.1 | 88.3 | 11.9 |

Table 1: Average test F1, MAUVE, diversity, and generation length of the best GAP checkpoints and the baseline OPT LMs on the 4 dialogue tasks.
| Comparison | Metric | Win | Loss | Tie |
|-------------------|--------|------|------|-----|
| Ours vs. Baseline | C | 43%† | 17% | 40% |
| Ours vs. Baseline | F | 36%† | 15% | 49% |
| Ours vs. Baseline | I | 40%† | 17% | 43% |
| Ours vs. Human | C | 41% | 37% | 22% |
| Ours vs. Human | F | 33% | 30% | 37% |
| Ours vs. Human | I | 23% | 50%† | 27% |

Table 2: Human evaluation results on coherence (C), fluency (F), and informativeness (I).
2.7B LM baseline, and some 1.3B LMs are even able to match the performance of the 6.7B LM baseline 3. We report the average test F1 score as well as MAUVE (Pillutla et al., 2021), diversity (Su et al., 2022), and generation length of our best validation checkpoints for each model size (excluding outliers) in comparison to the baseline LMs in Table 1 4. The results show a substantial improvement in all of the metrics (F1 score, MAUVE, and generation length), with our 1.3B and 2.7B LM checkpoints even outperforming the larger LM baselines. This result is significant considering that no task-specific dataset is used. Examples of text generation for the dialogue tasks are provided in Appendix E.
3Detailed numerical data for the median values is available in Appendix C.
4An explanation of how MAUVE and diversity are measured is provided in Appendix B.

![3_image_1.png](3_image_1.png)

**Human Evaluation** We also evaluate and compare the quality of the generated responses of the baseline LMs and the LMs adapted with GAP side-by-side. For this, we sample 100 contexts from the WizInt (Komeili et al., 2022) dataset and generate the corresponding responses with the 2.7B LM baseline and the 2.7B LM + GAP, denoted as *Ours*. Then, we compare the generated response pairs from the two LMs with respect to three metrics: coherence, fluency, and informativeness (Su et al., 2022). We ask human evaluators to select the better response from each pair with respect to each metric5. We find that our GAP-enhanced LM shows significant strengths in all metrics compared to its baseline (Table 2). Moreover, our LM shows performance comparable to the human upper bound (gold responses) except for informativeness.
## 4.3 Classification Tasks
The average validation performance on the 8 classification tasks when performing GAP on the OPT LMs is shown in Figure 2. While GAP fails to provide consistent improvements for the 350M and 2.7B LMs, mostly resulting in a degradation of performance (as shown by the median performance falling below the baselines), the larger LMs do show considerable performance gains in some cases. This result suggests that although GAP does not show steady generalization improvements on the classification tasks, unlike the dialogue tasks, it does show some potential for improvement, considering that some runs did result in substantial gains. We leave choosing the right text samples to perform GAP on for consistent performance enhancement on classification tasks for future work.

5Further study details are in Appendix F.

![3_image_0.png](3_image_0.png)

| Model | All | Git. | CC | TDEC |
|-------|------|------|------|------|
| 350M | 12.3 | 12.6 | 11.9 | 12.3 |
| 1.3B | 13.7 | 13.8 | 13.6 | 13.5 |
| 2.7B | 14.1 | 14.3 | 14.2 | 13.9 |

Table 3: Median validation F1-score of GAP runs overall ("All") and grouped by the source corpus of the GAP data.
## 4.4 Analysis of GAP
Figure 3 shows the average performance of the 300 GAP runs for the 350M LMs (a zoomed-in version of Figure 1). To observe the effect of the LMs' familiarity with the unlabeled data, we plot the dots with different symbols with respect to the corpus. Interestingly, samples from the unfamiliar corpus (Github) result in significant improvements, mostly achieving higher scores than the median. Consistent findings are also evident in Table 3, with Github achieving the highest median F1 scores across all model sizes. This suggests that future applications of GAP can be made more efficient by mostly using unfamiliar (out-of-distribution) text. Additional figures for the other LM sizes are available in Appendix C.
## 5 Conclusion
In this work, we introduce GAP, a novel method for improving the generalization capability of LMs without any task-specific data by sampling random text and performing gradient ascent for a few steps. We show that our approach is (1) simple to use, (2) effective in making LMs more robust, and (3) leaves much room for improvement in future work, e.g., by scaling the number of GAP runs (>300) and choosing specific text samples (e.g., out-of-distribution text) to perform GAP on. Thus, we urge the community to consider GAP when prompting off-the-shelf pretrained LMs to perform diverse downstream NLP tasks.
## Limitations
While we show that applying GAP can result in a significant improvement in the generalization capability of LMs, especially for dialogue tasks, we are only able to show 300 GAP runs for each LM size in this work. We leave scaling the number of GAP runs, as well as selecting *specific* text samples to perform GAP on, for future work. Furthermore, a separate validation set for the tasks of interest is needed in order to choose the best checkpoint when performing GAP. Future work may look for other task-agnostic cues, such as the language modeling loss, to determine the best checkpoint to use for inference.
## Acknowledgements
This work was partly supported by KAIST-NAVER
Hypercreative AI Center (80%) and Institute of Information & communications Technology Planning
& Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2022-0-00113, Developing a Sustainable Collaborative Multi-modal Lifelong Learning Framework, 20%).
## References
Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. 2019. MathQA: Towards interpretable math word problem solving with operation-based formalisms. In *Proceedings of the 2019 Conference* of the North American Chapter of the Association for
Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2357–2367, Minneapolis, Minnesota. Association for Computational Linguistics.
Dara Bahri, Hossein Mobahi, and Yi Tay. 2022.
Sharpness-aware minimization improves language model generalization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7360–
7371.
Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. 2020. Piqa: Reasoning about physical commonsense in natural language. In *Proceedings of the* AAAI conference on artificial intelligence, volume 34, pages 7432–7439.
Sidney Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, Usvsn Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, and Samuel Weinbach. 2022. GPT-NeoX-20B: An opensource autoregressive language model. In Proceedings of BigScience Episode \#5 - Workshop on Challenges & Perspectives in Creating Large Language Models, pages 95–136, virtual+Dublin. Association for Computational Linguistics.
Ana Brassard, Benjamin Heinzerling, Pride Kavumba, and Kentaro Inui. 2022. COPA-SSE: Semi-structured explanations for commonsense reasoning. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 3994–4000, Marseille, France. European Language Resources Association.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901.
Junbum Cha, Sanghyuk Chun, Kyungjae Lee, HanCheol Cho, Seunghyun Park, Yunsung Lee, and Sungrae Park. 2021. Swad: Domain generalization by seeking flat minima. In *Advances in Neural Information Processing Systems*, volume 34, pages 22405–
22418. Curran Associates, Inc.
Xiangning Chen, Cho-Jui Hsieh, and Boqing Gong.
2022. When vision transformers outperform resnets without pre-training or strong data augmentations. In International Conference on Learning Representations.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. *arXiv preprint* arXiv:2204.02311.
Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the ai2 reasoning challenge. *ArXiv*,
abs/1803.05457.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents. In International Conference on Learning Representations.
Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. 2021. Sharpness-aware minimization for efficiently improving generalization. In International Conference on Learning Representations.
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2021. The pile: An 800gb dataset of diverse text for language modeling.
Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. 2018.
Averaging weights leads to wider optima and better generalization.
Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, and Minjoon Seo. 2022. Knowledge unlearning for mitigating privacy risks in language models.
Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William Cohen, and Xinghua Lu. 2019. Pubmedqa: A dataset for biomedical research question answering. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language* Processing (EMNLP-IJCNLP), pages 2567–2577.
Jean Kaddour, Linqing Liu, Ricardo Silva, and Matt J.
Kusner. 2022. When do flat minima optimizers work?
Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang.
2017. On large-batch training for deep learning: Generalization gap and sharp minima. In *International* Conference on Learning Representations.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A
method for stochastic optimization.
Mojtaba Komeili, Kurt Shuster, and Jason Weston. 2022.
Internet-augmented dialogue generation. In *Proceedings of the 60th Annual Meeting of the Association* for Computational Linguistics (Volume 1: Long Papers), pages 8460–8478, Dublin, Ireland. Association for Computational Linguistics.
Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the role of demonstrations: What makes in-context learning work? arXiv preprint arXiv:2202.12837.
Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Sean Welleck, Yejin Choi, and Zaid Harchaoui. 2021. Mauve: Measuring the gap between neural text and human text using divergence frontiers. In *NeurIPS*.
Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. 2021. Scaling language models:
Methods, analysis & insights from training gopher.
arXiv preprint arXiv:2112.11446.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67.
Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards empathetic opendomain conversation models: A new benchmark and dataset. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*,
pages 5370–5381, Florence, Italy. Association for Computational Linguistics.
Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2021. Winogrande: An adversarial winograd schema challenge at scale. *Communications of the ACM*, 64(9):99–106.
Victor Sanh, Albert Webson, Colin Raffel, Stephen H
Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. 2021. Multitask prompted training enables zero-shot task generalization. arXiv preprint arXiv:2110.08207.
Eric Michael Smith, Mary Williamson, Kurt Shuster, Jason Weston, and Y-Lan Boureau. 2020. Can you put it all together: Evaluating conversational agents' ability to blend skills. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2021–2030, Online. Association for Computational Linguistics.
Yixuan Su, Tian Lan, Yan Wang, Dani Yogatama, Lingpeng Kong, and Nigel Collier. 2022. A contrastive framework for neural text generation. In *Advances* in Neural Information Processing Systems.
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. Hellaswag: Can a machine really finish your sentence? *arXiv preprint* arXiv:1905.07830.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022. Opt: Open pretrained transformer language models.
## A Task Prompts
Table 4 shows the prompts we use for each of the 12 benchmark datasets to enable zero-shot/few-shot learning. For the dialogue tasks (Wizard of Wikipedia, Blended Skill Talks, Empathetic Dialogues, WizInt), we use the prompts used by Zhang et al. (2022).
## B **Details Of Experimental Configurations**
In this section, we give further details of our main experimental setting for performing GAP. We use the Adam optimizer (Kingma and Ba, 2014) with a constant learning rate of 5e-5, no weight decay, and no dropout.
For the dialogue tasks, we adopt the settings of Zhang et al. (2022) and prompt the LM with alternating "User 1:" and "User 2:" lines of dialogue (examples shown in Appendix A). To generate tokens, we employ greedy decoding and set a maximum generation length of 32 tokens.
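A hypothetical sketch of this dialogue prompting and decoding setup (alternating "User 1:"/"User 2:" turns, greedy decoding, at most 32 generated tokens), assuming a Hugging Face `transformers` model; the helper name and turn-formatting details are illustrative, not the exact evaluation code.

```python
def generate_reply(model, tokenizer, turns):
    """turns: list of dialogue utterances, alternating between User 1 and User 2."""
    prompt = "\n".join(f"User {1 + i % 2}: {t}" for i, t in enumerate(turns))
    prompt += f"\nUser {1 + len(turns) % 2}:"          # the speaker whose turn comes next
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=32, do_sample=False)  # greedy decoding
    return tokenizer.decode(output[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
```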
For the classification tasks, we use a *verbalizer* method, selecting the output option with the highest log-likelihood, following Brown et al. (2020); Sanh et al. (2021). We use the unigram F1 score as our main metric for the dialogue generation tasks and accuracy for the classification tasks.
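As a rough sketch of this verbalizer-style scoring, assuming the Hugging Face `transformers` API: each candidate option is scored by the (negative) loss the LM assigns to its tokens given the prompt, and the highest-scoring option is picked. The function below is our illustration, not the released evaluation code, and length-normalization details may differ.

```python
import torch

@torch.no_grad()
def pick_option(model, tokenizer, prompt, options):
    """Return the option whose continuation tokens get the highest log-likelihood."""
    scores = []
    prompt_len = tokenizer(prompt, return_tensors="pt")["input_ids"].shape[1]
    for option in options:
        enc = tokenizer(prompt + " " + option, return_tensors="pt")
        labels = enc["input_ids"].clone()
        labels[:, :prompt_len] = -100           # ignore prompt positions in the loss
        nll = model(**enc, labels=labels).loss  # mean NLL over the option tokens
        scores.append(-nll.item())
    return options[scores.index(max(scores))]
```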
Regarding the metrics used for evaluation on the test sets of the 4 dialogue tasks, MAUVE (Pillutla et al., 2021) compares the text representations of LM-generated responses to human-written text; higher values indicate greater similarity to human-written text. The diversity metric (Su et al., 2022) measures token-level repetition, with higher values indicating greater diversity and less repetition in the generated text.
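For completeness, a minimal sketch of the unigram F1 metric mentioned above (clipped token-overlap precision and recall between a generated response and the reference); whitespace tokenization and lowercasing here are simplifying assumptions that may differ from the exact implementation.

```python
from collections import Counter

def unigram_f1(hypothesis: str, reference: str) -> float:
    hyp, ref = hypothesis.lower().split(), reference.lower().split()
    overlap = sum((Counter(hyp) & Counter(ref)).values())  # clipped unigram overlap
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(hyp), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```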
## C Full Results
Tables 5 and 6 show the median validation scores of all 300 GAP runs. For the classification tasks, the median values do not show significant improvements. However, for the dialogue tasks, GAP shows considerable improvements across all tasks.
Tables 7, 8, 9 and 10 show the individual test performance for each dialogue dataset. The four dialogue datasets are: Blended Skill Talks (BST), Empathetic Dialogues (ED), Wizard of Wikipedia (WoW) and **WizInt**. Our models demonstrate superior performance compared to their same-sized baselines on every metric in all four tasks.
Figures 4 and 5 present the familiarity analysis results for the 1.3B and 2.7B models, respectively. For both the 1.3B and 2.7B models, data sampled from the out-of-domain corpus (Github) results in reliable performance gains. For the larger models, the in-domain corpus (CC) also results in competitive performance gains, suggesting that larger models are more robust to GAP data selection.
## D Examples Of Random Data
Table 11 shows examples of the random data we use to apply GAP to OPT LMs. Specifically, they are the best-performing data for each model size.
## E Examples Of Dialogue Generation Outputs
Table 12 shows some examples of text generated by baseline models and our models trained with GAP. Notice that our models generate diverse and interesting text while also maintaining coherence with the given dialogue history.
## F Details Of Human Evaluation
We conduct the human evaluation on Amazon Mechanical Turk (AMT). An example of the interface shown to the workers is provided in Figure 6.
Specifically, we recruit three different annotators for each comparison pair, with a compensation of $1 per instance. We include brief instructions on the evaluation, including descriptions of the three metrics. Then, we ask the workers to compare each generated (or ground-truth, for the human baseline) response pair given the dialogue context. We evaluate 200 samples in total, including 100 for the OPT baseline and 100 for the human upper bound. The Fleiss kappa among the workers is calculated as 0.36, which indicates a moderate level of agreement. We also test the significance between the compared systems via a bootstrap test with 100,000 samplings.
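A schematic sketch of the bootstrap test mentioned above, assuming one preference label per evaluated instance ("win", "loss", or "tie" for our system); this is a generic resampling test written for illustration, not the exact script used.

```python
import random

def bootstrap_p_value(labels, n_resamples=100_000, seed=0):
    """labels: per-instance outcomes for our system, e.g. ["win", "tie", "loss", ...]."""
    rng = random.Random(seed)
    n, not_ahead = len(labels), 0
    for _ in range(n_resamples):
        resample = [labels[rng.randrange(n)] for _ in range(n)]
        if resample.count("win") <= resample.count("loss"):
            not_ahead += 1              # our system is not ahead in this resample
    return not_ahead / n_resamples      # small values indicate a significant advantage
```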
| Dataset | Prompt |
|-------------------------------------------|--------------------------------------------------------------|
| PIQA | {goal} [option] |
| ARC-Easy/Challenge | {question} [option] |
| COPA | {premise} [option] |
| HellaSwag | {input} [option] |
| Winogrande | {sentence} [option] |
| MathQA | {problem} [option] |
| PubmedQA | Question: {problem} \nAnswer: [option] |
| Wizard of Wikipedia, Blended Skill Talks, Empathetic Dialogues, WizInt | User 1: {turn}\nUser 2: {turn}\nUser 1: {turn}\n ... User 2: |

Table 4: Prompts used for each of the 12 evaluation datasets.
| Model | Avg. | BST | ED | WoW | WizInt |
|-------|-------|-------|-------|-------|--------|
| 350M | 11.77 | 11.88 | 10.17 | 12.05 | 13.00 |
| + GAP | 12.31 | **12.45** | **10.64** | **12.37** | **13.78** |
| 1.3B | 12.98 | 14.04 | 12.35 | 11.68 | 13.85 |
| + GAP | 13.60 | **14.45** | **12.58** | **12.37** | **15.02** |
| 2.7B | 13.54 | 13.18 | 12.42 | 12.86 | **15.69** |
| + GAP | 14.09 | **13.90** | **13.03** | **13.76** | 15.65 |
| 6.7B | 14.51 | 14.93 | 13.71 | 14.24 | 15.18 |
Table 5: **Validation F1-score** of OPT baselines and median **validation F1-score** of all GAP runs, measured on four dialogue datasets: Blended Skill Talks (BST), Empathetic Dialogues (ED), Wizard of Wikipedia (WoW) and WizInt.
Table 6: **Validation accuracy** of OPT baselines and median **validation accuracy** of all GAP runs, measured on classification datasets.
| Model | Avg. | ARC-Chall. | ARC-Easy | Hellaswag | MathQA | PIQA | PubmedQA | COPA | Winogrande |
|---------|--------|-------|-------|-------|----------|--------|-------|-------|-------|
| 350M | 45.76 | 11.64 | 45.63 | 35.94 | 21.88 | 67.50 | 54.37 | 69.00 | 53.13 |
| + GAP | 45.84 | 19.32 | 45.63 | 36.88 | 21.25 | 67.50 | 53.75 | 69.00 | 53.44 |
| 1.3B | 50.63 | 24.07 | 56.25 | 39.38 | 22.81 | 69.38 | 58.44 | 76.00 | 58.75 |
| + GAP | 50.91 | 24.75 | 56.25 | 40.00 | 23.13 | 70.00 | 58.44 | 76.00 | 58.75 |
| 2.7B | 51.77 | 26.78 | 57.50 | 41.87 | 21.25 | 72.50 | 58.44 | 78.00 | 57.81 |
| + GAP | 51.73 | 26.78 | 57.50 | 41.87 | 21.25 | 72.19 | 58.44 | 78.00 | 57.81 |
| 6.7B | 54.39 | 32.20 | 61.87 | 45.63 | 21.25 | 75.94 | 58.44 | 77.00 | 62.81 |
| Model | BST | ED | WoW | WizInt |
|-------|-------|-------|-------|--------|
| 350M | 11.18 | 10.43 | 13.24 | 10.92 |
| + GAP | **12.68** | **11.38** | **13.89** | **12.13** |
| 1.3B | 14.26 | 12.51 | 14.38 | 13.01 |
| + GAP | **14.83** | **12.74** | **15.18** | **13.37** |
| 2.7B | 14.00 | 13.09 | 14.40 | 13.58 |
| + GAP | **15.12** | **13.71** | **15.40** | **14.45** |
| 6.7B | 15.04 | 13.79 | 15.19 | 13.92 |

Table 7: Test **F1-score** of our best-performing GAP models and OPT baselines on each dialogue dataset.

| Model | BST | ED | WoW | WizInt |
|-------|-------|-------|-------|--------|
| 350M | 48.73 | 31.01 | 53.58 | 43.91 |
| + GAP | **74.87** | **62.29** | **82.37** | **82.55** |
| 1.3B | 52.6 | 53.0 | 40.8 | 46.2 |
| + GAP | **74.7** | **54.5** | **76.4** | **72.44** |
| 2.7B | 59.8 | 49.4 | 55.4 | 40.6 |
| + GAP | **82.2** | **51.3** | **86.7** | **71.5** |
| 6.7B | 55.7 | 43.4 | 56.3 | 48.8 |

Table 8: Test **MAUVE** of our best-performing GAP models and OPT baselines on each dialogue dataset.

| Model | BST | ED | WoW | WizInt |
|-------|-------|-------|-------|--------|
| 350M | 69.29 | 85.01 | 62.64 | 79.34 |
| + GAP | 83.22 | 91.79 | 82.96 | 91.09 |
| 1.3B | 82.62 | 84.43 | 81.07 | 83.23 |
| + GAP | 86.78 | 88.99 | 84.33 | 86.64 |
| 2.7B | 85.36 | 91.09 | 82.04 | 89.26 |
| + GAP | 93.99 | 96.22 | 89.73 | 92.38 |
| 6.7B | 86.95 | 92.29 | 81.28 | 92.67 |

Table 9: Test **diversity** of our best-performing GAP models and OPT baselines on each dialogue dataset.

| Model | BST | ED | WoW | WizInt |
|-------|-------|-------|-------|--------|
| 350M | 10.91 | 10.65 | 13.4 | 12.23 |
| + GAP | **13.23** | **13.26** | **15.86** | **15.35** |
| 1.3B | 10.69 | 11.18 | 11.95 | 11.72 |
| + GAP | **12.89** | **12.49** | **15.05** | **14.8** |
| 2.7B | 10.4 | 10.72 | 12.39 | 11.58 |
| + GAP | **13.09** | **13.98** | **15.83** | **15.21** |
| 6.7B | 11.25 | 10.89 | 13.36 | 12.22 |

Table 10: Test **generation length** of our best-performing GAP models and OPT baselines on each dialogue dataset.

![8_image_0.png](8_image_0.png)

![8_image_1.png](8_image_1.png)
| Model | Text |
|------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 350M + GAP | "metadata": ,\n "source": [\n "Canary rollouts are used to release new models safely to only a small subset of users such as 5%. They are useful if you want to test in live production without affecting the entire user base. Since the majority of traffic goes to the existing model, the cluster size of the canary model can be relatively small since it's only receiving 5% traffic."\n ]\n },\n {\n "cell_type": "markdown",\n "metadata": {},\n "source": [\n "Instead of 'deploy()', we can create an 'Endpoint Configuration' with multiple variants for canary rollouts and A/B testing."\n ]\n },\n {\n "cell_type": "code",\n "execution_count": null,\n |
| 1.3B + GAP | \n\tld d, 08\n\tld a, 10\n\tld b, 11\nlfill_vram:\n\tld(hl++), a\n\tadd a, b\n\tdec d\n\tjrnz lfill_vram\n\tld a, 90\n\tldff(45), a\n\tld a, 40\n\tldff(41), a\n\txor a, a\n\tldff(0f), a\n\tld a, 02\n\tldff(ff), a\n\tei\n\thalt\n\n.text@7000\nlprint4:\n\tld b, 90\n\tcall lwaitly_b\n\txor a, a\n\tldff(40), a\n\tld bc, 7a00\n\tld hl, 8000\n\tld d, 00\nlprint_copytiles:\n\tld a, (bc)\n\tinc bc |
| 2.7B + GAP | crafts of Jharkhand. The people of the state who belong to the different ethnic groups in the state are mainly engaged in this form of craft.\n\nThe Jharkhand bamboo crafts that the tribal people of Jharkhand are engaged in show a great deal of intricate and fascinating hand work, which is quite unique to the state of India. The major articles that are made out of bamboo in Jharkhand include baskets and accessories used for fishing and hunting.\n\nThe bamboo crafts in Jharkhand that the ethnic people of the state of Jharkhand make are mostly carved out of the bamboos available locally. The variety of bamboo produced by the bamboo grooves of the state is not very thick. However, these bamboos are suitable for the different kinds of bamboo crafts at Jharkhand, since they are not weak and yet can be twisted and turned to a great extent.\n\nMetal Works of Jharkhand\n\nMetal |

Table 11: Examples of the best-performing random data for each model size.
## Model **Generation**
User 1: I know absolutely nothing about archery, but would be interested in your views on it.
User 2: It is really very fun. It can be considered a sport or a skill where a bow is used to propel arrows.
User 1: Hmm. Do you go to classes? It sound like such fun - I've seen it in movies but not really thought much about it.
User 2: I do not. It used to be mainly for hunting and fighting, but not anymore.
User 1: Perhaps I should give it a go, not for hunting but for sport.My husband does bow hunt, if that's a similar sort of thing. He seems to be quite good at it.
User 2: It is an increasingly popular competitive sport and recreational activity.
User 1: Interesting. Do many countries participate? It would be fun to see the ways countries differentiate.
User 2:
1.3B There are many countries that participate.
+ GAP There are many countries that participate in the sport. Some countries have national competitions, while others have regional competitions.
User 1: My friend bought a big house and I'm happy for her but at the same time I'm a little bit jealous. User 2: Big houses are a pain in the neck to keep clean.
User 1: True, but there is so much space for her kid to run and play at and I live in a tiny apartment with my son.
User 2:
2.7B I'm sure she'll be happy with her new house.
+ GAP I know what you mean. My house is so small that I can't even fit my son's toys in the living room.
User 1: I am an accountant. What is your profession. User 2: pacioli established accounting in 1494 if I remember correctly ha. I work in healthcare. User 1: What is your role in healthcare. I have been an accountant for 5 years.
User 2: I have an administrative role at a non-profit hospital. User 1: That is interesting. What other things will you like to tell me about your profession.
User 2: I work in obtaining funding for the hospital. What type of accounting do you do User 1: I do general accounting. User 2: Lee had major impacts in the field of cost accounting. User 1: That is interesting to know. Who is lee. User 2:
2.7B Lee was a pioneer in cost accounting.
+ GAP Lee was a famous American accountant. He was the founder of the American Institute of Certified Public Accountants (AICPA).
Table 12: Examples of texts generated by baseline OPT models and our GAP applied models, given dialogue histories as prompts.
## Evaluating Quality Of Dialogue Response Generations
In this study, we compare various (dialogue) response generation models.
You should decide which response is better with the given dialogue context considering some criteria.
Especially, our focus lies on the coherence, fluency, and informativeness of the generated responses.
## Main Criteria
Coherence: Whether the generated text is semantically consistent with the prefix text.

Fluency: Whether the generated text is fluent and easy to understand.

Informativeness: Whether the generated text is diverse and contains interesting content.
## Other Notice
However, please do not consider the factual correctness of the generated response since it is out-of-scope!
Sometimes, you might find that the responses are cut off since there was a length limitation.
Please do not consider the cut-off part for your judgment. Please evaluate the below sample carefully according to the criteria of the corresponding question.
## Example
Dialogue Context (shown as a screenshot in the interface):
User 2: Wow! He is famous.
User 2: Yeah, I saw that he ranked number 1 in the mlb.
User 2: I bet his baseball card is worth a lot now.
User 1: They have gone up quite a bit!
User 2: Wasn't he an outfielder when he was 27?
User 1: Yes, and I used to strike him out.

Generated Responses: two candidate responses, A and B (shown as screenshots in the interface).

1. (Coherence) Which response is more appropriate/relevant to given dialogue context? (A / B / Tie)

2. (Fluency) Which response is more fluent and easy to understand? (A / B / Tie)

3. (Informativeness) Which response is more diverse and contains interesting content? (A / B / Tie)
Figure 6: An example of the Mturk interface used for the human evaluation of the dialogue response generation quality.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 6
✓ A2. Did you discuss any potential risks of your work?
Section 6
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did you use or create scientific artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did you run computational experiments?**

Section 4 and 6
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 4 and 6
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4 and 6
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4 and 6
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4 and 6

D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Section 4 and 6
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Section 6
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Section 4 and 6
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Section 4 and 6

D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
We weren't able to obtain the information because Amazon Mechanical Turk does not provide the information. |
burchell-etal-2023-open | An Open Dataset and Model for Language Identification | https://aclanthology.org/2023.acl-short.75 | Language identification (LID) is a fundamental step in many natural language processing pipelines. However, current LID systems are far from perfect, particularly on lower-resource languages. We present a LID model which achieves a macro-average F1 score of 0.93 and a false positive rate of 0.033% across 201 languages, outperforming previous work. We achieve this by training on a curated dataset of monolingual data, which we audit manually to ensure reliability. We make both the model and the dataset available to the research community. Finally, we carry out detailed analysis into our model's performance, both in comparison to existing open models and by language class. | # An Open Dataset And Model For Language Identification
**Laurie Burchell** and **Alexandra Birch** and **Nikolay Bogoychev** and **Kenneth Heafield**
Institute for Language, Cognition, and Computation School of Informatics, University of Edinburgh 10 Crichton Street, Edinburgh, EH8 9AB, UK
{laurie.burchell,a.birch,n.bogoych,kenneth.heafield}@ed.ac.uk
## Abstract
Language identification (LID) is a fundamental step in many natural language processing pipelines. However, current LID systems are far from perfect, particularly on lower-resource languages. We present a LID model which achieves a macro-average F1 score of 0.93 and a false positive rate of 0.033% across 201 languages, outperforming previous work. We achieve this by training on a curated dataset of monolingual data, the reliability of which we ensure by auditing a sample from each source and each language manually. We make both the model and the dataset available to the research community. Finally, we carry out detailed analysis into our model's performance, both in comparison to existing open models and by language class.
## 1 Introduction
Language identification (LID) is a foundational step in many natural language processing (NLP)
pipelines. It is used not only to select data in the relevant language but also to exclude 'noise'. For this reason, effective LID systems are key for building useful and representative NLP applications.
Despite their importance, recent work has found that existing LID algorithms perform poorly in practice compared to test performance (Caswell et al., 2020). The problem is particularly acute for low-resource languages: Kreutzer et al.
(2022) found a positive Spearman rank correlation between quality of data and size of language for all of the LID-filtered multilingual datasets they studied. In addition, for a significant fraction of the language corpora they studied, less than half of the sentences were in the correct language. They point out that such low-quality data not only leads to poor performance in downstream tasks, but that it also contributes to 'representation washing', where the community is given a false view of the actual progress of low-resource NLP.
For applications such as corpus filtering, LID
systems need to be fast, reliable, and cover as many languages as possible. There are several open LID models offering quick classification and high language coverage, such as CLD3 or the work of Costa-jussà et al. (2022). However, to the best of our knowledge, none of the commonly-used scalable LID systems make their training data public.
This paper addresses this gap through the following contributions:
- We provide a curated and open dataset covering 201 languages. We audit a sample from each source and each language making up this dataset manually to ensure quality.
- We train a LID model on this dataset which outperforms previous open models. We make this model publicly available.1
- We analyse our model and use our findings to highlight open problems in LID research.
## 2 Background
There is a long history of research into LID using a plethora of methods (Jauhiainen et al., 2019). For high-coverage LID, Dunn (2020) presents a model covering 464 languages, whilst Brown (2014) includes as many as 1366 language varieties. Unlike our work, the training data in both cases has not been manually checked for quality. Recent work by Adebara et al. (2022) presents a LID system covering 517 African languages and varieties where the training data has been curated manually. However, as far as we are aware this data is not easily available.
Costa-jussà et al. (2022) released a substantial piece of research aiming to improve machine translation coverage for over 200 languages. As part of this, they provided several professionally-translated datasets for use as test and development sets. For this reason, we use their system as our benchmark.
1github.com/laurieburchell/open-lid-dataset
However, whilst they did release scripts to recreate their parallel data,2 they did not provide—or even document—the monolingual data used to train their LID system, saying only that they use "publicly available datasets" supplemented with their own dataset NLLB-Seed. By providing an open dataset, we aim to facilitate further research.
## 3 Dataset

## 3.1 Data Sources
We wanted to be as confident as possible that our dataset had reliable language labels, so as to avoid the problems noted in existing corpora (Kreutzer et al., 2022). We therefore avoided web-crawled datasets and instead chose sources where we felt the collection methodology made it very likely that the language labels were correct.
The majority of our source datasets were derived from news sites, Wikipedia, or religious text, though some come from other domains (e.g. transcribed conversations, literature, or social media).
A drawback of this approach is that most of the text is in a formal style. Further work could collect data from a wider range of domains whilst maintaining trust in the labels. We checked that each dataset was either under an open license for research purposes or described as free to use. A full list of sources is given in Appendix A, and further information including licenses is available in the code repository accompanying this paper.
## 3.1.1 Language Selection
Our initial aim was to cover the same languages present in the FLORES-200 Evaluation Benchmark3 so that we could use this dataset for evaluation and compare our results directly with Costa-jussà et al. (2022). However, during the curation process, we decided to exclude three languages.
Firstly, though Akan and Twi are both included as separate languages in FLORES-200, Akan is actually a macrolanguage covering a language continuum which includes Twi. Given the other languages in FLORES-200 are individual languages, we decided to exclude Akan.
Secondly, FLORES-200 includes Modern Standard Arabic (MSA) written in Latin script. It is true that Arabic dialects are often written in Latin characters in informal situations (e.g. social media).
2github.com/facebookresearch/fairseq/tree/nllb
3github.com/facebookresearch/flores/blob/main/flores200
However, MSA is a form of standardised Arabic which is not usually used in informal situations.
Since we could not find any naturally-occurring training data, we excluded MSA from the dataset.
Finally, we excluded Minangkabau in Arabic script because it is now rarely written this way, making it difficult to find useful training data.4
## 3.2 Manual Audit Process
The first step in our manual audit was to check and standardise language labels, as these are often inconsistent or idiosyncratic (Kreutzer et al.,
2022). We chose to copy the language codes in Costa-jussà et al. (2022), and reassign macrolanguage or ambiguous language codes in the data sources we found to the dominant individual language. Whilst this resulted in more useful data for some languages, for other languages we had to be more conservative. For example, we originally reassigned text labelled as the macrolanguage Malay
(*msa_Latn*) to Standard Malay, but this led to a large drop in performance as the former covers a very diverse set of languages.
Two of the authors then carried out a manual audit of a random sample of all data sources and languages:5 one a native Bulgarian speaker (able to read Cyrillic and Latin scripts and Chinese characters), and the other a native English speaker (able to read Latin, Arabic and Hebrew scripts). For languages we knew, we checked the language was what we expected. For unfamiliar languages in a script we could read, we compared the sample to the Universal Declaration of Human Rights
(UDHR) or failing that, to a sample of text on Wikipedia. We compared features of the text which are common in previous LID algorithms and could be identified easily by humans: similar diacritics, word lengths, common words, loan words matching the right cultural background, similar suffixes and prefixes, and vowel/consonant patterns (Jauhiainen et al., 2019, Section 5). For scripts we could not read, we checked that all lines of the sample matched the script in the UDHR.
## 3.3 Preprocessing
We preprocessed the data using the scripts provided with Moses (Koehn et al., 2007)
to remove non-printing characters and detokenise the data where necessary. We then filtered the data so that each line contained at least one character in the expected script (as defined by Perl) to allow for borrowings. Finally, we followed Arivazhagan et al.
(2019) and Costa-jussà et al. (2022) and sampled proportionally to $p_l^{0.3}$, where $p_l$ is the fraction of lines in the dataset which are in language $l$. This aims to ameliorate class skew issues.
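As an illustration of this sampling step, the sketch below computes per-language sampling weights from raw line counts. The three counts shown are taken from Appendix C, while the target corpus size and the printing logic are hypothetical simplifications rather than the actual pipeline.

```python
# Minimal sketch of temperature sampling with weights proportional to p_l ** 0.3;
# counts for the three example languages come from Appendix C, everything else
# here is illustrative.
lines_per_lang = {
    "eng_Latn": 7_544_560,   # largest class
    "ceb_Latn": 1_002_342,
    "azb_Arab": 532,         # smallest class
}

total = sum(lines_per_lang.values())
weights = {lang: (n / total) ** 0.3 for lang, n in lines_per_lang.items()}
weight_sum = sum(weights.values())

budget = 1_000_000  # hypothetical number of lines in the sampled corpus
for lang, w in weights.items():
    n_keep = min(lines_per_lang[lang], round(budget * w / weight_sum))
    print(f"{lang}: sample {n_keep} of {lines_per_lang[lang]} lines")
```

The exponent of 0.3 flattens the distribution: the most frequent languages are down-weighted relative to their raw share, while the rarest languages keep proportionally more of their data.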
## 3.4 Dataset Description
The final dataset contains 121 million lines of data in 201 language classes. Before sampling, the mean number of lines per language is 602,812. The smallest class contains 532 lines of data (South Azerbaijani) and the largest contains 7.5 million lines of data (English). There is a full breakdown of lines of training data by language in Appendix C.
## 4 Model And Hardware
We used our open dataset to train a *fasttext* LID
model using the command-line tool (Joulin et al.,
2017). It embeds character-level n-grams from the input text, and then uses these as input to a multiclass linear classifier. We used the same hyperparameters as Costa-jussà et al. (2022) (NLLB), which we list in Appendix B. We trained our model on one Ice Lake node of the CSD3 HPC service. Each node has 76 CPUs and 256GiB of RAM. Our model takes c. 1hr 45mins to train and contains 60.5 million parameters. Inference over the 206,448 lines of the test set takes 22.4 secs (9216.4 lines/sec).
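For readers who want to reproduce a comparable setup, the sketch below shows an equivalent call through the *fasttext* Python bindings using the hyperparameters listed in Appendix B. The file paths are placeholders, and the model described above was trained with the command-line tool, so treat this only as an approximation.

```python
import fasttext

# fasttext's supervised format: one example per line, "__label__<code> <text>",
# e.g. "__label__fra_Latn Ceci est une phrase en français."
model = fasttext.train_supervised(
    input="openlid-train.txt",  # placeholder path to the training file
    loss="softmax",
    epoch=2,
    lr=0.8,
    dim=256,
    minCount=1000,
    minn=2,                     # character n-grams of length 2-5
    maxn=5,
    wordNgrams=1,
    bucket=1_000_000,
    thread=68,
)
model.save_model("openlid.bin")

# Single-label prediction for one line of text.
labels, probs = model.predict("Ceci est une phrase en français.", k=1)
print(labels[0], float(probs[0]))
```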
## 5 Evaluation

## 5.1 Test Sets
We use the FLORES-200 benchmark provided by Costa-jussà et al. (2022) for evaluation. It consists of 842 distinct web articles sourced from English-language Wikimedia projects, with each sentence professionally translated into 204 languages. The target side is human-verified as being in the right language, making it suitable for use as a LID evaluation set. For each language, 997 sentences are available for development and 1012 for dev-test
(our test set).6 We remove the three languages discussed in Section 3.1.1 from FLORES-200, leaving 201 languages in the test set: FLORES-200∗.
## 5.2 Other LID Systems
We compare our model's performance to two other open-source LID systems: nllb218e (NLLB)7and pycld3 0.22 (CLD3).8 We discuss how we ensured a fair comparison below.
NLLB is a *fasttext* model. We were surprised to discover that whilst it does cover 218 languages, it only includes 193 of the 201 languages in FLORES-200∗. This is despite the fact that the NLLB LID model and the original FLORES-200 evaluation set were created as part of the same work (Costa-jussà et al., 2022). Referring to the analysis in the original paper, the authors note that "Arabic languoids and Akan/Twi have been merged after linguistic analysis" (Costa-jussà et al., 2022, Table 5, p. 32). We discuss the reason to merge Akan and Twi in Section 3.1.1, but we judge Arabic dialects to be close but distinct languages. Our model performs poorly on Arabic dialects, with the highest F1 score being only 0.4894 (Moroccan Arabic). This is likely due to the general difficulty of distinguishing close languages combined with particularly sparse training data. We assume these poor results led to Arabic dialects (save MSA) being excluded from the NLLB LID classifier. We remove eight Arabic dialects from the test set when comparing our model and NLLB, leaving 193 languages.
CLD3 is an n-gram based neural network model for LID. It uses different language codes to the other two models, so we normalise all predictions to BCP-47 macrolanguage codes to allow fair comparison. We test on the 95 languages that all models have in common after normalisation.
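The sketch below illustrates the kind of normalisation this involves; the mapping shown is a small, hand-picked illustrative subset of ISO 639 macrolanguage relationships, not the table actually used for the comparison.

```python
# Illustrative subset of an individual-language -> macrolanguage mapping;
# the real comparison uses a fuller table.
TO_MACRO = {
    "swh": "sw",   # Swahili (individual) -> Swahili (macrolanguage)
    "zsm": "ms",   # Standard Malay       -> Malay
    "quy": "qu",   # Ayacucho Quechua     -> Quechua
    "npi": "ne",   # Nepali (individual)  -> Nepali (macrolanguage)
}

def normalise(label: str) -> str:
    """Map a prediction like '__label__swh_Latn' to a macrolanguage code."""
    code = label.replace("__label__", "").split("_")[0]
    return TO_MACRO.get(code, code)

assert normalise("__label__swh_Latn") == "sw"
assert normalise("__label__fra_Latn") == "fra"
```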
## 6 Results
Our results are given in Table 1. We evaluate all models using F1 scores and false positive rate
(FPR). We report macro-averages to avoid downweighting low-resource languages (Kreutzer et al.,
2022). Following Caswell et al. (2020), we report FPR to give a better indication of real-world performance when there is significant class skew.
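As a concrete reference for how these two numbers can be computed, the sketch below uses scikit-learn; the label lists are tiny illustrative stand-ins for the real gold and predicted labels.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score

# Gold and predicted language codes, one per test sentence (illustrative values).
y_true = ["eng_Latn", "fra_Latn", "yue_Hant", "zho_Hant"]
y_pred = ["eng_Latn", "fra_Latn", "zho_Hant", "yue_Hant"]

labels = sorted(set(y_true) | set(y_pred))
macro_f1 = f1_score(y_true, y_pred, labels=labels, average="macro")

# Per-class false positive rate FP / (FP + TN), macro-averaged over classes.
cm = confusion_matrix(y_true, y_pred, labels=labels)
fp = cm.sum(axis=0) - np.diag(cm)
tn = cm.sum() - (cm.sum(axis=0) + cm.sum(axis=1) - np.diag(cm))
macro_fpr = np.mean(fp / (fp + tn))

print(f"macro F1 = {macro_f1:.3f}, macro FPR = {macro_fpr:.3%}")
```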
We achieve an F1 score of 0.927 and a FPR of 0.033% on FLORES-200∗. We also outperform both NLLB and CLD3 on the mutual subsets of FLORES-200∗. Since NLLB and our model share the same architecture and the same parameters, we attribute our success to our training data selection and manual audit process.
7tinyurl.com/nllblid218e 8pypi.org/project/pycld3
| System | Supported languages | FLORES-200∗ (201 langs.) F1 ↑ | FLORES-200∗ (201 langs.) FPR ↓ | 193-lang. subset F1 ↑ | 193-lang. subset FPR ↓ | 95-lang. subset F1 ↑ | 95-lang. subset FPR ↓ |
|-----------|---------------------|-------|-------|-------|-------|-------|-------|
| CLD3      | 107                 | -     | -     | -     | -     | 0.968 | 0.030 |
| NLLB      | 218                 | -     | -     | 0.950 | 0.023 | 0.985 | 0.019 |
| Our model | 201                 | 0.927 | 0.033 | 0.959 | 0.020 | 0.989 | 0.011 |
Notably, our F1 score jumps to 0.959 and FPR
falls to 0.020% when we exclude the eight Arabic dialects from the test set to compare with NLLB.
The 95 languages covered by CLD3, NLLB, and our model are mostly high resource, and so it is unsurprising that we achieve the highest F1 score
(0.989) and lowest FPR (0.011%) on this subset.
We notice that the Pearson correlation between the number of lines of training data and F1 score for each language is only 0.0242. This is not unexpected: some of the least resourced languages achieve perfect scores on the test set due to high domain overlap, whereas the higher-resourced languages might get lower scores on the test set but have better robustness across domains. Full results by language are available in Appendix C.
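This figure is straightforward to recompute from the per-language statistics in Appendix C; the sketch below shows the calculation on a small subset of that table (four languages), purely for illustration.

```python
from scipy.stats import pearsonr

# Training-data size (lines) and F1 score per language; these four pairs are a
# small subset of the full table in Appendix C.
lines = [7_544_560, 1_002_342, 63_254, 532]      # eng, ceb, yue, azb
f1_scores = [0.9941, 0.9995, 0.0059, 0.7514]

r, p_value = pearsonr(lines, f1_scores)
print(f"Pearson r = {r:.4f} (p = {p_value:.3f})")
```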
## 6.1 Performance By Language Category
Using the taxonomy and list of languages in Joshi et al. (2020), we label each of the languages in our dataset according to its level of data availability (0
= least resourced, 5 = best resourced). We leave out 5 languages missing from the taxonomy, plus the 8 Arabic dialects not covered by NLLB. Table 2 compares the mean F1 score and FPR of our model with those of Costa-jussà et al. (2022) (NLLB). Our model has a higher or equal F1 score in every category and a lower or equal FPR in every category but one, showing our model's improved performance across languages with different amounts of available data.
We note that class zero (the least-resourced languages) shows the smallest change in performance.
We speculate that this is an artifact of the curation of our training dataset. For the best-resourced languages with more sources to choose from, it is likely that there is a significant difference between our training data and that used to train the model in Costa-jussà et al. (2022). However, for the least-resourced languages, the sheer lack of resources means that overlap between our data and that used by Costa-jussà et al. (2022) is more likely. We suspect this is the reason we see little difference in performance for class zero in Table 2. Unfortunately, without access to the training data used to train NLLB, we cannot verify this assumption.
| Class | Count | F1 ↑ (Ours) | F1 ↑ (NLLB) | FPR ↓ (Ours) | FPR ↓ (NLLB) |
|-------|-------|-------------|-------------|--------------|--------------|
| 0 | 28 | 0.900 | 0.897 | 0.014 | 0.013 |
| 1 | 94 | 0.981 | 0.968 | 0.013 | 0.013 |
| 2 | 16 | 0.990 | 0.963 | 0.009 | 0.043 |
| 3 | 25 | 0.983 | 0.974 | 0.007 | 0.013 |
| 4 | 18 | 0.951 | 0.951 | 0.051 | 0.055 |
| 5 | 7 | 0.897 | 0.855 | 0.163 | 0.620 |
## 6.2 Case Study: Chinese Languages
Despite our model outperforming NLLB overall, NLLB achieved a noticeably higher F1 score on Yue Chinese (0.488 vs. 0.006). Figure 1 shows the confusion matrices for our model and NLLB between the three Chinese languages. Our model performs well on Simplified and Traditional Chinese, but almost never predicts Yue Chinese, instead classifying it as Chinese (Traditional). The NLLB model is also unable to distinguish between Yue and Chinese (Traditional), but mixes the two classes instead.
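A confusion matrix restricted to the three Chinese classes can be produced along the following lines; the gold and predicted labels here are tiny illustrative lists, not the FLORES-200∗ predictions themselves.

```python
from sklearn.metrics import confusion_matrix

CHINESE = ["zho_Hans", "zho_Hant", "yue_Hant"]

# Gold and predicted labels over the whole test set (illustrative values);
# we keep only sentences whose gold label is one of the three Chinese classes.
y_true = ["zho_Hans", "zho_Hant", "yue_Hant", "yue_Hant", "eng_Latn"]
y_pred = ["zho_Hans", "zho_Hant", "zho_Hant", "zho_Hant", "eng_Latn"]

pairs = [(t, p) for t, p in zip(y_true, y_pred) if t in CHINESE]
cm = confusion_matrix(
    [t for t, _ in pairs],
    [p for _, p in pairs],
    labels=CHINESE,
)
print(cm)  # rows: gold class, columns: predicted class
```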
We asked four native speakers to inspect our training data and the FLORES-200 test set. They noted that there was a mismatch in domain for Yue Chinese, as much of our training data was written colloquial Yue Chinese whereas the test set consisted of formal writing. Furthermore, they were unable to distinguish with high confidence between Yue and Chinese (Traditional) as the two languages are very similar when written formally.
![Figure 1: Confusion matrices between the three Chinese language classes (Chinese Simplified, Chinese Traditional, Yue Chinese) for our model and NLLB.](4_image_0.png)

This is an example of a wider problem with LID: the language covered by a particular label may vary widely, making single-label classification difficult.
## 7 Conclusion
We present an open dataset covering 201 languages, which we curate and audit manually to ensure high confidence in its data and language labels. We demonstrate the quality of our dataset by using it to train a high-performing and scalable LID model.
Finally, we provide detailed analysis into its performance by class. We make both our model and our dataset available to the research community.
## Limitations
Our dataset and model only cover 201 languages:
the ones we were able to test with the FLORES-200 Evaluation Benchmark. In addition, because our test set consists of sentences from a single domain
(wiki articles), performance on this test set may not reflect how well our classifier works in other domains. Future work could create a LID test set representative of web data where these classifiers are often applied. Finally, most of the data was not audited by native speakers as would be ideal.
Future versions of this dataset should have more languages verified by native speakers, with a focus on the least resourced languages.
## Ethics Statement
Our work aims to broaden NLP coverage by allowing practitioners to identify relevant data in more languages. However, we note that LID is inherently a normative activity that risks excluding minority dialects, scripts, or entire microlanguages from a macrolanguage. Choosing which languages to cover may reinforce power imbalances, as only some groups gain access to NLP technologies.
In addition, errors in LID can have a significant impact on downstream performance, particularly
(as is often the case) when a system is used as a
'black box'. The performance of our classifier is not equal across languages which could lead to worse downstream performance for particular groups. We mitigate this by providing metrics by class.
## Acknowledgements
This work was supported in part by the UKRI
Centre for Doctoral Training in Natural Language Processing, funded by the UKRI (grant EP/S022481/1) and the University of Edinburgh, School of Informatics and School of Philosophy, Psychology & Language Sciences.
The experiments in this paper were performed using resources provided by the Cambridge Service for Data Driven Discovery (CSD3) operated by the University of Cambridge Research Computing Service (www.csd3.cam.ac.uk), provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital grant EP/P020259/1), and DiRAC
funding from the Science and Technology Facilities Council (www.dirac.ac.uk).
Special thanks to Pinzhen Chen, Steven Chien, Bryan Li, Lushi Chen and Victoria Lee for their help with Chinese languages.
## References
Kathrein Abu Kwaik, Motaz Saad, Stergios Chatzikyriakidis, and Simon Dobnik. 2018.
Shami: A corpus of Levantine Arabic dialects.
In *Proceedings of the Eleventh International* Conference on Language Resources and Evaluation
(LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
Ife Adebara, AbdelRahim Elmadany, Muhammad Abdul-Mageed, and Alcides Alcoba Inciarte. 2022.
Afrolid: A neural language identification tool for african languages. *arXiv preprint arXiv:2210.11744*.
Željko Agić and Ivan Vulić. 2019. JW300: A wide-coverage parallel corpus for low-resource languages.
In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 3204–
3210, Florence, Italy. Association for Computational Linguistics.
Israa Alsarsour, Esraa Mohamed, Reem Suwaileh, and Tamer Elsayed. 2018. DART: A large dataset of
dialectal Arabic tweets. In *Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)*, Miyazaki, Japan. European Language Resources Association
(ELRA).
Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, et al. 2019. Massively multilingual neural machine translation in the wild: Findings and challenges. *arXiv preprint arXiv:1907.05019*.
Loïc Barrault, Magdalena Biesialska, Ondřej Bojar, Marta R. Costa-jussà, Christian Federmann, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Matthias Huck, Eric Joanis, Tom Kocmi, Philipp Koehn, Chi-kiu Lo, Nikola Ljubešić, Christof Monz, Makoto
Morishita, Masaaki Nagata, Toshiaki Nakazawa, Santanu Pal, Matt Post, and Marcos Zampieri. 2020.
Findings of the 2020 conference on machine translation (WMT20). In *Proceedings of the Fifth Conference on Machine Translation*, pages 1–55, Online.
Association for Computational Linguistics.
Loïc Barrault, Ondˇrej Bojar, Marta R. Costa-jussà, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias Müller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019.
Findings of the 2019 conference on machine translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1–61, Florence, Italy. Association for Computational Linguistics.
Ondˇrej Bojar, Christian Buck, Chris Callison-Burch, Christian Federmann, Barry Haddow, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2013. Findings of the 2013 Workshop on Statistical Machine Translation. In Proceedings of the Eighth Workshop on Statistical Machine Translation, pages 1–44, Sofia, Bulgaria. Association for Computational Linguistics.
Ondˇrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Aleš Tamchyna. 2014. Findings of the 2014 workshop on statistical machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 12–58, Baltimore, Maryland, USA. Association for Computational Linguistics.
Ondˇrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Raphael Rubino, Lucia Specia, and Marco Turchi.
2017. Findings of the 2017 conference on machine translation (WMT17). In *Proceedings of the Second* Conference on Machine Translation, pages 169–214, Copenhagen, Denmark. Association for Computational Linguistics.
Ondˇrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aurélie Névéol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Specia, Marco Turchi, Karin Verspoor, and Marcos Zampieri.
2016. Findings of the 2016 conference on machine translation. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 131–198, Berlin, Germany. Association for Computational Linguistics.
Ondˇrej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi. 2015. Findings of the 2015 workshop on statistical machine translation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 1–46, Lisbon, Portugal. Association for Computational Linguistics.
Ondˇrej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 conference on machine translation (WMT18).
In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 272–303, Belgium, Brussels. Association for Computational Linguistics.
Houda Bouamor, Sabit Hassan, and Nizar Habash. 2019.
The MADAR shared task on Arabic fine-grained dialect identification. In *Proceedings of the Fourth Arabic Natural Language Processing Workshop*, pages 199–207, Florence, Italy. Association for Computational Linguistics.
Ralf Brown. 2014. Non-linear mapping for improved identification of 1300+ languages. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 627–
632, Doha, Qatar. Association for Computational Linguistics.
Ralf D Brown. 2012. Finding and identifying text in 900+ languages. *Digital Investigation*, 9:S34–S43.
Isaac Caswell, Theresa Breiner, Daan van Esch, and Ankur Bapna. 2020. Language ID in the wild: Unexpected challenges on the path to a thousand-language web text corpus. In *Proceedings of the 28th International Conference on Computational Linguistics*,
pages 6588–6608, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Marta R Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, et al. 2022. No Language Left Behind: Scaling Human-Centered Machine Translation. *arXiv preprint arXiv:2207.04672*.
Jonathan Dunn. 2020. Mapping languages: The corpus of global language use. Language Resources and Evaluation, 54(4):999–1018.
Mahmoud El-Haj, Paul Rayson, and Mariam Aboelezz.
2018. Arabic dialect identification in the context of bivalency and code-switching. In *Proceedings* of the 11th International Conference on Language Resources and Evaluation, Miyazaki, Japan., pages 3622–3627. European Language Resources Association.
Miquel Esplà, Mikel Forcada, Gema Ramírez-Sánchez, and Hieu Hoang. 2019. ParaCrawl: Web-scale parallel corpora for the languages of the EU. In Proceedings of Machine Translation Summit XVII: Translator, Project and User Tracks, pages 118–119, Dublin, Ireland. European Association for Machine Translation.
Dirk Goldhahn, Thomas Eckart, and Uwe Quasthoff.
2012. Building large monolingual dictionaries at the Leipzig corpora collection: From 100 to 200 languages. In *Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)*, pages 759–765, Istanbul, Turkey.
European Language Resources Association (ELRA).
Santiago Góngora, Nicolás Giossa, and Luis Chiruzzo.
2022. Can we use word embeddings for enhancing Guarani-Spanish machine translation? In *Proceedings of the Fifth Workshop on the Use of Computational Methods in the Study of Endangered Languages*, pages 127–132, Dublin, Ireland. Association for Computational Linguistics.
Thamme Gowda, Zhao Zhang, Chris Mattmann, and Jonathan May. 2021. Many-to-English machine translation tools, data, and pretrained models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, pages 306–316, Online. Association for Computational Linguistics.
Tahmid Hasan, Abhik Bhattacharjee, Md. Saiful Islam, Kazi Mubasshir, Yuan-Fang Li, Yong-Bin Kang, M. Sohel Rahman, and Rifat Shahriyar. 2021. XLsum: Large-scale multilingual abstractive summarization for 44 languages. In *Findings of the Association* for Computational Linguistics: ACL-IJCNLP 2021, pages 4693–4703, Online. Association for Computational Linguistics.
Rudali Huidrom, Yves Lepage, and Khogendra Khomdram. 2021. EM corpus: a comparable corpus for a less-resourced language pair Manipuri-English. In Proceedings of the 14th Workshop on Building and Using Comparable Corpora (BUCC 2021), pages 60–67, Online (Virtual Mode). INCOMA Ltd.
Tommi Jauhiainen, Marco Lui, Marcos Zampieri, Timothy Baldwin, and Krister Lindén. 2019. Automatic language identification in texts: A survey.
Journal of Artificial Intelligence Research, 65:675–
782.
Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP
world. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 6282–6293, Online. Association for Computational Linguistics.
Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In *Proceedings of the 15th Conference of the European Chapter of the Association* for Computational Linguistics: Volume 2, Short Papers, pages 427–431, Valencia, Spain. Association for Computational Linguistics.
Omid Kashefi. 2018. Mizan: A large persian-english parallel corpus. *arXiv preprint arXiv:1801.02107*.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180, Prague, Czech Republic. Association for Computational Linguistics.
Julia Kreutzer, Isaac Caswell, Lisa Wang, Ahsan Wahab, Daan van Esch, Nasanbayar Ulzii-Orshikh, Allahsera Tapo, Nishant Subramani, Artem Sokolov, Claytone Sikasote, Monang Setyawan, Supheakmungkol Sarin, Sokhar Samb, Benoît Sagot, Clara Rivera, Annette Rios, Isabel Papadimitriou, Salomey Osei, Pedro Ortiz Suarez, Iroro Orife, Kelechi Ogueji, Andre Niyongabo Rubungo, Toan Q. Nguyen, Mathias Müller, André Müller, Shamsuddeen Hassan Muhammad, Nanda Muhammad, Ayanda Mnyakeni, Jamshidbek Mirzakhalov, Tapiwanashe Matangira, Colin Leong, Nze Lawson, Sneha Kudugunta, Yacine Jernite, Mathias Jenny, Orhan Firat, Bonaventure F. P. Dossou, Sakhile Dlamini, Nisansa de Silva, Sakine Çabuk Ballı, Stella Biderman, Alessia Battisti, Ahmed Baruwa, Ankur Bapna, Pallavi Baljekar, Israel Abebe Azime, Ayodele Awokoya, Duygu Ataman, Orevaoghene Ahia, Oghenefego Ahia, Sweta Agrawal, and Mofetoluwa Adeyemi. 2022. Quality at a glance: An audit of web-crawled multilingual datasets. *Transactions of the Association for Computational Linguistics*, 10:50–72.
Anoop Kunchukuttan, Pratik Mehta, and Pushpak Bhattacharyya. 2018. The IIT Bombay English-Hindi parallel corpus. In *Proceedings of the Eleventh International Conference on Language Resources and* Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
Kang Kwong Luke and May LY Wong. 2015. The hong kong cantonese corpus: design and uses. *Journal of* Chinese Linguistics Monograph Series, 1(25):312–
333.
Salima Medhaffar, Fethi Bougares, Yannick Estève, and Lamia Hadrich-Belguith. 2017. Sentiment analysis of Tunisian dialects: Linguistic ressources and experiments. In Proceedings of the Third Arabic Natural Language Processing Workshop, pages 55–61, Valencia, Spain. Association for Computational Linguistics.
Karima Meftouh, Salima Harrat, Salma Jamoussi, Mourad Abbas, and Kamel Smaili. 2015. Machine translation experiments on PADIC: A parallel Arabic DIalect corpus. In *Proceedings of the 29th Pacific* Asia Conference on Language, Information and Computation, pages 26–34, Shanghai, China.
Jamshidbek Mirzakhalov, Anoop Babu, Duygu Ataman, Sherzod Kariev, Francis Tyers, Otabek Abduraufov, Mammad Hajili, Sardana Ivanova, Abror Khaytbaev, Antonio Laverghetta Jr., Bekhzodbek Moydinboyev, Esra Onal, Shaxnoza Pulatova, Ahsan Wahab, Orhan Firat, and Sriram Chellappan. 2021. A large-scale study of machine translation in Turkic languages.
In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 5876–5890, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Atul Kr Ojha. 2019. English-bhojpuri smt system:
Insights from the karaka model. *arXiv preprint* arXiv:1905.02239.
Mohammad Taher Pilevar, Heshaam Faili, and Abdol Hamid Pilevar. 2011. TEP: Tehran English-Persian parallel corpus. In *International Conference on Intelligent Text Processing and Computational Linguistics*, pages 68–79. Springer.
Matt Post, Chris Callison-Burch, and Miles Osborne.
2012. Constructing parallel corpora for six Indian languages via crowdsourcing. In *Proceedings of the* Seventh Workshop on Statistical Machine Translation, pages 401–409, Montréal, Canada. Association for Computational Linguistics.
Ye Qi, Devendra Sachan, Matthieu Felix, Sarguna Padmanabhan, and Graham Neubig. 2018. When and why are pre-trained word embeddings useful for neural machine translation? In *Proceedings of the* 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 529–535, New Orleans, Louisiana. Association for Computational Linguistics.
Roberts Rozis and Raivis Skadiņš. 2017. Tilde MODEL
- multilingual open data for EU languages. In *Proceedings of the 21st Nordic Conference on Computational Linguistics*, pages 263–265, Gothenburg, Sweden. Association for Computational Linguistics.
Martin Thoma. 2018. The wili benchmark dataset for written language identification. *arXiv preprint* arXiv:1801.07779.
Jörg Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In *Proceedings of the Eighth International Conference on Language Resources and* Evaluation (LREC'12), pages 2214–2218, Istanbul, Turkey. European Language Resources Association
(ELRA).
Jihad Zahir. 2022. Iadd: An integrated arabic dialect identification dataset. *Data in Brief*, 40:107777.
Omar F. Zaidan and Chris Callison-Burch. 2011. The Arabic online commentary dataset: an annotated dataset of informal Arabic with high dialectal content. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 37–41, Portland, Oregon, USA. Association for Computational Linguistics.
Biao Zhang, Philip Williams, Ivan Titov, and Rico Sennrich. 2020. Improving massively multilingual neural machine translation and zero-shot translation.
In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 1628–
1639, Online. Association for Computational Linguistics.
Michał Ziemski, Marcin Junczys-Dowmunt, and Bruno Pouliquen. 2016. The United Nations parallel corpus v1.0. In Proceedings of the Tenth International Conference on Language Resources and Evaluation
(LREC'16), pages 3530–3534, Portorož, Slovenia.
European Language Resources Association (ELRA).
## A Data Sources
We use the following data sources to build our open dataset. We chose sources which were likely to have trustworthy language labels and which did not rely on other LID systems for labelling.
- Arabic Dialects Dataset (El-Haj et al., 2018)
- Bhojpuri Language Technological Resources Project (BLTR) (Ojha, 2019)
- Global Voices (Tiedemann, 2012)
- Guaraní Parallel Set (Góngora et al., 2022)
- The Hong Kong Cantonese corpus (HKCanCor) (Luke and Wong, 2015)
- Integrated dataset for Arabic Dialect Identification (IADD) (Zahir, 2022; Alsarsour et al.,
2018; Abu Kwaik et al., 2018; Medhaffar et al., 2017; Meftouh et al., 2015; Zaidan and Callison-Burch, 2011)
- Leipzig Corpora Collection (Goldhahn et al.,
2012)
- LTI LangID Corpus (Brown, 2012)
- MADAR 2019 Shared Task on Arabic Fine-grained Dialect Identification (Bouamor et al.,
2019)
- EM corpus (Huidrom et al., 2021)
- MIZAN (Kashefi, 2018)
- MT-560 (Gowda et al., 2021; Tiedemann, 2012; Post et al., 2012; Ziemski et al., 2016; Rozis and Skadiņš, 2017; Kunchukuttan et al.,
2018; Agić and Vulić, 2019; Esplà et al., 2019; Qi et al., 2018; Zhang et al., 2020; Bojar et al.,
2013, 2014, 2015, 2016, 2017, 2018; Barrault et al., 2019, 2020)
- NLLB Seed (Costa-jussà et al., 2022)
- SETIMES news corpus (Tiedemann, 2012)
- Tatoeba collection (Tiedemann, 2012)
- Tehran English-Persian Parallel (TEP) Corpus
(Pilevar et al., 2011)
- Turkish Interlingua (TIL) corpus (Mirzakhalov et al., 2021)
- WiLI benchmark dataset (Thoma, 2018)
- XL-Sum summarisation dataset (Hasan et al.,
2021)
## B LID Model Hyperparameters
- Loss: softmax
- Epochs: 2
- Learning rate: 0.8
- Embedding dimension: 256
- Minimum number of word occurrences: 1000
- Character n-grams: 2–5
- Word n-grams: 1
- Bucket size: 1,000,000
- Threads: 68
All other hyperparameters are set to *fasttext* defaults.
## C Performance Of Our LID Model By Language
Table 3: For each language covered by our model, we give the number of lines of deduplicated training data in our dataset, as well as the class F1 score and class false positive rate (FPR) for our model and for the model described in Costa-jussà et al. (2022) (NLLB).

| Language code | Language | Training data | F1 score ↑ (Ours) | FPR ↓ (Ours) | F1 score ↑ (NLLB) | FPR ↓ (NLLB) |
|---------------|----------|---------------|-------------------|--------------|-------------------|--------------|
| ace_Arab | Acehnese | 6191 | 0.9679 | 0.0079 | 0.9704 | 0.0074 |
| ace_Latn | Acehnese | 18032 | 0.9980 | 0.0005 | 0.9936 | 0.0035 |
| acm_Arab | Mesopotamian Arabic | 4862 | 0.0328 | 0.0040 | - | - |
| acq_Arab | Ta'izzi-Adeni Arabic | 1598 | 0.0020 | 0.0000 | - | - |
| aeb_Arab | Tunisian Arabic | 18758 | 0.3398 | 0.0479 | - | - |
| afr_Latn | Afrikaans | 1045638 | 0.9995 | 0.0000 | 0.9985 | 0.0010 |
| ajp_Arab | South Levantine Arabic | 28190 | 0.1906 | 0.0158 | - | - |
| als_Latn | Tosk Albanian | 506379 | 1.0000 | 0.0000 | 0.9980 | 0.0020 |
| amh_Ethi | Amharic | 606866 | 0.9995 | 0.0005 | 0.9990 | 0.0010 |
| apc_Arab | North Levantine Arabic | 67952 | 0.2334 | 0.0983 | - | - |
| arb_Arab | Modern Standard Arabic | 7000000 | 0.3077 | 1.1280 | 0.1903 | 4.2579 |
| ars_Arab | Najdi Arabic | 23194 | 0.0184 | 0.1374 | - | - |
| ary_Arab | Moroccan Arabic | 25411 | 0.4894 | 0.7643 | - | - |
| arz_Arab | Egyptian Arabic | 52327 | 0.4235 | 1.0875 | - | - |
| asm_Beng | Assamese | 161726 | 1.0000 | 0.0000 | 1.0000 | 0.0000 |
| ast_Latn | Asturian | 35815 | 0.9901 | 0.0045 | 0.9902 | 0.0069 |
| awa_Deva | Awadhi | 4957 | 0.6770 | 0.0040 | 0.9611 | 0.0084 |
| ayr_Latn | Central Aymara | 142628 | 1.0000 | 0.0000 | 0.9980 | 0.0005 |
| azb_Arab | South Azerbaijani | 532 | 0.7514 | 0.0000 | 0.8805 | 0.0069 |
| azj_Latn | North Azerbaijani | 462672 | 0.9990 | 0.0005 | 0.9970 | 0.0030 |
| bak_Cyrl | Bashkir | 65942 | 1.0000 | 0.0000 | 0.9990 | 0.0005 |
| bam_Latn | Bambara | 9538 | 0.6107 | 0.4926 | 0.6194 | 0.4826 |
| ban_Latn | Balinese | 15404 | 0.9789 | 0.0015 | 0.9712 | 0.0030 |
| bel_Cyrl | Belarusian | 84846 | 1.0000 | 0.0000 | 1.0000 | 0.0000 |
| bem_Latn | Bemba | 383559 | 0.9796 | 0.0193 | 0.9739 | 0.0252 |
| ben_Beng | Bengali | 490226 | 0.9925 | 0.0000 | 0.9995 | 0.0005 |
| bho_Deva | Bhojpuri | 69367 | 0.8921 | 0.1136 | 0.9335 | 0.0153 |
| bjn_Arab | Banjar | 6192 | 0.9604 | 0.0257 | 0.9524 | 0.0163 |
| bjn_Latn | Banjar | 21475 | 0.9857 | 0.0064 | 0.8336 | 0.1721 |
| bod_Tibt | Standard Tibetan | 2514 | 0.8045 | 0.0000 | 0.9637 | 0.0366 |
| bos_Latn | Bosnian | 330473 | 0.6928 | 0.0939 | 0.5954 | 0.0584 |
| bug_Latn | Buginese | 7527 | 0.9970 | 0.0005 | 0.9765 | 0.0054 |
| bul_Cyrl | Bulgarian | 610545 | 1.0000 | 0.0000 | 0.9995 | 0.0000 |
| cat_Latn | Catalan | 115963 | 1.0000 | 0.0000 | 0.9873 | 0.0129 |
| ceb_Latn | Cebuano | 1002342 | 0.9995 | 0.0005 | 0.9995 | 0.0000 |
| ces_Latn | Czech | 424828 | 0.9975 | 0.0015 | 0.9990 | 0.0010 |
| cjk_Latn | Chokwe | 36244 | 0.9023 | 0.0025 | 0.8688 | 0.0089 |
| ckb_Arab | Central Kurdish | 17792 | 1.0000 | 0.0000 | 1.0000 | 0.0000 |
| crh_Latn | Crimean Tatar | 19148 | 0.9920 | 0.0005 | 0.9829 | 0.0000 |
| cym_Latn | Welsh | 98719 | 1.0000 | 0.0000 | 1.0000 | 0.0000 |
| dan_Latn | Danish | 2789406 | 0.9881 | 0.0035 | 0.9946 | 0.0020 |
| deu_Latn | German | 653914 | 1.0000 | 0.0000 | 0.9907 | 0.0094 |
| dik_Latn | Southwestern Dinka | 25911 | 0.9995 | 0.0000 | 0.9925 | 0.0000 |
| dyu_Latn | Dyula | 17351 | 0.0421 | 0.0282 | 0.0480 | 0.0228 |
| dzo_Tibt | Dzongkha | 6899 | 0.8585 | 0.1635 | 0.9679 | 0.0005 |
| ell_Grek | Greek | 3312774 | 1.0000 | 0.0000 | 1.0000 | 0.0000 |
| eng_Latn | English | 7544560 | 0.9941 | 0.0049 | 0.9792 | 0.0213 |
| epo_Latn | Esperanto | 339280 | 1.0000 | 0.0000 | 0.9970 | 0.0030 |
| est_Latn | Estonian | 3331470 | 0.9990 | 0.0005 | 0.9985 | 0.0015 |
| eus_Latn | Basque | 622029 | 0.9990 | 0.0005 | 0.9985 | 0.0015 |
| ewe_Latn | Ewe | 585267 | 0.9980 | 0.0020 | 0.9970 | 0.0030 |
| fao_Latn | Faroese | 40022 | 1.0000 | 0.0000 | 0.5052 | 0.0000 |
| fij_Latn | Fijian | 360981 | 0.9985 | 0.0005 | 1.0000 | 0.0000 |
| fin_Latn | Finnish | 2613970 | 0.9995 | 0.0005 | 0.9995 | 0.0005 |
| fon_Latn | Fon | 31875 | 0.9980 | 0.0000 | 0.9970 | 0.0000 |
| fra_Latn | French | 586938 | 0.9950 | 0.0000 | 0.9961 | 0.0035 |
| fur_Latn | Friulian | 55622 | 0.9985 | 0.0015 | 0.9980 | 0.0000 |
| fuv_Latn | Nigerian Fulfulde | 14419 | 0.9865 | 0.0005 | 0.9810 | 0.0040 |
| gaz_Latn | West Central Oromo | 335769 | 0.9990 | 0.0010 | 0.9995 | 0.0005 |
| gla_Latn | Scottish Gaelic | 52665 | 0.9975 | 0.0025 | 0.9985 | 0.0010 |
| gle_Latn | Irish | 211460 | 1.0000 | 0.0000 | 0.9980 | 0.0020 |
| glg_Latn | Galician | 42017 | 0.9970 | 0.0025 | 0.9931 | 0.0049 |
| grn_Latn | Guarani | 57458 | 0.9975 | 0.0025 | 0.9965 | 0.0015 |
| guj_Gujr | Gujarati | 836618 | 1.0000 | 0.0000 | 1.0000 | 0.0000 |
| hat_Latn | Haitian Creole | 299853 | 0.9970 | 0.0030 | 0.9985 | 0.0005 |
| hau_Latn | Hausa | 347741 | 0.9893 | 0.0109 | 0.9970 | 0.0025 |
| heb_Hebr | Hebrew | 944918 | 0.9990 | 0.0010 | 1.0000 | 0.0000 |
| hin_Deva | Hindi | 1089471 | 0.8477 | 0.1749 | 0.8722 | 0.1454 |
| hne_Deva | Chhattisgarhi | 52819 | 0.9362 | 0.0311 | 0.9300 | 0.0134 |
| hrv_Latn | Croatian | 832967 | 0.7441 | 0.1863 | 0.7335 | 0.2645 |
| hun_Latn | Hungarian | 2870535 | 1.0000 | 0.0000 | 0.9926 | 0.0074 |
| hye_Armn | Armenian | 368832 | 1.0000 | 0.0000 | 1.0000 | 0.0000 |
| ibo_Latn | Igbo | 491594 | 0.9995 | 0.0005 | 0.9995 | 0.0005 |
| ilo_Latn | Ilocano | 976648 | 0.9990 | 0.0010 | 0.9985 | 0.0015 |
| ind_Latn | Indonesian | 1694230 | 0.9279 | 0.0435 | 0.8198 | 0.2087 |
| isl_Latn | Icelandic | 43554 | 1.0000 | 0.0000 | 0.7621 | 0.3125 |
| ita_Latn | Italian | 479663 | 0.9940 | 0.0000 | 0.9721 | 0.0282 |
| jav_Latn | Javanese | 65595 | 0.9917 | 0.0079 | 0.9767 | 0.0218 |
| jpn_Jpan | Japanese | 876783 | 1.0000 | 0.0000 | 0.9808 | 0.0104 |
| kab_Latn | Kabyle | 52634 | 0.8551 | 0.1695 | 0.8579 | 0.1652 |
| kac_Latn | Jingpho | 11365 | 1.0000 | 0.0000 | 1.0000 | 0.0000 |
| kam_Latn | Kamba | 52674 | 0.9001 | 0.0005 | 0.7581 | 0.0010 |
| kan_Knda | Kannada | 357780 | 1.0000 | 0.0000 | 1.0000 | 0.0000 |
| kas_Arab | Kashmiri | 6203 | 0.9839 | 0.0000 | 0.9710 | 0.0000 |
| kas_Deva | Kashmiri | 6694 | 0.9860 | 0.0010 | 0.9840 | 0.0005 |
| kat_Geor | Georgian | 417604 | 1.0000 | 0.0000 | 1.0000 | 0.0000 |
| kaz_Cyrl | Kazakh | 51577 | 0.9995 | 0.0000 | 0.9995 | 0.0000 |
| kbp_Latn | Kabiye | 53275 | 1.0000 | 0.0000 | 1.0000 | 0.0000 |
| kea_Latn | Kabuverdianu | 5665 | 0.9652 | 0.0000 | 0.9610 | 0.0000 |
| khk_Cyrl | Halh Mongolian | 168540 | 1.0000 | 0.0000 | 1.0000 | 0.0000 |
| khm_Khmr | Khmer | 60513 | 0.9995 | 0.0000 | 0.9990 | 0.0000 |
| kik_Latn | Kikuyu | 96402 | 0.9628 | 0.0376 | 0.9636 | 0.0341 |
| kin_Latn | Kinyarwanda | 447057 | 0.8872 | 0.0069 | 0.9788 | 0.0119 |
| kir_Cyrl | Kyrgyz | 372399 | 1.0000 | 0.0000 | 1.0000 | 0.0000 |
| kmb_Latn | Kimbundu | 92635 | 0.9394 | 0.0534 | 0.9361 | 0.0514 |
| kmr_Latn | Northern Kurdish | 15490 | 0.9985 | 0.0010 | 0.9956 | 0.0045 |
| knc_Arab | Central Kanuri | 6196 | 0.7017 | 0.0000 | 0.7026 | 0.0000 |
| knc_Latn | Central Kanuri | 6256 | 0.9990 | 0.0005 | 0.9965 | 0.0015 |
| kon_Latn | Kikongo | 209801 | 0.9946 | 0.0045 | 0.9936 | 0.0049 |
| kor_Hang | Korean | 1772136 | 1.0000 | 0.0000 | 0.9961 | 0.0040 |
| lao_Laoo | Lao | 23529 | 1.0000 | 0.0000 | 0.9995 | 0.0000 |
| lij_Latn | Ligurian | 28641 | 0.9980 | 0.0015 | 0.9774 | 0.0025 |
| lim_Latn | Limburgish | 48151 | 0.9965 | 0.0015 | 0.9870 | 0.0010 |
| lin_Latn | Lingala | 546344 | 0.9990 | 0.0010 | 0.9956 | 0.0030 |
| lit_Latn | Lithuanian | 2663659 | 0.9985 | 0.0010 | 0.9990 | 0.0010 |
| lmo_Latn | Lombard | 35402 | 0.9975 | 0.0020 | 0.9696 | 0.0109 |
| ltg_Latn | Latgalian | 15585 | 0.9985 | 0.0000 | 0.9920 | 0.0000 |
| ltz_Latn | Luxembourgish | 37674 | 0.9995 | 0.0000 | 0.9995 | 0.0000 |
| lua_Latn | Luba-Kasai | 292972 | 0.9960 | 0.0005 | 0.9936 | 0.0035 |
| lug_Latn | Ganda | 251105 | 0.9941 | 0.0045 | 0.9921 | 0.0069 |
| luo_Latn | Luo | 138159 | 0.9985 | 0.0015 | 0.9975 | 0.0005 |
| lus_Latn | Mizo | 195262 | 0.9985 | 0.0000 | 0.9945 | 0.0005 |
| lvs_Latn | Standard Latvian | 2872096 | 0.9990 | 0.0005 | 0.9936 | 0.0064 |
| mag_Deva | Magahi | 6208 | 0.9620 | 0.0133 | 0.9311 | 0.0213 |
| mai_Deva | Maithili | 15385 | 0.9880 | 0.0010 | 0.9871 | 0.0040 |
| mal_Mlym | Malayalam | 379786 | 1.0000 | 0.0000 | 1.0000 | 0.0000 |
| mar_Deva | Marathi | 1017951 | 0.9990 | 0.0010 | 0.9951 | 0.0049 |
| min_Latn | Minangkabau | 31469 | 0.9931 | 0.0030 | 0.5143 | 0.0010 |
| mkd_Cyrl | Macedonian | 561725 | 0.9995 | 0.0005 | 1.0000 | 0.0000 |
| mlt_Latn | Maltese | 2219213 | 0.9985 | 0.0015 | 0.9995 | 0.0005 |
| mni_Beng | Meitei | 47146 | 0.9941 | 0.0059 | 0.9995 | 0.0000 |
| mos_Latn | Mossi | 197187 | 0.9814 | 0.0005 | 0.9684 | 0.0000 |
| mri_Latn | Maori | 48792 | 0.9995 | 0.0005 | 0.9985 | 0.0005 |
| mya_Mymr | Burmese | 452194 | 1.0000 | 0.0000 | 1.0000 | 0.0000 |
| nld_Latn | Dutch | 2929602 | 0.9970 | 0.0015 | 0.9830 | 0.0173 |
| nno_Latn | Norwegian Nynorsk | 101140 | 0.9828 | 0.0104 | 0.9697 | 0.0208 |
| nob_Latn | Norwegian Bokmal | 1783598 | 0.9719 | 0.0148 | 0.9829 | 0.0139 |
| npi_Deva | Nepali | 60345 | 0.9980 | 0.0020 | 0.9980 | 0.0020 |
| nso_Latn | Northern Sotho | 560068 | 0.9868 | 0.0119 | 0.9839 | 0.0134 |
| nus_Latn | Nuer | 6295 | 0.9995 | 0.0000 | 0.9980 | 0.0015 |
| nya_Latn | Nyanja | 789078 | 0.9966 | 0.0035 | 0.9460 | 0.0163 |
| oci_Latn | Occitan | 32683 | 0.9941 | 0.0054 | 0.9835 | 0.0163 |
| ory_Orya | Odia | 92355 | 1.0000 | 0.0000 | 1.0000 | 0.0000 |
| pag_Latn | Pangasinan | 294618 | 0.9990 | 0.0005 | 0.9970 | 0.0010 |
| pan_Guru | Eastern Panjabi | 357487 | 1.0000 | 0.0000 | 1.0000 | 0.0000 |
| pap_Latn | Papiamento | 403991 | 0.9768 | 0.0232 | 0.9839 | 0.0158 |
| pbt_Arab | Southern Pashto | 63256 | 0.9980 | 0.0015 | 0.9970 | 0.0010 |
| pes_Arab | Western Persian | 1758215 | 0.5570 | 0.5356 | 0.6385 | 0.4381 |
| plt_Latn | Plateau Malgasy | 47284 | 1.0000 | 0.0000 | 1.0000 | 0.0000 |
| pol_Latn | Polish | 3403455 | 0.9956 | 0.0045 | 0.9849 | 0.0153 |
| por_Latn | Portuguese | 3800360 | 0.9941 | 0.0040 | 0.9854 | 0.0143 |
| prs_Arab | Dari | 6662 | 0.5144 | 0.1122 | 0.4589 | 0.0608 |
| quy_Latn | Ayacucho Quechua | 154448 | 1.0000 | 0.0000 | 1.0000 | 0.0000 |
| ron_Latn | Romanian | 443200 | 0.9985 | 0.0015 | 0.9985 | 0.0015 |
| run_Latn | Rundi | 459617 | 0.9044 | 0.0973 | 0.9782 | 0.0104 |
| rus_Cyrl | Russian | 7000000 | 0.9990 | 0.0005 | 0.9990 | 0.0010 |
| sag_Latn | Sango | 255491 | 0.9990 | 0.0000 | 0.9970 | 0.0005 |
| san_Deva | Sanskrit | 39988 | 0.9900 | 0.0000 | 0.9885 | 0.0010 |
| sat_Olck | Santali | 8875 | 1.0000 | 0.0000 | 1.0000 | 0.0000 |
| scn_Latn | Sicilian | 40023 | 0.9956 | 0.0035 | 0.9936 | 0.0054 |
| shn_Mymr | Shan | 21051 | 1.0000 | 0.0000 | 0.9985 | 0.0000 |
| sin_Sinh | Sinhala | 361636 | 1.0000 | 0.0000 | 1.0000 | 0.0000 |
| slk_Latn | Slovak | 3153492 | 0.9970 | 0.0010 | 0.9995 | 0.0005 |
| slv_Latn | Slovenian | 3023266 | 0.9966 | 0.0030 | 0.9985 | 0.0015 |
| smo_Latn | Samoan | 367828 | 0.9985 | 0.0010 | 0.9985 | 0.0010 |
| sna_Latn | Shona | 764419 | 0.9941 | 0.0059 | 0.9941 | 0.0059 |
| snd_Arab | Sindhi | 26107 | 0.9990 | 0.0000 | 0.9980 | 0.0020 |
| som_Latn | Somali | 217413 | 0.9995 | 0.0005 | 1.0000 | 0.0000 |
| sot_Latn | Southern Sotho | 2030 | 0.9567 | 0.0000 | 0.7552 | 0.0000 |
| spa_Latn | Spanish | 677548 | 0.9921 | 0.0049 | 0.9922 | 0.0074 |
| srd_Latn | Sardinian | 47480 | 0.9961 | 0.0030 | 0.9773 | 0.0000 |
| srp_Cyrl | Serbian | 310259 | 0.9995 | 0.0000 | 1.0000 | 0.0000 |
| ssw_Latn | Swati | 114900 | 0.9911 | 0.0020 | 0.9916 | 0.0015 |
| sun_Latn | Sundanese | 47458 | 0.9926 | 0.0035 | 0.9599 | 0.0252 |
| swe_Latn | Swedish | 2747052 | 1.0000 | 0.0000 | 0.9990 | 0.0005 |
| swh_Latn | Swahili | 228559 | 0.9284 | 0.0771 | 0.8815 | 0.1345 |
| szl_Latn | Silesian | 34065 | 0.9960 | 0.0000 | 0.9875 | 0.0015 |
| tam_Taml | Tamil | 552180 | 1.0000 | 0.0000 | 1.0000 | 0.0000 |
| taq_Latn | Tamasheq | 10266 | 0.7907 | 0.0010 | 0.7916 | 0.0000 |
| taq_Tfng | Tamasheq | 6203 | 0.9505 | 0.0084 | 0.8513 | 0.0000 |
| tat_Cyrl | Tatar | 257828 | 1.0000 | 0.0000 | 0.9995 | 0.0000 |
| tel_Telu | Telugu | 276504 | 0.9990 | 0.0000 | 1.0000 | 0.0000 |
| tgk_Cyrl | Tajik | 135652 | 1.0000 | 0.0000 | 1.0000 | 0.0000 |
| tgl_Latn | Tagalog | 1189616 | 1.0000 | 0.0000 | 0.9970 | 0.0025 |
| tha_Thai | Thai | 734727 | 1.0000 | 0.0000 | 1.0000 | 0.0000 |
| tir_Ethi | Tigrinya | 333639 | 0.9995 | 0.0000 | 0.9995 | 0.0000 |
| tpi_Latn | Tok Pisin | 471651 | 1.0000 | 0.0000 | 0.9980 | 0.0000 |
| tsn_Latn | Tswana | 784851 | 0.9693 | 0.0311 | 0.8424 | 0.1859 |
| tso_Latn | Tsonga | 756533 | 0.9961 | 0.0035 | 0.9907 | 0.0089 |
| tuk_Latn | Turkmen | 160757 | 1.0000 | 0.0000 | 1.0000 | 0.0000 |
| tum_Latn | Tumbuka | 237138 | 0.9956 | 0.0035 | 0.9816 | 0.0183 |
| tur_Latn | Turkish | 823575 | 0.9936 | 0.0064 | 0.9840 | 0.0163 |
| twi_Latn | Twi | 545217 | 0.9990 | 0.0000 | 0.9420 | 0.0005 |
| tzm_Tfng | Central Atlas Tamazight | 8142 | 0.9535 | 0.0395 | 0.8854 | 0.1296 |
| uig_Arab | Uyghur | 57231 | 1.0000 | 0.0000 | 0.9995 | 0.0005 |
| ukr_Cyrl | Ukrainian | 1140463 | 0.9995 | 0.0005 | 1.0000 | 0.0000 |
| umb_Latn | Umbundu | 220396 | 0.9776 | 0.0079 | 0.9687 | 0.0208 |
| urd_Arab | Urdu | 412736 | 0.9849 | 0.0153 | 0.9735 | 0.0272 |
| uzn_Latn | Northern Uzbek | 1519230 | 0.9990 | 0.0010 | 0.9995 | 0.0005 |
| vec_Latn | Venetian | 43478 | 0.9961 | 0.0020 | 0.9916 | 0.0035 |
| vie_Latn | Vietnamese | 881145 | 0.9995 | 0.0005 | 0.9873 | 0.0129 |
| war_Latn | Waray | 282772 | 1.0000 | 0.0000 | 0.9990 | 0.0010 |
| wol_Latn | Wolof | 28784 | 0.9970 | 0.0020 | 0.9950 | 0.0010 |
| xho_Latn | Xhosa | 921590 | 0.9858 | 0.0119 | 0.9779 | 0.0148 |
| ydd_Hebr | Eastern Yiddish | 911 | 0.9990 | 0.0000 | 1.0000 | 0.0000 |
| yor_Latn | Yoruba | 531904 | 0.9990 | 0.0010 | 0.9956 | 0.0030 |
| yue_Hant | Yue Chinese | 63254 | 0.0059 | 0.0025 | 0.4877 | 0.3229 |
| zho_Hans | Chinese (Simplified) | 1046823 | 0.9891 | 0.0054 | 0.8559 | 0.0277 |
| zho_Hant | Chinese (Traditional) | 2018541 | 0.6605 | 0.5020 | 0.4651 | 0.2176 |
| zsm_Latn | Standard Malay | 404380 | 0.9495 | 0.0346 | 0.9351 | 0.0307 |
| zul_Latn | Zulu | 951688 | 0.9828 | 0.0104 | 0.9696 | 0.0267 |
## ACL 2023 Responsible NLP Checklist
A For Every Submission:
✓ A1. Did you describe the limitations of your work?
in separate limitations section at end
✓ A2. Did you discuss any potential risks of your work?
in separate ethics statement at end
✓ A3. Do the abstract and introduction summarize the paper's main claims?
abstract is where you would expect; main claims are in bullets in introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
B ✓ **Did you use or create scientific artifacts?**
section 3 describes dataset creation; section 4 describes model selection
✓ B1. Did you cite the creators of artifacts you used?
Appendix A
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
section 3.1 explains how to find full list of licenses (in repo as it is very long and subject to change)
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 3.1 explains how all datasets are open for academic use and explains how to find the full terms on the github repo
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
data is all in the public domain (section 3.1 explains that sources are mainly news sites and Wikipedia)
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 3.1 gives overview of dataset domain; full information is in the repo because of length
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Summary statistics for training data are in section 3.4; full breakdown by class is in appendix B due to length. Description of train and dev splits is in section 5.1
C ✓ **Did you run computational experiments?**
section 4 describes the model, section 5 describes evaluation and results
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
section 4
✗ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
we used the same hyperparameter values as the model in No Language Left Behind as we are comparing datasets rather than models. Hyperparameters are in appendix B
✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
We do give the mean across classes but we didn't run multiple experiments because we are presenting a dataset rather than a modelling paper.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
section 3.3
## D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Section 3.2
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. annotation was done by the authors
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
section 3.2 (annotation done by the authors)
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? section 3.2 |
verma-etal-2023-evaluating | Evaluating Paraphrastic Robustness in Textual Entailment Models | https://aclanthology.org/2023.acl-short.76 | We present PaRTE, a collection of 1,126 pairs of Recognizing Textual Entailment (RTE) examples to evaluate whether models are robust to paraphrasing. We posit that if RTE models understand language, their predictions should be consistent across inputs that share the same meaning. We use the evaluation set to determine if RTE models{'} predictions change when examples are paraphrased. In our experiments, contemporary models change their predictions on 8-16{\%} of paraphrased examples, indicating that there is still room for improvement. | # Evaluating Paraphrastic Robustness In Textual Entailment Models
Dhruv Verma Yash Kumar Lal Stony Brook University
{dhverma,ylal}@cs.stonybrook.edu Benjamin Van Durme Johns Hopkins University [email protected]
## Abstract
We present PaRTE, a collection of 1,126 pairs of Recognizing Textual Entailment (RTE)
examples to evaluate whether models are robust to paraphrasing. We posit that if RTE models understand language, their predictions should be consistent across inputs that share the same meaning. We use the evaluation set to determine if RTE models' predictions change when examples are paraphrased. In our experiments, contemporary models change their predictions on 8-16% of paraphrased examples, indicating that there is still room for improvement.
## 1 Introduction
Recognizing Textual Entailment (RTE), the task of predicting whether one sentence (*hypothesis*)
would likely be implied by another (*premise*), is central to natural language understanding (NLU;
Dagan et al., 2005), as this task captures "all manners of linguistic phenomena and broad variability of semantic expression" (MacCartney, 2009). If an RTE model has a sufficiently high *capacity for reliable, robust inference necessary for full NLU* (MacCartney, 2009), then the model's predictions should be consistent across paraphrased examples.
We introduce PaRTE, a test set to evaluate how reliable and *robust* models are to paraphrases (Table 1 includes an example). The test set consists of examples from the Pascal RTE1-3 challenges (Dagan et al., 2006; Bar-Haim et al., 2006; Giampiccolo et al., 2007) rewritten with a lexical rewriter and manually verified to preserve the meaning and label of the original RTE sentence-pair. We use this evaluation set to determine whether models change their predictions when examples are paraphrased.
While this may not be a sufficient test to determine whether RTE models *fully understand* language, as there are many semantic phenomena that RTE models should capture (Cooper et al., 1996; Naik et al., 2018), it is *necessary* that any NLU
system be robust to paraphrases.
Shreyashee Sinha Bloomberg [email protected] Adam Poliak Bryn Mawr College [email protected]
P The cost of security when world leaders gather near Auchterarder for next year 's G8 summit, is expected to top $150 million.
P' The cost of security when world leaders meet for the G8 summit near Auchterarder next year will top
$150 million.
H More than $150 million will be probably spent for security at next year's G8 summit.
H' At the G8 summit next year more than $150 million will likely be spent on security at the event.
Table 1: An original and paraphrased RTE example.
The top represents an original premise (P) and its paraphrase (P'). The bottom depicts an original hypothesis
(H) and its paraphrase (H'). A model robust to paraphrases should have consistent predictions across the following pairs: P-H, P'-H, P-H', and P'-H'.
Our experiments indicate that contemporary models are robust to paraphrases as their predictions do not change on the overwhelmingly large majority of examples that are paraphrased. However, our analyses temper this claim as models are more likely to change their predictions when both the premise and hypothesis are paraphrased compared to when just one of the sentences is rewritten. We release PaRTE¹ to encourage others to evaluate how well their models perform when RTE examples are paraphrased.

¹https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/HLMI23
## 2 Related Work
With the vast adoption of human language technology (HLT), systems must understand when different expressions convey the same meaning
(paraphrase) and support the same inferences (entailment). Paraphrasing and entailment are closely connected as the former is a special case of the latter where two sentences entail each other (Neveřilová, 2014; Fonseca and Aluísio, 2015; Víta, 2015; Ravichander et al., 2022). Paraphrasing has been used to improve RTE predictions (Bosma and Callison-Burch, 2006; Sun et al.,
2021) and RTE has been used for paraphrase identification (Seethamol and Manju, 2017) and generation (Arora et al., 2022). Furthermore, both phenomena are key to NLU (Androutsopoulos and Malakasiotis, 2010) and work such as Zhao et al.
(2018); Hu et al. (2019) have explored rewriting RTE examples to create more robust models.
We follow a long tradition of evaluating linguistic phenomena captured in RTE models (Cooper et al., 1996). Recent tests focus on evaluating how well contemporary RTE models capture phenomena such as monotonicity (Yanaka et al.,
2019a,b), verb veridicality (Ross and Pavlick, 2019; Yanaka et al., 2021), presuppositions (Parrish et al.,
2021) implicatures (Jeretic et al., 2020), basic logic (Richardson et al., 2020; Shi et al., 2021),
figurative language (Chakrabarty et al., 2021), and others (Naik et al., 2018; Poliak et al., 2018a; Vashishtha et al., 2020). Unlike many of those works that evaluate models' accuracy on examples that target specific phenomena, we use a contrastive approach (Prabhakaran et al., 2019; Gardner et al.,
2020) to determine whether RTE models' predictions change when examples are paraphrased.
## 3 PaRTE
To explore whether these RTE models are robust to paraphrases, we create PaRTE, a modified version of the Pascal RTE1-3 challenges (Dagan et al.,
2005; Bar-Haim et al., 2006; Giampiccolo et al.,
2007). PaRTE contains 1,126 examples of an original unmodified RTE sentence-pair grouped with a sentence-pair with a modified premise, hypothesis, or both. We use the examples in RTE1-3 to create our test set, as opposed to other RTE
datasets due to its long-standing history.
## 3.1 Paraphrase Generation & Verification
For each RTE premise-hypothesis pair (P-H), we created three paraphrased premises (P') and hypotheses (H') using a T5-based paraphraser² finetuned on the Google PAWS dataset (Zhang et al., 2019). To ensure lexically diverse paraphrases, we filter out any paraphrases that have high lexical overlap with the original sentences using a Jaccard index threshold of 0.75. Out of 14,400 generated sentences, 2,449 remained - 956 paraphrased premises (P') and 1,493 paraphrased hypotheses (H'). Next, we retained 550 paraphrased premises and 800 paraphrased hypotheses that crowdsource workers identified as grammatical and similar in meaning to the original sentences.³ We include a grammatical check since an existing RTE
evaluation set focused on paraphrases (White et al.,
2017) contains hypothesis-only biases related to grammaticality (Poliak et al., 2018b).
If at least one P' or one H' passes this filtering process, we retain the original RTE example and pair it with a corresponding paraphrased example
(i.e. P'-H', P'-H, or P-H'). In the case where more than one P' or H' passes the filtering, we retained the P' or H' that crowdsource workers deemed most similar to the original sentence. Out of the original 2,400 RTE test pairs, we retain 914 pairs with a high-quality P' or H', resulting in 1,178 original and paraphrased RTE pairs.4
## 3.2 Overcoming Semantic Variability
MacCartney (2009) argues that in addition to being reliable and *robust*, RTE models must deal with the *broad variability of semantic expression*. In other words, though two sentences may be semantically congruent, it is possible that small variations in a paraphrased sentence contain enough semantic variability to change what would likely, or not likely be inferred from the sentence. Despite all P'
and H' being deemed to be semantically congruent with their corresponding original sentences, the semantic variability of paraphrases might change whether H or H' can be inferred from P' or P.
Therefore, propagating an RTE label from an original sentence pair to a modified sentence pair might be inappropriate. We manually determined that this issue occurs in just 52 (4%) examples, and retained 1,126 examples. This ensures an evaluation set of high-quality examples that can be used to determine whether models are sensitive to paraphrases and change their prediction on paraphrased examples. Our dataset contains 402 examples with just a paraphrased premise P', 602 with just a paraphrased hypothesis H', and 122 with both a paraphrased premise and hypothesis.
| Model | MNLI | RTE | PaRTE | % ∆ PaRTE |
|---------|-------|-------|--------|-----------|
| BoW | 67.97 | 53.99 | 54.70 | 15.27 |
| BiLSTM | 66.68 | 51.59 | 51.24 | 16.69 |
| BERT | 90.04 | 72.11 | 72.55 | 9.50 |
| RoBERTa | 92.68 | 83.83 | 82.59 | 7.99 |
| GPT-3 | - | 80.90 | 79.12 | 10.12 |
## 4 Experimental Setup
We explore models built upon three different classes of sentence encoders: bag of words (BoW),
LSTMs, and Transformers. Our BoW model represents premises and hypotheses as an average of their tokens' 300 dimensional GloVe embeddings (Pennington et al., 2014b). The concatenation of these representations is fed to an MLP with two hidden layers. For the BiLSTM model, we represent tokens with GloVe embeddings, extract sentence representations using max-pooling, and pass concatenated sentence representations to an MLP with two hidden layers.
Our transformer-based models are pre-trained BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2020) encoders with an MLP attached to the final layer. Additionally, we use GPT-3 in a zero-shot setting where we ask it to label the relationship between a premise and hypothesis.⁵ The RTE training sets do not contain enough examples to train deep learning models with a large number of parameters. We follow the common practice of training models on MNLI and using our test set to evaluate how well they capture a specific phenomenon related to NLU. During testing, we map the MNLI 'contradiction' and 'neutral' labels to the 'not-entailed' label in RTE, following common practice (Wang et al., 2018; Yin et al., 2019; Ma et al., 2021; Utama et al., 2022, *inter alia*).
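As an illustration (not part of the released code), this label collapsing can be sketched as follows; the three-way label ids shown are an assumption, since the actual ids depend on each model's training configuration.

```python
# Hypothetical id-to-label mapping; real ids depend on the trained model's config.
MNLI_ID2LABEL = {0: "entailment", 1: "neutral", 2: "contradiction"}

def mnli_to_rte(pred_id: int) -> str:
    """Collapse a 3-way MNLI prediction into the 2-way RTE label space."""
    return "entailed" if MNLI_ID2LABEL[pred_id] == "entailment" else "not-entailed"
```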
## 5 Results
Table 2 reports the results. The RTE and PaRTE columns respectively report the models' accuracy on the 1,126 unmodified and paraphrased sentence pairs.⁶ Comparing the difference in accuracy between unmodified and paraphrased examples can be misleading. If the number of times a model changes a correct prediction is close to the number of times it changes an incorrect prediction, then the accuracy will hardly change. Figure 1 demonstrates why the accuracies do not change by much when models' predictions change on paraphrased examples. Furthermore, if a model is robust to paraphrases, then it should not change its predictions when an example is paraphrased, even if the prediction on the original unmodified example was incorrect. Hence, our test statistic is the percentage of examples where a model's predictions change (% ∆ PaRTE column in Table 2) rather than a change in accuracy.

![2_image_1.png](2_image_1.png)

![2_image_0.png](2_image_0.png)

Compared to the Transformer-based models, the BoW and BiLSTM models seem to be more sensitive, and less robust to paraphrasing, as they change their predictions on 15.27% and 16.69% respectively of the 1,126 examples. However, this might be associated with how word embedding models only just outperform random guesses on RTE and perform much worse than the Transformer models.

⁵See Appendix A for more details, including hyperparameters, model sizes, and GPT-3 prompt design and configurations. Our code is available at https://github.com/stonybrooknlp/parte

⁶Although there are just 914 unmodified sentence pairs, for the sake of a head-to-head comparison, we retain all instances of the unmodified sentence pairs when computing accuracy.
![3_image_2.png](3_image_2.png)
![3_image_0.png](3_image_0.png)
Focusing on the Transformer models, we noticed that RoBERTa performs the best on the datasets and is the most robust to paraphrasing - changing its predictions on just under 8% of paraphrased examples. Interestingly, when the models are trained specifically to perform this task, the models change their predictions on fewer paraphrased examples as these models' accuracy increases. However, improving performance alone might not automatically improve models' robustness to paraphrases. GPT3's accuracy noticeably outperforms BERT's accuracy, but GPT-3 changes its predictions on more paraphrased examples compared to BERT.
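The changed-prediction statistic reported above is straightforward to compute; the sketch below assumes two aligned lists of predictions (original and paraphrased) and is an illustration rather than the released evaluation script.

```python
def percent_changed(orig_preds, para_preds):
    """% of aligned example pairs whose prediction flips after paraphrasing
    (the % ∆ PaRTE statistic)."""
    assert len(orig_preds) == len(para_preds)
    changed = sum(o != p for o, p in zip(orig_preds, para_preds))
    return 100.0 * changed / len(orig_preds)
```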
**P'-H' compared to P-H' or P'-H** Figure 2 shows noticeable increases in the percentage of changed predictions when both premise and hypothesis are paraphrased compared to when just one of the sentences is paraphrased. Specifically, for BoW and BiLSTM we see an increase of 4.01 and 6.01 percentage points respectively, and for BERT, RoBERTa, and GPT-3 increases of 4.97, 4.83, and 3.55. As the transformer-based models changed their predictions on 12-14% of examples where both sentences are paraphrased compared to 9-11% in general, this analysis further suggests that these models are not as robust to paraphrases as desired.
**Entailed vs Not-entailed examples** RTE analyses often differentiate how models perform on entailed vs not entailed examples (Liu et al., 2022).
In Figure 3, we do not see meaningful differences in how models' predictions change on paraphrased examples based on the gold label. This might suggest that our dataset does not contain statistical irregularities based on the RTE labels.
![3_image_1.png](3_image_1.png)
![3_image_3.png](3_image_3.png)
**Correct vs Not-Correct Predictions** Figure 4 shows that the Transformer models' predictions are more likely to change when the prediction on an original example was incorrect (right red bars) compared to when the prediction for an original example was correct (left blue bars). For example, when RoBERTa's prediction for an original RTE example was correct, the model changed its prediction on just 5.5% of the corresponding paraphrased examples. When RoBERTa's predictions for an original RTE example were incorrect, RoBERTa's predictions changed for 20.88% of the corresponding paraphrased examples. Analyzing differences in models' confidences assigned to predictions might provide more insight (Marcé and Poliak, 2022). We leave this for future work.
**Source Task** RTE1-3 examples originated from multiple domains and downstream tasks, e.g.
question-answering (Moldovan et al., 2006), information extraction (Grishman and Sundheim, 1996),
and summarization (Evans et al., 2004; Radev et al.,
2001). This enables researchers to evaluate how RTE models perform on examples that contain different aspects of *open domain inference* necessary for the task (MacCartney, 2009). Figure 5 reports the changes in models' predictions across the different sources of examples. We do not see consistent trends across the original data sources.

![4_image_0.png](4_image_0.png)
## 6 Conclusion
We introduced PaRTE, a high-quality evaluation set of RTE examples paired with paraphrased RTE examples. We use our evaluation set to determine whether RTE models are robust to paraphrased examples. Our experiments indicate that while these models' predictions are usually consistent when RTE examples are paraphrased, there is still room for improvement as models remain sensitive to changes in input (Jia and Liang, 2017; Belinkov and Bisk, 2018; Iyyer et al., 2018). We hope that researchers will use PaRTE to evaluate how well their NLU systems perform on paraphrased data.
## Limitations
Neither our results nor our evaluation set can be used to indicate whether RTE models trained for other languages are robust to paraphrases. However, researchers can apply the methods we used to develop PaRTE to build evaluation sets in other languages to test whether non-English NLU systems are robust to paraphrases.
## Ethics Statement
In conducting our research on RTE model robustness to paraphrasing, we take great care to ensure the ethical and responsible use of any data and models involved. We adhere to the principles of fairness, transparency, and non-discrimination in our experimentation and analysis. Furthermore, we take measures to protect the privacy and confidentiality of any individual crowdsource workers. We also strive to make our evaluation set and methods openly available to the research community to promote further study and advancement in the field of Natural Language Processing.
## References
Ion Androutsopoulos and Prodromos Malakasiotis.
2010. A survey of paraphrasing and textual entailment methods. *Journal of Artificial Intelligence Research*, 38:135–187.
Simran Arora, Avanika Narayan, Mayee F. Chen, Laurel Orr, Neel Guha, Kush Bhatia, Ines Chami, Frederic Sala, and Christopher Ré. 2022. Ask me anything: A
simple strategy for prompting language models.
Roy Bar-Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, and Bernardo Magnini. 2006.
The second pascal recognising textual entailment challenge.
Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine translation. In *International Conference on Learning Representations*.
Wauter Bosma and Chris Callison-Burch. 2006. Paraphrase substitution for recognizing textual entailment. In *Workshop of the Cross-Language Evaluation Forum for European Languages*, pages 502–509.
Springer.
Tuhin Chakrabarty, Debanjan Ghosh, Adam Poliak, and Smaranda Muresan. 2021. Figurative language in recognizing textual entailment. In *Findings of* the Association for Computational Linguistics: ACLIJCNLP 2021, pages 3354–3361, Online. Association for Computational Linguistics.
Robin Cooper, Dick Crouch, Jan Van Eijck, Chris Fox, Johan Van Genabith, Jan Jaspars, Hans Kamp, David Milward, Manfred Pinkal, Massimo Poesio, et al.
1996. Using the framework. Technical report.
Ido Dagan, Oren Glickman, and Bernardo Magnini.
2005. The pascal recognising textual entailment challenge. In *Machine Learning Challenges Workshop*,
pages 177–190. Springer.
Ido Dagan, Oren Glickman, and Bernardo Magnini.
2006. The pascal recognising textual entailment challenge. In *Machine learning challenges. evaluating* predictive uncertainty, visual object classification, and recognising tectual entailment, pages 177–190.
Springer.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
David Kirk Evans, Judith L. Klavans, and Kathleen R.
McKeown. 2004. Columbia newsblaster: Multilingual news summarization on the web. In *Demonstration Papers at HLT-NAACL 2004*, pages 1–4, Boston, Massachusetts, USA. Association for Computational Linguistics.
Erick R. Fonseca and Sandra Maria Aluísio. 2015. Semiautomatic construction of a textual entailment dataset:
Selecting candidates with vector space models. In Proceedings of the 10th Brazilian Symposium in Information and Human Language Technology, pages 201–210, Natal, Brazil. Sociedade Brasileira de Computação.
Matt Gardner, Yoav Artzi, Victoria Basmov, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hannaneh Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, and Ben Zhou.
2020. Evaluating models' local decision boundaries via contrast sets. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1307–1323, Online. Association for Computational Linguistics.
Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third pascal recognizing textual entailment challenge. In Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing, pages 1–9. Association for Computational Linguistics.
Ralph Grishman and Beth Sundheim. 1996. Message Understanding Conference- 6: A brief history. In COLING 1996 Volume 1: The 16th International Conference on Computational Linguistics.
J. Edward Hu, Abhinav Singh, Nils Holzenberger, Matt Post, and Benjamin Van Durme. 2019. Large-scale, diverse, paraphrastic bitexts via sampling and clustering. In *Proceedings of the 23rd Conference on* Computational Natural Language Learning (CoNLL),
pages 44–54, Hong Kong, China. Association for Computational Linguistics.
Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1875–1885, New Orleans, Louisiana. Association for Computational Linguistics.
Paloma Jeretic, Alex Warstadt, Suvrat Bhooshan, and Adina Williams. 2020. Are natural language inference models IMPPRESsive? Learning IMPlicature and PRESupposition. In *Proceedings of the 58th Annual Meeting of the Association for Computational* Linguistics, pages 8690–8705, Online. Association for Computational Linguistics.
Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems.
In *Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing*, pages 2021–2031, Copenhagen, Denmark. Association for Computational Linguistics.
Alisa Liu, Swabha Swayamdipta, Noah A. Smith, and Yejin Choi. 2022. WANLI: Worker and AI collaboration for natural language inference dataset creation.
In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 6826–6847, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2020. RoBERTa: A robustly optimized BERT pretraining approach.
Tingting Ma, Jin-Ge Yao, Chin-Yew Lin, and Tiejun Zhao. 2021. Issues with entailment-based zero-shot text classification. In *Proceedings of the 59th Annual Meeting of the Association for Computational* Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2:
Short Papers), pages 786–796, Online. Association for Computational Linguistics.
Bill MacCartney. 2009. *Natural language inference*.
Ph.D. thesis, Stanford University.
Sanjana Marcé and Adam Poliak. 2022. On gender biases in offensive language classification models. In Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP), pages 174–183, Seattle, Washington. Association for Computational Linguistics.
Dan I. Moldovan, Mitchell Bowden, and M. Tatu. 2006.
A temporally-enhanced poweranswer in trec 2006.
In *TREC*.
Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018.
Stress test evaluation for natural language inference.
In Proceedings of the 27th International Conference on Computational Linguistics, pages 2340–2353, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Zuzana Neveřilová. 2014. Paraphrase and textual entailment generation. In International Conference on Text, Speech, and Dialogue, pages 293–300. Springer.
Alicia Parrish, Sebastian Schuster, Alex Warstadt, Omar Agha, Soo-Hwan Lee, Zhuoye Zhao, Samuel R. Bowman, and Tal Linzen. 2021. NOPE: A corpus of naturally-occurring presuppositions in English. In Proceedings of the 25th Conference on Computational Natural Language Learning, pages 349–366, Online. Association for Computational Linguistics.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014a. GloVe: Global vectors for word representation. In *Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 1532–1543, Doha, Qatar.
Association for Computational Linguistics.
Jeffrey Pennington, Richard Socher, and Christopher D.
Manning. 2014b. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543.
Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Edward Hu, Ellie Pavlick, Aaron Steven White, and Benjamin Van Durme. 2018a. Collecting diverse natural language inference problems for sentence representation evaluation. In *Proceedings of the 2018* EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 337–340, Brussels, Belgium. Association for Computational Linguistics.
Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018b.
Hypothesis only baselines in natural language inference. In *Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics*, pages 180–191, New Orleans, Louisiana. Association for Computational Linguistics.
Vinodkumar Prabhakaran, Ben Hutchinson, and Margaret Mitchell. 2019. Perturbation sensitivity analysis to detect unintended model biases. In *Proceedings* of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 5740–5745, Hong Kong, China. Association for Computational Linguistics.
Dragomir R. Radev, Sasha Blair-Goldensohn, Zhu Zhang, and Revathi Sundara Raghavan. 2001. NewsInEssence: A system for domain-independent, realtime news clustering and multi-document summarization. In *Proceedings of the First International Conference on Human Language Technology Research*.
Abhilasha Ravichander, Matt Gardner, and Ana Marasovic. 2022. Condaqa: A contrastive reading comprehension dataset for reasoning about negation. In EMNLP 2022.
Kyle Richardson, Hai Na Hu, Lawrence S. Moss, and Ashish Sabharwal. 2020. Probing natural language inference models through semantic fragments. In AAAI, volume abs/1909.07521.
Alexis Ross and Ellie Pavlick. 2019. How well do NLI
models capture verb veridicality? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 2230–2240, Hong Kong, China. Association for Computational Linguistics.
S. Seethamol and K. Manju. 2017. Paraphrase identification using textual entailment recognition. In 2017 International Conference on Intelligent Computing, Instrumentation and Control Technologies
(ICICICT), pages 1071–1074.
Jihao Shi, Xiao Ding, Li Du, Ting Liu, and Bing Qin.
2021. Neural natural logic inference for interpretable question answering. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3673–3684, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jiao Sun, Xuezhe Ma, and Nanyun Peng. 2021. AESOP:
Paraphrase generation with adaptive syntactic control.
In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 5176–5189, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Prasetya Utama, Joshua Bambrick, Nafise Moosavi, and Iryna Gurevych. 2022. Falsesum: Generating document-level NLI examples for recognizing factual inconsistency in summarization. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 2763–2776, Seattle, United States. Association for Computational Linguistics.
Siddharth Vashishtha, Adam Poliak, Yash Kumar Lal, Benjamin Van Durme, and Aaron Steven White. 2020.
Temporal reasoning in natural language inference.
In *Findings of the Association for Computational* Linguistics: EMNLP 2020, pages 4070–4078, Online.
Association for Computational Linguistics.
Martin Víta. 2015. Computing semantic textual similarity based on partial textual entailment. In Doctoral Consortium on Knowledge Discovery, Knowledge Engineering and Knowledge Management, volume 2, pages 3–12. SCITEPRESS.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. Glue:
A multi-task benchmark and analysis platform for natural language understanding. In *Proceedings of* the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355.
Aaron Steven White, Pushpendre Rastogi, Kevin Duh, and Benjamin Van Durme. 2017. Inference is everything: Recasting semantic resources into a unified evaluation framework. In *Proceedings of the Eighth* International Joint Conference on Natural Language
Processing (Volume 1: Long Papers), pages 996–
1005, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Adina Williams, Nikita Nangia, and Samuel Bowman.
2018. A broad-coverage challenge corpus for sentence understanding through inference. In *Proceedings of the 2018 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.
Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, and Johan Bos. 2019a. Can neural networks understand monotonicity reasoning? In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 31–40.
Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, and Johan Bos. 2019b. Help: A dataset for identifying shortcomings of neural models in monotonicity reasoning. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019), pages 250–255.
Hitomi Yanaka, Koji Mineshima, and Kentaro Inui.
2021. Exploring transitivity in neural NLI models through veridicality. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 920–934, Online. Association for Computational Linguistics.
Wenpeng Yin, Jamaal Hay, and Dan Roth. 2019. Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. In *Proceedings of* the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 3914–3923, Hong Kong, China. Association for Computational Linguistics.
Yuan Zhang, Jason Baldridge, and Luheng He. 2019.
Paws: Paraphrase adversaries from word scrambling.
In *Proceedings of the 2019 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1298–1308.
Zhengli Zhao, Dheeru Dua, and Sameer Singh. 2018.
Generating natural adversarial examples. In *International Conference on Learning Representations*.
## A Experimental Implementation Details
This section describes the model implementations for our experiments. For our work we trained/fine-tuned four different models - Bag of Words (BoW), BiLSTM, BERT-large with a classification head, and RoBERTa-large with a classification head.
Each model was trained on the MultiNLI training dataset (Williams et al., 2018) and validated on the paraphrased RTE dev set we created. Each model was implemented using PyTorch. All transformer based models were downloaded from HuggingFace.
## A.1 Bow
The BoW model consisted of GloVe (300 dimension embeddings trained on 840B CommonCrawl tokens) (Pennington et al., 2014b) vectors as the embedding layer. The average of all word vectors for the input sequence is treated as its final representation. The representations for the hypothesis and premises were concatenated and passed through three fully connected layers with ReLU activation units after each layer. We concatenate the premise, hypothesis, their absolute difference and their product and pass it into the first layer of the classifier.
This input to the first layer is of 4 * embedding dimension and the output is of embedding dimension.
Each subsequent hidden layer's input and output dimensions are embedding dimension * embedding dimension.
The model was trained with a vocabulary size of 50,000, a learning rate of 0.005, the maximum sequence length was 50 and a batch size of 32. We force all sentences to be of maximum sequence length using truncation or padding where applicable. We train the model for 15 epochs and select the one that achieves highest validation accuracy for our experiments.
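A compact PyTorch sketch of the classifier described above is shown below; it follows the stated feature construction (premise, hypothesis, absolute difference, product) and layer dimensions, while the final projection to the label space is an assumption.

```python
import torch
import torch.nn as nn

class BoWPairClassifier(nn.Module):
    """Sketch of the BoW classifier: averaged GloVe vectors -> 3-layer MLP with ReLU."""
    def __init__(self, emb_dim: int = 300, num_labels: int = 2):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4 * emb_dim, emb_dim), nn.ReLU(),   # input is 4 * embedding dimension
            nn.Linear(emb_dim, emb_dim), nn.ReLU(),
            nn.Linear(emb_dim, emb_dim), nn.ReLU(),
        )
        self.out = nn.Linear(emb_dim, num_labels)         # assumed output projection

    def forward(self, premise: torch.Tensor, hypothesis: torch.Tensor) -> torch.Tensor:
        # premise / hypothesis: averaged word embeddings, shape (batch, emb_dim)
        feats = torch.cat([premise, hypothesis,
                           (premise - hypothesis).abs(),
                           premise * hypothesis], dim=-1)
        return self.out(self.mlp(feats))
```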
## A.2 Bilstm
The BiLSTM model consisted of GloVe (300 dimension embeddings trained on 840B CommonCrawl tokens) (Pennington et al., 2014a) vectors as the embedding layer. The average of all word vectors for the input sequence is treated as its final representation. The word vectors were passed through an LSTM unit. This unit was bidirectional, with 64 hidden units and 2 stacked LSTM layers.
The representations for the hypothesis and premises were concatenated and passed through three fully connected layers with ReLU activation units after each layer. We concatenate the premise, hypothesis, their absolute difference and their product and pass it into the first layer of the classifier. This input to the first layer is of hidden units * embedding dimension and the output is of embedding dimension.
Each subsequent hidden layer's input and output dimensions are embedding dimension * embedding dimension.
The model was trained with a vocabulary size of 50,000, a learning rate of 0.005, the maximum sequence length was 50 and a batch size of 32. We force all sentences to be of maximum sequence length using truncation or padding where applicable. We train the model for 15 epochs and select the one that achieves highest validation accuracy for our experiments.
## A.3 Bert
We fine tuned the BERT-large model available on HuggingFace 7. We added a classification head on top of the model using the AutoModel API on HuggingFace. The model was trained for 5 epochs with a learning rate of 3e-6 using the Adam optimizer.
In order to simulate larger batch sizes on smaller GPUs, we used gradient accumulation as well. We simulated a batch-size of 32 by accumulating gradients over two batches of size 16. The model which achieved the highest validation accuracy was used for our experiments.
## A.4 Roberta
We fine tuned the RoBERTa-large model available on HuggingFace 8. We added a classification head on top of the model using the AutoModel API on HuggingFace. The model was trained for 5 epochs with a learning rate of 3e-6 using the Adam optimizer. In order to simulate larger batch sizes on smaller GPUs, we used gradient accumulation as well. We simulated a batch-size of 32 by accumulating gradients over 8 batches of size 4. The model which achieved the highest validation accuracy was used for our experiments.
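The gradient-accumulation scheme used for both transformer models can be sketched as follows (shown with the RoBERTa setting of 8 micro-batches of size 4); the loop is a simplified illustration that assumes a HuggingFace-style model returning a loss, not the exact training code used here.

```python
def train_one_epoch(model, optimizer, dataloader, accum_steps: int = 8):
    """Simulate an effective batch size of 32 with 8 micro-batches of size 4."""
    model.train()
    optimizer.zero_grad()
    for step, batch in enumerate(dataloader):
        loss = model(**batch).loss / accum_steps  # scale so gradients average over micro-batches
        loss.backward()
        if (step + 1) % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
```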
## A.5 Gpt-3
We used a temperature of 0.0 for all the experiments to select the most likely token at each step, as this setting allows for reproducibility.

⁷https://huggingface.co/bert-large-uncased
⁸https://huggingface.co/roberta-large

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    temperature=0,
    max_tokens=1,
    top_p=1.0,
    frequency_penalty=0.1,
    presence_penalty=0.0
)
We restricted the model outputs to just one token.
Only "yes" or "no" are considered valid answers.
The model did not generate any output apart from these in all our experiments. We used the following prompt template:
Premise: {sentence1}
Hypothesis: {sentence2}
Does the premise entail the hypothesis?
Answer:
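As an illustration, the template can be filled and the single-token completion mapped back to an RTE label as sketched below; the yes/no-to-label mapping is an assumption based on the question wording, not code released with the paper.

```python
PROMPT_TEMPLATE = (
    "Premise: {sentence1}\n"
    "Hypothesis: {sentence2}\n"
    "Does the premise entail the hypothesis?\n"
    "Answer:"
)

def build_prompt(premise: str, hypothesis: str) -> str:
    return PROMPT_TEMPLATE.format(sentence1=premise, sentence2=hypothesis)

def parse_answer(completion_text: str) -> str:
    # The API call above returns a single token; only "yes"/"no" were observed.
    return "entailed" if completion_text.strip().lower().startswith("yes") else "not-entailed"
```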
## B Dataset Creation
The following process describes how we create a vetted, paraphrased version of the RTE dataset that tests whether models are robust to paraphrased input. First, we use a strong T5-based paraphraser to create three re-written sentences for each premise and hypothesis in the 2,400 pairs in the RTE1-3 test sets, resulting in 14,400 new sentences. To generate these paraphrases, we use top-k sampling during decoding.⁹ The re-writer model was fine-tuned on the Google PAWS dataset and can be found on Huggingface.¹⁰ To evaluate its ability to generate grammatically correct paraphrases, we sampled 100 sentence pairs with at least one valid paraphrase and manually went through them. Upon checking for grammaticality, we found a grammatical error in <8% of the sentences.
Since we want to test paraphrastic understanding beyond simple lexical replacement, we discarded the re-written sentences that had at most a 25%
lexical overlap with the corresponding original sentence. We use the Jaccard index as a measure of lexical similarity (1), where $\tau_s$ are the tokens in the original sentence and $\tau_p$ are the tokens in the paraphrase.
$$Score = \frac{\tau_s \cap \tau_p}{\tau_s \cup \tau_p} \qquad (1)$$
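Equation 1 can be computed directly over token sets, as in the sketch below; the whitespace tokenization and the exact direction of the overlap filter are assumptions for illustration.

```python
def jaccard(original: str, paraphrase: str) -> float:
    """Eq. (1): token-set Jaccard index between a sentence and its paraphrase
    (whitespace tokenization assumed)."""
    tau_s, tau_p = set(original.split()), set(paraphrase.split())
    return len(tau_s & tau_p) / len(tau_s | tau_p)

# Hedged usage, mirroring the 0.75 threshold from Section 3.1: keep only
# candidates that are lexically diverse with respect to the original sentence.
def is_lexically_diverse(original: str, paraphrase: str, threshold: float = 0.75) -> bool:
    return jaccard(original, paraphrase) <= threshold
```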
To ensure that the re-written sentences are indeed sentence-level paraphrases for the original sentences, we relied on crowdsource workers to remove low quality paraphrases. The Amazon Mechanical Turk HIT is described in detail in subsection B.2. We retain any paraphrases that get a similarity score above 75 out of 100.
⁹k=120; top-p=0.95
¹⁰https://huggingface.co/Vamsi/T5_Paraphrase_Paws
## B.1 Manual Verification
Before crowdsourcing to get the best paraphrase generated for a given sentence, we conducted a manual evaluation to understand the average error rate of the paraphraser model used. As mentioned above, we sampled 100 sentence pairs with each pair having at least one valid paraphrase. The paraphrases for these sentences were evaluated for grammatical errors. Any semantic errors are handled during crowd-sourcing.
The errors can be classified into roughly three categories - repetition errors, tense errors and incorrect punctuation. Examples of each type can be found in Figure 6. Overall, we found the error rate to be small enough to continue using the paraphraser. We also asked MTurk workers to mark paraphrases as grammatically incorrect to ensure that the final dataset does not have any grammatically incorrect sentences.
## B.2 Mturk Hit
We used Amazon Mechanical Turk to identify ungrammatical paraphrases and to rate how well a generated paraphrase preserved the meaning of the original sentence. No filtering criteria were applied to crowdsource workers, and they were paid roughly $14.20 an hour.
Each annotator was presented with a reference sentence and a corresponding paraphrased sentence, and tasked to judge on a scale of 0 to 100 how closely the paraphrased sentence retains the meaning of the reference sentence. A similarity score of 100 means that the paraphrase is exactly the same in meaning as the reference, while a similarity score of 0 means that the meaning of the paraphrase is irrelevant or contradicts the reference sentence.
Additionally, the MTurk workers were asked to judge the grammaticality of the paraphrase by selecting whether the paraphrase was grammatically correct or not. Figure 7 includes the instructions we showed crowdsource workers for judging similarity between sentences.
| Original sentence | Paraphrase | Error |
|---|---|---|
| British servicemen detained | British servicemen detained by British servicemen detained | Repetition in the sentence |
| The state charges against Nichols are for 160 victims and one victim's fetus. | The state charges against Nichols are for 160 victims and one victims' fetus. | Incorrect apostrophe after "victims" |
| The engine can answer specific queries directly. | The engine can direct answer specific queries. | Adjective changed to "direct" |
Figure 6: Types of errors made by the paraphraser model
## Meaning Similarity Judgement
Hide the instructions
![10_image_0.png](10_image_0.png)
## Instructions
Thank you for participating in this HIT! You will evaluate how closely one sentence matches the meaning of another sentence. The goal is to improve comprehension of languages by computers: your assistance is crucial to building better technologies behind services like Amazon Alexa, Apple Siri, or Google Translate.
You will be presented with a "reference" sentence and 3 other sentences. On a scale of 0 to 100, we would like you to evaluate how closely a sentence matches the meaning of the reference.
A sentence with a score of 100 means it has an "identical meaning" to the reference sentence
(it may even be the original sentence itself!) A score of 0 means the meaning of the sentence is irrelevant or contradicting to the reference. Rarely, the sentences may contain materials some readers find offensive. If this happens, please mark it via the provided checkbox. We believe all or almost all of the sentences do not require this option.
Figure 7: Instructions for semantic similarity and grammaticality check.
## ACL 2023 Responsible NLP Checklist
A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations section at the end of the paper
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did you use or create scientific artifacts?**
Section 3
✓ B1. Did you cite the creators of artifacts you used?
Section 3
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Section 3, Appendix
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Appendix
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 3, Appendix
## C ✓ **Did you run computational experiments?**
Sections 4-5, Appendix
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4 C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Not applicable. Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Section 3
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Appendix
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Appendix
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Appendix D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
tang-etal-2023-pre | Are Pre-trained Language Models Useful for Model Ensemble in {C}hinese Grammatical Error Correction? | https://aclanthology.org/2023.acl-short.77 | Model ensemble has been in widespread use for Grammatical Error Correction (GEC), boosting model performance. We hypothesize that model ensemble based on the perplexity (PPL) computed by pre-trained language models (PLMs) should benefit the GEC system. To this end, we explore several ensemble strategies based on strong PLMs with four sophisticated single models. However, the performance does not improve but even gets worse after the PLM-based ensemble. This surprising result sets us doing a detailed analysis on the data and coming up with some insights on GEC. The human references of correct sentences is far from sufficient in the test data, and the gap between a correct sentence and an idiomatic one is worth our attention. Moreover, the PLM-based ensemble strategies provide an effective way to extend and improve GEC benchmark data. Our source code is available at \url{https://github.com/JamyDon/PLM-based-CGEC-Model-Ensemble}. | # Are Pre-Trained Language Models Useful For Model Ensemble In Chinese Grammatical Error Correction?
Chenming Tang Xiuyu Wu Yunfang Wu∗
National Key Laboratory for Multimedia Information Processing, Peking University MOE Key Laboratory of Computational Linguistics, Peking University School of Computer Science, Peking University [email protected]
{xiuyu_wu, wuyf}@pku.edu.cn
## Abstract
Model ensemble has been in widespread use for Grammatical Error Correction (GEC), boosting model performance. We hypothesize that model ensemble based on the perplexity
(PPL) computed by pre-trained language models (PLMs) should benefit the GEC system. To this end, we explore several ensemble strategies based on strong PLMs with four sophisticated single models. However, the performance does not improve but even gets worse after the PLM-based ensemble. This surprising result sets us doing a detailed analysis on the data and coming up with some insights on GEC. The human references of correct sentences is far from sufficient in the test data, and the gap between a correct sentence and an idiomatic one is worth our attention. Moreover, the PLM-based ensemble strategies provide an effective way to extend and improve GEC benchmark data. Our source code is available at https://github.com/JamyDon/PLMbased-CGEC-Model-Ensemble.
## 1 Introduction
Grammatical Error Correction (GEC) is the task of automatically detecting and correcting errors in text (Bryant et al., 2022). Nowadays, there are two mainstream GEC approaches. The first is treating GEC as a low-resource machine translation task (Yuan and Briscoe, 2016), where sequence-tosequence models like BART (Lewis et al., 2020)
are used. This approach simply inputs the incorrect text to the encoder and gets the corrected result from the decoder. The second is treating GEC as a sequence tagging task, where the incorrect text is still taken as the input, but the output is edit tags
(keep, delete, add, replace, etc.) for each token.
After applying all the edits to the input text, the corrected result is then generated. The model used in this approach is also known as sequence-to-edit
∗ Corresponding author.
models and GECToR (Omelianchuk et al., 2020) is a typical one.
However, most researches on GEC focus on English while Chinese GEC (CGEC) has just started up. The Chinese language is different from English in many ways and its GEC is thus much harder. Instead of word inflection in many Western languages, the Chinese grammar is expressed by function words and word order, making CGEC
more difficult and complex for that we can't take word form as a handle. In addition, unlike English, we have very few datasets for training and testing CGEC, which sets us exploring training-free methods like model ensemble to further improve the performance of CGEC systems.
Because of the nature of GEC that corrections can be represented as several independent edits, model ensemble has been a popular way to improve GEC systems. In CGEC, Li et al. (2018), Liang et al. (2020) and Zhang et al. (2022) ensemble their models by majority voting on edits and achieve considerable improvement. Besides, Xie et al. (2016) adopt language models to improve neural language correction, following whom Junczys-Dowmunt et al. (2018) ensemble their GEC models using a language model probability.
Today, transformer-based (Vaswani et al., 2017)
Pre-trained Language Models (PLMs) have been in predominant use in NLP. However, we find few works on model ensemble using PLMs in CGEC.
In this work, we hypothesize that choosing the best ensemble output with the help of perplexity
(PPL) computed by PLMs should boost the final performance of CGEC. We experiment on ensemble of four CGEC models, including two sequenceto-sequence ones and two sequence-to-edit ones.
We try four ensemble strategies: traditional voting, sentence-level ensemble, edit-level ensemble, and edit-combination ensemble, the last three exploiting the power of PLMs.
To our surprise, the results of model ensemble with PLMs do not exceed those of traditional voting and are even worse than most of the single models.
To find out why a low PPL cannot lead to a better GEC performance, we carry out a detailed analysis on the ensemble results and get some insights on GEC:
1) In the test data, human references are insufficient, while PLM-based ensemble strategies produce valuable candidates, after being human checked, which may be considered as necessary complement to human references.
2) When facing an erroneous sentence, a human expert corrects it with the minimal effort, while PLM-based ensemble strategies generate more natural and idiomatic text, which is of great help for oversea language learners.
3) With the powerful ability, PLM-based models try to generate fluent sentences but sometimes ignore the original meaning of the source sentence, resulting in over-correction that should be addressed in future work.
## 2 Basic Models

## 2.1 Single CGEC Models
We implement four single models as baselines, with two seq2seq models and two seq2edit ones. All the models use the Lang-8 1 dataset for training.
Sequence-to-Sequence Models. The two seq2seq models are both based on BART-base-Chinese (Shao et al., 2021) and are implemented using fairseq 2 (Ott et al., 2019). Besides Lang-8, the HSK data 3 is also used for training. One seq2seq model adopts the "dropout-src" strategy, where each token in the input sentence is replaced with "[PAD]" with a probability of 10%. The other is pre-trained on synthetic data constructed from THUCNews 4 (Sun et al., 2016) before the normal training.
Sequence-to-Edit Models. We apply GECToR-Chinese 5 (Zhang et al., 2022) as our seq2edit models, with the pre-trained StructBERT-large-Chinese 6 (Wang et al., 2019) as the backbone. Our two seq2edit models differ only in their random seeds.
## 2.2 Pre-Trained Language Models
We adopt three PLMs to carry out model ensemble.
BERT-base-Chinese 7 (https://huggingface.co/bert-base-chinese). It is pre-trained on two tasks: Masked Language Modeling (MLM) and Next Sentence Prediction (NSP). In MLM, each token has a 15% chance of being selected; a selected token is replaced with "[MASK]" (80% of the time), a random word (10%), or kept unchanged (10%). Please refer to Devlin et al. (2019) for details.
MacBERT-base-Chinese 8 (https://huggingface.co/hfl/chinese-macbert-base). It is similar to BERT, but employs whole-word masking, N-gram masking, and similar-word replacement in MLM. In addition, Sentence-Order Prediction (SOP) is used instead of NSP. Please refer to Cui et al. (2020) for details.

GPT2-Chinese 9 (https://github.com/Morizeyao/GPT2-Chinese). It is an unofficial Chinese version of GPT-2 (Radford et al., 2019). It employs generative pre-training, predicting the next word in a sentence given only the previous words.
## 3 Ensemble Strategy
Taking the source sentence and the outputs of the four single models as input, we present four ensemble strategies. A diagram of our PLM-based ensemble strategies is shown in Figure 1.
## 3.1 Traditional Voting
Different models vote for the final result. For each sentence, we consider an edit operation correct if it is suggested by no fewer than T models. In our work, we experiment with T from 2 to 4, using the original code provided by Zhang et al. (2022) to carry out this voting strategy.
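For concreteness, a minimal sketch of edit-level majority voting is shown below; the function and variable names are illustrative rather than taken from the released code, and edits are assumed to be hashable (start, end, replacement) tuples.

```python
from collections import Counter

def vote_on_edits(edit_sets, threshold):
    """Keep an edit only if at least `threshold` single models propose it.

    edit_sets: one collection of edits per single model, where each edit is a
    hashable tuple such as (start, end, replacement).
    """
    counts = Counter(edit for edits in edit_sets for edit in set(edits))
    return {edit for edit, n in counts.items() if n >= threshold}

# With T = 3, only the edit proposed by at least three of the four models survives.
models_edits = [
    {(3, 4, "的")},
    {(3, 4, "的"), (7, 8, "了")},
    {(3, 4, "的")},
    set(),
]
print(vote_on_edits(models_edits, threshold=3))  # {(3, 4, '的')}
```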
## 3.2 Sentence-Level Ensemble
Using different PLMs, we compute the perplexities (PPLs) of the source sentence and the outputs of the four single models. Specifically, given a sentence $S = (w_1, w_2, ..., w_n)$, with $p_i$ denoting the probability of word $w_i$ computed by a PLM, the perplexity is

$$PPL = \left( \prod_{i=1}^{n} \frac{1}{p_i} \right)^{1/n}.$$

The sentence with the lowest PPL is chosen as the final output.
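A minimal sketch of sentence-level PPL scoring with a causal LM is given below; the checkpoint path is a placeholder, and the computation (exponentiated mean token negative log-likelihood) matches the formula above.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Placeholder checkpoint path; any autoregressive Chinese LM works the same way.
tokenizer = AutoTokenizer.from_pretrained("path/to/gpt2-chinese")
model = AutoModelForCausalLM.from_pretrained("path/to/gpt2-chinese").eval()

@torch.no_grad()
def perplexity(sentence: str) -> float:
    """PPL = exp(mean negative log-likelihood of the tokens)."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    # Passing labels=input_ids makes the model return the mean token-level NLL.
    loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# Score the source sentence and the single-model outputs; keep the lowest PPL.
candidates = ["这是原句。", "这是模型一的输出。", "这是模型二的输出。"]
best = min(candidates, key=perplexity)
```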
## 3.3 Edit-Level Ensemble
Figure 1: Diagram of the PLM-based ensemble strategies.

Given a source sentence S, all the edits suggested by the single models constitute a candidate set A, and the number of edit spans is denoted as m. An edit span is the start-end pair of an edit's position in the sentence. The set of all the edits (from different single models) on the i-th edit span (including "noop") is denoted as $A_i$. Thus, we can write $A = \bigcup_{i=1}^{m} A_i$, where $A_i = \{e^i_j \mid j = 1, 2, ..., |A_i|\}$, and $e^i_j$ denotes the j-th edit on the i-th edit span.
For each edit span ($A_i$ in A), we generate $|A_i|$ new sentences, each corresponding to a single edit in $A_i$. We then query the PLMs for the PPLs of these new sentences and accept the edit corresponding to the sentence with the lowest PPL, which we denote $e^i_{best}$. In other words, $e^i_{best}$ is the best edit (as decided by the PLMs) in $A_i$, i.e., on span i.
The final edit set $E_{final}$ combines the best edits of all spans:

$$E_{final}=\{e_{best}^{i}\mid i\in\{1,2,...,m\}\}.\quad(1)$$

The final hypothesis sentence is then produced by applying $E_{final}$ to the source sentence.
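A sketch of this procedure is shown below; `apply_edit` and `perplexity` are assumed helpers (applying a single edit and scoring a sentence with a PLM, respectively), and the offset bookkeeping needed when applying several edits is omitted.

```python
def edit_level_ensemble(source, span_to_edits, apply_edit, perplexity):
    """Pick, for every edit span, the edit whose one-edit sentence has the
    lowest PPL, then apply all span winners (E_final) to the source."""
    best_edits = []
    for span, edits in span_to_edits.items():                 # edits = A_i (incl. "noop")
        scored = [(perplexity(apply_edit(source, e)), e) for e in edits]
        best_edits.append(min(scored, key=lambda x: x[0])[1])  # e^i_best
    hypothesis = source
    for edit in best_edits:                                   # offset handling omitted
        hypothesis = apply_edit(hypothesis, edit)
    return hypothesis
```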
## 3.4 Edit-Combination Ensemble
One source sentence may contain more than one error. For each sentence, this strategy applies every combination of edits to the source sentence and generates many new sentences.
To be specific, given a source sentence S, the edit candidates A are still divided as $A = \bigcup_{i=1}^{m} A_i$, and we obtain all possible edit combinations as:
$$U=\{\{e_{j_{1}}^{1},e_{j_{2}}^{2},...,e_{j_{m}}^{m}\}\mid j_{i}\in\{1,2,...,|A_{i}|\}\}.\tag{2}$$
Thus we generate $\prod_{i=1}^{m}|A_i|$ new sentences, each corresponding to an edit combination in U. The sentence with the lowest PPL is accepted as the final output.
Considering the computational complexity, we only apply this strategy to sentences whose number of edit combinations is no more than 300. Such simple sentences make up 95.15% of MuCGEC-test and 98.90% of NLPCC-test. The remaining, more complex sentences are left unchanged.
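A sketch of this strategy with the same assumed helpers is shown below; `apply_edits` applies one chosen edit per span, and sentences with more than 300 combinations are skipped as described above.

```python
from itertools import product

def edit_combination_ensemble(source, span_to_edits, apply_edits, perplexity,
                              max_combinations=300):
    """Enumerate one edit per span (including "noop"), score every resulting
    sentence with a PLM, and keep the one with the lowest PPL."""
    edit_lists = list(span_to_edits.values())
    n_comb = 1
    for edits in edit_lists:
        n_comb *= len(edits)
    if n_comb > max_combinations:        # leave overly complex sentences unchanged
        return source
    candidates = (apply_edits(source, combo) for combo in product(*edit_lists))
    return min(candidates, key=perplexity)
```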
## 4 Experiments

## 4.1 Dataset and Evaluation Metrics
We carry out experiments on the MuCGEC test data (Zhang et al., 2022) and the NLPCC test data (Zhao et al., 2018). MuCGEC contains 7,063 sentences, each with at most three references, but its test references are not publicly available at present. NLPCC contains 2,000 sentences, each with one or two references (about 1.1 references on average). We carry out our analysis on the NLPCC test data.

On MuCGEC, we submit the results of our systems to the public evaluation website 10. On NLPCC, we use the tools provided by Zhang et al. (2022) to compute the char-level P (Precision), R (Recall), and F0.5 of the output. We also report word-level results on NLPCC-test for comparison with previous works.
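For reference, F0.5 weights precision more heavily than recall and is computed from precision P and recall R as

$$F_{0.5} = \frac{(1 + 0.5^2)\,P\,R}{0.5^2\,P + R}.$$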
10https://tianchi.aliyun.com/dataset/131328
| Strategy | MuCGEC-test | | | NLPCC-test | | | NLPCC-test (word-level) | | |
|---|---|---|---|---|---|---|---|---|---|
| | P | R | F0.5 | P | R | F0.5 | P | R | F0.5 |
| **Single models** | | | | | | | | | |
| seq2seq-1 | 55.00 | 28.32 | 46.28 | 43.93 | 28.21 | 39.52 | 46.17 | 29.51 | 41.48 |
| seq2seq-2 | 50.62 | 30.40 | 44.68 | 40.79 | 29.59 | 37.92 | 43.40 | 31.29 | 40.28 |
| seq2edit-1 | 45.80 | 28.41 | 40.81 | 38.42 | 26.79 | 35.35 | 43.08 | 30.05 | 39.64 |
| seq2edit-2 | 45.45 | 30.45 | 41.37 | 36.19 | 28.15 | 34.24 | 41.41 | 31.58 | 38.98 |
| Average of 4 | 49.22 | 29.40 | 43.29 | 39.83 | 28.19 | 36.76 | 43.52 | 30.61 | 40.10 |
| **Traditional voting** | | | | | | | | | |
| T = 2 | 52.58 | 33.61 | 47.25 | 42.71 | 32.62 | 40.22 | 45.58 | 34.66 | 42.88 |
| T = 3 | 69.10 | 21.68 | 48.07 | 60.81 | 21.00 | 44.09 | 58.39 | 21.55 | 43.52 |
| T = 4 | 76.13 | 15.35 | 42.48 | 67.33 | 14.96 | 39.61 | 64.51 | 15.35 | 39.32 |
| **Sentence-level** | | | | | | | | | |
| BERT-base-Chinese | 48.56 | 24.33 | 40.50 | 37.71 | 22.80 | 33.35 | 41.38 | 24.55 | 36.39 |
| MacBERT-base-Chinese | 46.83 | 33.35 | 43.33 | 37.62 | 31.30 | 36.16 | 42.24 | 34.15 | 40.33 |
| GPT2-Chinese | 47.36 | 35.01 | 44.24 | 37.75 | 33.20 | 36.74 | 41.94 | 36.13 | 40.63 |
| **Edit-level** | | | | | | | | | |
| BERT-base-Chinese | 41.31 | 21.79 | 35.04 | 33.19 | 20.59 | 29.57 | 36.69 | 23.24 | 32.89 |
| MacBERT-base-Chinese | 43.40 | 29.19 | 39.55 | 35.38 | 28.42 | 33.73 | 40.07 | 32.87 | 38.39 |
| GPT2-Chinese | 43.93 | 33.36 | 41.31 | 35.04 | 31.60 | 34.29 | 39.44 | 36.07 | 38.71 |
| **Edit-combination** | | | | | | | | | |
| BERT-base-Chinese | 42.90 | 20.18 | 35.01 | 34.25 | 21.56 | 30.64 | 37.56 | 23.94 | 33.72 |
| MacBERT-base-Chinese | 45.18 | 28.73 | 40.54 | 36.35 | 30.69 | 35.05 | 40.11 | 33.62 | 38.62 |
| GPT2-Chinese | 46.07 | 31.92 | 42.32 | 36.23 | 33.29 | 35.60 | 40.50 | 36.44 | 39.62 |

Table 1: P, R, and F0.5 of the single models and ensemble strategies on MuCGEC-test and NLPCC-test (char level), and on NLPCC-test at the word level.
## 4.2 Results
Table 1 shows the experimental results. The traditional voting strategy achieves the best performance, with a char-level F0.5 of 44.09 on NLPCC-test, significantly higher than that of the best single model. As the threshold T increases, precision rises while recall drops. F0.5 peaks at T = 3, in line with the finding of Tarnavskyi et al. (2022).
However, the PLM-based ensemble strategies perform much worse than the simple voting strategy, and even fall below most of the single models. In terms of precision and recall, traditional voting achieves higher precision but lower recall than the single models, while the PLM-based strategies show the opposite pattern. Among the three PLM-based ensemble strategies, the sentence-level one performs best. Among the different PLMs, GPT2-Chinese achieves the best results in all three ensemble strategies. This may be because BERT-based models are naturally suited to mask prediction rather than to computing PPLs for whole sentences. We therefore base our further analysis on GPT2-Chinese.
## 5 Analysis And Discussion
We design three ensemble strategies that choose the sequence with the lowest PPL as the final output, but why does the F0.5 score drop? In our work, all single models already contain PLMs, so ensembling them with yet another PLM amounts to using PLMs to judge PLMs, and the performance may benefit little. This is in line with the observation of Junczys-Dowmunt et al. (2018) that pre-trained single models gain little and can even degrade after PLM-based ensemble, while simpler single models benefit a lot. Besides this, are there any other reasons?
## 5.1 Statistical Results
To find out the cause of the poor performance of the PLM-based ensemble strategies, we randomly select 200 samples from the NLPCC test results of all three strategies, along with the best single model (seq2seq-1) for comparison, and ask two graduate students to analyze the output sentences in a double-blind manner. A third expert then arbitrates any inconsistencies. Instructions for the human annotators are shown in Appendix A.

Based on the human judgments, we summarize four types. **Exact** (E): the output is fluent and correct, in line with the reference. **Good** (G): the output is fluent and correct but differs from the reference, which indicates that the references are insufficient. **Over-corrected** (O): the output is fluent but does not preserve the original meaning of the source sentence. **Wrong** (W): the output has other problems that we do not address in this work.
The result of human annotation is reported in Table 2, and some examples of G and O are shown in Table 3.
| Strategy | E | G | O | W |
|-------------------------|-----|-----|-----|-----|
| seq2seq-1 (best single) | 38 | 42 | 9 | 111 |
| Sentence-level | 36 | 53 | 23 | 88 |
| Edit-level | 32 | 45 | 20 | 103 |
| Edit-combination | 32 | 59 | 21 | 88 |
Table 2: Human annotation of generated outputs.
Table 3: Examples of "Good" (G) and "Over-corrected" (O) outputs.
## 5.2 Discussion
The insufficiency of GEC references. In the outputs of the PLM-based ensemble strategies, about 1/4 ("G") are automatically judged wrong according to the gold references, but are in fact correct after human checking. If we count class G as correct, the number of sentences corrected by the PLM-based ensemble strategies (except the edit-level ensemble) exceeds that of seq2seq-1, the best single model.

This indicates that GEC references are insufficient, even though datasets like NLPCC provide multiple references. Since writing a correct sentence from scratch is much harder than judging whether a machine-generated sentence is correct, continuously adding human-checked outputs of PLM-based ensemble systems to the references may be a good way to improve the quality and diversity of GEC test data.
The goal of GEC. This is a significant issue: is it enough to merely rid a sentence of errors? Taking coding as an analogy, can we call a piece of code "good" when all the "errors" are fixed but pages of "warnings" are still flashing? For the "**Good**" samples, we compare the human references with the automatically generated sentences and find that many references are merely **correct** but not particularly **idiomatic**. In contrast, many output sentences of the PLM-based ensemble strategies are more natural and closer to native usage. If a GEC system is aimed at helping overseas students with their language learning, for example, then idiomaticity should be taken into consideration.
The over-correction of PLM-based models.
About 1/10 of the sentences generated by the PLM-based ensembles ("O") are over-corrected, i.e., the model "corrects" an already correct token and thus produces a wrong sentence. PLMs always choose the most fluent sentence with the lowest PPL, sometimes ignoring the original meaning of the source sentence. The over-correction of PLM-based generative models should be addressed in future work.
## 6 Conclusion
This paper introduces novel ensemble strategies for the GEC task that leverage the power of pre-trained language models (PLMs). We compare different model ensemble strategies in CGEC. Surprisingly, the PLM-based ensemble strategies do not benefit the system, which suggests that PPL and F0.5 have diverging goals. According to our analysis, the insufficiency of references in GEC remains a major problem, which should be continuously addressed in future work.
## Acknowledgement
This work is supported by the National Hi-Tech R&D Program of China (No. 2020AAA0106600),
the National Natural Science Foundation of China
(62076008) and the Key Project of Natural Science Foundation of China (61936012).
## Limitations
First, we do not include any single models without PLMs in their architecture for comparison, although few advanced models nowadays do without PLMs. Second, because of the way fairseq wraps its models, we do not have access to all the output probabilities of the single models and thus cannot apply the weighted combination of single models and PLMs used by Junczys-Dowmunt et al. (2018). Third, while BERT-based PLMs are good at mask prediction, we have not found a strategy that exploits this capacity without running into issues with conditional probabilities. Fourth, we carry out our experiments only on Chinese.
## Ethics Statement
About Scientific Artifacts. Since we focus on CGEC, all the code and tools are for the Chinese language and all data is in Chinese. All the scientific artifacts are used for GEC only. The artifacts provided by Zhang et al. (2022) are publicly available based on the Apache-2.0 license, on which we base our own codes and models.
About Computational Budget. We run all the model ensemble experiments on an Intel® Xeon® Gold 5218 CPU. Processing times are shown in Table 4.
| Strategy | MuCGEC-test | NLPCC-test |
|--------------------|---------------|--------------|
| Traditional Voting | 1~2s | <1s |
| Sentence-level | 25min | 6min |
| Edit-level | 56min | 12min |
| Edit-combination | 2.5h | 25min |
Table 4: Processing times of different ensemble strategies.
About Reproducibility. All the model ensemble experiments are completely reproducible when the PLMs are frozen (i.e., no matter how many times we run the experiments, the results are exactly the same).
About Human Annotators. Each of the annotators is paid $20 per hour, above the legal minimum wage. The instructions are shown in Appendix A.
## References
Christopher Bryant, Zheng Yuan, Muhammad Reza Qorib, Hannan Cao, Hwee Tou Ng, and Ted Briscoe.
2022. Grammatical error correction: A survey of the state of the art. *arXiv preprint arXiv:2211.05166*.
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, and Guoping Hu. 2020. Revisiting pre-trained models for Chinese natural language processing. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 657–668, Online. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Marcin Junczys-Dowmunt, Roman Grundkiewicz, Shubha Guha, and Kenneth Heafield. 2018. Approaching neural grammatical error correction as a low-resource machine translation task. In *Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 595–606, New Orleans, Louisiana. Association for Computational Linguistics.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart:
Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880.
Chen Li, Junpei Zhou, Zuyi Bao, Hengyou Liu, Guangwei Xu, and Linlin Li. 2018. A hybrid system for Chinese grammatical error diagnosis and correction.
In *Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications*, pages 60–69, Melbourne, Australia. Association for Computational Linguistics.
Deng Liang, Chen Zheng, Lei Guo, Xin Cui, Xiuzhang Xiong, Hengqiao Rong, and Jinpeng Dong. 2020.
BERT enhanced neural machine translation and sequence tagging model for Chinese grammatical error diagnosis. In *Proceedings of the 6th Workshop on* Natural Language Processing Techniques for Educational Applications, pages 57–66, Suzhou, China.
Association for Computational Linguistics.
Kostiantyn Omelianchuk, Vitaliy Atrasevych, Artem Chernodub, and Oleksandr Skurzhanskyi. 2020. Gector–grammatical error correction: Tag, not rewrite. In *Proceedings of the Fifteenth Workshop*
on Innovative Use of NLP for Building Educational Applications, pages 163–170.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT
2019: Demonstrations.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Yunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, and Xipeng Qiu.
2021. Cpt: A pre-trained unbalanced transformer for both chinese language understanding and generation. *arXiv preprint arXiv:2109.05729*.
Maosong Sun, Jingyang Li, Zhipeng Guo, Zhao Yu, Y Zheng, X Si, and Z Liu. 2016. Thuctc: an efficient chinese text classifier. *GitHub Repository*.
Maksym Tarnavskyi, Artem Chernodub, and Kostiantyn Omelianchuk. 2022. Ensembling and knowledge distilling of large sequence taggers for grammatical error correction. *arXiv preprint arXiv:2203.13064*.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing* systems, 30.
Wei Wang, Bin Bi, Ming Yan, Chen Wu, Zuyi Bao, Jiangnan Xia, Liwei Peng, and Luo Si. 2019. Structbert: Incorporating language structures into pretraining for deep language understanding. arXiv preprint arXiv:1908.04577.
Ziang Xie, Anand Avati, Naveen Arivazhagan, Dan Jurafsky, and Andrew Y. Ng. 2016. Neural language correction with character-based attention. *CoRR*,
abs/1603.09727.
Zheng Yuan and Ted Briscoe. 2016. Grammatical error correction using neural machine translation. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 380–386.
Yue Zhang, Zhenghua Li, Zuyi Bao, Jiacheng Li, Bo Zhang, Chen Li, Fei Huang, and Min Zhang. 2022. MuCGEC: a multi-reference multi-source evaluation dataset for Chinese grammatical error correction. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3118–3130, Seattle, United States. Association for Computational Linguistics.
Yuanyuan Zhao, Nan Jiang, Weiwei Sun, and Xiaojun Wan. 2018. Overview of the nlpcc 2018 shared task:
Grammatical error correction. In *CCF International* Conference on Natural Language Processing and Chinese Computing, pages 439–445. Springer.
## A Instructions For Human Annotation
The instructions for human annotators mentioned in Section 5 are as follows:
1. You can see the data in "sample_200.txt",
which contains results of 200 sentences.
2. Each sample contains several lines, including
"Input" (the source sentence), "seq2seq-1", "Sentence-level", "Edit-level", "Edit-combination", and one or two "Reference" lines.
3. You need to annotate the "seq2seq-1", "Sentence-level", "Edit-level" and "Edit-combination" lines according to the input and reference(s).
4. To be specific, you should choose from the following four types. Exact (E): the output is fluent and correct, in line with the reference. Good (G):
the output is fluent and correct but differs from the reference, which indicates that the references are insufficient. Over-corrected (O): the output is fluent but does not preserve the original meaning of the source sentence. Wrong (W): the output has other problems that we do not address in this work.
5. Thank you for your contributions!
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4.1
✓ B1. Did you cite the creators of artifacts you used?
Section 4.1
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Ethics Statement
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Ethics Statement B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Not applicable. Left blank.
## C ✓ **Did You Run Computational Experiments?** Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? Ethics Statement
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Not applicable. Left blank.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Ethics Statement C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Section 5
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Appendix B
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Ethics Statement D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
dixit-etal-2023-improving | Improving Factuality of Abstractive Summarization without Sacrificing Summary Quality | https://aclanthology.org/2023.acl-short.78 | Improving factual consistency of abstractive summarization has been a widely studied topic. However, most of the prior works on training factuality-aware models have ignored the negative effect it has on summary quality. We propose EFACTSUM (i.e. Effective Factual Summarization), a candidate summary generation and ranking technique to improve summary factuality without sacrificing quality. We show that using a contrastive learning framework with our refined candidate summaries leads to significant gains on both factuality and similarity-based metrics. Specifically, we propose a ranking strategy in which we effectively combine two metrics, thereby preventing any conflict during training. Models trained using our approach show up to 6 points of absolute improvement over the base model with respect to FactCC on XSUM and 11 points on CNN/DM, without negatively affecting either similarity-based metrics or abstractiveness. | # Improving Factuality Of Abstractive Summarization Without Sacrificing Summary Quality
Tanay Dixit ∗ Fei Wang **Muhao Chen**
Figure 1: Overview of the EFACTSUM candidate summary generation and ranking approach.
Indian Institute of Technology Madras University of Southern California [email protected] {fwang598,muhaoche}@usc.edu
## Abstract
Improving factual consistency of abstractive summarization has been a widely studied topic.
However, most of the prior works on training factuality-aware models have ignored the negative effect this has on summary quality. We propose EFACTSUM (i.e., Effective **Fact**ual Summarization), a candidate summary generation and ranking technique to improve summary factuality without sacrificing summary quality. We show that using a contrastive learning framework with our refined candidate summaries leads to significant gains on both factuality and similarity-based metrics. Specifically, we propose a ranking strategy in which we effectively combine two metrics, thereby preventing any conflict during training. Models trained using our approach show up to 6 points of absolute improvement over the base model with respect to FactCC on XSUM and 11 points on CNN/DM, without negatively affecting either similarity-based metrics or abstractiveness.1
## 1 Introduction
Although recent methods have made significant improvements in abstractive summarization (Lewis et al., 2020; Raffel et al., 2020; Zhang et al., 2020),
they still lack a critical property: factual consistency. Recent works (Cao et al., 2020; Kryscinski et al., 2019; Maynez et al., 2020) have shown that a majority of model-generated summaries are unfaithful and suffer from a wide range of hallucinations (Tang et al., 2022). Making summarization models factually consistent is critical for their trustworthiness in real-world applications.
Recent studies have made several attempts to improve factuality of abstractive summarization by either modifying the maximum likelihood estimation
(MLE) training objective (Cao and Wang, 2021; Goyal and Durrett, 2021), directly optimizing factuality metrics using reinforcement learning (Cao et al., 2022) or improving the quality of the training data (Goyal and Durrett, 2021; Nan et al., 2021a).

∗ This work was done when the first author was visiting the University of Southern California.
1 Code is available at https://github.com/tanay2001/EFactSum.
However, most of these works have reported a negative relationship between factual consistency and summary quality (as measured by metrics such as ROUGE and BERTScore). For example, Goyal and Durrett (2021) improve factuality at the cost of a 6-point drop in ROUGE-L, and Wan and Bansal (2022) also observe a 2-point drop in ROUGE-L. Prior approaches have also optimized factuality at the cost of abstractiveness (Ladhak et al., 2022). This leads to a critical question: *Can we improve the factuality of summarization without sacrificing summary quality?*
To this end, we propose EFACTSUM (i.e.
Effective **Fact**ual Summarization): a candidate summary generation and ranking technique for contrastive summarization training (Fig. 1) that not only achieves significant gains in factuality of abstractive summarization but also improves the summary quality. Unlike prior works, which often sacrifice summary quality for improving faithfulness, we take an alternative approach to improve both faithfulness and summary quality. We make use of the fine-tuning strategy of Liu et al. (2022) and make key modifications to the ranking process. As depicted in Fig. 1, we start by generating a number of candidate summaries using existing fine-tuned models. From these summaries, we select a subset by effectively combining two evaluation metrics for the two different criteria (§2), thus avoiding optimizing one at the cost of the other. This technique helps obtain gains over methods that simply optimize one metric (§3.4). The promising results of EFACTSUM on XSUM and CNN/DM
have shown consistent improvements in both aspects over strong baselines, demonstrating effectively enhanced summarization factuality without sacrificing the quality.
## 2 Approach
Given a document D, the task of summarization seeks to generate a summary S that satisfies conditions such as factuality and coherence. The standard fine-tuning process uses Maximum Likelihood Estimation (MLE). Inspired by Liu et al. (2022), in addition to the cross-entropy loss we incorporate a contrastive loss that encourages the model to assign higher probability mass to more factual summaries. Formally, for every training document D and a ranked list of the most probable candidate summaries $[S_1, S_2, \ldots, S_n]$, the model learns to rank the summaries according to their factuality score. To achieve this, we use the following loss:
$${\mathcal{L}}_{C L}=\sum_{i}\sum_{j>i}\operatorname*{max}(0,f(S_{j})-f(S_{i})+\lambda_{i j}),\,\,(1)$$
where $S_i$ and $S_j$ are two different candidate summaries with $S_i$ ranked higher than $S_j$, $\lambda_{ij} = (j-i) \cdot \lambda$ is a rank-based margin, and $f(\cdot)$ is the estimated log-probability normalized by length:
$$f(S)={\frac{\sum_{t=1}^{l}\log p_{g\theta}(s_{t}|D,S_{<t};\theta)}{|S|^{\alpha}}}.\quad\quad(2)$$
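A minimal PyTorch-style sketch of the contrastive loss in Eq. (1) is given below; it assumes `scores` already holds the length-normalized log-probabilities f(S_i) ordered from the highest- to the lowest-ranked candidate, and it is a sketch rather than the authors' released implementation.

```python
import torch

def ranking_loss(scores: torch.Tensor, margin: float) -> torch.Tensor:
    """Pairwise margin loss over candidates sorted by rank.

    scores[i] = f(S_i), the model's length-normalized log-probability of the
    i-th ranked candidate (index 0 = highest-ranked candidate).
    """
    loss = scores.new_zeros(())
    n = scores.size(0)
    for i in range(n):
        for j in range(i + 1, n):
            lam_ij = (j - i) * margin                        # rank-based margin
            loss = loss + torch.clamp(scores[j] - scores[i] + lam_ij, min=0)
    return loss

# Example with four candidates, best-ranked first.
scores = torch.tensor([-0.8, -1.1, -1.0, -1.5])
print(ranking_loss(scores, margin=0.001))
```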
Candidate Set Generation. To generate the candidate summary set $\{S_i\}$, we make use of an existing model and sample summaries using diverse beam search (Vijayakumar et al., 2018). We observe that using only the model trained with cross-entropy leads to a number of unfaithful summaries. In order to generate more faithful summaries, we also make use of factually improved models.
Ranking Strategy. Since our primary goal is to optimize factuality without adversarially affecting summary quality, we need to consider two metrics when deciding the ideal ranking. To measure the factuality of $S_i$, we choose FactCC
(Kryscinski et al., 2020) because it correlates well with human judgments of faithfulness (Pagnoni et al., 2021) and it is also computationally more efficient than other question-answering based metrics
(Scialom et al., 2021). To measure the summary quality, we use the popular ROUGE metric (Lin, 2004). Among the candidate summaries scored as faithful, we choose the top m summaries with the highest ROUGE scores. We select the set of unfaithful summaries in the same way, except that we choose the m summaries with the lowest ROUGE scores. This technique of incorporating two evaluation metrics helps overcome the inherent conflict between them (Chaudhury et al., 2022). We highlight the importance of the proposed steps in §3.4. Finally, these 2m summaries are used to create the ranked list of candidate summaries for each article in the training set. The intuition behind this approach is that, since FactCC scores are not confidence scores, summaries from only one set cannot provide a sufficient supervision signal; training the model with balanced summaries from both sets is more beneficial.
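A sketch of this selection step is shown below; `is_faithful` (a FactCC-style binary judgment) and `rouge` (a ROUGE score against the reference) are assumed scorers, and the exact within-group ordering is a design choice not specified here.

```python
def build_ranked_candidates(candidates, reference, is_faithful, rouge, m):
    """Keep the m faithful candidates with the highest ROUGE and the m
    unfaithful candidates with the lowest ROUGE; faithful ones rank first."""
    faithful = [s for s in candidates if is_faithful(s)]
    unfaithful = [s for s in candidates if not is_faithful(s)]
    if len(faithful) < m or len(unfaithful) < m:
        return None  # the article is dropped from the training set
    top = sorted(faithful, key=lambda s: rouge(s, reference), reverse=True)[:m]
    bottom = sorted(unfaithful, key=lambda s: rouge(s, reference))[:m]
    return top + bottom  # ranked list S_1 ... S_2m fed to the contrastive loss
```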
Finally, our training objective combines the cross-entropy loss and our contrastive loss
$$\mathcal{L}_{total}=\mathcal{L}_{CE}+\gamma\,\mathcal{L}_{CL},\quad(3)$$
where $\gamma$ is the weight of the contrastive loss.
## 3 Experiments
We state the experimental setup in §3.1 and report the results in §3.2, followed by an abstractiveness analysis in §3.3. In §3.4, we analyze the importance of the various components in our approach.
## 3.1 Experimental Settings
Datasets. To understand the effectiveness of EFACTSUM, we make use of two widely-used news summarization datasets, XSUM (Narayan et al., 2018) and CNN/DM (Hermann et al., 2015).
| Model | R-1 | R-L | BS. | FactCC | DAE ↓ |
|---|---|---|---|---|---|
| **XSUM** | | | | | |
| PEGASUS | 47.07 | 39.26 | 89.19 | 24.33 | 0.426 |
| BRIO | 48.69 | 40.13 | 90.87 | 21.47 | 0.452 |
| FASum | 29.72 | 23.29 | 88.57 | 26.08 | 0.616 |
| DAE | 38.63 | 30.22 | 88.44 | 26.66 | 0.462 |
| CLIFF | 46.33 | 38.27 | 88.96 | 24.54 | 0.386 |
| EFACTSUM | 47.24 | 39.45 | 89.79 | 30.48 | 0.417 |
| **CNN/DM** | | | | | |
| BART | 43.04 | 39.41 | 87.21 | 49.07 | 0.049 |
| BRIO | 47.53 | 44.02 | 89.12 | 30.35 | 0.093 |
| FASum | 40.40 | 36.97 | 88.23 | 51.17 | 0.046 |
| CLIFF | 44.14 | 40.72 | 88.82 | 51.84 | 0.047 |
| EFACTSUM | 44.37 | 40.92 | 88.36 | 60.74 | 0.041 |

Table 1: Summary quality (R-1, R-L, BERTScore) and factuality (FactCC, DAE; lower is better for DAE) on XSUM and CNN/DM.
Baselines. In addition to models fine-tuned with *cross-entropy* and the competitive fine-tuning technique **BRIO** (Liu et al., 2022), we compare EFACTSUM with prior works that modify the fine-tuning process to improve factuality, including (1) **CLIFF** (Cao and Wang, 2021), which uses contrastive learning to train summarization models to differentiate between consistent and hallucinated summaries, (2) **FASum** (Zhu et al., 2021), which modifies the Transformer architecture by incorporating knowledge graphs for factual consistency, and (3) **DAE** (Goyal and Durrett, 2021), which masks out nonfactual tokens during training. The DAE comparison is only available for the XSUM dataset.
Metrics. To evaluate factuality, we use FactCC (Kryscinski et al., 2020), a popular BERT-based metric that measures whether the generated output is faithful. We also consider DAE (Goyal and Durrett, 2020), a textual-entailment-based metric that correlates well with human judgments of factuality (Tang et al., 2022); it uses an arc entailment model to evaluate the factuality of a summary. We use its token-level score to complement the sentence-level scores from FactCC. For quality assessment, we use ROUGE (Lin, 2004) and BERTScore (Zhang et al., 2019) to evaluate the summary against the reference.
Implementation Details. We use CLIFF and cross-entropy-trained models to generate the candidate set of summaries $(S_1, S_2, ..., S_n)$. We use n = 6 and only retain training articles that contain at least 2 factual and 2 non-factual candidate summaries. Using this new subset of training data, we fine-tune BART-Large (Lewis et al., 2020) on CNN/DM and PEGASUS (Zhang et al., 2020) on XSUM. More details are in Appx. §A.
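As a hedged sketch, candidate generation with diverse beam search via the Hugging Face `generate` API could look as follows; the decoding hyperparameters shown here are illustrative rather than the exact values used in Appx. §A.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/pegasus-xsum")
model = AutoModelForSeq2SeqLM.from_pretrained("google/pegasus-xsum")

def generate_candidates(article: str, n: int = 16):
    """Return n diverse candidate summaries for one article."""
    inputs = tokenizer(article, truncation=True, max_length=512, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        num_beams=n,
        num_beam_groups=n,       # diverse beam search (Vijayakumar et al., 2018)
        diversity_penalty=1.0,   # illustrative value
        num_return_sequences=n,
        max_length=64,           # illustrative summary length cap
    )
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)
```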
## 3.2 Main Results
We report the results of models fine-tuned using our approach in Tab. 1. Outputs of models fine-tuned with our strategy are presented in Tab. 2 and Appx. §C. Overall, we observe that the proposed EFACTSUM leads to improvements on both factuality metrics while preserving or improving performance on reference-based similarity metrics.
For XSUM, EFACTSUM achieves a notable relative gain of 25% on FactCC and 3% on DAE
(token) in comparison to PEGASUS while simultaneously showing non-trivial gains on both ROUGE
and BERTScore. Although EFACTSUM is trained to optimize FactCC, it also does well on the other evaluation metric, suggesting that the training process does not exploit any biases of the evaluation metrics. One should note that although CLIFF does better on DAE, it sacrifices summary quality. A similar story holds for CNN/DM, where EFACTSUM achieves relative gains of 20% and 16% on FactCC and DAE, respectively. Unlike some prior works, this gain in factuality does not come at the cost of summary quality or abstractiveness (§3.3). Although BRIO outperforms our approach on ROUGE and BERTScore, it substantially decreases the factuality scores, which is not desirable. Our approach aims to strike a balance between factuality and summary quality.
## 3.3 Factuality Vs Abstractiveness Tradeoff
Ladhak et al. (2022) show that it is naively possible to increase the factuality of generated summaries by increasing extractiveness (decreasing abstractiveness). Hence we analyze the extractiveness level of the generated summaries to understand if our method suffers from this tradeoff.
Along with the extractiveness scores (Grusky et al.,
2018), we compute the MINT (Metric for lexical INdependence of generated Text) scores and the abstractiveness-adjusted metrics scores (Dreyer et al., 2023). Fig. 2 depicts the extractiveness levels for the various summarization systems. Scores are
| System | Summary | Article |
|-----------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------|
| Lesbos used to get more than 5,000 a day. On Monday there were just four. But with Europe's borders | | |
| Base. | The number of migrants and refugees arriving on the closed, more than 50,000 migrants remain in Greece waiting for a decision about their futures But Greek island of Lesbos has halved in the past week. here she is in Moria, once a transit camp for migrants, now since the EU deal with Turkey, a detention centre, run by central government It is another sign of how Greece was simply overwhelmed by | |
| Ours | The number of migrants arriving on the Greek island the numbers who came, while itself in the middle of an economic crisis. Most of those who arrived of Lesbos has halved since the EU struck a deal with before March 20, the start of the EU-Turkey agreement, are free to come and go, but cannot leave the island. Those who came after that date are locked in, waiting for a decision . . . Turkey to stem the flow. The US investment bank will switch to video interviews with first-round undergraduate candidates | |
| Base | Goldman Sachs will no longer conduct face-to-face from next month Goldman hoped the move will allow it to find students who do not attend top-tier interviews with students applying for analyst jobs. US universities It will still conduct second-round interviews in person. The shift will not affect business schools or professional hires, but is part of a broader move by Goldman to use technology in | |
| Ours | Goldman Sachs is changing the way it hires students. the hiring process. The new method will include structured interviews, which the bank said will allow for greater comparisons between candidates . . . The plane was flying over the Amanos Mountains in the southern province of Osmaniye on Monday when it lost radio contact, Anatolia news agency said Rescuers found the pilot's body near to the wreckage of the aircraft. Osmaniye Governor Celalettin Cerrah had earlier announced that a cockpit window and some other pieces of the aircraft had been found in the Caksir area. . . People living around the village of Yarpuz, about 25km (16 miles) north of the Syrian border, said that they had heard a loud bang like an explosion, according to local media A Turkish fighter jet was shot down by Syria | |
| Ours | A Turkish air force pilot has been killed after his jet crashed near the Syrian border , officials say. over the Mediterranean in June 2012, after Syrian forces said it had entered the country's airspace. | |
| Base | The pilot of a Turkish military jet has died after it crashed in the south-west of the country, state media report. | |
Table 2: Sample summaries from PEGASUS (Base) and EFACTSUM (Ours) on XSUM articles. The information from the article that contradicts the Base summaries is in **bold**. We can see that the outputs from our fine-tuned model not only generate faithful summaries but also capture the essential information from the article well.
Figure 2: Extractiveness levels of the various summarization systems.
also presented in Appx. §B. We can observe that the extractiveness score for our model (EFACTSUM)
is lower than that of other models; it also achieves higher MINT scores (Tab. 3), which measure the abstractiveness of the summaries. Additionally, EFACTSUM shows higher scores on the abstractiveness-adjusted FactCC metric (µFactCC) for both datasets. This confirms that the additional gains in factuality do not come at the cost of abstractiveness.
## 3.4 Ablation Study
In order to justify the modifications made in the candidate ranking process of EFACTSUM, we compute baselines that highlight the importance of each individual component. We perform the following studies using PEGASUS fine-tuned on XSUM.

Candidate Selection Process. As explained in §2, we restrict the number of candidate summaries in order to maintain a class-*balanced* set. We relax this constraint by simply scoring all the candidate summaries using FactCC. This is represented as EFACTSUM- w/o select. in Tab. 4. We can observe that this variant improves model factuality but still falls short of the main approach by 4 points, highlighting the advantage of focusing on generating quality training data.

| Dataset | Model | MINT | µFactCC |
|---------|-------|------|---------|
| CNN/DM | BART | 57.94 | 42.14 |
| | CLIFF | 52.18 | 39.77 |
| | EFACTSUM | **60.70** | **47.47** |
| XSUM | PEGASUS | 25.21 | 44.12 |
| | CLIFF | 25.31 | 43.36 |
| | EFACTSUM | **31.24** | **48.61** |

Table 3: MINT and abstractiveness-adjusted FactCC (µFactCC) scores.
Dual Scoring Technique. To understand the importance of using ROUGE to select the top candidates from both factual and non-factual sets, we ablate this step by selecting the top factual and non-factual summaries using FactCC itself. This is marked as EFACTSUM- w/o ROUGE in Tab. 4.
Although the gains from this model on factuality are almost the same as EFACTSUM, it negatively affects the ROUGE score.
| Model | R-L | FactCC |
|-----------------------|-------|----------|
| PEGASUS | 39.26 | 24.33 |
| EFACTSUM- w/o select. | 38.32 | 26.38 |
| EFACTSUM- w/o ROUGE | 38.34 | 29.83 |
| EFACTSUM | 39.45 | 30.48 |
## 4 Related Work
Factual consistency in abstractive summarization has garnered much attention recently (Goyal and Durrett, 2020; Zhu et al., 2021). Existing works have explored improving factual consistency during fine-tuning, inference, and pre-training stages, respectively. For factual fine-tuning, works have applied contrastive learning (Cao and Wang, 2021; Nan et al., 2021b), reinforcement learning (Gunasekara et al., 2021) or knowledge integration
(Zhu et al., 2021) to teach the model to identify summaries with high factual consistency, while Wan and Bansal (2022) modify the pre-training process to introduce factuality awareness. Several works have also improved summary factuality through post-processing at inference time, such as correcting errors and re-ranking by factuality scores (Cao et al., 2020; Dong et al., 2020; Balachandran et al., 2022; Chen et al., 2021; Zhu et al., 2021). Our work differs from these methods in that we improve both factuality and summary quality, whereas they often sacrifice one for the other.
## 5 Conclusion
We present EFACTSUM (Effective **Fact**ual Summarization), a candidate summary generation and ranking technique for contrastive summarization training, which helps make models more faithful without adversely affecting summary quality.
Results show that this simple, yet effective method can achieve consistent gains on both factuality and similarity-based metrics without negatively affecting the degree of abstractiveness. We hope that our findings will encourage future research on factuality-consistent summarization to focus more on the tradeoffs between summary quality and factuality.
## Acknowledgement
We appreciate the reviewers for their insightful comments and suggestions. We would also like to thank Raj Dabre and Sumanth Doddapaneni for their feedback on the initial versions of the work.
Tanay Dixit was supported by the NSF REU Site Grant 2051101. Fei Wang was supported by the Annenberg Fellowship at USC. Muhao Chen was supported by the NSF Grant IIS 2105329, by Air Force Research Laboratory under agreement number FA8750-20-2-10002, by an Amazon Research Award and a Cisco Research Award. Computing of this work was partly supported by a subaward of NSF Cloudbank 1925001 through UCSD.
## Limitations
While our approach helps train factuality-aware summarization models, it comes at an additional computational cost: training takes about 3x as long as the vanilla cross-entropy model. There is also additional overhead in generating and scoring the candidate summaries for each article in the training dataset, but we believe the gains justify this cost. Improving faithfulness in summarization models is a challenging task. Although we improve over prior work on factuality metrics, like the compared prior works, we have not focused on numerical consistency. This could be a meaningful direction for follow-up work.
## References
Vidhisha Balachandran, Hannaneh Hajishirzi, William Cohen, and Yulia Tsvetkov. 2022. Correcting diverse factual errors in abstractive summarization via postediting and language model infilling. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9818–9830, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Meng Cao, Yue Dong, and Jackie Cheung. 2022. Hallucinated but factual! inspecting the factuality of hallucinations in abstractive summarization. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long* Papers), pages 3340–3354, Dublin, Ireland. Association for Computational Linguistics.
Meng Cao, Yue Dong, Jiapeng Wu, and Jackie Chi Kit Cheung. 2020. Factual error correction for abstractive summarization models. In Proceedings of the 2020 Conference on Empirical Methods in Natural
Language Processing (EMNLP), pages 6251–6258, Online. Association for Computational Linguistics.
Shuyang Cao and Lu Wang. 2021. CLIFF: Contrastive learning for improving faithfulness and factuality in abstractive summarization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6633–6649, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Subhajit Chaudhury, Sarathkrishna Swaminathan, Chulaka Gunasekara, Maxwell Crouse, Srinivas Ravishankar, Daiki Kimura, Keerthiram Murugesan, Ramón Fernandez Astudillo, Tahira Naseem, Pavan Kapanipathi, and Alexander Gray. 2022. XFACTOR: A cross-metric evaluation of factual correctness in abstractive summarization. In *Proceedings of the 2022 Conference on Empirical Methods* in Natural Language Processing, pages 7100–7110, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Sihao Chen, Fan Zhang, Kazoo Sone, and Dan Roth.
2021. Improving faithfulness in abstractive summarization with contrast candidate generation and selection. In *Proceedings of the 2021 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5935–5941.
Yue Dong, Shuohang Wang, Zhe Gan, Yu Cheng, Jackie Chi Kit Cheung, and Jingjing Liu. 2020. Multifact correction in abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 9320–9331.
Markus Dreyer, Mengwen Liu, Feng Nan, Sandeep Atluri, and Sujith Ravi. 2023. Evaluating the tradeoff between abstractiveness and factuality in abstractive summarization. In Findings of the Association for Computational Linguistics: EACL 2023, pages 2089–
2105, Dubrovnik, Croatia. Association for Computational Linguistics.
Tanya Goyal and Greg Durrett. 2020. Evaluating factuality in generation with dependency-level entailment.
In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 3592–3603, Online.
Association for Computational Linguistics.
Tanya Goyal and Greg Durrett. 2021. Annotating and modeling fine-grained factuality in summarization.
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1449–1462, Online. Association for Computational Linguistics.
Max Grusky, Mor Naaman, and Yoav Artzi. 2018.
Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 708–719, New Orleans, Louisiana. Association for Computational Linguistics.
Chulaka Gunasekara, Guy Feigenblat, Benjamin Sznajder, Ranit Aharonov, and Sachindra Joshi. 2021.
Using question answering rewards to improve abstractive summarization. In *Findings of the Association for Computational Linguistics: EMNLP 2021*,
pages 518–526, Punta Cana, Dominican Republic.
Association for Computational Linguistics.
Karl Moritz Hermann, Tomás Kociský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In *NIPS*, pages 1693–1701.
Wojciech Kryscinski, Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, and Richard Socher. 2019.
Neural text summarization: A critical evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 540–551, Hong Kong, China. Association for Computational Linguistics.
Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 9332–9346, Online. Association for Computational Linguistics.
Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres. 2019. Quantifying the carbon emissions of machine learning. *arXiv* preprint arXiv:1910.09700.
Faisal Ladhak, Esin Durmus, He He, Claire Cardie, and Kathleen McKeown. 2022. Faithful or extractive?
on mitigating the faithfulness-abstractiveness tradeoff in abstractive summarization. In *Proceedings* of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 1410–1421, Dublin, Ireland. Association for Computational Linguistics.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain.
Association for Computational Linguistics.
Yixin Liu, Pengfei Liu, Dragomir Radev, and Graham Neubig. 2022. BRIO: Bringing order to abstractive summarization. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2890–2903, Dublin, Ireland. Association for Computational Linguistics.
Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906–1919, Online. Association for Computational Linguistics.
Feng Nan, Ramesh Nallapati, Zhiguo Wang, Cicero Nogueira dos Santos, Henghui Zhu, Dejiao Zhang, Kathleen McKeown, and Bing Xiang. 2021a. Entitylevel factual consistency of abstractive text summarization. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2727–2733, Online. Association for Computational Linguistics.
Feng Nan, Cicero Nogueira dos Santos, Henghui Zhu, Patrick Ng, Kathleen McKeown, Ramesh Nallapati, Dejiao Zhang, Zhiguo Wang, Andrew O. Arnold, and Bing Xiang. 2021b. Improving factual consistency of abstractive summarization via question answering.
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6881–6894, Online. Association for Computational Linguistics.
Shashi Narayan, Shay B. Cohen, and Mirella Lapata.
2018. Don't give me the details, just the summary!
topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium. Association for Computational Linguistics.
Artidoro Pagnoni, Vidhisha Balachandran, and Yulia Tsvetkov. 2021. Understanding factuality in abstractive summarization with FRANK: A benchmark for factuality metrics. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 4812–4829, Online. Association for Computational Linguistics.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67.
Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano, Alex Wang, and Patrick Gallinari. 2021. QuestEval: Summarization asks for fact-based evaluation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6594–6604, Online
and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Liyan Tang, Tanya Goyal, Alexander R. Fabbri, Philippe Laban, Jiacheng Xu, Semih Yavuz, Wojciech Kryscinski, Justin F. Rousseau, and Greg Durrett. 2022. Understanding factual errors in summarization: Errors, summarizers, datasets, error detectors. *arXiv preprint arXiv:2205.12854*.
Ashwin Vijayakumar, Michael Cogswell, Ramprasaath Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2018. Diverse beam search for improved description of complex scenes. *Proceedings of the AAAI Conference on Artificial Intelligence*,
32(1).
David Wan and Mohit Bansal. 2022. FactPEGASUS:
Factuality-aware pre-training and fine-tuning for abstractive summarization. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1010–1028, Seattle, United States. Association for Computational Linguistics.
Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. 2020. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. In Proceedings of the 37th International Conference on Machine Learning, ICML'20. JMLR.org.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. *arXiv preprint arXiv:1904.09675*.
Chenguang Zhu, William Hinthorn, Ruochen Xu, Qingkai Zeng, Michael Zeng, Xuedong Huang, and Meng Jiang. 2021. Enhancing factual consistency of abstractive summarization. In *Proceedings of the* 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 718–733, Online.
Association for Computational Linguistics.
## A Additional Training Details
All experiments were carried out on four 24GB NVIDIA RTX A5000 GPUs. Experiments were conducted on a private infrastructure with a carbon efficiency of 0.432 kgCO2eq/kWh. Total emissions are estimated to be 4.84 kgCO2eq, of which 0 percent was directly offset. Estimations were made using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
| Hyperparameters | Value |
|-----------------------|---------------------|
| model | google/pegasus-xsum |
| no. of params | 568M |
| max learning rate | 1e-4 |
| warmup steps | 500 |
| number of epochs | 5 |
| per device batch size | 1 |
| accumulation step | 16 |
| margin | 0.001 |
| max seq length | 512 |
| mle weight | 1 |
| ranking weight | 10 |
Table 5: Hyperparameters for PEGASUS on XSUM.

XSUM: For every news article in XSUM, we use diverse beam search (Vijayakumar et al., 2018) to generate 16 summaries using fine-tuned PEGASUS[3] and 16 summaries using CLIFF (*maskrel*, *syslowcon*, *swapent* and *regenrel*). We use the standard ROUGE-L[4] implementation and, for FactCC, the checkpoint from the official implementation provided by the authors[5]. Articles for which we are unable to generate the required number of factual and non-factual summaries are discarded. In the end, our training dataset contains 145,040 data points. Choosing a bigger candidate size (>6) led to a decrease in the training dataset size, as mentioned in §2.
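The following is a minimal sketch (not the released implementation) of how such diverse beam search candidates can be generated with the Hugging Face transformers library; the checkpoint name follows footnote [3], while the diversity penalty and truncation length are illustrative assumptions.

```python
# Sketch: generating candidate summaries with diverse beam search
# (Vijayakumar et al., 2018) using the google/pegasus-xsum checkpoint.
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

MODEL_NAME = "google/pegasus-xsum"
tokenizer = PegasusTokenizer.from_pretrained(MODEL_NAME)
model = PegasusForConditionalGeneration.from_pretrained(MODEL_NAME)


def generate_candidates(article: str, num_candidates: int = 16) -> list:
    """Return `num_candidates` diverse candidate summaries for one article."""
    inputs = tokenizer(article, truncation=True, max_length=512, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        num_beams=num_candidates,        # one beam per candidate
        num_beam_groups=num_candidates,  # each beam in its own group -> diverse beam search
        diversity_penalty=1.0,           # illustrative value
        num_return_sequences=num_candidates,
        max_length=62,
    )
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)
```

The generated candidates are then scored with ROUGE-L and FactCC to select positive and negative examples, as described above.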
CNN/DM: For CNN/DM we follow the same process as described for XSUM, except that here we use BART Large[6]. For CLIFF on CNN/DM we use the *syslowcon_maskrel*, *syslowcon*, *syslowcon_swapent* and *syslowcon_regenrel* models. In the end, our training dataset has 246,796 articles.
[3] google/pegasus-xsum
[4] https://github.com/summanlp/evaluation/tree/master/ROUGERELEASE-1.5.5
[5] https://github.com/salesforce/factCC
[6] facebook/bart-large-cnn

Training details: For training, we use the Adam optimizer with linear learning rate scheduling. Tab. 5 and Tab. 6 contain the best set of hyperparameters for training PEGASUS on XSUM and BART on CNN/DM. These hyperparameters were obtained after an extensive grid search. We perform validation every 1,600 steps and save the best model according to the validation cross-entropy loss.
Decoding parameters: We follow Cao and Wang (2021) and use beam search to decode summaries. We use a beam size of 4 for BART on CNN/DM and a beam size of 8 for PEGASUS on XSUM. The remaining decoding parameters are listed in Tab. 7; a minimal sketch of the corresponding generation call is given after Tab. 7.

| Hyperparameters | Value |
|-----------------------|-------------------------|
| model | facebook/bart-large-cnn |
| no. of params | 400M |
| max learning rate | 3e-5 |
| warmup steps | 500 |
| number of epochs | 5 |
| per device batch size | 1 |
| accumulation step | 16 |
| margin | 0.001 |
| max seq length | 1024 |
| mle weight | 0.1 |
| ranking weight | 10 |

Table 6: Hyperparameters for BART on CNN/DM.
| Hyperparameters | Value |
|-------------------|---------|
| BART | |
| beam size | 4 |
| length penalty | 2 |
| max-length | 140 |
| min-length | 55 |
| PEGASUS | |
| beam size | 8 |
| length penalty | 0.6 |
| max-length | 62 |
| min-length | 11 |
Table 7: Decoding parameters for BART and PEGASUS
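As a usage illustration, the following sketch shows a decoding call with the BART settings from Tab. 7; it relies on the standard Hugging Face transformers API, and the input truncation length follows the max seq length in Tab. 6.

```python
# Sketch: decoding with the beam-search settings from Tab. 7 (BART on CNN/DM).
from transformers import BartForConditionalGeneration, BartTokenizer

CHECKPOINT = "facebook/bart-large-cnn"
tokenizer = BartTokenizer.from_pretrained(CHECKPOINT)
model = BartForConditionalGeneration.from_pretrained(CHECKPOINT)


def summarize(article: str) -> str:
    inputs = tokenizer(article, truncation=True, max_length=1024, return_tensors="pt")
    output_ids = model.generate(
        **inputs,
        num_beams=4,         # beam size for BART (Tab. 7)
        length_penalty=2.0,  # Tab. 7
        max_length=140,      # Tab. 7
        min_length=55,       # Tab. 7
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

For PEGASUS on XSUM, the analogous call would use a beam size of 8, a length penalty of 0.6, and length limits of 11–62 tokens, as listed in Tab. 7.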
## B Extractiveness Results
The extractiveness scores, computed with the coverage measure defined by Grusky et al. (2018), are reported in Tab. 8 and Tab. 9. The lower the score, the higher the abstractiveness. We observe that EFACTSUM achieves a lower abstraction level than CLIFF on both datasets.
| Model | Abstractiveness (↓) |
|-----------|-----------------------|
| Reference | 0.666 |
| Pegasus | 0.735 |
| CLIFF | 0.759 |
| EFACTSUM | 0.720 |
Table 8: Extractiveness analysis for XSUM
| Model | Abstractiveness (↓) |
|-----------|-----------------------|
| Reference | 0.880 |
| BART | 0.991 |
| CLIFF | 0.989 |
| EFACTSUM | 0.979 |
Table 9: Extractiveness analysis for CNN/DM
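For reference, the following is a simplified sketch of the extractive fragment coverage statistic of Grusky et al. (2018) underlying Tabs. 8 and 9; whitespace tokenization and lowercasing are simplifying assumptions rather than the exact implementation used for the reported numbers.

```python
# Sketch: extractive fragment coverage, i.e. the fraction of summary tokens
# that appear inside extractive fragments shared with the article.
def extractive_fragments(article_tokens, summary_tokens):
    """Greedily find the longest token spans of the summary copied from the article."""
    fragments, i = [], 0
    while i < len(summary_tokens):
        best = []
        for j in range(len(article_tokens)):
            if article_tokens[j] == summary_tokens[i]:
                k = 0
                while (i + k < len(summary_tokens) and j + k < len(article_tokens)
                       and summary_tokens[i + k] == article_tokens[j + k]):
                    k += 1
                if k > len(best):
                    best = summary_tokens[i:i + k]
        if best:
            fragments.append(best)
            i += len(best)
        else:
            i += 1
    return fragments


def coverage(article: str, summary: str) -> float:
    a, s = article.lower().split(), summary.lower().split()
    frags = extractive_fragments(a, s)
    return sum(len(f) for f in frags) / max(len(s), 1)
```

Lower coverage values indicate that fewer summary tokens are copied verbatim from the article, i.e. higher abstractiveness.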
## C Generated Outputs
More example outputs generated by EFACTSUM with different backbones, together with excerpts of the raw source documents, are shown in Tabs. 10 and 11.
| System | Summary | Article |
|--------|---------|---------|
| Base | The number of migrants and refugees arriving on the Greek island of Lesbos has halved in the past week. | Lesbos used to get more than 5,000 a day. On Monday there were just four. But with Europe's borders closed, more than 50,000 migrants remain in Greece waiting for a decision about their futures. ... But here she is in Moria, once a transit camp for migrants, now, since the EU deal with Turkey, a detention centre, run by central government. ... Most of those who arrived before March 20, the start of the EU-Turkey agreement, are free to come and go, but cannot leave the island. Those who came after that date are locked in, waiting for a decision ... |
| Ours | The number of migrants arriving on the Greek island of Lesbos has halved since the EU struck a deal with Turkey to stem the flow. | |
| Base | Hundreds of eggs from two rare bird species have been stolen. | The Mediterranean gull and black-headed gull eggs were illegally harvested from islands in Poole Harbour, Dorset. ... Mediterranean gulls are classified as a Schedule One species, meaning anyone disturbing their nests must have a special licence. Paul Morton, who runs the charity, said Mediterranean gulls' eggs were not approved for human consumption, and could be a "health issue". ... |
| Ours | Hundreds of gull eggs have been stolen from a protected colony. | |
| Base | A volcano in western Indonesia has erupted for the second time in two years, killing at least 11 people, officials say. | The victims were farming in an area that was declared unsafe because of its close proximity to Mount Sinabung. The volcano was still spewing ash on Sunday, hampering rescue operations. More than a dozen people were killed when it erupted in 2014. It also erupted in 2010, after having been dormant for 400 years. ... Indonesia, located on the Pacific Ring of Fire, has more than 120 active volcanoes. |
| Ours | At least 11 people have been killed after a volcano on the Indonesian island of Sumatra erupted, officials say. | |
| Base | The SNP and Labour have won seats on Edinburgh Council in two by-elections. | It was the first time the Single Transferable Vote (STV) system had been used to select two members in the same ward in a by-election. The SNP topped the vote in the Leith Walk by-election, while Scottish Labour won the second seat from the Greens. ... The SNP also held the Midlothian West seat on Midlothian Council with a swing of 6.3% from Labour. ... |
| Ours | A by-election has been held in Edinburgh to fill two seats on the city council. | |

Table 10: Sample summaries from PEGASUS (Base) and EFACTSUM (Ours) on XSUM articles.
| System | Summary | Article |
|--------|---------|---------|
| Base | Video shows the lions interacting with the visitors who stand inside a metal cage attached to a car. The video was captured by ekant veer, 35, an associate professor at the university of canterbury. A number of the lions are fed directly through the metal bars, while others receive meat dropped from the back of the cage. | ... the video, which was captured by ekant veer, 35, an associate professor at the university of canterbury, also shows the lions scaling the cage and eating meat through its bars ... located on the outskirts of christchurch, the wildlife park is unique in that the people are caged in order to view the animals, not the other way around. |
| Ours | the video was filmed at the orana wildlife park in new zealand, the country's only open-range zoo. the video shows the lions interacting with the visitors who stand inside a metal cage attached to a car. a number of the lions are fed directly through the metal bars, while others receive meat dropped from the back of the cage. | |
| Base | Taxpayers are having to find 11billion a year to top up the wages of millions of people working in supermarkets and other low paid jobs. Money is paid to some 5.2million workers in the form of tax credits and other benefits. Total amount of benefits paid to staff at some companies exceeds what the firms pay in corporation tax. | taxpayers are having to find 11billion a year to top up the wages of millions of people working in supermarkets and other low paid jobs. the money, which amounts to a massive public subsidy for the companies involved, is paid to some 5.2million workers in the form of tax credits and other benefits ... |
| Ours | Taxpayers are having to find 11billion a year to top up the wages of millions of people working in supermarkets and other low paid jobs. Money is paid to some 5.2million workers in the form of tax credits and other benefits. Total amount of benefits paid to staff at some companies exceeds what the firms pay in corporation tax. | |

Table 11: Sample summaries from BART Large (Base) and EFACTSUM (Ours) on CNN/DM articles.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
6
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 3
✓ B1. Did you cite the creators of artifacts you used?
3

B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
3
## C ✓ **Did You Run Computational Experiments?** 3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
3, Appendix A
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
3.2
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix A
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
steen-etal-2023-little | With a Little Push, {NLI} Models can Robustly and Efficiently Predict Faithfulness | https://aclanthology.org/2023.acl-short.79 | Conditional language models still generate unfaithful output that is not supported by their input. These unfaithful generations jeopardize trust in real-world applications such as summarization or human-machine interaction, motivating a need for automatic faithfulness metrics. To implement such metrics, NLI models seem attractive, since they solve a strongly related task that comes with a wealth of prior research and data. But recent research suggests that NLI models require costly additional machinery to perform reliably across datasets, e.g., by running inference on a cartesian product of input and generated sentences, or supporting them with a question-generation/answering step. In this work we show that pure NLI models {\_}can{\_} outperform more complex metrics when combining task-adaptive data augmentation with robust inference procedures. We propose: (1) Augmenting NLI training data toadapt NL inferences to the specificities of faithfulness prediction in dialogue;(2) Making use of both entailment and contradiction probabilities in NLI, and(3) Using Monte-Carlo dropout during inference. Applied to the TRUE benchmark, which combines faithfulness datasets across diverse domains and tasks, our approach strongly improves a vanilla NLI model and significantly outperforms previous work, while showing favourable computational cost. | # With A Little Push, Nli Models Can **Robustly And Efficiently Predict** Faithfulness
Julius Steen Juri Opitz Anette Frank Katja Markert Department of Computational Linguistics Heidelberg University 69120 Heidelberg, Germany
(steen|opitz|frank|markert)@cl.uni-heidelberg.de
## Abstract
Conditional language models still generate unfaithful output that is not supported by their input. These unfaithful generations jeopardize trust in real-world applications such as summarization or human-machine interaction, motivating a need for automatic faithfulness metrics.
To implement such metrics, NLI models seem attractive, since they solve a strongly related task that comes with a wealth of prior research and data. But recent research suggests that NLI
models require costly additional machinery to perform reliably across datasets, e.g., by running inference on a cartesian product of input and generated sentences, or supporting them with a question-generation/answering step.
In this work we show that pure NLI models can outperform more complex metrics when combining task-adaptive data augmentation with robust inference procedures. We propose: (1)
Augmenting NLI training data to adapt NL inferences to the specificities of faithfulness prediction in dialogue; (2) Making use of both entailment and contradiction probabilities in NLI, and (3) Using Monte-Carlo dropout during inference. Applied to the TRUE benchmark, which combines faithfulness datasets across diverse domains and tasks, our approach strongly improves a vanilla NLI model and significantly outperforms previous work, while showing favourable computational cost.
## 1 Introduction
Conditional language models suffer from a tendency to *hallucinate* information (Maynez et al.,
2020), resulting in generations that are not faithful to their input documents, which limits the trustworthiness of such models. This raises a need for automatic faithfulness metrics. In this context, models trained on natural language inference (NLI) (Bowman et al., 2015) are attractive since, intuitively, a generation being *faithful* implies it must be *entailed* by the source (Falke et al., 2019).
However, pure NLI models have seen mixed success in faithfulness evaluation (Falke et al., 2019; Kryscinski et al., 2020; Wang et al., 2020; Maynez et al., 2020). While in recent evaluation on the TRUE benchmark (Honovich et al., 2022),
which contains datasets from knowledge-grounded dialogue, summarization and paraphrasing, NLI-derived metrics perform best overall, they require impractically large models, or costly additional machinery such as question generation and answering models at inference, while still showing robustness issues. Thus we ask: What is still needed for pure NLI models to perform robustly across faithfulness datasets - while remaining cheap enough to serve as a lean and practical evaluation tool?
We enhance a relatively small NLI model to make it work robustly across tasks in three ways:
Task-Adaptive Data Augmentation. In NLI,
a hypothesis must be fully entailed by its supporting premise. However, in faithfulness, not all parts of the generation always need to be grounded. We identify an instance of this phenomenon in dialogue where parts of a turn can fulfill communicative functions such as hedging or establishing emotional connection and are often disregarded in faithfulness annotation. Hence, when applying NLI models to complete dialogue turns that may include statements irrelevant for grounding, we run a risk of producing incorrect unfaithfulness predictions.
To alleviate this issue, we propose a simple **data**
augmentation method to adapt NLI models to genres where they need to be aware of statements that must be exempt from NLI-based faithfulness evaluation. Our approach is computationally attractive, as it avoids an increase of cost at inference time.
Integration of NLI Contradiction Scores. Existing NLI faithfulness metrics typically use the entailment score for their predictions (Honovich et al., 2022; Falke et al., 2019; Kryscinski et al.,
2020). However, Chen and Eger (2022) show that subtracting the contradiction score from the entailment score (referred to as e-c) can improve NLI
performance in certain evaluation tasks. We show that there also is a strong positive effect of e-c for faithfulness prediction, and demonstrate that this is due to a high contradiction probability being a more reliable predictor of unfaithfulness than low entailment probability.
Monte-Carlo Dropout Inference. Applying NLI models to faithfulness prediction involves a domain shift from largely human-written data to automatically generated text. To make NLI model scores more robust under this shift, we propose to use Monte-Carlo dropout during inference (Srivastava et al., 2014). This essentially creates a cheap ensemble and has been shown to deal better with noisy labels (Goel and Chen, 2021). This approach leads to consistent score improvements in our tasks.
The combination of all modifications not only strongly improves over a baseline NLI model, but also outperforms all other metrics on TRUE, on average, while being **cheaper** and **smaller**.
## 2 Method Details

## 2.1 Task-Adaptive Data Augmentation
To illustrate that task requirements can be incompatible between faithfulness and NLI, consider the following instance from the Q2 dialogue corpus
(Honovich et al., 2021) that is labelled as faithful:
Grounding: American pancakes are similar to Scotch pancakes or drop scones.
Generation: yes , i love american pancakes , they are like scotch pancakes From an NLI perspective, the generation is clearly not entailed, since the statement "I love american pancakes" is not supported by the input.
To better prepare an NLI system for such genre or task-specific cases, we manually curate a small list of statements that should not influence the faithfulness prediction. We augment NLI data from the ANLI corpus (Nie et al., 2020) by adding a randomly chosen phrase from this set to each instance, while preserving the label. We then train an already fine-tuned NLI model on a concatenation of these augmented samples and original ANLI data. For training details see Appendix A.
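A minimal sketch of this augmentation step is given below, assuming ANLI examples stored as dictionaries; the phrase list shown is only an illustrative subset of the curated list in Appendix A (Table 6).

```python
import random

# Illustrative subset of the curated phrases (full list in Appendix A, Table 6).
AUGMENTATION_PHRASES = [
    "Sure! Here is what I know:",
    "I am not sure, but",
    "I love that!",
]


def augment_nli_example(example: dict) -> dict:
    """Prepend a randomly chosen task-specific phrase to the hypothesis, keeping the label."""
    phrase = random.choice(AUGMENTATION_PHRASES)
    return {
        "premise": example["premise"],
        "hypothesis": f"{phrase} {example['hypothesis']}",
        "label": example["label"],  # the NLI label is preserved
    }

# Usage (assumed data format, a list of dicts with premise/hypothesis/label):
# augmented = [augment_nli_example(ex) for ex in anli_train]
# The augmented examples are concatenated with the original ANLI data for training.
```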
## 2.2 Monte-Carlo Dropout
To compute scores under Monte-Carlo dropout, we randomly sample k dropout masks and compute the average of the model predictions. We set k = 15, since preliminary experiments showed that performance did not profit from additional samples.
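A sketch of this inference procedure in PyTorch is shown below; keeping the model in training mode so that dropout stays active is one common way to realize Monte-Carlo dropout (transformer encoders such as DeBERTa contain no batch normalization), and the function signature is an assumption rather than the exact interface of our implementation.

```python
import torch


def mc_dropout_probs(model, encoded_inputs, k: int = 15) -> torch.Tensor:
    """Average class probabilities over k stochastic forward passes (MC dropout)."""
    model.train()  # keep dropout layers active at inference time
    probs = []
    with torch.no_grad():
        for _ in range(k):
            logits = model(**encoded_inputs).logits
            probs.append(torch.softmax(logits, dim=-1))
    model.eval()
    return torch.stack(probs).mean(dim=0)  # shape: [batch_size, num_labels]
```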
## 3 Experimental Setup
We run experiments on TRUE (Honovich et al.,
2022), a benchmark that compiles a wide variety of faithfulness tasks in a standardized format. It contains summarization (Pagnoni et al., 2021; Maynez et al., 2020; Wang et al., 2020; Fabbri et al., 2021),
knowledge-grounded dialog (Honovich et al., 2021; Gupta et al., 2022; Dziri et al., 2022)
and paraphrasing (Zhang et al., 2019) datasets. Following recommendations in TRUE, we evaluate using Area under the ROC Curve (AUC).
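Concretely, evaluation reduces to computing ROC-AUC between binary faithfulness labels and raw metric scores, e.g. with scikit-learn (a minimal sketch; variable names are illustrative):

```python
from sklearn.metrics import roc_auc_score


def evaluate_metric(faithfulness_labels, metric_scores) -> float:
    """ROC-AUC between binary faithfulness labels (1 = faithful) and metric scores."""
    return roc_auc_score(faithfulness_labels, metric_scores)

# Example with illustrative values:
# evaluate_metric([1, 0, 1, 1], [0.9, 0.2, 0.4, 0.8])
```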
As our BASE model, we use the DeBERTa-large
(He et al., 2020) model of Laurer et al. (2022),
trained on MultiNLI (Williams et al., 2018), FeverNLI (Thorne et al., 2018), ANLI (Nie et al., 2020),
LingNLI (Parrish et al., 2021) and WANLI (Liu et al., 2022). The metric All uses all three of our proposed modifications to Base. We also investigate a variant without MC dropout inference (-MC)
as a more cost efficient alternative.
We compare to the strongest models on TRUE:
T5 ANLI (Honovich et al., 2022) is a T5-11B
(Raffel et al., 2020) model trained on ANLI. SummacZS (Laban et al., 2022) evaluates an NLI
model on all pairs of input and generated sentences and then averages maximum entailment probabilities for each generated sentence.
Q2 (Honovich et al., 2021) combines a question generation/answering pipeline with an NLI score.
Finally, Honovich et al. (2022) introduce a strong ensemble of these 3 methods (Eorig). To further verify our approach, we construct a new ensemble (Eour) by replacing T5 with All.
## 4 Results
| Dataset | Q2 | SummacZS | T5 ANLI | Base | -MC | All | Eorig | Eour |
|---|---|---|---|---|---|---|---|---|
| *Summarization* | | | | | | | | |
| Frank | 85.4 / 87.8 / 90.0 | 86.7 / 89.1 / 91.1 | 87.3 / 89.4 / 91.2 | 83.1 / 85.6 / 88.0 | 84.2 / 86.6† / 88.9 | 85.5 / 87.7† / 89.8 | 89.4 / 91.2 / 93.0 | 89.7 / 91.5 / 93.2 |
| MNBM | 65.6 / 68.7 / 71.7 | 68.6 / 71.3 / 74.1 | 75.5 / 77.9 / 80.2 | 71.7 / 74.6 / 77.4 | 70.1 / 73.5 / 76.6 | 71.3 / 74.5 / 77.4 | 74.0 / 76.6 / 79.4 | 73.6 / 76.4 / 79.2 |
| SummEval | 75.9 / 78.8 / 81.4 | 79.4 / 81.7 / 83.9 | 78.0 / 80.5 / 83.0 | 69.6 / 72.8 / 75.8 | 72.3 / 75.2† / 78.1 | 73.2 / 76.1† / 78.8 | 80.4 / 82.9 / 85.4 | 80.3 / 83.0 / 85.3 |
| QAGS-X | 65.5 / 70.9 / 76.2 | 73.1 / 78.1 / 82.9 | 79.5 / 83.8 / 88.2 | 76.9 / 81.6 / 86.5 | 77.7 / 82.2 / 86.8 | 76.3 / 81.1 / 85.4 | 80.4 / 84.8 / 88.9 | 79.4 / 83.8 / 88.0 |
| QAGS-C | 79.1 / 83.5 / 87.9 | 76.3 / 80.9 / 85.2 | 77.5 / 82.1 / 86.7 | 68.7 / 74.1 / 79.3 | 73.0 / 78.4† / 82.9 | 73.2 / 78.0† / 82.9 | 83.5 / 87.7 / 91.3 | 83.1 / 86.7 / 90.3 |
| *Dialogue* | | | | | | | | |
| BEGIN | 77.2 / 79.7 / 82.2 | 79.2 / 82.0 / 84.6 | 80.3 / 82.6 / 85.1 | 77.5 / 80.4 / 82.9 | 75.7 / 78.5 / 81.4 | 76.4 / 79.3 / 82.3 | 84.1 / 86.2 / 88.2 | 82.1 / 84.7 / 87.1 |
| DialFact | 85.4 / 86.1 / 86.8 | 83.3 / 84.1 / 84.8 | 76.8 / 77.7 / 78.6 | 81.0 / 81.8∗ / 82.5 | 91.3 / 91.8∗†x / 92.3 | 92.0 / 92.5∗†x / 93.0 | 89.9 / 90.4 / 91.0 | 94.1 / 94.5x / 94.9 |
| Q2 | 78.8 / 80.9 / 83.0 | 74.9 / 77.4 / 79.7 | 70.3 / 72.7 / 75.2 | 77.5 / 79.8∗ / 82.0 | 87.2 / 88.8∗†x / 90.3 | 87.8 / 89.4∗†x / 90.9 | 80.8 / 82.8 / 84.9 | 86.8 / 88.5x / 90.1 |
| *Paraphrasing* | | | | | | | | |
| PAWS | 89.1 / 89.7 / 90.3 | 87.5 / 88.2 / 88.7 | 85.7 / 86.4 / 87.1 | 87.2 / 87.8∗ / 88.4 | 88.4 / 89.0∗† / 89.6 | 89.4 / 90.0∗† / 90.5 | 90.7 / 91.2 / 91.7 | 91.8 / 92.3x / 92.8 |
| Avg | 79.7 / 80.7 / 81.7 | 80.4 / 81.4 / 82.3 | 80.6 / 81.5 / 82.4 | 78.8 / 79.8 / 80.8 | 81.7 / 82.7† / 83.6 | 82.2 / 83.2∗† / 84.1 | 85.1 / 86.0 / 86.8 | 86.0 / 86.8x / 87.7 |

Table 1: AUC on the TRUE benchmark (each cell: lower confidence bound / score / upper confidence bound).
Table 1 shows that All not only improves significantly over Base on six out of nine corpora, but also significantly outperforms all other competitors on average, while being more computationally efficient.
As expected, we find the biggest gains in dialogue, where the All model even outperforms Eorig on 2 out of 3 corpora. We do not improve on BEGIN, which is likely due to bias in the dataset construction, which we elaborate on in Section 5.1.
On the summarization part, All improves significantly over Base on 3 out of 5 corpora, while not significantly harming performance on any corpus. However, it still falls short of the best models in TRUE. The strong showing of T5 on these corpora suggests that this might be alleviated with a stronger base model.
Overall, a very similar behaviour is exhibited by
-MC, presenting an attractive option when the added overhead of multiple samples is undesirable.
Eour is on par with Eorig, despite massively reduced costs; it even significantly outperforms it on two dialog and the paraphrasing corpora.
We also investigate the performance of each individual modification to our model (Table 2). They all improve average scores, while only leading to a notable decrease on BEGIN for both e-c and dialogue augmentations and on MNBM for e-c .
Outside of dialogue, we find that the augmentation methods have a positive impact on PAWS, as well as all summarization corpora that are at least partially based on summaries for the CNN/DM dataset (Hermann et al., 2015) (Frank, QAGS-C,
and SummEval). While we do not have a definitive explanation for this phenomenon, we hypothesize that on these datasets our augmentations aid in making the model robust in the presence of noise or irrelevant context, since our augmentations are label-neutral and must similarly be 'ignored' during training.

| Corpus | +e-c | +MC | +Aug. |
|----------|--------------|--------------|--------------|
| Frank | -0.0 / +0.3 / +0.5 | +0.1 / +0.9 / +1.8 | +0.3 / +1.0 / +1.7 |
| MNBM | -2.1 / -0.8 / +0.5 | +1.4 / +2.1 / +2.9 | -0.4 / +0.0 / +0.6 |
| SummEval | +0.7 / +1.0 / +1.3 | +0.1 / +1.2 / +2.3 | +0.6 / +1.6 / +2.6 |
| QAGS-X | -0.4 / +0.3 / +0.9 | -1.5 / -0.2 / +1.1 | -0.3 / +0.9 / +2.1 |
| QAGS-C | +0.5 / +1.2 / +2.0 | -1.6 / -0.1 / +1.5 | +2.2 / +3.5 / +5.0 |
| BEGIN | -3.0 / -1.1 / +0.6 | +0.0 / +0.6 / +1.3 | -1.6 / -1.0 / -0.5 |
| DialFact | +8.3 / +9.1 / +9.9 | +1.1 / +1.3 / +1.5 | +3.1 / +3.3 / +3.5 |
| Q2 | +5.1 / +6.5 / +7.9 | -0.4 / -0.0 / +0.4 | +3.5 / +4.2 / +5.0 |
| PAWS | +0.3 / +0.4 / +0.5 | +1.1 / +1.3 / +1.4 | +0.8 / +0.9 / +1.0 |
| Avg | +1.6 / +1.9 / +2.2 | +0.5 / +0.8 / +1.1 | +1.4 / +1.6 / +1.9 |

Table 2: Effect of each individual modification on AUC (each cell: lower confidence bound / change / upper confidence bound).
## 5 Analysis

## 5.1 Effect Of Dialogue Adaptation
We investigate whether the improvements from our augmentation approach are indeed due to better handling of personal statements. We use the occurrence of the pronoun I in a generation as a proxy measure (we use spaCy, spacy.io, for POS tagging to identify pronouns) and compute its correlation with human labels and metric scores (see Table 3).
On both Q2 and Dialfact, our proxy measure, while uncorrelated with human labels, is strongly correlated with the scores of both Base and T5. This indicates these metrics indeed tend to incorrectly reject generations with personal statements. All on the other hand reduces this dependency.
| Method | (BEGIN) | Q2 | DialFact |
|------------|-----------|-------|------------|
| T5 | (-0.27) | -0.40 | -0.13 |
| Base | (-0.28) | -0.32 | -0.10 |
| All | (-0.19) | -0.19 | 0.04 |
| Gold Label | (-0.35) | -0.03 | 0.05 |

Table 3: Correlation of first-person pronoun occurrence with metric scores and gold labels on the dialogue datasets.

[Figure 1: Distribution of scores of Base (left) and Base+e-c (right) for faithful and unfaithful instances.]

Our results also help explain why All fails to improve on BEGIN, since BEGIN gold labels are negatively correlated with first-person pronouns. This is likely due to a bias in dataset construction: the BEGIN dataset used in TRUE has generations from two models, one of which is both more likely to generate pronouns and more likely to generate unfaithful output (see Appendix B).
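A sketch of this proxy analysis is given below; spaCy is used for POS tagging as described above, while the choice of Kendall's τ and the variable names are illustrative assumptions.

```python
import spacy
from scipy.stats import kendalltau

nlp = spacy.load("en_core_web_sm")  # assumed spaCy model


def has_first_person_pronoun(generation: str) -> int:
    """1 if the generation contains the pronoun 'I', else 0 (proxy for personal statements)."""
    doc = nlp(generation)
    return int(any(tok.pos_ == "PRON" and tok.lower_ == "i" for tok in doc))


def proxy_correlation(generations, values):
    """Correlate the pronoun proxy with gold labels or metric scores."""
    proxy = [has_first_person_pronoun(g) for g in generations]
    tau, p_value = kendalltau(proxy, values)
    return tau, p_value
```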
## 5.2 Effect Of Integrating Contradiction Scores
To isolate the effect of e-c, we compare score distributions of Base and Base+e-c in Figure 1. The left-hand side of the figure shows that in Base ca. 2700 faithful instances are predicted as non-entailed (i.e.,
e-score near 0), which implies they are labelled as contradictory or neutral. e-c , on the other hand, further differentiates these instances into instances with high contradiction (negative e-c score) and high neutral probability (e-c score near 0). We observe that almost all low-scoring faithful generations are classified as neutral, whereas nearly all instances that are classified as contradictory are indeed unfaithful. Where Base has no way to make use of this information, e-c allows to reliably label contradictory instances as unfaithful.
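A minimal sketch of the e-c scoring rule is shown below; the label indices are assumptions and need to be read off the concrete checkpoint's label mapping.

```python
import torch

# Assumed label order; check model.config.id2label for the actual mapping.
ENTAILMENT, NEUTRAL, CONTRADICTION = 0, 1, 2


def e_minus_c(model, tokenizer, premise: str, hypothesis: str) -> float:
    """Faithfulness score = P(entailment) - P(contradiction)."""
    inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
    return (probs[ENTAILMENT] - probs[CONTRADICTION]).item()
```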
## 5.3 Cost Comparison To Other Approaches
| Method | AUC↑ | Params ·10^6 ↓ | Model calls↓ |
|----------|--------|-----------------|----------------|
| SummacZS | 80.7 | 355 | #snt×#snt |
| T5 ANLI | 81.5 | 11,000 | 1 |
| Q2 | 81.4 | 220 + 355 + 355 | #Q × (Ql + 2) |
| -MC | 82.7 | 350 | 1 |
| All | 83.2 | 350 | 15 |

Table 4: Performance vs. cost analysis.

| Dataset | Avg. (5 phrases) | Std. | Min | Max | Avg. (no aug.) |
|----------|------|------|------|------|------|
| Frank | 86.7 (−1.0) | 0.4 | 85.8 | 87.6 | 86.2 |
| MNBM | 74.4 (−0.1) | 0.4 | 73.7 | 74.9 | 75.1 |
| SummEval | 75.2 (−0.9) | 0.5 | 74.5 | 76.0 | 74.3 |
| QAGS-X | 81.6 (+0.5) | 0.5 | 80.8 | 82.4 | 80.7 |
| QAGS-C | 76.4 (−1.6) | 0.8 | 74.7 | 77.9 | 75.2 |
| DialFact | 92.1 (−0.4) | 0.2 | 91.5 | 92.3 | 91.2 |
| BEGIN | 79.6 (+0.3) | 0.5 | 79.0 | 80.6 | 80.9 |
| Q2 | 88.8 (−0.6) | 0.3 | 88.1 | 89.2 | 86.3 |
| PAWS | 89.7 (−0.3) | 0.1 | 89.5 | 90.0 | 89.3 |
| Avg. | 82.7 (−0.5) | 0.2 | 82.3 | 82.9 | 82.1 |

Table 5: AUC over ten retrained models that each use only five of the ten augmentation phrases; values in parentheses give the difference to All. The right-most column reports Base with MC dropout and e-c but without augmentation training.

There is increasing awareness of the resource demands of deep learning (Strubell et al., 2019). Especially for faithfulness, cheap and reliable metrics are critical, given rising demands for NLG in research and industry. Table 4 shows that our model requires fewer parameters than any other metric, including a more than 30x reduction compared to T5. During inference, our model always requires a constant number of calls, which can be reduced to a single call when ablating MC dropout. On the other hand, the number of calls in SummacZS
scales with the number of input and output sentences. Q2 needs to generate questions by calling an auto-regressive QG model n times, where n depends on the number and length of the questions (#Q×Ql), answer the #Q questions with the QA model, and finally check the #Q answers with an NLI model (#Q × 2).
In sum, our model compares favourably with other approaches, while also allowing for a performance/cost tradeoff by forgoing MC dropout.
## 5.4 Phrase Selection Robustness
To ensure that our augmentation is robust and not overly reliant on any particular choice of phrases, we repeat our dataset augmentation process multiple times with five randomly chosen augmentation phrases out of the original ten. We sample ten such datasets and retrain our model for each. Table 5 shows the average score, minimum and maximum score, as well as the standard deviation of the scores. We also report results of a model with both MC dropout and e-c but without any additional training and augmentations to directly quantify whether the augmentations are still helpful in their reduced form. This corresponds to applying MC dropout and e-c to Base.
As expected, we find that reducing the variety of available phrases leads to a drop in performance across almost all datasets, compared to All. The only exception is BEGIN, where we instead see a slight improvement. This is likely to be related to the construction of BEGIN (see the discussion in Section 5.1).
When comparing our limited augmentation models to the non-augmented model, we find that they still outperform the non-augmented model in almost all cases. In particular for Q2 and DialFact, for which we expect the strongest impact of our augmentations, we find that even the worst run still outperforms non-augmented model. This suggests that our augmentations can robustly adapt the model to the dialogue task.
Finally, we observe a relatively large drop in scores for all datasets that are at (least partially) derived from CNN/DM (Frank, SummEval and QAGS-C). This mirrors our earlier observation in Section 4 that these datasets profit from our augmentation procedure.
## 6 Related Work
Previous work on the utility of NLI for faithfulness led to mixed conclusions. In summarization, Falke et al. (2019) and Kryscinski et al. (2020) find out-of-the-box models have only limited utility in a faithfulness setting. In Wang et al. (2020), an NLI model is outperformed by a question generation/answering (QA/QG)-based method. In contrast, Maynez et al. (2020) find that a similar NLI
model vastly outperforms a QA/QG metric on their data. In knowledge-grounded dialogue, Dziri et al.
(2022), Gupta et al. (2022) and Honovich et al.
(2021) find out-of-the-box models underperform.
To improve NLI models for faithfulness in summarization, Kryscinski et al. (2020) propose FactCC, which is trained on artificially noised summaries. Utama et al. (2022) propose a controllable generation model to generate artificial faithfulness data. In knowledge-grounded dialogue, Dziri et al.
(2022) and Gupta et al. (2022) combine noising techniques to generate additional training data for NLI-based faithfulness models. In contrast to our work, these approaches a) generate training data from external sources, instead of directly augmenting NLI data, and b) do not explicitly focus on reconciling differences between NLI and faithfulness with their augmentation. Outside of augmentationbased approaches, Goyal and Durrett (2020) propose to train NLI models to label faithfulness at the dependency arc level.
## 7 Conclusion
We have demonstrated that with a small number of focused adaptations, even a relatively small NLI
model can robustly predict faithfulness. We have:
1. Shown that NLI-based metrics can be incompatible with task-specific requirements and identified and fixed one such incompatibility in dialogue with an augmentation strategy.
2. Demonstrated the importance of contradiction probability for scoring and that the underlying mechanism is the high reliability of NLI contradiction scores for detecting unfaithfulness 3. Shown that using Monte-Carlo dropout improves metric performance.
Our improved NLI model significantly improves over its baseline across many corpora and outperforms all competitors in average score on TRUE,
while being much more efficient at inference.
Our work suggests that strong improvements are possible for NLI-based faithfulness metrics, by combining data augmentation with adapted NLI
score computation. We hope this finding will spur advances in cheap and robust NLI for faithfulness.
## 8 Limitations
Some of the summarization datasets annotated for faithfulness are relatively small, which makes score estimates uncertain. Furthermore, many datasets contain only output from a limited number of generation systems, which makes it hard to properly account for potential biases towards certain generation systems that may confound scores (see Pagnoni et al. (2021)). These concerns are, however, alleviated to some extent since we study trends across many independently created datasets, which makes it less likely for a single bias to persist in all of them. Furthermore the availability of generation and thus annotated faithfulness data limits our experiments to English. Finally, it remains unclear whether our results would still provide advantages when applied to larger models such as T5-11B, whose parameter count makes experimentation infeasible on the hardware available to us.
## 9 Ethics Statement
Faithfulness metrics help reduce the amount of incorrect information generated by NLG systems, reducing the risk associated which such generations.
However, faulty or unreliable faithfulness metrics might cause harm by incorrectly classifying faithful content as unfaithful and vice versa.
We run all experiments on publicly available data that has been specifically constructed for faithfulness evaluation. The underlying publication has been published at a conference whose review process involved an ethics review. For a specific discussion of the human effort involved in creation of the datasets we refer the reader to the original publications.
## References
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference.
In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics.
Yanran Chen and Steffen Eger. 2022. Menli: Robust evaluation metrics from natural language inference.
arXiv preprint arXiv:2208.07316.
Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents. In International Conference on Learning Representations.
Nouha Dziri, Hannah Rashkin, Tal Linzen, and David Reitter. 2022. Evaluating Attribution in Dialogue Systems: The BEGIN Benchmark. *Transactions of the Association for Computational Linguistics*, 10:1066–1083. **Note:** TRUE uses an earlier version of the BEGIN dataset. The version used in TRUE is described in an earlier preprint at https://arxiv.org/pdf/2105.00071v1.pdf.
Alexander R. Fabbri, Wojciech Kryściński, Bryan McCann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. SummEval: Re-evaluating summarization evaluation. *Transactions of the Association for Computational Linguistics*, 9:391–409.
Tobias Falke, Leonardo F. R. Ribeiro, Prasetya Ajie Utama, Ido Dagan, and Iryna Gurevych. 2019. Ranking generated summaries by correctness: An interesting but challenging application for natural language
inference. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*,
pages 2214–2220, Florence, Italy. Association for Computational Linguistics.
Purvi Goel and Li Chen. 2021. On the robustness of monte carlo dropout trained with noisy labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 2219–2228.
Tanya Goyal and Greg Durrett. 2020. Evaluating factuality in generation with dependency-level entailment.
In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 3592–3603, Online.
Association for Computational Linguistics.
Prakhar Gupta, Chien-Sheng Wu, Wenhao Liu, and Caiming Xiong. 2022. DialFact: A benchmark for fact-checking in dialogue. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3785–3801, Dublin, Ireland. Association for Computational Linguistics.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. In International Conference on Learning Representations.
Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In *Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1*, NIPS'15, page 1693–1701, Cambridge, MA, USA. MIT Press.
Or Honovich, Roee Aharoni, Jonathan Herzig, Hagai Taitelbaum, Doron Kukliansy, Vered Cohen, Thomas Scialom, Idan Szpektor, Avinatan Hassidim, and Yossi Matias. 2022. TRUE: Re-evaluating factual consistency evaluation. In *Proceedings of the 2022* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3905–3920, Seattle, United States. Association for Computational Linguistics.
Or Honovich, Leshem Choshen, Roee Aharoni, Ella Neeman, Idan Szpektor, and Omri Abend. 2021.
q 2: Evaluating factual consistency in knowledgegrounded dialogues via question generation and question answering. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 7856–7870, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 9332–9346, Online. Association for Computational Linguistics.
Philippe Laban, Tobias Schnabel, Paul N. Bennett, and Marti A. Hearst. 2022. SummaC: Re-visiting NLIbased models for inconsistency detection in summarization. *Transactions of the Association for Computational Linguistics*, 10:163–177.
Moritz Laurer, W v Atteveldt, Andreu Casas, and Kasper Welbers. 2022. Less annotating, more classifying–addressing the data scarcity issue of supervised machine learning with deep transfer learning and bert-nli.
Alisa Liu, Swabha Swayamdipta, Noah A Smith, and Yejin Choi. 2022. Wanli: Worker and ai collaboration for natural language inference dataset creation. arXiv preprint arXiv:2201.05955.
Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906–1919, Online. Association for Computational Linguistics.
Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A new benchmark for natural language understanding. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 4885–4901, Online. Association for Computational Linguistics.
Artidoro Pagnoni, Vidhisha Balachandran, and Yulia Tsvetkov. 2021. Understanding factuality in abstractive summarization with FRANK: A benchmark for factuality metrics. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 4812–4829, Online. Association for Computational Linguistics.
Alicia Parrish, William Huang, Omar Agha, Soo-Hwan Lee, Nikita Nangia, Alexia Warstadt, Karmanya Aggarwal, Emily Allaway, Tal Linzen, and Samuel R.
Bowman. 2021. Does putting a linguist in the loop improve NLU data collection? In Findings of the Association for Computational Linguistics: EMNLP
2021, pages 4886–4901, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI*
blog, 1(8):9.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67.
Hannah Rashkin, David Reitter, Gaurav Singh Tomar, and Dipanjan Das. 2021. Increasing faithfulness in knowledge-grounded dialogue with controllable features. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 704–718, Online. Association for Computational Linguistics.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014.
Dropout: A simple way to prevent neural networks from overfitting. *Journal of Machine Learning Research*, 15(56):1929–1958.
Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. In *Proceedings of the 57th* Annual Meeting of the Association for Computational Linguistics, pages 3645–3650, Florence, Italy. Association for Computational Linguistics.
James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERification. In *Proceedings of the 2018* Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long Papers), pages 809–819, New Orleans, Louisiana.
Association for Computational Linguistics.
Prasetya Utama, Joshua Bambrick, Nafise Moosavi, and Iryna Gurevych. 2022. Falsesum: Generating document-level NLI examples for recognizing factual inconsistency in summarization. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 2763–2776, Seattle, United States. Association for Computational Linguistics.
Alex Wang, Kyunghyun Cho, and Mike Lewis. 2020.
Asking and answering questions to evaluate the factual consistency of summaries. In *Proceedings of the* 58th Annual Meeting of the Association for Computational Linguistics, pages 5008–5020, Online. Association for Computational Linguistics.
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. Superglue: A stickier benchmark for general-purpose language understanding systems. *Advances in neural information* processing systems, 32.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE:
A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics.
Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In *Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)*, pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.

| Category | Phrases |
|----------|---------|
| Introductory statements | Here is what I know:; yep. Also; Sure! Here is what I know: |
| Hedging | I am not sure, but; I am not sure but I do know that; I do not have information on this but I think; I believe |
| Sentiment | I love that!; I like that! |

Table 6: Manually curated phrases used for data augmentation.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Yuan Zhang, Jason Baldridge, and Luheng He. 2019.
PAWS: Paraphrase adversaries from word scrambling.
In *Proceedings of the 2019 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1298–1308, Minneapolis, Minnesota. Association for Computational Linguistics.
## A Augmentation Training Details

## A.1 Augmentation Phrases
Table 6 lists our manually curated list of phrases inserted during data augmentation. All phrases were derived via a small manual error analysis on the Base model.
We broadly divide our phrases into three categories: introductory statements, hedging, and sentiment statements. For each instance in ANLI, one random phrase from the list is prepended to the hypothesis. We use all three rounds of ANLI annotations. This results in 162,865 augmented instances which, together with the original ANLI instances, leads to a total of 325,730 training instances.

| Category | Phrases |
|---|---|
| Introductory statements | "Here is what I know:" / "yep." / "Also" / "Sure! Here is what I know:" |
| Hedging | "I am not sure, but" / "I am not sure but I do know that" / "I do not have information on this but I think" / "I believe" |
| Sentiment | "I love that!" / "I like that!" |

Table 6: Manually curated phrases prepended to hypotheses during data augmentation.
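A minimal sketch of the augmentation step described above; the phrase list is abbreviated from Table 6 and the function name is ours, not part of the original implementation:

```python
import random

# Abbreviated phrase list from Table 6; categories are flattened because one
# random phrase is drawn per instance regardless of category.
AUGMENTATION_PHRASES = [
    "Here is what I know:",                            # introductory
    "I am not sure, but",                              # hedging
    "I do not have information on this but I think",   # hedging
    "I love that!",                                    # sentiment
]

def augment_anli_instance(premise: str, hypothesis: str) -> tuple[str, str]:
    """Prepend one randomly chosen phrase to the hypothesis; the label is unchanged."""
    phrase = random.choice(AUGMENTATION_PHRASES)
    return premise, f"{phrase} {hypothesis}"

# The augmented copy is added alongside the original instance, doubling the training set.
premise = "A man is playing a guitar on stage."
hypothesis = "Someone is performing music."
print(augment_anli_instance(premise, hypothesis))
```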
## A.2 Hyperparameters
Table 7 lists the hyperparameter settings for our model. We use the same optimizer hyperparameters as Laurer et al. (2022) except for an increased batch size and the learning rate. For the latter, we tested three learning rates (5e-6, 5e-2, 5e-1) and selected the one that provided the best loss on the augmented ANLI validation set. We initially ran models for 10,000 steps with a checkpoint every 1,000 steps and selected the checkpoint with the lowest loss on the augmented ANLI validation set. Later we reduced the number of training steps to 2,000, since we found we would usually select an early checkpoint as validation loss increased later in training, likely related to overfitting on the augmented data.
## A.3 Training
We use the DeBERTa implementation in the huggingface transformers library (Wolf et al., 2020)
and trained our model on a single node using two RX6800 GPUs, with one training run taking about three hours. Later experiments with fewer steps cut that time by 80%.
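The setup above can be sketched with the Hugging Face Trainer as follows. This is illustrative only: the checkpoint name, batch size, and the restriction to ANLI round 1 are placeholders and simplifications, not the paper's exact configuration.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Placeholder checkpoint; the paper fine-tunes a DeBERTa-based NLI model following Laurer et al. (2022).
checkpoint = "microsoft/deberta-v3-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3)

def preprocess(batch):
    return tokenizer(batch["premise"], batch["hypothesis"], truncation=True)

# Only round 1 is shown for brevity; the actual setup uses all three ANLI rounds
# plus their augmented copies (325,730 instances in total).
train = load_dataset("anli", split="train_r1").map(preprocess, batched=True)

args = TrainingArguments(
    output_dir="augmented-nli",
    per_device_train_batch_size=32,  # assumption; the paper only states an "increased batch size"
    learning_rate=5e-6,              # best of the three rates tried above
    max_steps=2_000,
    save_steps=1_000,                # the best checkpoint is picked by augmented-ANLI validation loss
)

Trainer(model=model, args=args, train_dataset=train, tokenizer=tokenizer).train()
```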
## B Dataset Bias In Begin
BEGIN is the only dialogue corpus on which first person pronoun occurrence shows a strong (negative) correlation with faithfulness (see Table 3).
Since there is nothing in the annotation guidelines that would explain this correlation, we instead hypothesize that this is the consequence of a model induced bias in the data. Specifically, we hypothesize that one of the two models in BEGIN is (1)
more likely to generate personal statements and (2)
less likely to generate faithful responses.
To avoid confusion in the remainder of this section, we highlight that there are two variants of BEGIN:
BEGIN-v1 is the variant used in TRUE. It contains labeled generations by a fine-tuned GPT-
2 base (Radford et al., 2019) and a fine-tuned T5 base model (Raffel et al., 2020) on the Wizard of Wikipedia dataset (Dinan et al., 2019).6
BEGIN-v2 is a more recent variant of BEGIN that is not part of TRUE. In addition to new instances generated by T5 and GPT-2 it contains outputs from two additional models. It also has a revised annotation procedure. When we refer to BEGIN-v2, we exclusively mean the Wizard of Wikipedia subset.
Unfortunately, BEGIN-v1 does not allow us to retrieve which model generated which instance.
This makes it impossible to directly investigate for model bias. However, BEGIN-v2 includes outputs by the same two models, fine-tuned on the same data. Since we only need corpus level statistics to verify our assumptions, we conduct our analysis on the GPT-2 and T5 instances in BEGIN-v2.
To verify (1), we compute the correlation between a binary variable indicating which model generated each instance (T5: 0, GPT-2: 1) and first-person pronoun occurrence. We find a positive correlation (Kendall's τ w.r.t. I-pronoun occurrence:
0.18, p < 0.001), indicating that GPT-2 generates outputs including more first-person pronouns.
To investigate whether GPT-2 is also more likely to be unfaithful, i.e. to verify (2), we compute the correlation between the binary model indicator variable and a faithfulness variable that is 1 when the output is labelled as *Fully attributable* and 0 otherwise. We find a negative correlation (Kendall's τ w.r.t. faithfulness: −0.25, p < 0.001), supporting our hypothesis that GPT-2 is also overall less faithful. To ensure that this is not an effect of additional personal statements leading to more unfaithful generations, we conduct the same analysis only on instances where we identify no first-person pronouns. We find a similarly strong negative correlation of −0.29 (p < 0.001).
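These rank correlations can be reproduced with scipy; the indicator lists below are toy stand-ins for the per-instance annotations extracted from BEGIN-v2:

```python
from scipy.stats import kendalltau

# Toy binary indicators per instance: generated by GPT-2 (vs. T5), contains a
# first-person pronoun, and labelled "Fully attributable".
is_gpt2       = [1, 1, 1, 0, 0, 0, 1, 0]
has_i_pronoun = [1, 1, 0, 0, 0, 1, 1, 0]
is_faithful   = [0, 0, 1, 1, 1, 1, 0, 1]

tau, p = kendalltau(is_gpt2, has_i_pronoun)
print(f"model vs. I-pronoun occurrence: tau={tau:.2f}, p={p:.3f}")

tau, p = kendalltau(is_gpt2, is_faithful)
print(f"model vs. faithfulness:         tau={tau:.2f}, p={p:.3f}")
```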
Our analysis shows that GPT-2 produces both overall less faithful outputs and more first-person pronouns than T5. Since BEGIN-v1 contains only outputs from T5 and GPT-2 this suggests that the root cause for the negative correlation between faithfulness label and first-person pronoun occurrence in BEGIN-v1 is model bias confounding faithfulness and first-person pronoun occurrence.
6The relevant data can be found at https://raw.githubusercontent.com/google/BEGIN-dataset/5fa0cb0dde0e653d2016724a52a5ca27fe8b6a3f/dev_05_24_21.tsv
## B.1 Dataset Bias In Begin-V2
We conduct a preliminary study to investigate whether similar biases also exist in BEGIN-v2.
We observe that while BEGIN-v2 uses data from four dialogue systems, a majority of faithful generations is produced by a single system called CTRL-DIALOG (Rashkin et al., 2021). CTRL-DIALOG
is specifically trained to generate less subjective text, which we hypothesize might result in fewer first person pronouns. Since CTRL-DIALOG also produces more faithful texts, this would lead to a negative correlation between faithfulness and first person pronouns, similar to what we observe on BEGIN-v1.
We verify this assumption by computing the correlation of a binary variable indicating an instance has been generated by CTRL-DIALOG with a) the faithfulness labels on BEGIN-v2 and b) first-person pronoun occurrence. We find that an instance being generated by CTRL-DIALOG is positively correlated with it having a *faithful* label (Kendall τ w.r.t. faithfulness: 0.48, p < 0.001) while being negatively correlated with the number of pronouns (Kendall τ w.r.t. I-pronoun occurrence: −0.34, p < 0.001). This suggests future evaluations on BEGIN-v2 might run into similar bias issues.
## C Dataset Statistics
We report the number of instances, as well as the class distribution of TRUE in Table 8.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
8
✓ A2. Did you discuss any potential risks of your work?
9
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 1,3
✓ B1. Did you cite the creators of artifacts you used?
1,3
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
1,9
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 9
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Most data is machine generated and thus unlikely to reveal personal information. All data is also already publicly available and has been introduced in peer-reviewed publications, providing an additional safeguard.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
We discuss the limitation to English in Section 9.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix C
## C ✓ **Did You Run Computational Experiments?** 3,4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix A
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
5.2,Appendix A
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
kauf-ivanova-2023-better | A Better Way to Do Masked Language Model Scoring | https://aclanthology.org/2023.acl-short.80 | Estimating the log-likelihood of a given sentence under an autoregressive language model is straightforward: one can simply apply the chain rule and sum the log-likelihood values for each successive token. However, for masked language models (MLMs), there is no direct way to estimate the log-likelihood of a sentence. To address this issue, Salazar et al. (2020) propose to estimate sentence pseudo-log-likelihood (PLL) scores, computed by successively masking each sentence token, retrieving its score using the rest of the sentence as context, and summing the resulting values. Here, we demonstrate that the original PLL method yields inflated scores for out-of-vocabulary words and propose an adapted metric, in which we mask not only the target token, but also all within-word tokens to the right of the target. We show that our adapted metric (PLL-word-l2r) outperforms both the original PLL metric and a PLL metric in which all within-word tokens are masked. In particular, it better satisfies theoretical desiderata and better correlates with scores from autoregressive models. Finally, we show that the choice of metric affects even tightly controlled, minimal pair evaluation benchmarks (such as BLiMP), underscoring the importance of selecting an appropriate scoring metric for evaluating MLM properties. | # A Better Way To Do Masked Language Model Scoring
Carina Kauf Massachusetts Institute of Technology [email protected]
## Abstract
Estimating the log-likelihood of a given sentence under an autoregressive language model is straightforward: one can simply apply the chain rule and sum the log-likelihood values for each successive token. However, for masked language models (MLMs), there is no direct way to estimate the log-likelihood of a sentence. To address this issue, Salazar et al.
(2020) propose to estimate sentence pseudo-log-likelihood (PLL) scores, computed by successively masking each sentence token, retrieving its score using the rest of the sentence as context, and summing the resulting values. Here, we demonstrate that the original PLL method yields inflated scores for out-of-vocabulary words and propose an adapted metric, in which we mask not only the target token, but also all within-word tokens to the right of the target. We show that our adapted metric (PLL-word-l2r) outperforms both the original PLL metric and a PLL metric in which all within-word tokens are masked. In particular, it better satisfies theoretical desiderata and better correlates with scores from autoregressive models. Finally, we show that the choice of metric affects even tightly controlled, minimal pair evaluation benchmarks (such as BLiMP),
underscoring the importance of selecting an appropriate scoring metric for evaluating MLM
properties.1
## 1 Introduction
Most state-of-the-art transformer-based large language models (LLMs) fall into two classes: unidirectional (or autoregressive) models, where each token is generated based on its left context (e.g.,
GPT models; Radford et al., 2019), and bidirectional models, where a token is predicted from both left and right context tokens, some of which may be masked (e.g., BERT; Devlin et al., 2018).
Often, it is beneficial to compare these models' performance on controlled sentence generation benchmarks. Whereas unidirectional architectures offer a

1Our results and code are available at https://github.com/carina-kauf/better-mlm-scoring.
Anna A. Ivanova Massachusetts Institute of Technology [email protected]

![0_image_0.png](0_image_0.png)

![0_image_1.png](0_image_1.png)

![0_image_2.png](0_image_2.png)

Figure 1: Three different ways to compute the PLL score of a multi-token word (e.g., souvenir) during masked language modeling. *Purple*: target token, *pink*: within-word tokens that are available during inference, *turquoise*: within-word tokens that are masked during inference. Sentence tokens that do not belong to the current word are always available during inference.
natural way of calculating sentence log-likelihood
(summing the log-likelihood scores of each sentence token given its left context), there is no direct way of estimating sentence log-likelihood for a bidirectional model.
So far, the best available method to score a sentence under a bidirectional LLM has been the pseudo-log-likelihood (PLL) scoring approach described by Salazar et al. (2020) (and initially used by Shin et al., 2019; Wang and Cho, 2019). The PLL of a sentence is calculated as the sum of PLL
scores for each token given all other sentence tokens, thus providing a comparable metric to unidirectional models' log-likelihood (LL) sentence scoring. The PLL metric is extremely popular; it is used extensively in LLM studies tackling topics as diverse as effects of training data (Sinha et al., 2021; Zhang et al., 2021), model fluency (Laban et al., 2021), syntactic and conceptual knowledge
(Sinclair et al., 2022; Bhatia and Richie, 2022), social biases (Nangia et al., 2020), and others. Some of these studies have already accrued dozens of citations.
Here, we show that the metric proposed by Salazar et al. (PLL-original) has important shortcomings that limit its utility. Specifically, PLL-original overestimates the PLL of out-of-vocabulary (OOV) words, which LLM tokenizers split into multiple tokens. As a result, PLL-original scores fail on several theoretically
desired property tests: a robust inverse relationship between sentence length and sentence PLL
(Section 4.1), a robust positive correlation between a word's frequency and its PLL score (4.2), and a positive correlation between unidirectional and bidirectional model scores for the same sentences
(Section 5). To remedy these issues, we propose an adjusted PLL metric, PLL-word-l2r (l2r: leftto-right), which estimates token PLL when future within-word tokens are also masked (Figure 1).
We show that the PLL-word-l2r metric outperforms both PLL-original and alternative PLLbased metrics. We therefore recommend to use the PLL-word-l2r metric when estimating sentence PLL under a bidirectional LLM.
## 2 Motivation: Score Inflation For Multi-Token Words
The PLL-original metric grossly overestimates the probability of OOV lexical items, such as *souvenir* (Figure 2). This is because OOV words are tokenized into subword tokens (e.g., so \#\#uven \#\#ir), and each subword token is predicted using the token's bidirectional context, which crucially includes the remaining tokens that make up the OOV word. Thus, even though the OOV word itself may be surprising given the sentence context, the individual parts of the OOV word are not surprising to a bidirectional model given a sentence context that includes all other subtokens of that word (e.g., it is easy to predict so given \#\#uven
\#\#ir; see Appendix A for additional examples).
To mitigate this bias, we adjust the PLL sentence scoring algorithm such that the model cannot access future within-word tokens (PLL-word-l2r) or any within-word tokens (PLL-whole-word) when predicting the target.
Below, we conduct a rigorous investigation of our modified metrics to determine whether this intuitive benefit holds quantitatively.
## 3 Methods
For our analysis, we adapt the scorer module of the minicons library (Misra, 2022), an open-source wrapper library around HuggingFace transformers (Wolf et al., 2020) that enables efficient extraction of word- and sentence-level probabilities from LLMs. The MLM scoring procedure of the minicons library follows the procedure originally proposed by Salazar et al. (2020). For details on sentence preprocessing, see Appendix B.
## 3.1 Pll Metrics
PLL-original. In this metric, each sentence token st of a sentence S with n tokens is consecutively replaced with a [MASK] and is predicted using all past and future tokens, irrespective of whether the context tokens belong to the same or a different word than the target token. Thus, inference is conditioned on the context S\t:=
(s1, . . . , st−1, st+1*, . . . , s*n). The final sentence score is obtained as the sum of the log probabilities of each sentence token given its context:
$$\mathrm{PLL}_{\mathrm{orig}}(S):=\sum_{t=1}^{n}\log\,P_{\mathrm{MLM}}(s_{t}\mid S_{\setminus t})\quad\quad(1)$$
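Equation (1) can be computed with a plain Hugging Face masked language model, as in the following sketch. The paper itself uses the adapted minicons scorer; this reimplementation is only illustrative:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased").eval()

def pll_original(sentence: str) -> float:
    enc = tokenizer(sentence, return_tensors="pt")
    input_ids = enc["input_ids"][0]
    total = 0.0
    # Positions 1..len-2 skip [CLS] and [SEP], whose scores are not counted.
    for pos in range(1, input_ids.size(0) - 1):
        masked = input_ids.clone()
        masked[pos] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, pos]
        total += torch.log_softmax(logits, dim=-1)[input_ids[pos]].item()
    return total

print(pll_original("The traveler lost the souvenir."))
```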
PLL-word-l2r. In this metric, a [MASK] is placed not only over the current target token (now: swt), but also over all future sentence tokens that belong to the same word sw as the target. Inference is then conditioned on a context that includes all preceding sentence tokens (including those belonging to the current word) and all sentence tokens from future words. The final score of a sentence S is obtained as the sum of the log probabilities of each of the |w| tokens in each of the |S| words:

$$\mathrm{PLL}_{\mathrm{l2r}}(S):=\sum_{w=1}^{|S|}\sum_{t=1}^{|w|}\log\,P_{\mathrm{MLM}}(s_{w_{t}}\mid S_{\setminus s_{w_{t^{\prime}\geq t}}})\quad\quad(2)$$
PLL-whole-word. This metric is similar to PLL-word-l2r and differs from it only in that a
[MASK] is placed over all sentence tokens that belong to the same word sw as the target (both preceding and future). Inference is then conditioned on a context that includes all sentence tokens except those belonging to the current word. The final score of a sentence S is obtained as the sum of the log probabilities of each of the |w| tokens in each of the |S| words in S given the token's context:
$$\mathrm{PLL}_{\mathrm{ww}}(S):=\sum_{w=1}^{|S|}\sum_{t=1}^{|w|}\log P_{\mathrm{MLM}}(s_{w_{t}}\mid S_{\setminus s_{w}})\tag{3}$$
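The two within-word masking schemes differ only in which tokens of the current word are hidden. A minimal sketch using the word-to-token alignment provided by fast tokenizers; again, this is illustrative rather than the minicons-based implementation used in the paper:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased").eval()

def pll_within_word_masking(sentence: str, whole_word: bool = False) -> float:
    enc = tokenizer(sentence, return_tensors="pt")
    input_ids = enc["input_ids"][0]
    word_ids = enc.word_ids()  # word index for each token; None for [CLS]/[SEP]
    total = 0.0
    for pos, wid in enumerate(word_ids):
        if wid is None:
            continue
        masked = input_ids.clone()
        for other, other_wid in enumerate(word_ids):
            same_word = other_wid == wid
            # PLL-word-l2r: mask the target and the within-word tokens to its right.
            # PLL-whole-word: mask the target and all other tokens of the same word.
            if other == pos or (same_word and (whole_word or other > pos)):
                masked[other] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, pos]
        total += torch.log_softmax(logits, dim=-1)[input_ids[pos]].item()
    return total

print(pll_within_word_masking("The traveler lost the souvenir."))        # PLL-word-l2r
print(pll_within_word_masking("The traveler lost the souvenir.", True))  # PLL-whole-word
```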
In Appendix G, we also report results for a PLL metric where not only future within-word tokens, but all sentence tokens to the right of the target context are masked (PLL-sentence-l2r).
Although this method is most similar to autoregressive LL scoring, sentence-l2r masking for BERT is known to produce poor quality generations (Wang and Cho, 2019); we therefore refrain from including this metric in the main text.
## 3.2 Models
We report results for bert-base-cased (and gpt2-medium for comparison) unless stated otherwise. Results for larger models are provided in Appendices D-F.
## 3.3 Datasets
For our main analyses, we use the EventsAdapt dataset (Kauf et al., 2022, based on Fedorenko et al., 2020). It contains a curated set of 782 syntactically simple sentence pairs that describe plausible or implausible agent-patient interactions in active or passive voice (e.g., *The traveler lost the souvenir*). Sentences in this dataset are 5-7 words long
(mean: 6.1, std: 1.05), with an average word log frequency of 10.95. We use this dataset because it
contains a high number of OOV words (19.6% for BERT and 40.3% for GPT-2; see also Appendix C).
In Appendices D-F, we show that our results generalize to two larger and more diverse corpora: the Brown corpus (Francis and Kucera, 1979) and the reference sentence set from the LibriSpeech corpus
(Panayotov et al., 2015). We also apply our PLL
metrics to score the sentences in the Benchmark of Linguistic Minimal Pairs (BLiMP) (Warstadt et al.,
2020), a challenge set of 67k sentence pairs which target specific aspects of linguistic knowledge.
## 4 Evaluating PLL Metric Properties

## 4.1 Effects Of Sentence Length
Like Salazar et al. (2020), we expect that models should, on average, assign lower probability to longer sentences. Thus, negative PLL
(which reflects model surprisal) should be positively correlated with sentence length. However, the PLL-original metric violates this expectation in our test sentence set, which shows a negative correlation between the number of tokens and negative PLL. In contrast, PLL-word-l2r and PLL-whole-word metrics exhibit a positive correlation between the number of sentence tokens and negative PLL, just as the negative LL scores for a unidirectional model, GPT2-medium (Figure 3A).
## 4.2 Effects Of Word Frequency
An appropriate (P)LL metric should reflect the fact that LLMs are sensitive to distributional patterns in training text corpora. In particular, we expect more frequent words to have higher (P)LL scores in the absence of contextual effects. This is indeed the case for GPT2-medium; however, the score inflation for multi-token words means that the PLL-original metric grossly overestimates the scores for low-frequency words (Figure 3B). PLL-word-l2r scores restore this relationship: their correlation with word frequency is much higher than for PLL-original. PLL-whole-word also performs well, although its correlation with word frequency is lower than for PLL-word-l2r, suggesting that it excessively penalizes OOV
words.
## 5 Correlation With Gpt-2 Scores
We expect that PLL scores for bidirectional models should be at least somewhat consistent with LL
scores for unidirectional models: both metrics are designed to serve as a proxy for sentence probability. Here, we show that the GPT-2/BERT score correlation for the PLL-original metric is very low, whereas correlation scores for PLL-word-l2r and PLL-whole-word are much higher (Figure 4), indicating the validity of this metric for cross-model comparison. As in Section 4.2, PLL-word-l2r slightly outperforms PLL-whole-word, likely because it does not penalize OOV words as severely.
See Appendices D-F for evidence that all three trends hold for larger models and for other datasets
(although the effects in other datasets are attenuated due to a lower OOV ratio).
## 6 Effects On Benchmarking
Here, we show that the choice of PLL metric affects benchmarking results for a popular, highly controlled, minimal pair linguistic benchmark: BLiMP.
Despite the fact that the comparisons are highly controlled, different metrics yield different BLiMP
scores. For all four tested models, PLL-word-l2r achieves the best overall BLiMP score (Table 1).
| Model | Metric | Overall score |
|---|---|---|
| BERT (base) | PLL-original | 84.2 |
| BERT (base) | PLL-word-l2r | 84.7 |
| BERT (base) | PLL-whole-word | 83.1 |
| BERT (large) | PLL-original | 84.8 |
| BERT (large) | PLL-word-l2r | 85.0 |
| BERT (large) | PLL-whole-word | 82.6 |
| RoBERTa (base) | PLL-original | 85.4 |
| RoBERTa (base) | PLL-word-l2r | 86.7 |
| RoBERTa (base) | PLL-whole-word | 85.4 |
| RoBERTa (large) | PLL-original | 86.5 |
| RoBERTa (large) | PLL-word-l2r | 87.5 |
| RoBERTa (large) | PLL-whole-word | 85.9 |
Table 1: Bidirectional model performance on the BLiMP benchmark using different PLL metrics.
See Appendix H for detailed scores.

## 7 Conclusion
We have shown that PLL-word-l2r is the preferred metric for evaluating sentence PLL under a masked language model, such as BERT. Although the results from studies using the PLL-original metric can still be informative, they become harder to interpret if the proportion of OOV words in their test set is high. Therefore, we recommend using PLL-word-l2r in future works.
## Limitations
The proposed PLL-word-l2r metric has the same practical limitations as previous LL/PLL approaches. Most importantly, these scores can be influenced by many superfluous factors, such as the number of available synonyms (*computer* vs.
laptop; Holtzman et al., 2021). We therefore expect our method to be most useful in highly controlled minimal pair or multiple choice setups.
Even more accurate metrics may emerge in the future. For instance, our approach pre-specifies the number of tokens in a word, thus limiting the space of possible alternatives. Future approaches might investigate a way to normalize the PLL score distribution over words with a varying number of tokens. Further, it would be interesting to attempt to estimate the joint probability of all tokens in a word instead of predicting them left-to-right (as in PLL-word-l2r) or without any other within-word contextual information (as in PLL-whole-word).
Finally, we test our approach on English text corpora; our results might not generalize to agglutinative languages (due to a high number of tokens per word and, therefore, increased uncertainty) and are of less relevance to isolating languages (where, if enough training data are available, most wordlevel items can be represented as single tokens).
## Ethics Statement
In our proposed metric, word tokens are masked from left to right following the writing tradition in English; however, for speakers of languages such as Arabic, a "right to left" notation would be more intuitive. Note, however, that this is primarily a denotational difference that does not affect the score itself (LLMs do not discriminate left and right, only beginning and end). We do not anticipate any specific harms that would be intrinsically associated with the techniques described in this paper.
## Acknowledgements
We thank Jacob Andreas, Evan Hernandez, and the anonymous ACL reviewers for their insightful feedback. CK was supported by the K. Lisa Yang Integrative Computational Neuroscience (ICoN)
Center at MIT. AI was supported by MIT Quest for Intelligence.
## References
Sudeep Bhatia and Russell Richie. 2022. Transformer networks of human conceptual knowledge. *Psychological Review*.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.
Evelina Fedorenko, Idan Asher Blank, Matthew Siegelman, and Zachary Mineroff. 2020. Lack of selectivity for syntax relative to word meanings throughout the language network. *Cognition*, 203:104348.
W Nelson Francis and Henry Kucera. 1979. Brown corpus manual. *Letters to the Editor*, 5(2):7.
Jon Gauthier, Jennifer Hu, Ethan Wilcox, Peng Qian, and Roger Levy. 2020. Syntaxgym: An online platform for targeted evaluation of language models. Association for Computational Linguistics (ACL).
Ari Holtzman, Peter West, Vered Shwartz, Yejin Choi, and Luke Zettlemoyer. 2021. Surface form competition: Why the highest probability answer isn't always right. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7038–7051, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Carina Kauf, Anna A Ivanova, Giulia Rambelli, Emmanuele Chersoni, Jingyuan S She, Zawad Chowdhury, Evelina Fedorenko, and Alessandro Lenci.
2022. Event knowledge in large language models:
the gap between the impossible and the unlikely.
arXiv preprint arXiv:2212.01488.
Philippe Laban, Tobias Schnabel, Paul Bennett, and Marti A. Hearst. 2021. Keep it simple: Unsupervised simplification of multi-paragraph text. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 6365–6378, Online.
Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Kanishka Misra. 2022. minicons: Enabling flexible behavioral and representational analyses of transformer language models. *arXiv preprint arXiv:2203.13112*.
Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R. Bowman. 2020. CrowS-pairs: A challenge dataset for measuring social biases in masked language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1953–1967, Online. Association for Computational Linguistics.
Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: an asr corpus based on public domain audio books. In *2015* IEEE international conference on acoustics, speech and signal processing (ICASSP), pages 5206–5210.
IEEE.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI
blog, 1(8):9.
Julian Salazar, Davis Liang, Toan Q Nguyen, and Katrin Kirchhoff. 2020. Masked language model scoring. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 2699–2712.
Joonbo Shin, Yoonhyung Lee, and Kyomin Jung. 2019.
Effective sentence scoring method using bert for speech recognition. In *Asian Conference on Machine* Learning, pages 1081–1093. PMLR.
Arabella Sinclair, Jaap Jumelet, Willem Zuidema, and Raquel Fernández. 2022. Structural persistence in language models: Priming as a window into abstract language representations. *Transactions of the Association for Computational Linguistics*, 10:1031–1050.
Koustuv Sinha, Robin Jia, Dieuwke Hupkes, Joelle Pineau, Adina Williams, and Douwe Kiela. 2021.
Masked language modeling and the distributional hypothesis: Order word matters pre-training for little.
In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 2888–2913, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Alex Wang and Kyunghyun Cho. 2019. BERT has a mouth, and it must speak: BERT as a Markov random field language model. In *Proceedings of the* Workshop on Methods for Optimizing and Evaluating Neural Language Generation, pages 30–36, Minneapolis, Minnesota. Association for Computational Linguistics.
Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R
Bowman. 2020. BLiMP: The benchmark of linguistic minimal pairs for english. *Transactions of the* Association for Computational Linguistics, 8:377–
392.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, et al. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020* Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45.
Yian Zhang, Alex Warstadt, Xiaocheng Li, and Samuel R. Bowman. 2021. When do you need billions of words of pretraining data? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 1112–1125, Online.
Association for Computational Linguistics.
## Appendix A Additional Examples Of Score Inflation
![5_Image_0.Png](5_Image_0.Png)

Figure 5: The PLL-original metric inflates the score of the word *carnivore*. PLL-word-l2r mitigates this issue, whereas PLL-whole-word overly penalizes the word. Model: bert-base-cased.

![5_image_1.png](5_image_1.png)

Figure 6: The PLL-original metric inflates the score of the word *hooligan*. PLL-word-l2r mitigates this issue, whereas PLL-whole-word overly penalizes the word. Model: bert-base-cased.
## B Text Preprocessing For (P)LL Computation
The minicons library borrows the MLM preprocessing algorithm from Salazar et al. (2020): [CLS]
and [SEP] tokens are prepended and appended to the text, respectively, and are not masked during PLL computation. For CLMs, we minimally adjust the minicons scorer library default and necessarily prepend the beginning of sentence token,
<|endoftext|>, to the text, which enables us to get a probability for the first actual sentence token
(see also the lm-zoo library; Gauthier et al., 2020).
The (P)LLs of all special tokens are not counted toward the final sentence/word score.
When calculating the (P)LL score of individual words (to estimate word frequency effects),
we place them in a neutral context My word is
_. To ensure that the same pattern of results holds across multiple neutral contexts, we additionally test the context *I opened the dictionary and randomly picked a word. It was _*, as well as a nocontext setup. These additional results are reported in Appendix E.1.
Word frequency was operationalized as the log of the number of occurrences of the word in the 2012 Google NGram corpus. Laplace smoothing was applied prior to taking the logarithm.
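Assuming "Laplace smoothing" here means add-one smoothing of the raw counts, the operationalization amounts to:

```python
import math

def log_frequency(ngram_count: int) -> float:
    """Laplace-smoothed log frequency: add 1 to the raw count before taking the logarithm."""
    return math.log(ngram_count + 1)

print(log_frequency(0))       # unseen word -> 0.0
print(log_frequency(57_000))  # illustrative count from the 2012 Google NGram corpus
```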
## C Quantification Of Out-Of-Vocabulary Words Per Dataset
| Dataset | Model class | OOV ratio |
|---|---|---|
| EventsAdapt | BERT | 19.6% |
| EventsAdapt | RoBERTa | 40.3% |
| EventsAdapt | GPT | 40.3% |
| LibriSpeech | BERT | 8% |
| LibriSpeech | RoBERTa | 24.3% |
| LibriSpeech | GPT | 24.3% |
| Brown | BERT | 8% |
| Brown | RoBERTa | 25% |
| Brown | GPT | 25% |
Table 2: The out-of-vocabulary (OOV) ratio per dataset, quantified as the number of words split into at least two tokens by a given model's tokenizer divided by the total number of words in the dataset.
GPT and RoBERTa models use byte-level BytePair-Encoding tokenizers (Radford et al., 2019; Liu et al., 2019); BERT models use WordPiece tokenization (Devlin et al., 2018).
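A sketch of how such an OOV ratio can be computed; the leading-space handling for BPE tokenizers is an assumption about how mid-sentence tokenization is mimicked, not a detail given in the paper:

```python
from transformers import AutoTokenizer

def oov_ratio(words: list[str], tokenizer, bpe_space: bool = False) -> float:
    """Fraction of words split into two or more subword tokens."""
    split = 0
    for word in words:
        # For byte-level BPE (GPT-2/RoBERTa), a leading space mimics a word
        # appearing mid-sentence rather than at the start of the text.
        text = " " + word if bpe_space else word
        if len(tokenizer.tokenize(text)) > 1:
            split += 1
    return split / len(words)

words = ["children", "souvenir", "traveler", "holidays"]
print(oov_ratio(words, AutoTokenizer.from_pretrained("bert-base-cased")))
print(oov_ratio(words, AutoTokenizer.from_pretrained("gpt2"), bpe_space=True))
```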
## D Effects Of Sentence Length

## D.1 Larger LLM Versions
![6_Image_0.Png](6_Image_0.Png)
Figure 7: Sentence length effects for gpt2-xl and bert-large-cased on the EventsAdapt corpus.
## D.2 Larger Datasets
![6_Image_1.Png](6_Image_1.Png)

Figure 8: Sentence length effects for gpt2-medium and bert-base-cased on the LibriSpeech corpus.

![6_image_2.png](6_image_2.png)

Figure 9: Sentence length effects for gpt2-medium and bert-base-cased on the Brown corpus.
## E Effects Of Word Frequency

## E.1 Different Word Contexts
![6_Image_3.Png](6_Image_3.Png)

Figure 10: Word frequency effects for bert-base-cased on the EventsAdapt corpus. Word scores were retrieved with a neutral context: "I opened a dictionary and randomly picked a word. It was _".

![6_image_4.png](6_image_4.png)

Figure 11: Word frequency effects for bert-base-cased on the EventsAdapt corpus. Word scores were retrieved without supporting context.
## E.2 Different Datasets
![7_Image_0.Png](7_Image_0.Png)

Figure 12: Word frequency effects for bert-base-cased on the LibriSpeech corpus. Word scores were retrieved with a neutral context: "My word is _".

![7_image_1.png](7_image_1.png)
## F Correlation With Unidirectional Models

## F.1 Larger LLM Versions
![7_Image_3.Png](7_Image_3.Png)

Figure 14: Correlation between bert-large-cased and gpt2-xl scores on the EventsAdapt corpus.

![7_image_5.png](7_image_5.png)

Figure 15: Correlation between bert-base-cased and gpt2-medium scores on the LibriSpeech corpus.

![7_image_7.png](7_image_7.png)
## G Whole-Sentence Left-To-Right Token Masking
Here, we report results for the scoring algorithm that masks the target token, st, and all sentence tokens to its right in a sentence S with n tokens
(PLL-sentence-l2r). As in autoregressive language models, target token inference is thus conditioned solely on the token's leftward context:
PMLM(st| S<t). The final sentence score is obtained as the sum of the log probabilities of each sentence token given its context:
$$\mathrm{PLL}_{\mathrm{sent}}(S):=\sum_{t=1}^{n}\log\,P_{\mathrm{MLM}}(s_{t}\mid S_{<t})\quad\quad(4)$$
Overall, the PLL-sentence-l2r metric satisfies the metric desiderata better than the PLL-original metric but worse than PLL-word-l2r. In addition, it is inferior to other metrics on the BLiMP evaluation benchmark (Appendix H), in line with previous reports of subpar generation quality (Wang and Cho, 2019).

![7_image_2.png](7_image_2.png)
![7_image_4.png](7_image_4.png)

![7_image_6.png](7_image_6.png)

Figure 18: Word frequency (A) and sentence length (B) effects for scores computed with PLL-sentence-l2r on the EventsAdapt corpus (bert-base-cased).
## H Detailed BLiMP Benchmark Results

Table 3 shows results for each sentence suite within the BLiMP benchmark (in addition to the overall scores reported in the main text). All models shown in Tables 1 and 3 are cased models. PLL-original scores replicate those reported in Salazar et al. (2020).

| Model | Metric | Overall | Ana. Agr | Arg. Str | Binding | Ctrl. Rais. | D-N Agr | Ellipsis | Filler Gap | Irregular | Island | NPI | Quantifiers | S-V Agr |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| BERT (base) | PLL-original | 84.2 | 97.0 | 80.0 | 82.3 | 79.6 | 97.6 | 89.4 | 83.1 | 96.5 | 73.6 | 84.7 | 71.2 | 92.4 |
| BERT (base) | PLL-word-l2r | 84.7 | 97.1 | 81.0 | 82.3 | 81.9 | 98.4 | 89.6 | 83.0 | 96.5 | 75.0 | 85.0 | 69.8 | 92.1 |
| BERT (base) | PLL-whole-word | 83.1 | 96.6 | 76.5 | 81.5 | 80.5 | 96.9 | 87.1 | 82.5 | 97.1 | 74.9 | 83.8 | 69.2 | 88.5 |
| BERT (base) | PLL-sentence-l2r | 58.7 | 80.3 | 63.0 | 68.3 | 53.5 | 82.1 | 68.3 | 47.8 | 47.3 | 56.5 | 38.9 | 51.6 | 50.7 |
| BERT (large) | PLL-original | 84.8 | 97.2 | 80.7 | 82.0 | 82.7 | 97.6 | 86.4 | 84.3 | 92.8 | 77.0 | 83.4 | 72.8 | 91.9 |
| BERT (large) | PLL-word-l2r | 85.0 | 96.8 | 80.6 | 81.9 | 84.8 | 97.8 | 85.8 | 84.0 | 92.0 | 78.8 | 83.6 | 71.7 | 91.2 |
| BERT (large) | PLL-whole-word | 82.6 | 96.6 | 75.7 | 79.9 | 81.4 | 95.2 | 83.6 | 83.3 | 90.1 | 78.7 | 81.5 | 70.4 | 86.7 |
| BERT (large) | PLL-sentence-l2r | 59.8 | 61.5 | 63.0 | 71.3 | 60.5 | 71.8 | 58.3 | 58.5 | 63.0 | 50.2 | 42.8 | 51.9 | 63.0 |
| RoBERTa (base) | PLL-original | 85.4 | 97.3 | 83.5 | 77.8 | 81.9 | 97.0 | 91.4 | 90.1 | 96.2 | 80.7 | 81.0 | 69.8 | 91.9 |
| RoBERTa (base) | PLL-word-l2r | 86.7 | 97.8 | 84.8 | 78.7 | 84.9 | 98.3 | 91.6 | 90.0 | 95.4 | 81.0 | 84.4 | 69.7 | 94.0 |
| RoBERTa (base) | PLL-whole-word | 85.4 | 97.6 | 80.9 | 76.6 | 85.2 | 96.6 | 91.6 | 90.0 | 95.6 | 80.2 | 84.7 | 69.6 | 91.1 |
| RoBERTa (base) | PLL-sentence-l2r | 79.3 | 97.0 | 79.9 | 71.2 | 78.4 | 95.0 | 84.8 | 82.6 | 85.0 | 68.2 | 80.6 | 58.4 | 81.6 |
| RoBERTa (large) | PLL-original | 86.5 | 97.8 | 84.6 | 79.1 | 84.1 | 96.8 | 90.8 | 88.9 | 96.8 | 83.4 | 85.5 | 70.2 | 91.4 |
| RoBERTa (large) | PLL-word-l2r | 87.5 | 98.0 | 85.0 | 80.0 | 86.8 | 98.3 | 90.4 | 89.1 | 95.7 | 83.4 | 88.0 | 70.3 | 93.2 |
| RoBERTa (large) | PLL-whole-word | 85.9 | 98.2 | 80.2 | 78.0 | 87.1 | 96.0 | 90.1 | 88.9 | 95.6 | 82.2 | 88.0 | 69.8 | 89.7 |
| RoBERTa (large) | PLL-sentence-l2r | 80.4 | 98.8 | 82.5 | 71.8 | 80.4 | 95.1 | 82.0 | 80.8 | 91.6 | 73.0 | 76.6 | 57.8 | 86.0 |
| Human | | 88.6 | 97.5 | 90.0 | 87.3 | 83.9 | 92.2 | 85.0 | 86.9 | 97.0 | 84.9 | 88.1 | 86.6 | 90.9 |
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
8
✗ A2. Did you discuss any potential risks of your work?
we do not anticipate specific risks associated with our work
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** All
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
the models are available on huggingface, and the experiments are computationally light The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
3 and Appendix A (no hyperparameter search was conducted though)
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
all results figures
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
3
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
heck-etal-2023-chatgpt | {C}hat{GPT} for Zero-shot Dialogue State Tracking: A Solution or an Opportunity? | https://aclanthology.org/2023.acl-short.81 | Recent research on dialog state tracking (DST) focuses on methods that allow few- and zero-shot transfer to new domains or schemas. However, performance gains heavily depend on aggressive data augmentation and fine-tuning of ever larger language model based architectures. In contrast, general purpose language models, trained on large amounts of diverse data, hold the promise of solving any kind of task without task-specific training. We present preliminary experimental results on the ChatGPT research preview, showing that ChatGPT achieves state-of-the-art performance in zero-shot DST. Despite our findings, we argue that properties inherent to general purpose models limit their ability to replace specialized systems. We further theorize that the in-context learning capabilities of such models will likely become powerful tools to support the development of dedicated dialog state trackers and enable dynamic methods. | # Chatgpt For Zero-Shot Dialogue State Tracking: A Solution Or An Opportunity?
Michael Heck, Nurul Lubis, Benjamin Ruppik, Renato Vukovic, Shutong Feng, Christian Geishauser, Hsien-Chin Lin, Carel van Niekerk, Milica Gašic´
Heinrich Heine University Düsseldorf, Germany
{heckmi,lubis,ruppik,revuk100,fengs,geishaus,linh,niekerk,gasic}@hhu.de
## Abstract
Recent research on dialogue state tracking
(DST) focuses on methods that allow few- and zero-shot transfer to new domains or schemas. However, performance gains heavily depend on aggressive data augmentation and fine-tuning of ever larger language model based architectures. In contrast, general purpose language models, trained on large amounts of diverse data, hold the promise of solving any kind of task without task-specific training. We present preliminary experimental results on the ChatGPT research preview, showing that ChatGPT achieves state-of-the-art performance in zeroshot DST. Despite our findings, we argue that properties inherent to general purpose models limit their ability to replace specialized systems.
We further theorize that the in-context learning capabilities of such models will likely become powerful tools to support the development of dedicated and dynamic dialogue state trackers.
## 1 Introduction
Dialogue state tracking (DST) is a critical component for task-oriented dialogue systems. Its purpose is to extract and track user's goals throughout a conversation (Young et al., 2010). DST is challenging due to the infinite possibilities of user/agent conversations, and because services and schemas/APIs that dialogue systems interface are subject to constant change (Ren et al., 2018). Although traditional approaches achieve high accuracy when operating on a pre-defined set of concepts called an ontology (Mrkšic et al. ´ , 2017; Liu and Lane, 2017; Zhong et al., 2018), ongoing research explores transfer to new domains with little to no additional learning (Rastogi et al., 2020) using ontology independent architectures to allow seamless adaptation to out-of-ontology concepts.
Many strategies for zero-shot transfer to unseen domains have been proposed. Li et al. (2021)
treat DST as a question answering (QA) task by leveraging data augmentation. Zhao et al. (2022)
propose DST by relying on schema descriptions while Heck et al. (2022) utilize natural language descriptions to facilitate zero-shot transfer. Gao et al. (2020) and Lin et al. (2021) suggest learning from non-dialogue QA data which are available in large amounts to improve generalization.
Campagna et al. (2020) harness large synthesized data based on abstract dialogue models. However, none of these techniques are ideal solutions.
Fine-tuning is challenging due to computational costs, risk of over-fitting and the need for expensive (Budzianowski et al., 2018) task-specific data.
Cross-task transfer still requires curated data and careful consideration of suitable learning tasks.
Data augmentation requires high level task knowledge and an adequate synthesizing strategy.
A new generation of large language models
(LLMs) (Brown et al., 2020; Ouyang et al., 2022; Glaese et al., 2022) comes with the promise to be equipped to solve any task without task-specific fine-tuning, but solely with world knowledge they acquired during self-training on massive amounts of data. Such LLMs have been shown to perform remarkably well on in-context learning (ICL),
where only a natural language prompt and examples are provided to condition the generation process, achieving significant improvements over fine-tuned approaches in few-shot setups (Brown et al., 2020; Wang et al., 2022). ChatGPT (OpenAI, 2022) - trained using human feedback and reinforcement learning - is the most recent of such models and single-handedly solves an array of challenging natural language processing (NLP) tasks with super-human capabilities, all through a natural language dialogue interface.
In this work, we aim to answer the question:
does ChatGPT solve the problem of zero-shot DST?
We show that crafting intuitive natural language prompts is sufficient to achieve state-of-the-art performance with ChatGPT, exceeding conventional, engineering-heavy approaches to zero-shot DST
by a large margin. However, despite our findings, we argue that properties inherent to general purpose models inhibit their ability to simply replace specialized systems. We speculate that while in the foreseeable future general purpose models may not become holistic solutions to complex problems, they will provide ample opportunities to empower specialized systems to go beyond their pre-defined scopes, enable on-the-fly extensibility and generation of high quality training data by zero-shot synthesizing or automatic labeling.
## 2 Background
Dialogue state tracking is tasked to (1) determine for every turn $t$ in a dialogue $\{(U_t, M_t)\}_{t=1}^{T}$, with $U_t$ and $M_t$ being the current user and preceding system utterance, whether any of the slots in $S = \{S_n\}_{n=1}^{N}$ is present, to (2) predict values for each $S_n$, and to (3) track the dialogue state $DS_t$ for all $t \in [1, T]$. The DS is cumulative, i.e., $DS_t = \mathrm{update}(DS_{t-1}, \widehat{DS}_t)$ is updated given the predictions of slot-value updates $\widehat{DS}_t$.
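A minimal sketch of the cumulative update rule, with illustrative MultiWOZ-style slot names (the names themselves are ours, not part of the original formalization):

```python
def update(ds_prev: dict, ds_update: dict) -> dict:
    """Cumulative dialogue state: later slot-value updates overwrite earlier ones."""
    ds = dict(ds_prev)
    ds.update(ds_update)
    return ds

ds = {}
turn_updates = [
    {"restaurant-area": "centre"},
    {"restaurant-food": "italian", "restaurant-area": "south"},  # the user revises the area
]
for ds_update in turn_updates:
    ds = update(ds, ds_update)
print(ds)  # {'restaurant-area': 'south', 'restaurant-food': 'italian'}
```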
ChatGPT (OpenAI, 2022) is a dialogue agent (Leike et al., 2018), and at its core a GPT-3.5 LLM fine-tuned on human-written prompt-response pairs followed by reinforcement learning with human feedback (RLHF) (Christiano et al.,
2017; Stiennon et al., 2020). RLHF utilizes a reward model trained on human feedback to improve generation quality and adequacy via proximal policy optimization (Schulman et al., 2017), thereby aligning model output to human values and user's expectations. At the time of writing this work, ChatGPT is proprietary. As a sibling model to InstructGPT, details of its training are elaborated by Ouyang et al. (2022).
## 3 Zero-Shot Dst With Chatgpt
Our investigative approach to zero-shot DST with ChatGPT differs considerably from related works.
We decode dialogue state updates with a general purpose model, without undergoing any parameter updates. Consequently, we neither employ data augmentation nor cross-task transfer learning. Instead, we solely rely on the general capacities of ChatGPT as an aligned dialogue agent. We take a most rigorous approach to zero-shot transfer where we do not allow the provision of any examples, nor of a formal task definition. Instead, we only permit natural language explanations of what the model is supposed to do. This sets our investigation apart from the closely related IC-DST (Hu et al., 2022).
In zero-shot DST, the set of slots S relevant during inference and the set of slots S′seen during training of the model Xθ with parameters θ are disjoint, i.e., S ∩ S′ = ∅. Further, it may be S′ = ∅,
in which case θ is not specifically tuned towards solving DST. This is precisely the case for ChatGPT in our setup. Our approach to zero-shot DST
with ChatGPT is formalized as follows. Let

A1 = P ⊕ "system": M1 ⊕ "user": U1,
At = "system": Mt ⊕ "user": Ut, ∀t ∈ [2, T],
where P is the task description which provides the model with instructions for how to process a dialogue between a system M and a user U. A1 is the initial prompt to ChatGPT. At≥2 are the follow-up prompts, only containing a single turn-pair of the dialogue of interest. ChatGPT is particularly suitable for this strategy due to its chat based interface.
ChatGPT generates its next output Bt conditioned on the current prompt At−1, as well as all preceding user queries and system responses of the same chat. The dialogue state update $\widehat{DS}_t$ can be found in Bt, but may not be directly interpretable as such due to the diversity in the output surface forms. Thus, we require a normalization operation $\widehat{DS}_t$ = normalize(Bt). In contrast to Hu et al. (2022), we do not condition Bt on DSt. This renders the task even more challenging, as ChatGPT is forced to solve complex subtasks such as coreference resolution - the case where a newly encountered slot refers to the value of another slot
- solely given the initial prompt and its own latent dialogue state given the dialogue history.
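A minimal sketch of the prompt construction; the exact separators and quoting below are assumptions, and the full task description P is reproduced in the paper's Appendix A:

```python
def initial_prompt(task_description: str, system_utt: str, user_utt: str) -> str:
    # A_1 = P ⊕ "system": M_1 ⊕ "user": U_1
    return f'{task_description}\n"system": "{system_utt}"\n"user": "{user_utt}"'

def followup_prompt(system_utt: str, user_utt: str) -> str:
    # A_t = "system": M_t ⊕ "user": U_t, for t >= 2
    return f'"system": "{system_utt}"\n"user": "{user_utt}"'

# Placeholder for the full task description; the user speaks first in MultiWOZ,
# so the first system utterance is empty.
P = "Consider the following list of concepts ..."
print(initial_prompt(P, "", "i need a place to dine in the centre thats expensive"))
print(followup_prompt("I have several options for you.", "it should be a restaurant"))
```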
## 4 Experiments
At the time of conducting our experiments, ChatGPT is a proprietary research preview accessible for free via a web interface1. We used the Jan 9 version of the model. We use a regular expression term to extract all parts that are JSON formatted.
We form DSt by accumulating all predicted updates up to turn t.
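A minimal sketch of this extraction step, assuming flat (non-nested) JSON objects in the model output; the authors' exact regular expression is not specified, so the pattern below is an assumption:

```python
import json
import re

def extract_state_update(chatgpt_output: str) -> dict:
    """Pull every JSON object out of the model output and merge them into one slot-value dict."""
    update = {}
    for match in re.findall(r"\{.*?\}", chatgpt_output, flags=re.DOTALL):
        try:
            update.update(json.loads(match))
        except json.JSONDecodeError:
            continue  # ignore fragments that only look like JSON
    return update

output = 'Sure! The state update is {"restaurant-area": "centre", "restaurant-pricerange": "expensive"}.'
print(extract_state_update(output))
```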
1chat.openai.com (accessed 6. Jan. to 20. Jan. 2023)

![2_image_0.png](2_image_0.png)

(Figure: y-axis label "false negative rate")

Evaluation. We evaluate on the 1000 dialogues of the MultiWOZ 2.1 (Eric et al., 2020) test split and use joint goal accuracy (JGA) to compare methods. For a fair judgement of the ChatGPT predictions, we follow the evaluation procedure of Heck et al. (2020). We process each dialogue once and refrain from using ChatGPT's *regeneration* feature.
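JGA counts a turn as correct only if the complete predicted dialogue state matches the gold state. A simplified sketch; the evaluation procedure of Heck et al. (2020) additionally handles value normalization and equivalent surface forms:

```python
def joint_goal_accuracy(predicted_states: list[dict], gold_states: list[dict]) -> float:
    """Fraction of turns whose predicted dialogue state matches the gold state exactly."""
    correct = sum(pred == gold for pred, gold in zip(predicted_states, gold_states))
    return correct / len(gold_states)

gold = [{"restaurant-area": "centre"}, {"restaurant-area": "centre", "restaurant-food": "italian"}]
pred = [{"restaurant-area": "centre"}, {"restaurant-area": "centre"}]
print(joint_goal_accuracy(pred, gold))  # 0.5
```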
Prompt. We imposed restrictions that the task-defining prompt P be intuitive natural language and provide no formal schema. The crafting process involves simple trial-and-error on fewer than 10 held-out dialogues from the MultiWOZ training set. The design process was guided by the intention to imitate the behavior of a triple copy strategy
(TripPy) DST (Heck et al., 2020). P consists of three parts. First, a list of names for detectable informable slots along with natural language descriptions. The slot names help us extract a DSdt that is compatible with the dataset's labels. Second, a sparse list of slots that are categorical, along with their value candidates for (1) aiding normalization of values that are expected to show high variability in expression, and (2) modeling Boolean slots.
Third, an informal task description.2
## 4.1 Chatgpt Vs. Supervised Sota
Comparing ChatGPT's performance to state-of-theart *supervised* approaches that achieve close to 60%
JGA is not a fair fight3, and yet we observe an impressive 31.5% zero-shot JGA. This result is double-edged; on the one hand it is evidence that ChatGPT is capable of DST4, and on the other hand is no match for specialized systems.
The comparison to TripPy, a SOTA supervised model, allows us a more fine-grained analysis. In Figure 1, slot filling performance is broken down into value types. We observed that ChatGPT underperforms in non-trivial cases, namely *refer*, where a newly encountered slot refers to the value of another slot, and *inform*, where a slot-value was mentioned by the system and confirmed by the user. ChatGPT shows slight underperformance for Boolean slots. Remarkably, performance for values that are extracted directly from user utterances
- the most relevant category in terms of frequency –
2See Appendix A for the full prompt.
3https://github.com/budzianowski/multiwoz 4See Appendix B for an example dialogue.
| Models | attr. | hotel | rest. | taxi | train | avg. |
|---|---|---|---|---|---|---|
| TRADE (2019; 2020) | 22.8 | 19.5 | 16.4 | 59.2 | 22.9 | 28.16 |
| TripPy-R (2022) | 27.1 | 18.3 | 15.3 | 61.5 | 23.7 | 29.18 |
| TransferQA (2021) | 31.3 | 22.7 | 26.3 | 61.9 | 36.7 | 35.78 |
| Li et al. (2021) | 42.4 | 24.9 | 27.7 | 60.3 | 41.1 | 39.28 |
| D3ST (2022) | 56.4 | 21.8 | 38.2 | 78.4 | 38.7 | 46.70 |
| Campagna et al. (2020) | 52.8 | 36.3 | 45.3 | 62.6 | 46.7 | 48.74 |
| ChatGPT | 52.7 | 42.0 | 55.8 | 70.9 | 60.8 | 56.44 |
| IC-DST5 (2022) | 60.0 | 46.7 | 57.3 | 71.4 | 49.4 | 56.96 |
is exceeding the strong supervised baseline. Lastly, ChatGPT has a clear advantage in the underrepresented and therefore notoriously difficult *dontcare* cases, where a user is indifferent about a particular value for a slot.
## 4.2 Chatgpt Vs. Zero-Shot Sota
ChatGPT considerably outperforms previous approaches to zero-shot DST (see Table 1) and is more stable across domains than other methods.
The model tends to handle challenging domains markedly better, while maintaining high performance on domains that are handled with relative ease by earlier approaches. Most approaches to zero-shot DST still employ supervised learning on a subset of domains and test on a held-out domain.
Such methods struggle in domains with many slots never seen during training. This is evident for *hotel*, which has many unique slots and is the only domain with Boolean slots. ChatGPT can excel in such challenging scenarios by drawing from its general world knowledge to interpret concepts. *taxi* is challenging due to its frequent *refer* cases. Where most other methods fail, ChatGPT shows competency in resolving co-references in the zero-shot setting. Other models designed for DST rely on architectures that are not fundamentally different from the backbone model of ChatGPT. The reason for ChatGPT's superior abilities in conducting DST is likely found in its training scheme, particularly instruction tuning and alignment via reinforcement learning with human feedback (Ouyang et al., 2022; Ziegler et al., 2019), combined with its massive scale in terms of model and training data size. IC-DST (Hu et al., 2022) was the first successful attempt at pseudo5 zero-shot DST via ICL. Our preliminary results with ChatGPT are on par, which is remarkable for the following reasons.

5Hu et al. (2022) use hand-crafted labeled examples for ICL even in the "zero-shot" case.
(1) Our prompt is non-schematic and without examples, (2) our task-defining prompt is stated only once at the beginning of the chat, and (3) we do not maintain a DS to serve as additional input at each turn. The heightened zero-shot performance of IC-DST can be mainly attributed to these points.
## 4.3 Error Analysis
We identified a set of recurring errors that are likely caused either by the content of P or by the model's inherent properties. See Table 2 for examples. Appendix C lists more detailed instances.
a) Failed carry-over of system-informed values. Our P does not explicitly instruct the model to resolve *inform* cases (see Section 4.1). Nevertheless, ChatGPT handles the majority of cases correctly, failing to carry over only about 28% of system-informed values. Specifying the desired behavior in P may improve this ratio further.
b) Incomplete coreference resolution. Coreferences are usually detected, i.e., in about 65% of cases, but often not resolved. Where a coreference was detected, about 23% are not resolved correctly, and another 13% are incorrect due to other errors.
c) Overprediction of *dontcare*. The recall of ChatGPT for *dontcare* is considerably higher than for the supervised baseline, but precision is low.
About 35% of *none* false negatives in Figure 1 can be attributed to overpredicting *dontcare* by ChatGPT, compared to 5% for the supervised baseline.
This is likely caused by the formulation in P. Occasionally, the model interprets slots that are not specifically filled by the user as *dontcare*.
d) Ignoring value candidates. On rare occasions, ChatGPT ignores value candidates for categorical slots and picks variants from the dialogue context instead. We observed this error for 0.1% of all values to be extracted from the context.
e) Hallucinated slots. The model frequently hallucinates slots. About 90.4% of all of ChatGPT's slot predictions are MultiWOZ slots. Since we specifically prompt ChatGPT to fill slots requested by the user with "?", the vast majority of hallucinations, 8.6% of all slot predictions, are of the requestable type, which are not considered by the standard MultiWOZ evaluation and are therefore not listed in P. In fact, ChatGPT predicts all requestable slots appearing in the MultiWOZ dataset with an average recall of 61%. Rarely, in 0.3% of all cases, alternative names are hallucinated for slots listed in P. A further 0.6% are predictions for made-up slots.
f) Arbitrary normalization. We observed that the model sometimes chooses to normalize predicted values. However, these normalizations are inconsistent across dialogues.
g) Predicting DS_t instead of ∆DS_t. Despite explicitly requesting to predict DS updates, ChatGPT on rare occasions, in 0.2% of all processed dialogues, attempts to predict the full DS at each turn, which may cause other phenomena such as slot-value over-prediction.
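Several of these errors are amenable to simple post-processing of the predicted updates. The sketch below illustrates two such steps, discarding predicted slots that are not part of the ontology given in P (error e) and normalizing predicted values (errors d and f). It is an illustrative mitigation under the assumption that the MultiWOZ ontology is available, not a step applied in the evaluation reported above; the alias map in particular is a hypothetical example.

```
# Illustrative subset; in practice these would cover all slots and value candidates given in P.
KNOWN_SLOTS = {"hotel-name", "hotel-pricerange", "restaurant-pricerange", "train-day", "taxi-departure"}
VALUE_ALIASES = {"high-end": "expensive", "guesthouse": "guest house"}  # hypothetical alias map

def postprocess_update(update):
    cleaned = {}
    for slot, value in update.items():
        if slot not in KNOWN_SLOTS:
            continue  # drop hallucinated slot names such as "hotel-reference_number" (error e)
        value = value.strip().lower()                    # "Saturday" -> "saturday" (error f)
        cleaned[slot] = VALUE_ALIASES.get(value, value)  # "high-end" -> "expensive" (error d)
    return cleaned
```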
## 5 Discussion: Solution Or Opportunity?6
ChatGPT is a sophisticated dialogue agent that, via alignment with human judgements, is capable of understanding context and intent of a multi-turn conversation far beyond the capacities of the previous generation of LLMs. This makes it well-suited for DST. Our results demonstrate that even with intuitive natural language prompts, a complex task such as DST can be solved exceedingly well without any form of additional learning.
While specialized systems can exert control over their input processing and output generation to arbitrary degrees, this is not the case for ChatGPT. Even with the most rigorous and schematic prompts, there can be no guarantee that the model interprets the input as intended or generates the output as required, which may lead to unexpected behavior. Furthermore, there is no guarantee that behavior is consistent across a series of similar inferences, such as in our experimental evaluation.
In terms of deployment, the cost factor of building and running massive models may hinder their utilization as a plug-and-play module.
Despite impressive zero-shot and ICL results for general purpose models, specialist models still perform best on most tasks thanks to task-specific solutions via adequate engineering (Heck et al., 2020; Ye et al., 2021; Kim et al., 2020) and task-related data. However, the opportunities to improve dedicated systems with the help of general purpose models are plentiful. Their predictive powers could be used for developing smaller, specialized, low-inference-cost models.

6The model's own response on that matter is refreshingly balanced. See Appendix D for ChatGPT's response.
| Error | Example |
|---|---|
| a) PMUL4050 | system: "I'd recommend the Autumn House. Would you like to make a booking?" user: "Yes please. I need the reservation to be for 8 people and 2 nights starting on Tuesday." Prediction: ..., hotel-name: none. Label: ..., hotel-name: autumn house |
| b) PMUL0117 | user: "Yes I also need a taxi that will get me to the restaurant by the booked time please." Prediction: taxi-destination: hotel, taxi-departure: restaurant. Label: taxi-destination: the gonville hotel, taxi-departure: la mimosa |
| c) SNG01873 | user: "I need to be picked up from pizza hut city centre after 04:30" Prediction: ..., hotel-name: dontcare, ..., attraction-type: dontcare, ... Label: ... |
| d) PMUL0599 | user: "[...] Can you just help me find a high-end Mexican restaurant?" Prediction: ..., restaurant-pricerange: high-end. Label: ..., restaurant-pricerange: expensive |
| e) MUL2051 | user: "Can I get address and postcode for the hotel?" Prediction: hotel-address: ?, hotel-postcode: ?. Label: -. system: "The address is 74 chesterton road, the postal code is cb41er, can I assist with anything else?" user: "That is all for now, goodbye." Prediction: hotel-address: 74 chesterton road, hotel-postcode: cb41er. Label: - |
| f) MUL0524 | user: "I'm going to Cambridge on saturday and want to arrive by 14:15 please." Prediction: ..., train-day: Saturday. Label: ..., train-day: saturday |
| g) PMUL4246 | user: "i need a place to go and should be a museum" Prediction: attraction-type: museum. Label: attraction-type: museum. system: "Okay! There are several museums in Cambridge. What part of town would you like to visit?" user: "How about ones in the centre, what's available?" Prediction: attraction-type: museum, attraction-area: centre. Label: attraction-area: centre |

Table 2: Examples of the recurring error types described in Section 4.3.
Automatic labeling and data augmentation are natural use cases for ChatGPT, as is evident from our experimental results; a per-domain JGA of 70% (see Section 4.2) is surely sufficient to generate additional mid- to high-quality training data for dedicated systems. Automatic labeling may be conducted on-line for on-the-fly adaptation of production systems or off-line for iterative learning.
Another way of harnessing general purpose models is their integration into dedicated systems as fallback options in case of out-of-domain or out-of-ontology requests. An integration via knowledge-seeking term detection (Gunasekara et al., 2020) could facilitate the ability to provide context-aware responses that go beyond the original scope of the specialized system. General purpose models may handle unseen domains in place of the main model.
While hallucinations may be an issue if not handled adequately, they also pose an opportunity to enable zero-shot concept detection. We observed that many slot hallucinations were sensible and pointed at elements that were meaningful to conversations. Zero-shot slot detection may be utilized to annotate and prepare unstructured data for model training, and to expand a system's capacities on-the-fly. Dialogue state trackers with dynamic dialogue states have the potential to expand a task-oriented dialogue system's conversational range seamlessly (Geishauser et al., 2022). A general purpose model that has the capacity to identify new concepts may be utilized to generate API calls and database queries that are unknown to the specialized system (OpenAI, 2023; Chase, 2023).
General purpose models may replace some components in a modular dialogue system (Zhu et al.,
2022). It might still be beneficial to rely on specialized DST and a dedicated policy for particular tasks in order to maintain interpretability and a desired level of control over information flow. However, natural language understanding (NLU) and natural language generation (NLG) modules may be powered by generative large language model based systems such as ChatGPT in order to benefit from a heightened ability of semantic modeling and to facilitate more natural and diverse output, thus promoting more natural conversations with modular task-oriented dialogue systems.
## 6 Conclusion
This work is the first to investigate ChatGPT's capacities for zero-shot DST. Despite the remarkable preliminary results that we achieved, we identified limitations rooted in inherent properties of general purpose models, which prevent them from becoming holistic solutions to complex NLP problems without further research. We discussed opportunities provided by ChatGPT and similar models to advance the development of specialized systems. With our insights and discussion, we hope to stimulate research in similar directions.
## Limitations
At the time of writing this work, ChatGPT is only available as a proprietary free research preview via a web interface. This is limiting in several ways.
(1) Parts of our analysis are qualitative, as quantification is challenging due to the limited accessibility of the investigated model. (2) Some details about the investigated model are not yet disclosed. This is true for the model design as well as for the data used to train ChatGPT. MultiWOZ is a freely available and widely used dataset, therefore no guarantee can be given that ChatGPT has not been exposed to at least some meta details regarding this dataset. (3) Given the nature of the free research preview, exact reproducibility is not guaranteed, as the model may change at any time. However, it is expected that any future version of ChatGPT retains its general abilities and behaviors.
Model-as-a-service. Building a general purpose model such as ChatGPT is extremely costly and an option only for a few. However, once it exists, it may be utilized for a multitude of purposes. As a model, ChatGPT does not need to be built for DST in order to be useful for DST. With capable enough general purpose models, fine-tuning towards specific tasks may be avoided. Fine-tuning is challenging for multiple reasons, such as the need for adequate data, computational costs, risk of over-fitting and catastrophic forgetting, among others.
Just like its sibling model, ChatGPT will become available as model-as-a-service. The advantage of this is that a massive LM such as this is usable independent of the user's hardware. But this advantage comes with the disadvantage that it will in all probability remain proprietary. In consequence, it will likely not be possible to ever run, adapt, train or modify ChatGPT on local machines.
ChatGPT as model-as-a-service is likely to remain a black box to customers and researchers, even if just in parts. The model may change any time. In fact, a model update during our experimental evaluation prompted us to re-process a few of our test dialogues. This property impedes backward compatibility and the ability to trust in familiar behavior.
A general purpose model may show too general behavior and converse about more than what is required or requested. This also poses vulnerabilities for adversarial attacks. To this end, models such as ChatGPT have been trained with human feedback to better handle malicious intent and abusive behaviors.
A model-as-a-service is a gated resource. As such, its indefinite availability cannot be guaranteed. Further, recurring costs for access may be too high for certain downstream tasks. As a hosted service, latency might become a bottleneck or hindrance for its use as a component in complex applications.
## Ethics Statement
The disclaimer of ChatGPT states that the model may occasionally generate incorrect information and may occasionally produce harmful instructions or biased content. Models, code and datasets were used in accordance with their respective licenses, terms of use and intended use. We provide logs and code that we created for this work.7 Data that we used and generated does not contain any information that names or uniquely identifies individual people or offensive content.
## Acknowledgements
M. Heck, N. Lubis, S. Feng and C. van Niekerk are supported by funding provided by the Alexander von Humboldt Foundation in the framework of the Sofja Kovalevskaja Award endowed by the Federal Ministry of Education and Research, while C.
Geishauser, H-C. Lin, B. Ruppik and R. Vukovic are supported by funds from the European Research Council (ERC) provided under the Horizon 2020 research and innovation programme (Grant agreement No. STG2018804636). We thank Girish Kulkarni and Annika Hennes for their help in processing MultiWOZ dialogues with ChatGPT.
## References
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In *Advances in Neural Information Processing Systems*,
volume 33, pages 1877–1901. Curran Associates, Inc.
7https://gitlab.cs.uni-duesseldorf.de/general/dsml/chatgpt-dst-public

Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gašić. 2018. MultiWOZ - a large-scale multi-domain Wizard-of-Oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5016–5026, Brussels, Belgium. Association for Computational Linguistics.
Giovanni Campagna, Agata Foryciarz, Mehrad Moradshahi, and Monica Lam. 2020. Zero-shot transfer learning with synthesized data for multi-domain dialogue state tracking. In *Proceedings of the 58th Annual Meeting of the Association for Computational* Linguistics, pages 122–132, Online. Association for Computational Linguistics.
Harrison Chase. 2023. LangChain. Accessed 2023-0525.
Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. In Advances in Neural Information Processing Systems, volume 30, pages 4299—-4307. Curran Associates, Inc.
Mihail Eric, Rahul Goel, Shachi Paul, Abhishek Sethi, Sanchit Agarwal, Shuyang Gao, Adarsh Kumar, Anuj Goyal, Peter Ku, and Dilek Hakkani-Tür. 2020. MultiWOZ 2.1: A consolidated multi-domain dialogue dataset with state corrections and state tracking baselines. In *Proceedings of the 12th Language Resources and Evaluation Conference*, pages 422–428, Marseille, France. European Language Resources Association.
Shuyang Gao, Sanchit Agarwal, Di Jin, Tagyoung Chung, and Dilek Hakkani-Tur. 2020. From machine reading comprehension to dialogue state tracking: Bridging the gap. In Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI, pages 79–89, Online. Association for Computational Linguistics.
Christian Geishauser, Carel van Niekerk, Hsien-chin Lin, Nurul Lubis, Michael Heck, Shutong Feng, and Milica Gašić. 2022. Dynamic dialogue policy for continual reinforcement learning. In *Proceedings of the 29th International Conference on Computational Linguistics*, pages 266–284, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Amelia Glaese, Nat McAleese, Maja Trębacz, John Aslanides, Vlad Firoiu, Timo Ewalds, Maribeth Rauh, Laura Weidinger, Martin Chadwick, Phoebe Thacker, et al. 2022. Improving alignment of dialogue agents via targeted human judgements.
R. Chulaka Gunasekara, Seokhwan Kim, Luis Fernando D'Haro, Abhinav Rastogi, Yun-Nung Chen,
Mihail Eric, Behnam Hedayatnia, Karthik Gopalakrishnan, Yang Liu, Chao-Wei Huang, Dilek HakkaniTür, Jinchao Li, Qi Zhu, Lingxiao Luo, Lars Liden, Kaili Huang, Shahin Shayandeh, Runze Liang, Baolin Peng, Zheng Zhang, Swadheen Shukla, Minlie Huang, Jianfeng Gao, Shikib Mehri, Yulan Feng, Carla Gordon, Seyed Hossein Alavi, David R. Traum, Maxine Eskénazi, Ahmad Beirami, Eunjoon Cho, Paul A. Crook, Ankita De, Alborz Geramifard, Satwik Kottur, Seungwhan Moon, Shivani Poddar, and Rajen Subba. 2020. Overview of the ninth dialog system technology challenge: DSTC9. *CoRR*,
abs/2011.06486.
Michael Heck, Nurul Lubis, Carel van Niekerk, Shutong Feng, Christian Geishauser, Hsien-Chin Lin, and Milica Gašić. 2022. Robust dialogue state tracking with weak supervision and sparse data. *Transactions of the Association for Computational Linguistics*, 10:1175–1192.
Michael Heck, Carel van Niekerk, Nurul Lubis, Christian Geishauser, Hsien-Chin Lin, Marco Moresi, and Milica Gasic. 2020. TripPy: A triple copy strategy for value independent neural dialog state tracking.
In Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 35–44, 1st virtual meeting. Association for Computational Linguistics.
Yushi Hu, Chia-Hsuan Lee, Tianbao Xie, Tao Yu, Noah A. Smith, and Mari Ostendorf. 2022. Incontext learning for few-shot dialogue state tracking.
CoRR, abs/2203.08568.
Sungdong Kim, Sohee Yang, Gyuwan Kim, and SangWoo Lee. 2020. Efficient dialogue state tracking by selectively overwriting memory. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 567–582, Online.
Association for Computational Linguistics.
Jan Leike, David Krueger, Tom Everitt, Miljan Martic, Vishal Maini, and Shane Legg. 2018. Scalable agent alignment via reward modeling: a research direction.
ArXiv, abs/1811.07871.
Shuyang Li, Jin Cao, Mukund Sridhar, Henghui Zhu, Shang-Wen Li, Wael Hamza, and Julian McAuley.
2021. Zero-shot generalization in dialog state tracking through generative question answering. In *Proceedings of the 16th Conference of the European* Chapter of the Association for Computational Linguistics: Main Volume, pages 1063–1074, Online.
Association for Computational Linguistics.
Zhaojiang Lin, Bing Liu, Andrea Madotto, Seungwhan Moon, Zhenpeng Zhou, Paul Crook, Zhiguang Wang, Zhou Yu, Eunjoon Cho, Rajen Subba, and Pascale Fung. 2021. Zero-shot dialogue state tracking via cross-task transfer. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language* Processing, pages 7890–7900, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Bing Liu and Ian Lane. 2017. An end-to-end trainable neural network model with belief tracking for taskoriented dialog. In *Proceedings of Interspeech 2017*,
pages 2506–2510.
Nikola Mrkšić, Diarmuid Ó Séaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2017. Neural belief tracker: Data-driven dialogue state tracking. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1777–1788, Vancouver, Canada. Association for Computational Linguistics.
OpenAI. 2022. ChatGPT: Optimizing language models for dialogue. Accessed 2023-01-13.
OpenAI. 2023. ChatGPT plugins. Accessed 2023-0525.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Gray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022.
Training language models to follow instructions with human feedback. In *Advances in Neural Information* Processing Systems.
Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2020. Schemaguided dialogue state tracking task at DSTC8. *CoRR*,
abs/2002.01359v1.
Liliang Ren, Kaige Xie, Lu Chen, and Kai Yu. 2018. Towards universal dialogue state tracking. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2780–2786, Brussels, Belgium. Association for Computational Linguistics.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. *ArXiv*, abs/1707.06347.
Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. 2020. Learning to summarize with human feedback. In *Advances in Neural Information Processing Systems*,
volume 33, pages 3008–3021. Curran Associates, Inc.
Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Gary Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Maitreya Patel, Kuntal Kumar Pal, Mehrad Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Shailaja Keyur Sampat, Savan Doshi, Siddhartha Mishra, Sujan Reddy, Sumanta Patro, Tanay Dixit,
Xudong Shen, Chitta Baral, Yejin Choi, Hannaneh Hajishirzi, Noah A. Smith, and Daniel Khashabi.
2022. Benchmarking generalization via in-context instructions on 1,600+ language tasks. *CoRR*,
abs/2204.07705.
Chien-Sheng Wu, Andrea Madotto, Ehsan Hosseini-Asl, Caiming Xiong, Richard Socher, and Pascale Fung.
2019. Transferable multi-domain state generator for task-oriented dialogue systems. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 808–819, Florence, Italy.
Association for Computational Linguistics.
Fanghua Ye, Jarana Manotumruksa, Qiang Zhang, Shenghui Li, and Emine Yilmaz. 2021. Slot selfattentive dialogue state tracking. In Proceedings of the Web Conference 2021, pages 1598–1608.
Steve Young, Milica Gašić, Simon Keizer, François Mairesse, Jost Schatzmann, Blaise Thomson, and Kai Yu. 2010. The hidden information state model: A practical framework for POMDP-based spoken dialogue management. *Computer Speech & Language*, 24(2):150–174.
Jeffrey Zhao, Raghav Gupta, Yuan Cao, Dian Yu, Mingqiu Wang, Harrison Lee, Abhinav Rastogi, Izhak Shafran, and Yonghui Wu. 2022. Descriptiondriven task-oriented dialog modeling. *CoRR*,
abs/2201.08904.
Victor Zhong, Caiming Xiong, and Richard Socher.
2018. Global-locally self-attentive encoder for dialogue state tracking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1458–
1467, Melbourne, Australia. Association for Computational Linguistics.
Qi Zhu, Christian Geishauser, Hsien-Chin Lin, Carel van Niekerk, Baolin Peng, Zheng Zhang, Michael Heck, Nurul Lubis, Dazhen Wan, Xiaochen Zhu, Jianfeng Gao, Milica Gasic, and Minlie Huang. 2022.
Convlab-3: A flexible dialogue system toolkit based on a unified data format. *CoRR*, abs/2211.17148.
Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B.
Brown, Alec Radford, Dario Amodei, Paul F. Christiano, and Geoffrey Irving. 2019. Fine-tuning language models from human preferences. *CoRR*,
abs/1909.08593.
## A Task-Defining Prompt P
See Figure 2 for our complete prompt.
## B Example Dialogue
See Figure 3 for an example dialogue that processes MUL1350 for DST.
## B.1 Reasoning About Numbers

From MUL2122:
" system ": "" " user ": " Hello ! I am looking for train schedules from
,→ Leicester . Can you help me with that ?"
The following slots were updated by the user :
{
" train - departure ": " Leicester " ,
" train - destination ": "?" , " train - day ": "?" , " train - leaveAt ": "?" , " train - arriveBy ": "?" , " train - book_people ": "?" } " system ": " Sure ! There are several trains leaving from
,→ leicester . What destination , day and time would you
,→ like me to search ?"
" user ": " I like a train going to cambridge on tuesday please
,→ ."
The following slots were updated by the user :
{ " train - destination ": " cambridge " , " train - day ":" tuesday " ,
" train - leaveAt ":" dontcare " ,
" train - arriveBy ":" dontcare "
} " system ": " Okay . It looks as if there are trains leaving
,→ Leicester for Cambridge every hour starting at 5:09
,→ and ending at 23:09 on Tuesday . "
" user ": " Pick one . I need the travel time and arrival time .
,→ Any one after 20:15."
The following slots were updated by the user :
{ " train - leaveAt ":" dontcare " , " train - arriveBy ":"20:15" } " system ": " You have one leaving at 21:09 , what time do you
,→ want to arrive by ?"
" user ": " Thank you that will work fine for me and my husband
,→ "
The following slots were updated by the user :
{" train - book_people ":"2"}
" system ": " The booking was successful . Your reference number
,→ is RWRKP9L6 "
" user ": " I also need a hotel with free wifi and free parking
,→ . "
The following slots were updated by the user :
{ " hotel - parking ":" yes " , " hotel - internet ":" yes " } " system ": " I have several excellent options . Did you have an
,→ area of town in mind ?"
" user ": " No , I just would like a guesthouse . "
The following slots were updated by the user :
{ " hotel - type ":" guest house " , " hotel - area ":" dontcare " }
" system ": " Sure , I ' d suggest Acorn Guest House at 154
,→ Chesterton Road CB41DA . It ' s moderately priced with
,→ 4 stars . Would that work for you ?"
" user ": " Can you please make a reservation for just me , for
,→ 4 nights starting tuesday ?"
The following slots were updated by the user :
{ " hotel - name ":" Acorn Guest House " , " hotel - book_day ":" Tuesday " , " hotel - book_stay ":"4" , " hotel - book_people ":"1" } " system ": " Your reference number is HYEDH9O5 , anything else
,→ I can help you with ?"
" user ": " No thank you , you have been very helpful . Have a
,→ great day ."
An empty JSON list , no new slot is updated by the user .
## B.2 Explaining A Response Referring To P
```
" system ": ""
" user ": " Hi . I am having trouble finding a place to eat
,→ called prezzo . Can you locate it for me and tell me
,→ a little about it please ?"
There were no " slots " updated in the " user " ' s latest
,→ response to the " system ". As per the provided user
,→ sentence , the user is seeking information and is
,→ not providing any details . Thus the response should
,→ be an empty JSON list {}
" system ": " Prezzo is located at 21 - 24 Northampton Road , in
,→ the west . It ' s an italian restaurant . Moderate
,→ price range ."
" user ": " That sounds perfect !"
The " slots " updated by the " user " in this response are the
,→ following :
Copy code
{
" restaurant - name ": " prezzo " ,
" restaurant - area ": " west " ,
" restaurant - food ": " italian " ,
" restaurant - pricerange ": " moderate "
}
It ' s worth noting that these slots are updated only because
,→ system provided that information in the previous
,→ message , if the system have not provided this
,→ information , the updated slots should be filled by
,→ "?"
" system ": " Would you like me to book you a table ?"
" user ": " Not this time but I would like to visit an
,→ attraction while in Centre , can you find one for me
,→ ?"
The " slots " updated by the " user " in this response are the
,→ following :
Copy code
{
" restaurant - book_people ": " dontcare " ,
" restaurant - book_day ": " dontcare " ,
" restaurant - book_time ": " dontcare " ,
" attraction - area ": " centre "
}
It ' s worth noting that restaurant - book_people , restaurant -
,→ book_day , restaurant - book_time are updated with "
,→ dontcare " since the user doesn ' t care about booking
,→ a table .
" system ": " Okay , there ' s about 44 attractions in the centre
,→ of town . Is there something in particular you ' d
,→ like ?"
" user ": " no , but I will need the address ."
The " slots " updated by the " user " in this response are the
,→ following :
Copy code
{
" attraction - type ": "?" ,
" attraction - name ": "?"
}
In this response the user didn ' t give any specific
,→ attraction details , so the " slots " of attraction -
,→ type and attraction - name are filled with "?" as the
,→ user didn ' t specify any preferences .
" system ": " I think you would enjoy Old Schools in the centre
,→ of town . It is an architectural attraction . The
,→ address is trinity lane ."
" user ": " Great , that sounds perfect . Thanks !"
The " slots " updated by the " user " in this response are the
,→ following :
Copy code
{
" attraction - type ":" architectural " ,
" attraction - name ":" Old Schools " ,
" attraction - address ":" trinity lane "
}
The user is indicating that this attraction is the one they
,→ are interested in and thank the system , thus the
,→ slots are updated with the provided information .
" system ": " Post code is cb21tt , Trinity lane . Is there
,→ anything else I can tell you ?"
" user ": " Not today , thank you so much for your help . I
,→ appreciate everything you do ."
The " slots " updated by the user in this response are the
,→ following :
Copy code
{
" attraction - postcode ":" cb21tt "
}
The user is indicating the end of their conversation and
,→ expressing gratitude , so the system does not
,→ receive any new request .
" system ": " Absolutely positively no problem at all ! Thanks
,→ for using our service . Enjoy your time !"
```
From MUL2405:
```
" user ": " Thanks , I ' m all set here . You can end the
,→ conversation now ."
There were no " slots " updated by the user in this response .
,→ The user is indicating the end of the conversation .
,→ Thus the response should be an empty JSON list {}
```
## C Error Types
## C.1 Failed Carry-Over Of System-Informed Values

From PMUL4050:

```
" system ": "I ' d recommend the Autumn House . Would you like to
,→ make a booking ?"
" user ": " Yes please . I need the reservation to be for 8
,→ people and 2 nights starting on Tuesday . "
{
" hotel - book_people ":"8" ,
" hotel - book_stay ":"2" ,
" hotel - book_day ":" Tuesday "
}
" system ": " Great . I successfully booked your stay . Your
,→ reference number is 2 OVWNFW1 . Is there anything
,→ else I can help you with ?"
" user ": " I am looking for a particular attraction . Its name
,→ is called lynne strover gallery "
{
" attraction - name ": " lynne strover gallery " ,
" attraction - type ":"?"
}
" system ": " The Lynne Strover Gallery is a wonderful museum
,→ in the West area of town , it is free to enter .
,→ Would you like the phone number and address perhaps
,→ ?"
" user ": " Yes , I would also like to know what kind of
,→ attraction it is and the entrance fee ."
{
" attraction - type ": "?" ,
" attraction - fee ":"?"
}
" system ": " It is a museum and it is free entry ."
" user ": " What area is it in ?"
{
" attraction - area ":" west "
}
" system ": " As stated previously , it is on the west side of
,→ town . Is there anything else I can do for you ?"
" user ": " No that ' s all the help I need . Thanks for your help
,→ . "
{}
```
## C.2 Incomplete Coreference Resolution

From MUL1228:
" system ": " I have one hotel in the moderate range , cityroomz
,→ . Would you like me to book it for you ?"
" user ": " Yes , please . Can you book a room for Friday for 1
,→ person , 3 nights ?"
{ " hotel - name ": " cityroomz " , " hotel - book_people ": "1" , " hotel - book_day ": " Friday " , " hotel - book_stay ": "3" } " system ": " Your reservation has been made . Reference number
,→ DTONYTIH . Is there anything else I can help you
,→ with ?"
" user ": " I need to book a taxi to get me to the restaurant
,→ by 21:30."
{
" taxi - destination ": "?" , " taxi - arriveBy ": "21:30" ,
" taxi - leaveAt ": " dontcare " , " taxi - departure ": " dontcare " } " system ": " Will you be departing from your hotel ?" " user ": " No , I ' d like to leave the restaurant by 21:30 , I
,→ need the taxi to take me to the hotel ."
{ " taxi - destination ": " hotel " , " taxi - leaveAt ": "21:30" , " taxi - arriveBy ": " dontcare " , " taxi - departure ": " restaurant " }
## C.3 Overprediction Of *Dontcare*

From SNG01873:
" system ": "" " user ": " I need to be picked up from pizza hut city centre
,→ after 04:30"
{ " taxi - leaveAt ": "04:30" ,
" taxi - destination ": " pizza hut city centre " ,
" taxi - departure ": "?" , " taxi - arriveBy ": " dontcare " , " restaurant - book_people ": " dontcare " , " restaurant - book_day ": " dontcare " ,
" restaurant - book_time ": " dontcare " , " restaurant - food ": " dontcare " , " restaurant - pricerange ": " dontcare " , " restaurant - name ": " dontcare " , " restaurant - area ": " dontcare " , " hotel - book_people ": " dontcare " , " hotel - book_day ": " dontcare " , " hotel - book_stay ": " dontcare " , " hotel - name ": " dontcare " , " hotel - area ": " dontcare " , " hotel - parking ": " dontcare " , " hotel - pricerange ": " dontcare " , " hotel - stars ": " dontcare " , " hotel - internet ": " dontcare " , " hotel - type ": " dontcare " , " attraction - type ": " dontcare " , " attraction - name ": " dontcare " ,
" attraction - area ": " dontcare " ,
" train - book_people ": " dontcare " , " train - leaveAt ": " dontcare " ,
" train - destination ": " dontcare " ,
" train - day ": " dontcare " ,
" train - arriveBy ": " dontcare " , " train - departure ": " dontcare " }
## C.4 Ignoring Value Candidates

From MUL0540:
" system ": " Booking was successful , the total fee is 188.8
,→ GBP payable at the station . Reference number is : 31 ,→ K4BK2E . Is there anything else you need ?"
" user ": " Yes , I am also looking for a multiple sports
,→ attraction in the city centre . Are you aware of any
,→ ?"
{ " attraction - type ":" multiple sports " , " attraction - area ":" city centre " }
" system ": " Unfortunately , there are no multiple sports
,→ attractions in the area you specified . Is there ,→ some other type of attraction or area I can look
,→ into for you ?"
" user ": " Alright , that ' s fine , how about colleges ? Are there
,→ any of those in the city centre ?"
{ " attraction - type ":" colleges " , " attraction - area ":" city centre " }
## C.5 Hallucinated Slots

From MUL0671:
" system ": " Cambridge has 8 moderately priced guesthouses in
,→ the north . I would suggest Acorn Guest House , with
,→ a star rating of 4. Would you like me to book it ,→ ?"
" user ": " Yes please , I would like to book it for 7 people
,→ for 5 nights on Saturday , and I need a reference ,→ number ."
{ " hotel - name ":" Acorn Guest House " ,
" hotel - stars ":"4" , " hotel - book_people ":"7" , " hotel - book_day ":" Saturday " ,
" hotel - book_stay ":"5" ,
" hotel - reference_number ":"?"
} " system ": "I ' m very sorry I was unable to book at acorn
,→ guest house for 5 nights , would you like to try for
,→ a shorter stay ?"
" user ": " Yes , please try to book it for four nights instead
,→ ."
{ " hotel - book_stay ":"4" }
" system ": " Your booking was successful , the reference number
,→ is WAEYAQ0M . May I assist you with anything else ,→ today ?"
" user ": " No , that should be all for now . Thank you for the
,→ help !"
{
" hotel - reference_number ":" WAEYAQ0M "
}
## C.6 Arbitrary Normalization

From MUL0524:
" system ": " The cambridge and county folk museum is located
,→ at 2 -3 castle street postcode cb30aq , it has an ,→ entrance fee of 3.50 pounds . "
" user ": " Thanks . Now I need to book a train from Norwich ." { " train - departure ": " Norwich " , " train - destination ": "?" , " train - leaveAt ": "?" , " train - day ": "?" ,
" train - arriveBy ": "?" ,
" train - book_people ":"?" }
" system ": " There are 133 trains leaving norwich . What day
,→ and time are you interested in ?"
" user ": "I ' m going to Cambridge on saturday and want to
,→ arrive by 14:15 please ."
{ " train - destination ": " Cambridge " , " train - day ": " Saturday " , " train - arriveBy ": "14:15" }
## C.7 Predicting DS_t Instead Of ∆DS_t

From MUL2116:

```
" system ": ""
" user ": " Yes , can you tell me information on Cambridge
,→ lodgings ? We are looking for free wifi and a nice
,→ guesthouse ."
{
" hotel - name ": "?" ,
" hotel - area ": "?" ,
" hotel - parking ": " dontcare " ,
" hotel - pricerange ": " dontcare " ,
" hotel - stars ": " dontcare " ,
" hotel - internet ": " yes " ,
" hotel - type ": " guest house "
}
" system ": " Sure . What area and price range are you looking
,→ for ?"
" user ": " I don ' t really care about area or price range , the
,→ most important thing for me is free wifi in a
,→ guesthouse that has free parking ."
{
" hotel - name ": "?" ,
" hotel - area ": " dontcare " ,
" hotel - parking ": " yes " ,
" hotel - pricerange ": " dontcare " ,
" hotel - stars ": " dontcare " ,
" hotel - internet ": " yes " ,
" hotel - type ": " guest house "
}
" system ": "I ' ve got over 20 guesthouses that will work with
,→ those requirements . Would you like me to book a
,→ moderately priced 4 star ? "
" user ": " As long as it has free wifi and parking then yes ."
{
" hotel - name ": "?" ,
" hotel - area ": " dontcare " ,
" hotel - parking ": " yes " ,
" hotel - pricerange ": " moderate " ,
" hotel - stars ": "4" ,
" hotel - internet ": " yes " ,
" hotel - type ": " guest house "
}
" system ": " I would recommend the Acorn Guest House on the
,→ north side of town . It has 4 - stars . Should I book
,→ this ?"
" user ": " Yes . Please book it for 8 people for 5 nights
,→ starting from sunday ."
{
" hotel - name ": " Acorn Guest House " ,
" hotel - area ": " north " ,
" hotel - parking ": " yes " ,
" hotel - pricerange ": " moderate " ,
" hotel - stars ": "4" ,
" hotel - internet ": " yes " ,
" hotel - type ": " guest house " ,
" hotel - book_people ": "8" ,
" hotel - book_day ": " sunday " ,
" hotel - book_stay ":"5"
}
```
## D "Chatgpt, Are You A Solution Or An Opportunity?"
We queried ChatGPT with our titular question and received the following response at the 5th regeneration.
ChatGPT for Zero-shot Dialogue State Tracking: A Solution or an Opportunity?

ChatGPT is a powerful language generation model that has the ability to generate human-like responses to a given prompt. It has been used in various natural language processing (NLP) tasks such as question answering, text summarization, and dialogue generation.

One potential application of ChatGPT is in zero-shot dialogue state tracking, which involves tracking the state of a conversation without any prior knowledge of the domain or the specific task. This could be useful in situations where the system needs to handle a wide range of topics and tasks, and where it is not feasible to manually annotate a large amount of training data.

However, there are also challenges with using ChatGPT for zero-shot dialogue state tracking. For example, it may not be able to accurately track the state of the conversation if the input is ambiguous or if the user's intent is not clear. Additionally, the model's responses may not always be appropriate or relevant to the current conversation.

Overall, while ChatGPT has the potential to be a solution for zero-shot dialogue state tracking, it also presents opportunities for further research and development in this area. It could be combined with other techniques such as reinforcement learning or active learning to improve its performance, or integrated with other NLP models such as dialogue state tracking models to provide more accurate and context-aware responses.
Consider the following list of concepts , called " slots " provided to you as a json list .
```
" slots ": {
" taxi - leaveAt ": " the departure time of the taxi " ,
" taxi - destination ": " the destination of the taxi " ,
" taxi - departure ": " the departure of the taxi " ,
" taxi - arriveBy ": " the arrival time of the taxi " ,
" restaurant - book_people ": " the amount of people to book the restaurant for " ,
" restaurant - book_day ": " the day for which to book the restaurant " ,
" restaurant - book_time ": " the time for which to book the restaurant " ,
" restaurant - food ": " the food type of the restaurant " ,
" restaurant - pricerange ": " the price range of the restaurant " ,
" restaurant - name ": " the name of the restaurant " ,
" restaurant - area ": " the location of the restaurant " ,
" hotel - book_people ": " the amount of people to book the hotel for " ,
" hotel - book_day ": " the day for which to book the hotel " ,
" hotel - book_stay ": " the amount of nights to book the hotel for " ,
" hotel - name ": " the name of the hotel " ,
" hotel - area ": " the location of the hotel " ,
" hotel - parking ": " does the hotel have parking " ,
" hotel - pricerange ": " the price range of the hotel " ,
" hotel - stars ": " the star rating of the hotel " ,
" hotel - internet ": " does the hotel have internet " ,
" hotel - type ": " the type of the hotel " ,
" attraction - type ": " the type of the attraction " ,
" attraction - name ": " the name of the attraction " ,
" attraction - area ": " the area of the attraction " ,
" train - book_people ": " the amount of people to book the train for " ,
" train - leaveAt ": " the departure time of the train " ,
" train - destination ": " the destination of the train " ,
" train - day ": " the day for which to book the train " ,
" train - arriveBy ": " the arrival time of the train " ,
" train - departure ": " the departure of the train "
}
Some " slots " can only take a value from predefined list :
" categorical ": {
" hotel - pricerange ": [" cheap " , " moderate " , " expensive "] ,
" hotel - area ": [" north " , " south " , " east " , " west " , " centre "] ,
" hotel - parking ": [" yes " , " no "] ,
" hotel - internet ": [" yes " , " no "] ,
" hotel - type ": [" hotel " , " guest house "] ,
" restaurant - pricerange ": [" cheap " , " moderate " , " expensive "] ,
" restaurant - area ": [" north " , " south " , " east " , " west " , " centre "] ,
" attraction - area ": [" north " , " south " , " east " , " west " , " centre "]
}
Now consider the following dialogue between two parties called the " system " and " user ". Can you tell me which of the " slots "
,→ were updated by the " user " in its latest response to the " system "? Present the updates in JSON format . If no " slots "
,→ were updated , return an empty JSON list . If you encounter " slots " that were requested by the " user " then fill them
,→ with "?". If a user does not seem to care about a discussed " slot " fill it with " dontcare ".
```
## Figure 2: Prompt P.
Figure 3: Example of DST with ChatGPT. P is abridged for brevity.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 5, Section "Limitations" (unnumbered)
✓ A2. Did you discuss any potential risks of your work?
Section 5, Section "Limitations" (unnumbered), Section "Ethics Statement" (unnumbered)
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract, Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did you use or create scientific artifacts?**

Entirety of the paper
✓ B1. Did you cite the creators of artifacts you used?
Entirety of the paper
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section "Ethics Statement" (unnumbered)
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section "Ethics Statement" (unnumbered)
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Section "Ethics Statement" (unnumbered)
✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Documentation of artifacts cited
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 4
## C ✓ **Did you run computational experiments?**

Section 4
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Model is proprietary and runs as black box.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4

## D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
chen-etal-2023-controllable | Controllable Mixed-Initiative Dialogue Generation through Prompting | https://aclanthology.org/2023.acl-short.82 | Mixed-initiative dialogue tasks involve repeated exchanges of information and conversational control. Conversational agents gain control by generating responses that follow particular dialogue intents or strategies, prescribed by a policy planner. The standard approach has been fine-tuning pre-trained language models to perform generation conditioned on these intents. However, these supervised generation models are limited by the cost and quality of data annotation. We instead prompt large language models as a drop-in replacement to fine-tuning on conditional generation. We formalize prompt construction for controllable mixed-initiative dialogue. Our findings show improvements over fine-tuning and ground truth responses according to human evaluation and automatic metrics for two tasks: PersuasionForGood and Emotional Support Conversations. | # Controllable Mixed-Initiative Dialogue Generation Through Prompting
Maximillian Chen, Xiao Yu, Weiyan Shi, Urvi Awasthi, Zhou Yu
Columbia University
[email protected]
{xy2437, ws2634, urvi.awasthi, zy2461}@columbia.edu
## Abstract
Mixed-initiative dialogue tasks involve repeated exchanges of information and conversational control. Conversational agents gain control by generating responses that follow particular dialogue intents or strategies, prescribed by a policy planner. The standard approach has been fine-tuning pre-trained language models to perform generation conditioned on these intents. However, these supervised generation models are limited by the cost and quality of data annotation. We instead prompt large language models as a drop-in replacement to finetuning on conditional generation. We formalize prompt construction for controllable mixedinitiative dialogue. Our findings show improvements over fine-tuning and ground truth responses according to human evaluation and automatic metrics for two tasks: PersuasionForGood and Emotional Support Conversations.
## 1 Introduction
Mixed initiative dialogue systems allow all interacting agents to initiate actions to control the interaction. These systems dynamically adapt interaction styles to regain control and progress towards specific goals (Allen et al., 1999; Chu-Carroll, 2000),
unlike others which passively respond to users' input (e.g., some assistants like ChatGPT).
Mixed initiative dialogue systems thus often involve complex policy planning sub-tasks to determine optimal turn-level system dialogue intents (Peng et al., 2018; Hiraoka et al., 2013; Muise et al., 2019; Liu et al., 2020). These policies define when it is optimal for a system to regain initiative
(e.g., when a moderator should interject in a conversation, or when a companion should ask questions or change a conversation topic).
However, "optimal" planned dialogue intents still need to be executed through "optimal" response models. The standard practice in recent dialogue research has been to fine-tune a pretrained language model for conditional generation 951
to achieve semantic control through some combination of innovations in model architectures or learning processes (Liu et al., 2021; Chen et al., 2019).
Such generation approaches still leave room for error. Assuming that there exists a truly optimal dialogue policy planner, a response model may still generate according to the wrong intent (partially due to the fact that dialogue datasets often have annotation errors (Qian et al., 2021; Zang et al.,
2020)). Or, a model may learn to generate correct intents but fail to create a response consistent with conversational context (Chen et al., 2022b).
Additionally, training corpora often differ in demographic and distribution compared to production environments, which can lead to deteriorating response quality (Koh et al., 2021).
We propose using vanilla large pre-trained language models (LLMs) such as GPT-3 (Brown et al., 2020) as drop-in replacements to traditional fine-tuned conditional generation models for mixed-initiative dialogue systems. LLMs typically have been trained on massive corpora with large amounts of linguistic variety, making them more robust to overfitting specific tasks. Recent work demonstrates that LLMs have reasonable semantic control through few-shot prompting (Brown et al., 2020; Chen et al., 2023; Meng et al., 2022). Here, we demonstrate how1 to systematically prompt LLMs for mixed-initiative dialogue generation. Evaluations yielded strong performance on two popular English mixed-initiative tasks: Emotional Support Conversations (ESC; Liu et al. (2021)) and PersuasionForGood (P4G; Wang et al. (2019b)).
## 2 Related Work
Controllable Generation approaches often involve fine-tuning a model conditioned on control codes (Keskar et al., 2019; Ficler and Goldberg, 2017), additional attribute representations in hidden states (Hoang et al., 2016; Fu et al., 2018) or latent variables (Bowman et al., 2016; Wang et al.,
2019a). Other work has attempted to mitigate the computational cost of fine-tuning, e.g. by training auxiliary networks to guide the original LM
(Dathathri et al., 2020; Yu et al., 2021; Pascual et al., 2021). Here, we attempt controllable generation that replaces fine-tuning by prompting LLMs.
Prompting in Dialogue Research typically has focused on understanding tasks such as dialogue planning (Kuo and Chen, 2022) or state tracking (Lee et al., 2021; Mi et al., 2022). More recent dialogue research has examined using prompting for generating conversational data with varying levels of control (Kim et al., 2022; Chen et al.,
2022a; Mehri et al., 2022; Chen et al., 2023), citing the difficulty of using vanilla language models in production. Studies focusing on response generation looked at prompting LLMs specifically for knowledge-grounded dialogue generation (Liu et al., 2022; Madotto et al., 2021; Shuster et al.,
2022). Our work is the first to construct an interactive prompt-based mixed initiative dialogue system and evaluate the semantic control of prompting.
## 3 Datasets
We examined ESC (Liu et al., 2021) and P4G
(Wang et al., 2019b). ESC consists of 1053 conversations between emotional help-seekers and supporters. Each conversation is annotated with the help-seeker's description of their problem, and the type of issues they are facing. Each turn by the supporters is annotated with one of eight emotional support strategies (Table A1). P4G contains 300 annotated conversations between persuaders who attempt to persuade persuadees to donate to a charity called Save the Children. Persuader turns are annotated with one of 10 strategies (Table A2).
## 4 Baselines
In mixed-initiative dialogue, interacting parties continuously exchange control throughout the conversation. However, in order for agents to regain control, they must be able to properly execute items from their conversational agenda, e.g. generating a response that matches a desired strategy/intent.
Liu et al. (2021) fine-tuned BlenderBot (Roller et al., 2021) on ESC using input representations consisting of flattened dialogue history and the predicted emotional support strategy for a specific turn.
The best-performing model in their experimental setting is "Oracle-BlenderBot" which conditions on the ground truth strategy for a given turn.
Chen et al. (2022b) proposed a persuasive dialogue system called RAP, which combined targeted user response with conditional generation. The conditional generation component of RAP involves fine-tuning BART (Lewis et al., 2020) using a penalized loss to force the model to artificially create semantic control through dialogue intents.
## 5 Mixed-Initiative Dialogue Prompting
RAP required introducing a dialogue intent classifier to weakly supervise the training process, as there is no oracle for whether the dialogue intent of a candidate response is correct. However, this compounds errors, as classifiers are imperfect. Moreover, fine-tuning approaches like both RAP and Oracle-BlenderBot involve balancing a trade-off between response quality and semantic control accuracy. Prompting LLMs avoids both issues, as it does not involve adjusting model weights to learn representations of control codes for individual tasks.
In this paper, we systematically prompt InstructGPT "text-davinci-003." Rather than requiring expert-level prompt engineering, we create general prompt templates which directly fill slots using roles and annotations from both ESC and P4G.
Specifically, we split up prompt construction into Task Background and *Conversation History*.
Figure 2 breaks down an example of a prompt for ESC. The Task Background is a paragraph formed from the "emotion type," "problem type," and "situation" annotations provided by the corpus. The Conversation History consists of each prior utterance, prepended by labels for each speaker. The system-side turns are also prefixed by a natural language form of the annotated emotional support strategy, derived from the annotation scheme in Liu et al. (2021) (e.g. "The Therapist acknowledges the Patient's feelings by paraphrasing their situation.").
Figure 2 contains the contextual dialogue turns in order, along with the three support strategies used.
The P4G prompting style is similar. Unlike personalized emotional support conversations, the task does not change, so the Task Background is fixed with relevant factual background information. The Conversation History still interweaves narrative directions for each persuasive strategy (e.g. "The Persuader uses a logical appeal."). An example is provided in Figure A1. The natural language intent mappings for both tasks are provided in Tables A1 and A2.
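A minimal sketch of this template-filling procedure for ESC is shown below; the Task Background wording is an illustrative assumption, while the intent-to-text strings follow Table A1.

```python
# Natural-language forms of a few ESC strategies (cf. Table A1); P4G is handled analogously (Table A2).
ESC_INTENT_TO_TEXT = {
    "Question": "The Therapist asks the Patient to elaborate on the situation they just described.",
    "Restatement or Paraphrasing": "The Therapist acknowledges the Patient's feelings by paraphrasing their situation.",
    "Providing Suggestions": "The Therapist provides suggestions to the Patient on the situation they just described.",
}

def build_esc_prompt(emotion_type, problem_type, situation, history, next_strategy):
    """history: list of dicts {"speaker": "Patient"|"Therapist", "text": ..., "strategy": ...}."""
    # Task Background, filled from the corpus annotations (exact wording here is an assumption).
    background = (
        "The following is a conversation between a Therapist and a Patient. "
        f"The Patient is feeling {emotion_type} about {problem_type}. {situation}"
    )
    # Conversation History: prior utterances, with system turns prefixed by their strategy description.
    lines = []
    for turn in history:
        if turn["speaker"] == "Therapist" and turn.get("strategy"):
            lines.append(ESC_INTENT_TO_TEXT[turn["strategy"]])
        lines.append(f'{turn["speaker"]}: {turn["text"]}')
    # Cue the model to generate the next Therapist turn following the desired strategy.
    lines.append(ESC_INTENT_TO_TEXT[next_strategy])
    lines.append("Therapist:")
    return background + "\n\n" + "\n".join(lines)
```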
## 6 Experiments
We evaluated prompting statically and interactively.
## 6.1 Static Evaluation
We quantified how much semantic and pragmatic control vanilla LLMs can provide in conversation. We randomly sampled 100 responses from ESC (supporters) and P4G (persuaders). Each response's conversational history and strategy annotation were used to generate responses via prompting and fine-tuned models. We used Oracle-BlenderBot for ESC and RAP's conditional generation module for P4G.
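Under the prompting condition, each response is obtained from a single completion call. The sketch below uses the legacy OpenAI completions API (openai-python < 1.0) with the temperature and frequency penalty reported in Appendix B.1; the max_tokens value and the newline stop sequence are assumptions for illustration.

```python
import openai  # legacy openai-python (< 1.0) interface for text-davinci-003

def generate_response(prompt: str) -> str:
    completion = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0.70,         # decoding parameters reported in Appendix B.1
        frequency_penalty=0.75,
        max_tokens=128,           # illustrative cap on response length
        stop=["\n"],              # assumption: stop at the end of the generated turn
    )
    return completion.choices[0].text.strip()
```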
We asked crowdworkers on Amazon Mechanical Turk to evaluate each candidate response's accuracy with respect to its prescribed dialogue intent, as well as its coherence, consistency, and engagingness (details for all human evaluation tasks are in Appendix A). We paired the dialogue responses from each source (fine-tuning, prompting, or ground truth) with the corresponding responses from each of the other
sources, allowing us to compute preference win rates between each pair. Each job presented only one pair of responses, in a random order. Additionally, we examined automatic metrics through Distinct-N (N ∈ {3, 4}), as well as QuantiDCE (Ye et al., 2021), a BERT-based automatic dialogue coherence metric for open-domain conversation.
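Distinct-N is the ratio of unique N-grams to total N-grams over the set of generated responses; a small corpus-level sketch (whitespace tokenization assumed) is given below.

```python
from collections import Counter

def distinct_n(responses, n):
    """Corpus-level Distinct-N: number of unique n-grams / total n-grams over all responses."""
    ngrams = Counter()
    for response in responses:
        tokens = response.split()  # simple whitespace tokenization assumed
        ngrams.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    total = sum(ngrams.values())
    return len(ngrams) / total if total else 0.0

# e.g. distinct_n(prompt_responses, 3) and distinct_n(prompt_responses, 4)
```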
Table 1 shows that prompt-generated responses are rated more highly than responses generated by competitive fine-tuned dialogue models *as well as ground truth responses* on all human evaluation metrics. This is also the case for Distinct-N in both tasks, and for QuantiDCE in P4G. Oracle-BlenderBot slightly outperforms the prompt-generated responses in terms of QuantiDCE for ESC, but this difference is not statistically significant. Table 1 also shows that the prompt-generated responses are consistently preferred over both the responses generated by fine-tuned dialogue models and the ground truth.
Finally, we also see that prompting appears to provide the best semantic control over generated responses. Prompt-generated responses had the highest probability of matching the desired dialogue
| Corpus | Metric | FT | GT | Prompt |
|--------|--------|----|----|--------|
| ESC | Accuracy | 0.81 | 0.85 | 0.88∗ |
| ESC | Coherence | 3.57 | 3.57 | 3.72 |
| ESC | Consistency | 3.63 | 3.60 | 3.80+∗ |
| ESC | Engagingness | 3.55 | 3.61 | 3.81+∗ |
| ESC | Distinct-3 | 0.89 | 0.90 | 0.90 |
| ESC | Distinct-4 | 0.87 | 0.90∗ | 0.91+∗ |
| ESC | QuantiDCE | 3.25 | 3.03 | 3.19 |
| ESC | Win Rate v. FT | | 0.56 | 0.52 |
| ESC | Win Rate v. GT | 0.44 | | 0.64∗ |
| ESC | Win Rate v. Prompt | 0.48 | 0.36 | |
| P4G | Accuracy | 0.88 | 0.83 | 0.89 |
| P4G | Coherence | 3.66 | 3.58 | 3.83+∗ |
| P4G | Consistency | 3.69 | 3.56 | 3.71+ |
| P4G | Engagingness | 3.62 | 3.52 | 3.69+ |
| P4G | Distinct-3 | 0.87 | 0.88 | 0.89 |
| P4G | Distinct-4 | 0.88 | 0.88 | 0.88 |
| P4G | QuantiDCE | 3.16 | 3.09 | 3.24+ |
| P4G | Win Rate v. FT | | 0.56 | 0.59∗ |
| P4G | Win Rate v. GT | 0.48 | | 0.55 |
| P4G | Win Rate v. Prompt | 0.41 | 0.45 | |
intent, even surpassing that of the ground truth utterances in both corpora. This further demonstrates the difficulty of performing annotation for supervised training - the conversational strategies are subjective, and even the ground truth responses may have annotation errors. The prompt-generated responses are generally of higher quality than both fine-tuned models, which may be a result of the aforementioned difficulty of balancing control accuracy with response quality during generation.
## 6.2 Interactive Evaluation
We evaluated prompting as a generation module for mixed-initiative systems. This requires holding fixed other components, including policy planning.
RAP is a recently proposed framework for P4G that follows an "optimal" persuasive strategy ordering. It additionally builds rapport with users by hierarchically integrating social chit-chat and knowledge retrieval with semantically-controlled generation (details in Chen et al. (2022b)). We built a system which replaces RAP's fine-tuned BART module with a module that systematically prompts InstructGPT.
As with the original implementation of RAP, our prompting module conditions on the knowledge
| The chatbot... | RAP (FT) | Prompting |
|-----------------------------------|------------|-------------|
| is competent ↑ | 3.81±1.11 | 4.21±0.84∗∗ |
| is natural ↑ | 3.81±1.19 | 4.17±0.94 |
| is intelligent ↑ | 3.83±1.20 | 4.19±1.05 |
| is well-intentioned ↑ | 4.00±1.09 | 4.29±0.87 |
| is confident ↑ | 3.94±1.13 | 4.35±0.85∗∗ |
| was dishonest ↓ | 2.90±1.42 | 2.70±1.40 |
| is warm ↑ | 3.56±1.31 | 4.04±1.00∗∗ |
| is sincere ↑ | 3.85±1.25 | 4.25±0.90∗ |
| is efficient ↑ | 3.96±1.18 | 4.33±0.75∗ |
| tried to pressure me ↓ | 3.04±1.39 | 3.02±1.23 |
| increased my intent to donate ↑ | 4.00±1.07 | 4.15±0.84 |
| is persuasive ↑ | 3.83±1.14 | 4.06±1.06 |
| is convincing ↑ | 3.77±1.14 | 4.29±0.73∗∗ |
| is a strong reason for donating ↑ | 3.60±1.30 | 4.19±0.81∗∗ |
retrieved for factual question answering.
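Appendix B describes the retrieval step held fixed from RAP: the user's question is matched against question-answer pairs derived from the training data using Sentence-BERT, and the retrieved answer is appended after the final "Persuader:" cue of the prompt. A sketch under those assumptions follows, using the sentence-transformers package; the checkpoint name is an arbitrary choice for illustration.

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative checkpoint choice

def retrieve_answer(user_question, qa_pairs):
    """qa_pairs: list of (question, answer) tuples mined from the P4G training data."""
    questions = [q for q, _ in qa_pairs]
    scores = util.cos_sim(
        encoder.encode(user_question, convert_to_tensor=True),
        encoder.encode(questions, convert_to_tensor=True),
    )[0]
    return qa_pairs[int(scores.argmax())][1]  # highest cosine similarity, i.e. lowest cosine distance

def add_knowledge_to_prompt(prompt, user_question, qa_pairs):
    # The prompt normally ends with "Persuader:"; the retrieved knowledge is appended after it.
    return prompt + " " + retrieve_answer(user_question, qa_pairs)
```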
We asked crowdworkers to evaluate our system according to the criteria in Table 2. The system using prompting for generation was consistently rated more favorably than RAP, including in terms of convincingness, persuasiveness, and being a strong reason for donation. We discuss conversation examples in Appendix C. We see that our system was robust to a variety of input language patterns.
## 7 Discussion
Prompting yields strong performance on mixed-initiative tasks in the low-resource regime. Prompt-generated responses are often preferable even compared to ground-truth responses in ESC and P4G.
From 17 paired evaluations of ESC where crowdworkers rated ground truth utterances as not matching the ground truth intent annotation, the prompt-generated response was rated as correct 13 times.
However, this is likely because many dialogue corpora are created or annotated by crowdworkers, so the data may vary in quality. While LLMs may generate "better" responses than crowdworkers, we do not expect them to be better than expert therapists.
The results do indicate that prompting may be appropriate for building systems for tasks with limited data. As made evident by our ratings, annotating dialogue intents is a difficult and subjective process prone to errors *which can further propagate* to fine-tuned task models. This could potentially be addressed by the high semantic control demonstrated through prompting, despite not requiring downstream fine-tuning label supervision.
This prompting approach could be applied to other mixed-initiative tasks, including chit-chat and task-oriented dialogue. For instance, many real-world systems such as customer service chatbots already have pre-defined policies for what systems are allowed to say, despite not necessarily having many labeled conversations. A system can be designed as long as there is a policy planner, which could simply be a hierarchical ruleset. While there is some human effort involved in writing natural language forms of fixed dialogue intents, it is a much less costly process than annotating high-quality dialogue data.
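As a concrete illustration of such a hierarchical ruleset, the sketch below shows a trivial policy planner that steps through a fixed ordering of persuasive strategies, in the spirit of RAP's fixed strategy ordering; the specific agenda and fallback behaviour are assumptions for illustration.

```python
class FixedOrderPolicyPlanner:
    """Minimal rule-based dialogue policy: emit strategies from a fixed agenda."""

    def __init__(self):
        # Illustrative agenda loosely drawn from the P4G strategy inventory (Table A2).
        self.agenda = [
            "Source-related inquiry",
            "Credibility Appeal",
            "Emotion Appeal",
            "Logical Appeal",
            "Propose Donation",
        ]
        self.step = 0

    def next_intent(self, dialogue_history):
        # A real planner could branch on the user's last response; this one simply advances,
        # repeating the final strategy once the agenda is exhausted.
        intent = self.agenda[min(self.step, len(self.agenda) - 1)]
        self.step += 1
        return intent
```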
## 8 Conclusion
We find encouraging results for prompting on mixed-initiative dialogue tasks, indicating that generated responses are high quality and follow semantic controls. Strong low-resource performance opens the possibility of future work building mixed-initiative systems around novel settings which would require subjective data annotation.
## 9 Limitations
Limits of Prompt-based Generation. This work specifically proposes improvements to the controllable generation portion of mixed-initiative dialogue systems. However, dialogue policy planning is still an important problem to consider. In order to evaluate generation improvements, we hold dialogue policies fixed - in the static evaluation, we condition on ground truth dialogue intents, and in the interactive evaluation, we follow the same dialogue intents prescribed by the RAP system. To this end, a mixed-initiative dialogue system *cannot* consist solely of a generation module powered by prompting. There needs to be a set of rules or models that govern how a system can regain control of a conversation; the generation module is just a means of enacting these rules. As discussed in Section 7, prompting is a great option if there is already a pre-existing policy planner.
Due to these limitations, we did not conduct an interactive evaluation in the ESC setting. Emotional support conversations are highly personal, as circumstances vary across individuals. It would have required having study participants pretend to require support regarding a fixed scenario, or for participants to disclose their personal issues, which can raise other ethical concerns. Moreover, dialogue policy planning is not straightforward for emotional support, due to this highly variable nature. Effective support strategy planning requires expert knowledge.
In Section 7, we also discussed that prompting may be appropriate for developing systems for novel tasks in low-resource settings. However, deploying prompt-based systems may be less useful for the purpose of setting new benchmarks on existing leaderboards with a plethora of data. Such settings already have plenty of well-annotated conversations, and simple fine-tuned models can often achieve strong performance.
Guardrails. Proper guardrails should be put in place prior to productionization of any dialogue system, prompt-driven or not. While we witness strong overall response quality both in terms of human evaluation and automatic metrics, language models can generate contradictions. System builders may consider employing guardrails for dialogue consistency (e.g. Jin et al. (2022)) and coherence (e.g. Ye et al. (2021)), among others.
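As one illustration of such a guardrail, the sketch below wraps an arbitrary response generator with a coherence check and regenerates (up to a fixed budget) when the score falls below a threshold; the scorer interface, threshold, and retry budget are assumptions for illustration rather than any specific metric's API.

```python
def guarded_generate(generate_fn, score_coherence_fn, prompt, threshold=3.0, max_retries=2):
    """Regenerate a response when an automatic coherence score is too low.

    generate_fn: callable prompt -> response (e.g. a prompted LLM).
    score_coherence_fn: callable (prompt, response) -> float, e.g. a QuantiDCE-style scorer.
    """
    best_response, best_score = None, float("-inf")
    for _ in range(max_retries + 1):
        response = generate_fn(prompt)
        score = score_coherence_fn(prompt, response)
        if score >= threshold:
            return response
        if score > best_score:
            best_response, best_score = response, score
    return best_response  # fall back to the best-scoring attempt
```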
As with any training set, InstructGPT and other LLMs have been trained on finite amounts of data.
InstructGPT has not been trained on data after 2021.
This is also true of training corpora such as P4G
or ESC; these corpora were published in 2019 and 2021, respectively. Particularly in sensitive environments, guardrails should be put in place for factual correctness (e.g. Santhanam et al. (2021);
Wang et al. (2020)). RAP attempted to remedy this by incorporating retrieval for factual questions, which we also embedded into our prompting approach, but this knowledge base is also finite. In Section C we discuss one such example (Table A5).
A possible solution is internet retrieval (Komeili et al., 2022), but search engines can also yield misinformation, which leads to hallucination.
Computational Cost of Language Models.
LLMs are computationally expensive, and in the case of models such as InstructGPT, they are not open source. However, in this study, we did not have access to equally powerful open-source models such as OPT 175B, nor the appropriate hardware to load such a model (loading OPT 175B
requires 350 GB of GPU memory). We performed initial experiments with much smaller models that fit our hardware constraints, such as GPT-J 6B, but there was much higher variance in performance. This is supported by the fact that many reasoning capabilities do not seem to emerge in models smaller than 175B parameters (Wei et al.,
2022b,a). Given our limited budget for human evaluation, we opted to use the best performing LLM
we had access to, InstructGPT.
Prompt Optimality. It is possible that we did not use an "optimal" set of prompts, as we did not mine prompts or perform soft prompting. However, prompt optimality itself is a problem in dialogue generation, because open-ended dialogue evaluation is a difficult task. Most automatic evaluation metrics do not align well with human ratings in dialogue (Yeh et al., 2021; Liu et al., 2016).
This makes it suboptimal to use as a discriminator in soft prompting, for instance. Most existing work that does search for optimal prompts or tunes prompts works with tasks that have clearly defined automatic evaluation, such as sentiment analysis or table-to-text generation (van de Kar et al., 2022; Li and Liang, 2021; Lester et al., 2021). Moreover, human ratings are expensive and not scalable for systematic optimization.
## 10 Ethics Statement
Chatbot Identities. All study participants were informed that they were speaking to a chatbot, in accordance with law in certain localities (e.g. California's Bot Disclosure Law).
Dangers of Fully Automated Dialogue Systems.
We do not encourage the deployment of fully automatic dialogue systems for tasks such as emotional support in production settings. Bot Disclosure Laws exist because knowledge of chatbot identities affect human perception (Shi et al., 2020), and thus in sensitive situations such as therapy or emotional support, patients may not receive adequate support.
Moreover, there is the possibility of emotional support dialogue systems without proper guardrails introducing harmful or otherwise unethical content, e.g. by mentioning references which could be considered "triggering." Instead, we advise the use of mixed-initiative dialogue systems in a supportive manner, e.g., to assist trained counselors who have the emotional intelligence to recognize what content may be hurtful.
Reproducibility. In this study we used GPT-3, which is not an open-access language model. However, we have clearly described all of the prompts used in our paper.
Data Biases. Every dataset, including P4G and ESC, has its own biases. LLMs such as InstructGPT have been trained on large amounts of data but may still not capture language usage of a sufficiently diverse population. While in Appendix C
we see InstructGPT's ability to handle diversity in language, this is something that warrants further interactive study with more extreme cases.
Crowdsourcing. All crowdworkers were paid at a rate of $15 per hour. We did not collect any personal or demographic information about any workers. Our study and data collection process has received IRB approval.
## Acknowledgements
This work is supported by a DARPA PTG grant.
We thank Ta-Chung Chi, Kun Qian, and our anonymous peer-reviewers for their helpful feedback. We also thank Sophie Chen for helpful suggestions on designing our figures.
## References
James E Allen, Curry I Guinn, and Eric Horvitz. 1999.
Mixed-initiative interaction. *IEEE Intelligent Systems and their Applications*, 14(5):14–23.
Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio.
2016. Generating sentences from a continuous space.
In *20th SIGNLL Conference on Computational Natural Language Learning, CoNLL 2016*, pages 10–21.
Association for Computational Linguistics (ACL).
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901.
Maximillian Chen, Alexandros Papangelis, Chenyang Tao, Seokhwan Kim, Andy Rosenbaum, Yang Liu, Zhou Yu, and Dilek Hakkani-Tur. 2023. Places: Prompting language models for social conversation synthesis. In *Findings of the Association for Computational Linguistics: EACL 2023*, pages 814–838.
Maximillian Chen, Alexandros Papangelis, Chenyang Tao, Andy Rosenbaum, Seokhwan Kim, Yang Liu, Zhou Yu, and Dilek Hakkani-Tur. 2022a. Weakly supervised data augmentation through prompting for dialogue understanding. In *NeurIPS 2022 Workshop* on Synthetic Data for Empowering ML Research.
Maximillian Chen, Weiyan Shi, Feifan Yan, Ryan Hou, Jingwen Zhang, Saurav Sahay, and Zhou Yu. 2022b.
Seamlessly integrating factual information and social content with persuasive dialogue. In *Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics* and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 399–413, Online only. Association for Computational Linguistics.
Wenhu Chen, Jianshu Chen, Pengda Qin, Xifeng Yan, and William Yang Wang. 2019. Semantically conditioned dialog response generation via hierarchical disentangled self-attention. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3696–3709.
Jennifer Chu-Carroll. 2000. Mimic: An adaptive mixed initiative spoken dialogue system for information queries. In *Sixth Applied Natural Language Processing Conference*, pages 97–104.
Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2020. Plug and play language models:
A simple approach to controlled text generation. In International Conference on Learning Representations.
Jessica Ficler and Yoav Goldberg. 2017. Controlling linguistic style aspects in neural language generation. In *Proceedings of the Workshop on Stylistic Variation*,
pages 94–104, Copenhagen, Denmark. Association for Computational Linguistics.
Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: exploration and evaluation. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, pages 663–670.
Takuya Hiraoka, Yuki Yamauchi, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura.
2013. Dialogue management for leading the conversation in persuasive dialogue systems. In *2013* IEEE Workshop on Automatic Speech Recognition and Understanding, pages 114–119. IEEE.
Cong Duy Vu Hoang, Trevor Cohn, and Gholamreza Haffari. 2016. Incorporating side information into recurrent neural network language models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1250–1255.
Di Jin, Sijia Liu, Yang Liu, and Dilek Hakkani-Tur.
2022. Improving bot response contradiction detection via utterance rewriting. In Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 605–614.
Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, Caiming Xiong, and Richard Socher. 2019. CTRL:
A conditional transformer language model for controllable generation. *CoRR*, abs/1909.05858.
Hyunwoo Kim, Jack Hessel, Liwei Jiang, Ximing Lu, Youngjae Yu, Pei Zhou, Ronan Le Bras, Malihe Alikhani, Gunhee Kim, Maarten Sap, et al. 2022.
Soda: Million-scale dialogue distillation with social commonsense contextualization. *arXiv preprint* arXiv:2212.10465.
Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, et al. 2021. Wilds:
A benchmark of in-the-wild distribution shifts. In *International Conference on Machine Learning*, pages 5637–5664. PMLR.
Mojtaba Komeili, Kurt Shuster, and Jason Weston. 2022.
Internet-augmented dialogue generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 8460–8478.
Hui-Chi Kuo and Yun-Nung Chen. 2022. Zero-shot prompting for implicit intent prediction and recommendation with commonsense reasoning. *arXiv* preprint arXiv:2210.05901.
Chia-Hsuan Lee, Hao Cheng, and Mari Ostendorf. 2021.
Dialogue state tracking with a language model using schema-driven prompting. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 4937–4949.
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021.
The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045–3059.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart:
Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880.
Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning:
Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582–
4597.
Yu Li, Josh Arnold, Feifan Yan, Weiyan Shi, and Zhou Yu. 2021. Legoeval: An open-source toolkit for dialogue system evaluation via crowdsourcing. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th* International Joint Conference on Natural Language Processing: System Demonstrations, pages 317–324.
Chia-Wei Liu, Ryan Lowe, Iulian Vlad Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau.
2016. How not to evaluate your dialogue system:
An empirical study of unsupervised evaluation metrics for dialogue response generation. In *Proceedings* of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122–2132.
Siyang Liu, Chujie Zheng, Orianna Demasi, Sahand Sabour, Yu Li, Zhou Yu, Yong Jiang, and Minlie Huang. 2021. Towards emotional support dialog systems. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 3469–3483.
Zeming Liu, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che, and Ting Liu. 2020. Towards conversational recommendation over multi-type dialogs. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1036–
1049.
Zihan Liu, Mostofa Patwary, Ryan Prenger, Shrimai Prabhumoye, Wei Ping, Mohammad Shoeybi, and Bryan Catanzaro. 2022. Multi-stage prompting for knowledgeable dialogue generation. In *Findings of* the Association for Computational Linguistics: ACL
2022, pages 1317–1337.
Andrea Madotto, Zhaojiang Lin, Genta Indra Winata, and Pascale Fung. 2021. Few-shot bot: Promptbased learning for dialogue systems. arXiv preprint arXiv:2110.08118.
Shikib Mehri, Yasemin Altun, and Maxine Eskenazi.
2022. Lad: Language models as data for zero-shot dialog. In *Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and* Dialogue, pages 595–604.
Yu Meng, Jiaxin Huang, Yu Zhang, and Jiawei Han.
2022. Generating training data with language models: Towards zero-shot language understanding. In Advances in Neural Information Processing Systems.
Fei Mi, Yasheng Wang, and Yitong Li. 2022. Cins:
Comprehensive instruction for few-shot learning in task-oriented dialog systems. In *Proceedings of* the AAAI Conference on Artificial Intelligence, volume 36, pages 11076–11084.
Christian Muise, Tathagata Chakraborti, Shubham Agarwal, Ondrej Bajgar, Arunima Chaudhary, Luis A Lastras-Montano, Josef Ondrej, Miroslav Vodolan, and Charlie Wiecha. 2019. Planning for goal-oriented dialogue systems. arXiv preprint arXiv:1910.08137.
Damian Pascual, Beni Egressy, Clara Meister, Ryan Cotterell, and Roger Wattenhofer. 2021. A plug-andplay method for controlled text generation. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 3973–3997, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. *Advances in* neural information processing systems, 32.
Baolin Peng, Xiujun Li, Jianfeng Gao, Jingjing Liu, and Kam-Fai Wong. 2018. Deep dyna-q: Integrating planning for task-completion dialogue policy learning. In *Proceedings of the 56th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 2182–2192.
Kun Qian, Ahmad Beirami, Zhouhan Lin, Ankita De, Alborz Geramifard, Zhou Yu, and Chinnadhurai Sankar. 2021. Annotation inconsistency and entity bias in multiwoz. In Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 326–337.
Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.
Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Eric Michael Smith, Y-Lan Boureau, et al. 2021.
Recipes for building an open-domain chatbot. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 300–325.
Sashank Santhanam, Behnam Hedayatnia, Spandana Gella, Aishwarya Padmakumar, Seokhwan Kim, Yang Liu, and Dilek Hakkani-Tur. 2021. Rome was built in 1776: A case study on factual correctness in knowledge-grounded response generation. *arXiv* preprint arXiv:2110.05456.
Weiyan Shi, Xuewei Wang, Yoo Jung Oh, Jingwen Zhang, Saurav Sahay, and Zhou Yu. 2020. Effects of persuasive dialogues: testing bot identities and inquiry strategies. In *Proceedings of the 2020 CHI*
Conference on Human Factors in Computing Systems, pages 1–13.
Kurt Shuster, Mojtaba Komeili, Leonard Adolphs, Stephen Roller, Arthur Szlam, and Jason Weston. 2022. Language models that seek for knowledge: Modular search & generation for dialogue and prompt completion. *arXiv preprint* arXiv:2203.13224.
Mozes van de Kar, Mengzhou Xia, Danqi Chen, and Mikel Artetxe. 2022. Don't prompt, search! miningbased zero-shot learning with language models.
arXiv preprint arXiv:2210.14803.
Jian Wang, Junhao Liu, Wei Bi, Xiaojiang Liu, Kejing He, Ruifeng Xu, and Min Yang. 2020. Improving knowledge-aware dialogue generation via knowledge base question answering. In *Proceedings of* the AAAI Conference on Artificial Intelligence, volume 34, pages 9169–9176.
Wenlin Wang, Zhe Gan, Hongteng Xu, Ruiyi Zhang, Guoyin Wang, Dinghan Shen, Changyou Chen, and Lawrence Carin. 2019a. Topic-guided variational auto-encoder for text generation. In *Proceedings of* the 2019 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long and Short Papers), pages 166–177, Minneapolis, Minnesota. Association for Computational Linguistics.
Xuewei Wang, Weiyan Shi, Richard Kim, Yoojung Oh, Sijia Yang, Jingwen Zhang, and Zhou Yu. 2019b. Persuasion for good: Towards a personalized persuasive dialogue system for social good. In *Proceedings of* the 57th Annual Meeting of the Association for Computational Linguistics, pages 5635–5649, Florence, Italy. Association for Computational Linguistics.
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. 2022a. Emergent abilities of large language models.
arXiv preprint arXiv:2206.07682.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022b.
Chain of thought prompting elicits reasoning in large language models. *arXiv preprint arXiv:2201.11903*.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 conference on empirical methods in natural language* processing: system demonstrations, pages 38–45.
Jing Xu, Arthur Szlam, and Jason Weston. 2022. Beyond goldfish memory: Long-term open-domain conversation. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics*
(Volume 1: Long Papers), pages 5180–5197.
Zheng Ye, Liucun Lu, Lishan Huang, Liang Lin, and Xiaodan Liang. 2021. Towards quantifiable dialogue coherence evaluation. In *Proceedings of the 59th Annual Meeting of the Association for Computational* Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 2718–2729.
Yi-Ting Yeh, Maxine Eskenazi, and Shikib Mehri. 2021.
A comprehensive assessment of dialog evaluation metrics. *arXiv preprint arXiv:2106.03706*.
Dian Yu, Zhou Yu, and Kenji Sagae. 2021. Attribute alignment: Controlling text generation from pretrained language models. In *Findings of the Association for Computational Linguistics: EMNLP 2021*,
pages 2251–2268, Punta Cana, Dominican Republic.
Association for Computational Linguistics.
Xiaoxue Zang, Abhinav Rastogi, Srinivas Sunkara, Raghav Gupta, Jianguo Zhang, and Jindong Chen.
2020. Multiwoz 2.2: A dialogue dataset with additional annotation corrections and state tracking baselines. *ACL 2020*, page 109.
## A Human Evaluation Details
We performed both our static and interactive evaluation on Amazon Mechanical Turk. We required that all crowdworkers had a HIT Approval Rate of at least 95%. 322 unique crowdworkers successfully completed the static evaluation task. There were 100 unique conversation turns used, with each candidate response being rated twice in order to pair the three conditions (ground truth, fine-tuning, prompting). 100 unique crowdworkers successfully completed the interactive evaluation task.
For the static evaluations of both ESC and P4G,
the following definitions were provided to the crowdworkers:
- Engaging (1-5): Whether the response is interesting and engaging.
- Coherent (1-5): Whether the response makes sense and is non-repetitive.
- Consistent (1-5): Whether the response is free of inconsistencies and logical fallacies.
Specifically for P4G, the following conversational strategies were defined along with examples:
- Greeting: A greeting from the speaker.
- Source-related inquiry: A question about the charity, Save the Children.
- Task-related inquiry: A question related to the task of donating to Save the Children, e.g.
asking whether the Persuadee has donated to charities in the past or asking about information related to Save the Children.
- Personal-related inquiry: A personal question about the persuadee.
- Credibility appeal: An argument giving credibility to Save the Children.
- Emotional appeal: An argument that elicits an emotional response from the Persuadee.
- Logical appeal: An argument that uses reasoning and evidence to convince the Persuadee, e.g., by using facts to reason that a donation would make a tangible impact.
- Self-modeling: A reflection of the Persuader's own intention to donate to Save the Children.
- Foot-in-the-door: A strategy of starting with small donation requests to facilitate compliance followed by larger requests.
- Personal story: Using narrative examples relating to the Persuader's personal experiences or other anecdotes.
- Propose donation: Asking the Persuadee if they would like to donate to the charity.
- Closing: Ending the conversation.
For ESC, the following support strategies were defined along with examples:
- Question: The Therapist asks the Patient for information to help them articulate their issues.
- Restatement or Paraphrasing: A simple, concise rephrasing of the help-seeker's statements.
- Reflection of Feelings: Acknowledge, articulate, and describe the help-seeker's feelings.
- Self-disclosure: The Therapist divulges similar experiences they have had.
- Affirmation and Reassurance: Affirm the Patient's strengths, motivation, and capabilities and provide reassurance and encouragement.
- Providing suggestions: Provide suggestions about how to change.
- Information: Provide useful information, often backed with data, facts, or opinions.
- Others: Exchange pleasantries and use other support strategies not listed above.
The persuasion strategies are defined based on Wang et al. (2019b), and the emotional support strategies are defined based on Liu et al. (2021).

For the interactive evaluation, all crowdworkers were randomly assigned a link to a chatbot running either RAP or a prompt-driven system deployed using the LegoEval platform (Li et al., 2021). In total, 48 crowdworkers used the prompt-based system, and 52 crowdworkers used the system powered by RAP after removing those who did not successfully answer the validation question. All crowdworkers agreed to interact with a research prototype which may produce harmful content. They were also required to consent to the logging of their responses and ratings.

## B Implementation Details

All baseline models were trained using HuggingFace Transformers (Wolf et al., 2020) and PyTorch (Paszke et al., 2019). All experiments used one NVIDIA A6000 GPU.

The rest of the RAP baseline follows the details provided in Chen et al. (2022b). To perform knowledge retrieval, we computed the cosine distance of Sentence-BERT (Reimers and Gurevych, 2019) embeddings between question-answer mappings derived from the training data, and retrieved the answer to the question that has the lowest cosine distance in semantic meaning from the question asked by the user. In order to use the knowledge in our prompts, we simply append the retrieved knowledge to the end of the prompt. For example, the prompt typically ends with an indicator that the Persuader should speak - "Persuader:". Now, the prompt instead ends with "Persuader: [retrieved knowledge]".

In RAP, the authors used Blender Bot 2.0 (Xu et al., 2022; Komeili et al., 2022) to incorporate social chitchat in order to acknowledge user responses. In our version using prompting for generation, we directly add more instructions into the prompt. We prepend the natural language form of the system-side dialogue intent with "The Persuader acknowledges the Persuadee's response and". For example, a prompt targeting generating a credibility appeal with social acknowledgement would be "The Persuader acknowledges the Persuadee's response and The Persuader uses a credibility appeal."

## B.1 Additional Prompt Details

The full situation given in the prompt example from Figure 2 is as follows: *"I had to quit my job back in February due to living with someone going through chemo. My town doesn't have many job options other than retail, so I have been trying to earn money for debts online."*

The full Task Background for P4G is as follows: *"The following is background information about Save the Children. Save the Children is headquartered in London, and they work to help fight poverty around the world. Children need help in developing countries and war zones. Small donations like $1 or $2 go a long way to help. The following is a conversation between a Persuader and a Persuadee about a charity called Save the Children. The Persuader is trying to persuade the Persuadee to donate to Save the Children."*

Prompting InstructGPT for P4G cost $0.06 per study participant, on average. We generate using a temperature of 0.70, and frequency penalty of 0.75. Our prompting code is attached and will be made available online upon acceptance.

## C Example Conversations & Case Study

Table A3 and Table A4 are examples of users who agreed that the prompt-based chatbot was both persuasive and increased their intention to donate.
They also both found that the chatbot created natural and coherent responses. The user in Table A4 thought that the chatbot's responses were also
| Dialogue Intent | Natural Language Form |
|-------------------------------|---------------------------------------------------------------------------------------------------------|
| Question | The Therapist asks the Patient to elaborate on the situation they just described. |
| Self-disclosure | The Therapist provides a statement relating to the Patient about the situation they just described. |
| Affirmation and Reassurance | The Therapist provides affirmation and reassurance to the Patient on the situation they just described. |
| Providing Suggestions | The Therapist provides suggestions to the Patient on the situation they just described. |
| Others Reflection of feelings | The Therapist acknowledges the Patient's feelings about the situation they described. |
| Information | The Therapist provides factual information to help the Patient with their situation. |
| Restatement or Paraphrasing | The Therapist acknowledges the Patient's feelings by paraphrasing their situation. |
Table A1: Mapping of Supporter conversational strategies to natural language in Emotional Support Conversations.
| Dialogue Intent | Natural Language Form |
|--------------------------|----------------------------------------------------------------------------------|
| Personal Story | The Persuader tells a personal story. |
| Credibility Appeal | The Persuader uses a credibility appeal. |
| Emotion Appeal | The Persuader uses an emotion appeal. |
| Propose Donation | The Persuader asks if the Persuadee would like to make a small donation. |
| Foot-in-the-door | The Persuader tells the Persuadee about how useful even small donations are. |
| Logical Appeal | The Persuader uses a logical appeal. |
| Self-modeling | The Persuader talks about how often they donate to charities. |
| Task-related inquiry | The Persuader asks the Persuadee if they have donated to any charities before. |
| Source-related inquiry | The Persuader asks the Persuadee if they have heard of Save the Children before. |
| Personal-related-inquiry | The Persuader asks the Persuadee if they have kids. |
very logically consistent, but the user in Table A3 provided a neutral opinion.
In Table A3, the user appears engaged from the start. However, they reveal an interest in whether Save the Children is active in Brazil, and admit that they are from Brazil. InstructGPT is able to generate responses which correctly identify that Save the Children is indeed active in Brazil, and able to form coherent anecdotes about this topic. Similarly, the user in Table A4 appears to warm up to the chatbot throughout the conversation. By their fifth turn, they actually admit "i think i would be interested in making a donation" and their responses are more verbose as the conversation continues.
On the other hand, the users in Table A5 and Table A6 both disagreed with the statement that
"The chatbot is persuasive." However, the actual conversation context leading to these statements is quite different. In Table A5, the user seems actively engaged throughout the conversation. They ask several questions, and each time, the system generates a reasonable response. For instance, on Turn 4, the user asks "Are they involved in the situation in Ukraine?" and the system correctly responds, "Yes, they are. Save the Children has been present in Ukraine since 2014 ..." The user even rates the chatbot responses as natural, coherent, and consistent, but they did not find it to be persuasive.
This perhaps speaks to the importance of having personalized persuasive strategy planning, which is an open problem in P4G. Additionally, it should be noted that while the user is likely making reference to the ongoing war in Ukraine at the time of writing, InstructGPT is likely more generally referring to the fact that Save the Children has been present in Ukraine since 2014 (https://www.savethechildren.net/what-wedo/emergencies/ukraine-crisis), which would be a fact that exists in its publicly available pre-training corpora dating back to 2021. While it was not an issue in this scenario, future work may look to consider expanding the Task Background in the prompt to further include critical information to condition on. Additionally, as LLMs continue to be empowered with internet retrieval, up-to-date information retrieval will be more feasible in such applications.
Table A6 is quite different from the other examples displayed. The user looks disengaged from the task, as their responses are provided in all capitals. They do not provide more than two words in a single response, and their responses are often incoherent or contain typos. They did not rate the chatbot as persuasive or coherent. However, it is worth noting that the generated responses appear coherent in all of the examples given, despite the wide range of user-side linguistic variety. This further supports the possibility of using prompting to build applications resistant to distribution shift.
| Party | Utterance |
|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| System | Hi, how are you doing? |
| User | Hello. I'm fine and you? |
| System | That's great to hear. Have you ever heard of the charity Save the Children before? |
| User | No, can you tell me about the institution? |
| System | It's an organization called Save the Children. They are a global organization that works to fight poverty and help children in developing countries and war zones. They have been around for over a century and have helped millions of children in need. Your donation, however small, could make a huge difference in the lives of these children. |
| User | How can I help this institution? |
| System | Absolutely! You can help by donating whatever you are able to. Every penny counts, and even $1 or $2 can go a long way in helping these children in need. Your donation could mean the world to them, and it would be greatly appreciated. |
| User | I will help. Can you give me their email address? |
| System | I would like to encourage you do donate a part of your earnings for this HIT to a charity called Save the Children . They are a global organization that works to fight poverty and help children in developing countries and war zones. Your donation could make a big difference in the lives of these children, no matter how small it is. Here is their website address: https://www.savethechildren.org/. Thank you for considering this! |
| User | I tried to help them as best I could. |
| System | Thank you for helping out. It really does mean the world to these children. I donate to charities like Save the Children whenever I can, and it's always a great feeling to know that I'm making a difference in the world. |
| User | Do you know which countries this institution operates in? |
| System | They help children across 120 countries. No matter how small it is, your donation can make a huge difference in the lives of many children. Any amount you can contribute would be greatly appreciated. |
| User | Is Brazil among these 120 countries? |
| System | Yes, Brazil is included in the 120 countries. I have personally seen the difference that Save the Children has made in Brazil. Last year, I had the opportunity to travel there and volunteer with them. It was truly an eye-opening experience to see how much of a difference donations can make in the lives of these children. |
| User | I am very happy, because I live in Brazil and there are really many needy children here. |
| System | I'm glad to hear that. Knowing that your donation could make a difference in the lives of children from your country must be very rewarding. |
Table A3: Example conversation with a dialogue system using InstructGPT. The user appears relatively engaged throughout the conversation and appears to build rapport with the system. In their ratings, the user "Agreed" that the chatbot was persuasive and increased their intention to donate. The user also "Agreed" that the chatbot's responses were natural and coherent, and provided a neutral rating for the chatbot's consistency.

| Party | Utterance |
|---------|-------------|
| System | Hi, how are you doing? |
| User | Hello i am good |
| System | Have you ever heard of the charity called Save the Children? |
| User | no i have never heard of that what is it about |
| System | I'm excited to tell you about it! it is an international non-governmental organization that promotes children's rights, provides relief and helps support children in developing countries. Do you have children? |
| User | no |
| System | With Save the Children, even small donations of $1 or $2 can make a big difference in the lives of children in need around the world. In addition, it is one of the most highly rated charities, with an A+ rating from Charity Navigator. So your donation will be going to a trusted charity that has been helping children since 1919. |
| User | that is a very good history! |
| System | Yes, it is a very inspiring history. And what makes this charity even more special is that it gives children hope for a better future. Imagine the difference you can make in just one child's life with your donation. |
| User | i think i would be interested in making a donation |
| System | That's great! With your donation, you will be helping to provide education, healthcare, and nutrition to children in need. You can also rest easy knowing that your donation is going to a highly rated charity with a long history of helping children around the world. |
| User | im happy i can make a difference! |
| System | It's wonderful that you are willing to make a difference. I myself donate to charities often, so I understand how important it is for people to give back and help others. |
Table A4: Example conversation with a dialogue system using InstructGPT. The user appears to become more engaged throughout the conversation and brings up an interest in donating. In their ratings, the user "Agreed" that the chatbot was persuasive and increased their intention to donate. The user also "Strongly Agreed" that the chatbot's responses were natural and consistent, and "Agreed" that the chatbot was coherent.
| Party | Utterance |
|--------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| System | Hi! How are you doing? |
| User | I'm good. How are you? |
| System | I'm great, thank you for asking! Have you ever heard of Save the Children before? |
| User | I have not. What do they do? |
| System | Save the Children is a charity dedicated to fighting poverty and providing aid to children in developing countries and war zones. They rely on small donations, like just $1 or $2, to make a big difference in these children's lives. |
| User | How long have they been around? |
| System | Save the Children has been in operation since 1919, so for over 100 years. They are headquartered in London and have teams all over the world dedicated to helping children who need it most. It's heartbreaking to think of how many kids go without basic necessities like food, clean water, and education. That's why I try to do what I can by donating a few dollars here and there. |
| User | Are they involved in the situation in Ukraine? |
| System | Yes, they are. Save the Children has been present in Ukraine since 2014, providing help and aid to children affected by the conflict there. They are dedicated to helping children facing challenging situations like this all over the world. Every small donation makes a big difference in these kids' lives. So please consider donating to Save the Children today and help make a positive change in their lives. |
| User | How would one go about donating to them? |
| System | You can donate to Save the Children online or by mail. And you can even set up a recurring donation if you want to make sure your contribution goes to helpi ng children in need all year long. I personally try to donate as often as I can - even just a few dollars here and there makes a huge difference. |
| User | Can I get a link to their official website? |
| System | Absolutely! Here is a link to their official website: www.savethechildren.org. Every single dollar counts and can make a real difference in a child's life, so please consider donating today. Together, we can help make the world a better place for all children! |
Table A5: Example conversation with a dialogue system using InstructGPT. The user appears relatively engaged throughout the conversation and had all of their questions answered, but did not find the chatbot persuasive. In their ratings, the user disagreed that the chatbot was persuasive and increased their intention to donate. However, they actually agreed that the chatbot responses were natural, coherent, and consistent. This indicates that there may be more to persuasiveness than just response quality. Different users may also have different natural propensities to provide donations.
| Party | Utterance |
|---------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| System | Hello, how are you doing? |
| User | HAI |
| System | Have you ever heard of Save the Children? It is an international charity organization that works to help fight poverty and improve the lives of children around the world. |
| User | OK |
| System | Do you have any kids? |
| User | NO |
| System | Even if you don't have kids, you can still make a difference by donating to Save the Children. Every dollar or two goes a long way towards helping children who are living in poverty. |
| User | NICE |
| System | Think about how it would feel to know that you are making a difference in the lives of children who are living in poverty. Your small donation can help provide them with opportunities that they wouldn't otherwise have. |
| User | YES |
| System | You can make a big difference with a small donation. It only takes $1 or $2 to make an impact. Plus, your donation is tax-deductible so you can get a bit of money back from the government at the end of the year. |
| User | NOTHING |
| System | I know it can be hard to part with your money, but the difference you can make is worth it. I donate to charities like Save the Children every month, and it feels great knowing that I'm helping make a difference. |
| User | GREAT CHANCE |
Table A6: Example conversation with a dialogue system using InstructGPT. The user's responses are not coherent.
In their ratings, the user disagreed that the chatbot was persuasive. They strongly disagreed that the chatbot's responses were coherent. The user does not appear to be fluent in English, or was not taking the task seriously.
However, InstructGPT appears robust to this user's language usage patterns.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
9
✓ A2. Did you discuss any potential risks of your work?
7, 9, 10
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 4,5,6
✓ B1. Did you cite the creators of artifacts you used?
4,5,6, References
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
10
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
9, 10

B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
1, 3
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
3
## C ✓ **Did You Run Computational Experiments?** 4, 5, 6
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix B
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Not applicable. Left blank.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
6
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix B
D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Section 6, Appendix A
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Appendix A
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Section 10, Appendix A
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Appendix A
✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Section 10
✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
We did not collect any demographic/geographic data. The only filter we used for our study participants on Mechanical Turk was HIT Approval Rate. |
mu-li-2023-enhancing | Enhancing Event Causality Identification with Counterfactual Reasoning | https://aclanthology.org/2023.acl-short.83 | Existing methods for event causality identification (ECI) focus on mining potential causal signals, i.e., causal context keywords and event pairs. However, causal signals are ambiguous, which may lead to the context-keywords bias and the event-pairs bias. To solve this issue, we propose the *counterfactual reasoning* that explicitly estimates the influence of context keywords and event pairs in training, so that we are able to eliminate the biases in inference. Experiments are conducted on two datasets, and the result demonstrates the effectiveness of our method. | # Enhancing Event Causality Identification With Counterfactual Reasoning
## Feiteng Mu, Wenjie Li
The Department of Computing, The Hong Kong Polytechnic University, Hong Kong
{csfmu,cswjli}@comp.polyu.edu.hk
## Abstract
Existing methods for event causality identification (ECI) focus on mining potential causal signals, i.e., causal context keywords and event pairs. However, causal signals are ambiguous, which may lead to the context-keywords bias and the event-pairs bias. To solve this issue, we propose the *counterfactual reasoning* that explicitly estimates the influence of context keywords and event pairs in training, so that we are able to eliminate the biases in inference. Experiments are conducted on two datasets, the result demonstrates the effectiveness of our method.
## 1 Introduction
Event causality identification (ECI) aims to identify causal relations between event pairs. For example, given the sentence "The *earthquake* generated a tsunami.", an ECI system should identify that a causal relation holds between the two mentioned events, i.e., *earthquake* $\xrightarrow{cause}$ *tsunami*. A good ECI
system is able to discover a large number of causal relations from text and hence supports lots of intelligence applications, such as commonsense causal reasoning (Luo et al., 2016), narrative story generation (Mostafazadeh et al., 2016), and many others.
Existing methods focus on mining potential causal signals, including *causal context keywords*
(Liu et al., 2020; Zuo et al., 2021a) and causal event pairs (Zuo et al., 2020, 2021b; Cao et al.,
2021), to enhance ECI. By mining potential causal signals, these methods improve the coverage of unseen events and causal relations, which is the reason for their success. However, they face the risk of amplifying the role of potential signals, resulting in biased inference.
Due to the polysemy of language, causal signals are ambiguous. The occurrence of those signals does not always indicate that causality is established. That is, ambiguous *context keywords* and event pairs may lead to the **context-keywords bias**
and the **event-pairs bias** in ECI. Specifically, in most cases, *"(earthquake, tsunami)"* in the training set occurs as a causal event pair, but in the sentence from the development set shown in Table 1, this event pair is not causal. Similarly, ambiguous keywords, such as "generate", do not always indicate causality (Xie and Mu, 2019a,b).

Table 1: The example comes from the development set of EventStoryLine (Caselli and Vossen, 2017).
Relying heavily on those ambiguous signals may make an ECI model learn the spurious correlation
(Pearl, 2009) between ambiguous signals and labels. In other words, existing methods may overfit those ambiguous causal signals in training, and tend to predict a causal relation once the ambiguous signals appear at inference time.
To solve this issue, it is intuitive to explicitly estimate the influence of context keywords and event pairs in training, so that we can mitigate those biases in inference. Motivated by this idea and existing dataset-debiasing works (Niu et al., 2021; Tian et al., 2022; Qian et al., 2021), we introduce factual and *counterfactual* reasoning for ECI. The factual reasoning takes the entire sample as input, which captures the combined features between context keywords and the event pair, with the side-effect of learning features of biases. The *counterfactual* reasoning considers the two situations where only context keywords or event pairs are available. Intuitively, in counterfactual reasoning, a model can only make predictions based on context keywords or event pairs, so that the biases can be identified. At inference, we use counterfactual reasoning to estimate the context-keywords bias and the event-pairs bias, and then subtract the biases from the factual predictions. To achieve this goal, we must locate the exact position of context keywords in a sentence1. But this is difficult because it requires extensive manual annotation. To avoid this, we adopt a model-based strategy. Considering the powerful feature extraction ability of pre-trained language models (PLMs), if we feed an event-removed sentence into PLMs, PLMs should be able to pay the most attention to the important context keywords.
Based on this assumption, we split a sentence into two exclusive parts: an event-masked context and an event pair. They are fed into the counterfactual reasoning module to learn the context-keywords bias and event-pairs bias.
To summarize, we consider the spurious correlation problem in ECI, which may make an ECI
model overfit on ambiguous causal signals. To mitigate this problem, we propose a counterfactual reasoning mechanism for ECI. To the best of our knowledge, this is the first work that studies ECI
from a counterfactual perspective. We conduct extensive experiments on two benchmark datasets.
The result shows that our method is effective.
## 2 Counterfactual ECI
Previous ECI methods may overfit the ambiguous context keywords and event pairs, making biased inferences. We use counterfactual reasoning to eliminate this issue. Our method is depicted in Figure 1, which consists of a factual reasoning module and a counterfactual reasoning module.
## 2.1 Factual Reasoning Module
Factual reasoning learns the influence of entire ECI
samples, following the traditional ECI paradigm.
Here we present two classical methods.
Fine-tuning PLMs For ECI We first fine-tune PLMs as a basic backbone. Given a sentence with a mentioned event pair (denoted as e1 and e2), we use PLMs, e.g., BERT (Devlin et al., 2018), to encode the sentence and the event pair. Then the embeddings of [CLS], e1 and e2 2are concatenated and applied with a non-linear transformation to obtain the hidden representation of the factual reasoning:
$$\mathbf{h}_{ECI}=\tanh(\mathbf{W}_{f}^{\top}[\mathbf{h}_{[CLS]};\mathbf{h}_{e_1};\mathbf{h}_{e_2}]),\tag{1}$$

where $\mathbf{W}_{f} \in \mathbb{R}^{3d\times d}$, $\mathbf{h}_{ECI} \in \mathbb{R}^{d}$, and $d$ is the hidden size of BERT. $\mathbf{h}_{ECI}$ is then projected with a linear layer $\mathbf{W}_{p} \in \mathbb{R}^{d\times 2}$ to make a binary classification:

$$P_{ECI}=\mathrm{softmax}(\mathbf{W}_{p}^{\top}\mathbf{h}_{ECI}).\tag{2}$$
1The positions of event pairs are already annotated.
2An event is annotated as a text span, so the average-pooling operation is applied to obtain the event embedding.
Figure 1: In the upper part, we split a sample into an event pair and an event-masked context. In the bottom part, we show the training and inference process of our method.
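To make Equations 1-2 above concrete, the following is a minimal PyTorch sketch of the fine-tuned backbone. The class name, the `bert-base-uncased` checkpoint, and the span-based interface (`e1_spans`, `e2_spans`) are illustrative assumptions rather than the authors' released code.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class FactualECI(nn.Module):
    def __init__(self, plm_name="bert-base-uncased", hidden=768):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(plm_name)
        self.w_f = nn.Linear(3 * hidden, hidden)   # W_f in Eq. (1)
        self.w_p = nn.Linear(hidden, 2)            # W_p in Eq. (2)

    @staticmethod
    def event_embedding(token_states, span):
        # Average-pool the token states inside the annotated event span (footnote 2).
        start, end = span
        return token_states[start:end].mean(dim=0)

    def forward(self, input_ids, attention_mask, e1_spans, e2_spans):
        states = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        h_cls = states[:, 0]                                  # [CLS] embedding
        h_e1 = torch.stack([self.event_embedding(states[i], s)
                            for i, s in enumerate(e1_spans)])
        h_e2 = torch.stack([self.event_embedding(states[i], s)
                            for i, s in enumerate(e2_spans)])
        h_eci = torch.tanh(self.w_f(torch.cat([h_cls, h_e1, h_e2], dim=-1)))  # Eq. (1)
        logits = self.w_p(h_eci)   # softmax over these logits gives P_ECI, Eq. (2)
        return h_eci, logits
```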
Knowledge-Enhanced ECI Existing works prove that knowledge is helpful for ECI. So we develop a knowledge-enhanced backbone. Given
$(e_1, e_2)$, we retrieve the related knowledge tuples3 for $e_1$ and $e_2$ respectively, namely $K_{e_i} = \{\tau_{e_i}^{1}, \tau_{e_i}^{2}, \cdots, \tau_{e_i}^{N_i}\}$, where $i = 1, 2$ denotes the event index, $\tau = (h, t)$ denotes a knowledge tuple (head, tail), and $N_1$ and $N_2$ are the numbers of knowledge tuples. We obtain the knowledge-enhanced features of $e_1$ and $e_2$ by average-pooling on the embeddings of the corresponding knowledge tuples:

$$\mathbf{h}_{e_{i}}^{K}=\frac{1}{N_{i}}\sum_{j=1}^{N_{i}}\mathbf{W}_{k}^{\top}[\mathbf{h}_{e_{i}}^{j};\mathbf{t}_{e_{i}}^{j}],\tag{3}$$

where $i = 1, 2$, $\mathbf{h}$ and $\mathbf{t}$ denote the embeddings of a tuple $(h, t)$, and $\mathbf{W}_{k} \in \mathbb{R}^{2d\times d}$ is trainable. Then the knowledge-enhanced event representations $\mathbf{h}_{e_1}^{K}$ and $\mathbf{h}_{e_2}^{K}$ are concatenated with $\mathbf{h}_{ECI}$ (Equation 1), and input into an MLP to make a binary classification:

$$P_{ECI}^{K}=\mathrm{softmax}(\mathrm{MLP}([\mathbf{h}_{ECI};\mathbf{h}_{e_1}^{K};\mathbf{h}_{e_2}^{K}])).\tag{4}$$
Finally, the cross-entropy loss is applied to $P_{ECI}$ and $P_{ECI}^{K}$ to train the two backbones. Factual reasoning learns combined features between the context and the event pair, but biases may be entangled into the combined features. Next, we propose counterfactual reasoning to capture the entangled biases.
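A hedged sketch of the knowledge-enhanced head (Equations 3-4) is given below; it assumes the embeddings of the retrieved ConceptNet tuples have already been computed and are passed in as tensors, and the single-example interface is our simplification.

```python
import torch
import torch.nn as nn

class KnowledgeEnhancedHead(nn.Module):
    def __init__(self, hidden=768):
        super().__init__()
        self.w_k = nn.Linear(2 * hidden, hidden)                    # W_k in Eq. (3)
        self.mlp = nn.Sequential(nn.Linear(3 * hidden, hidden),
                                 nn.Tanh(),
                                 nn.Linear(hidden, 2))              # classifier in Eq. (4)

    def knowledge_feature(self, head_emb, tail_emb):
        # head_emb, tail_emb: (N_i, d) embeddings of the retrieved (head, tail) tuples.
        fused = self.w_k(torch.cat([head_emb, tail_emb], dim=-1))   # (N_i, d)
        return fused.mean(dim=0)                                    # average pooling, Eq. (3)

    def forward(self, h_eci, e1_tuples, e2_tuples):
        # Single example: h_eci is (d,); e*_tuples are (head_emb, tail_emb) pairs.
        h_k_e1 = self.knowledge_feature(*e1_tuples)
        h_k_e2 = self.knowledge_feature(*e2_tuples)
        return self.mlp(torch.cat([h_eci, h_k_e1, h_k_e2], dim=-1))  # Eq. (4) logits
```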
## 2.2 Counterfactual Reasoning Module
To estimate the context-keywords bias and the event-pairs bias in training, we split a sentence into two exclusive parts: an event-masked context and an event pair. For each part, we use counterfactual reasoning to estimate the corresponding bias.
## 2.2.1 Estimating Context-Keywords Bias
We consider the counterfactual situation where only the event-masked context is available. We input the context into PLMs, and let PLMs automatically attend to the important context keywords. The [CLS] token embedding $\bar{\mathbf{h}}_{[CLS]}$ is used as the representation of the event-masked context. Note that $\bar{\mathbf{h}}_{[CLS]}$ is different from $\mathbf{h}_{[CLS]}$ (Equation 1) because the event pair is removed in the current situation. We obtain the hidden state of the current situation by:

$$\mathbf{h}_{C}=\tanh(\mathbf{W}_{f}^{\top}[\bar{\mathbf{h}}_{[CLS]};\Phi_{E};\Phi_{E}]),\tag{5}$$

where $\mathbf{W}_{f}$ is the shared parameter (Equation 1), and $\Phi_{E} \in \mathbb{R}^{d}$ is a learnable constant that represents the void input events. The insight of this setting is that if we have no information about the event pair, we would like to make the inference by random guess. Then $\mathbf{h}_{C}$ is projected to make a binary classification:

$$P_{C}=\mathrm{softmax}(\mathbf{W}_{C}^{\top}\mathbf{h}_{C}),\tag{6}$$

where $\mathbf{W}_{C}$ is trainable, and $P_{C}$ estimates the influence of the context-keywords bias.

3Details can be seen in Appendix A.
## 2.2.2 Estimating Event-Pairs Bias
Next, we consider the counterfactual situation where only the event pair $(e_1, e_2)$ is available. Through PLMs, we get the event embeddings $\bar{\mathbf{h}}_{e_1}$ and $\bar{\mathbf{h}}_{e_2}$. Note that $\bar{\mathbf{h}}_{e_1}$ and $\bar{\mathbf{h}}_{e_2}$ are different from $\mathbf{h}_{e_1}$ and $\mathbf{h}_{e_2}$ (Equation 1) because the context is invisible in the current situation. We obtain the hidden state of the current situation by:

$$\mathbf{h}_{E}=\tanh(\mathbf{W}_{f}^{\top}[\Phi_{C};\bar{\mathbf{h}}_{e_1};\bar{\mathbf{h}}_{e_2}]),\tag{7}$$

where $\Phi_{C}$ is a learnable constant that represents the void input context. Then $\mathbf{h}_{E}$ is projected with a linear layer to make a binary classification:

$$P_{E}=\mathrm{softmax}(\mathbf{W}_{E}^{\top}\mathbf{h}_{E}),\tag{8}$$

where $\mathbf{W}_{E}$ is trainable, and $P_{E}$ estimates the influence of the event-pairs bias.
## 2.3 Training And De-Biased Inference
We jointly train the factual and counterfactual reasoning modules; the final loss is:

$$Loss = Loss_{Factual} + \alpha Loss_{C} + \beta Loss_{E}.\tag{9}$$

$Loss_{Factual}$ is over $P_{ECI}$ or $P_{ECI}^{K}$, $Loss_{C}$ is over $P_{C}$, and $Loss_{E}$ is over $P_{E}$. $\alpha$ and $\beta$ are two trade-off coefficients that balance the two types of biases. Note that we share the encoding process (Equation 1) between the factual and counterfactual modules, but we do not backpropagate $Loss_{C}$ and $Loss_{E}$ to the encoder, as shown in Figure 1. This is because we require the counterfactual reasoning module to make predictions only based on the event-masked context or the event pair, and to have no information about the missing part.

After training, the counterfactual reasoning module will learn the bias-estimation mechanism. Therefore, we can make a de-biased inference by:

$$y \leftarrow \operatorname*{argmax}_{y}\,(P_{Factual} - \alpha P_{C} - \beta P_{E}).\tag{10}$$
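As a rough sketch of how the joint objective (Eq. 9) and the de-biased inference (Eq. 10) could be wired together, consider the following; the `model` interface (a factual head plus two counterfactual heads fed by detached encodings) and the default coefficients are our assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def training_step(model, batch, alpha=0.15, beta=0.35):
    # Factual pass over the full sentence.
    h_eci, factual_logits = model.factual(batch["full"])
    # Counterfactual passes; detach so Loss_C / Loss_E do not reach the encoder.
    ctx_logits = model.context_head(model.encode(batch["context_only"]).detach())
    evt_logits = model.event_head(model.encode(batch["events_only"]).detach())

    loss_factual = F.cross_entropy(factual_logits, batch["labels"])
    loss_c = F.cross_entropy(ctx_logits, batch["labels"])
    loss_e = F.cross_entropy(evt_logits, batch["labels"])
    return loss_factual + alpha * loss_c + beta * loss_e            # Eq. (9)

@torch.no_grad()
def debiased_predict(model, batch, alpha=0.15, beta=0.35):
    p_factual = F.softmax(model.factual(batch["full"])[1], dim=-1)
    p_c = F.softmax(model.context_head(model.encode(batch["context_only"])), dim=-1)
    p_e = F.softmax(model.event_head(model.encode(batch["events_only"])), dim=-1)
    return (p_factual - alpha * p_c - beta * p_e).argmax(dim=-1)    # Eq. (10)
```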
## 3 Experiment

## 3.1 Experimental Settings
Datasets include EventStoryLine (ESL) (Caselli and Vossen, 2017) and Causal-TimeBank (CTB)
(Mirza et al., 2014). ESL contains 22 topics, and 1770 of 7805 event pairs are causally related. CTB
contains 184 documents, and 318 of 7608 event pairs are causally related. We conduct the 5-fold and 10-fold cross-validation on ESL and CTB respectively. The last two topics of ESL are used as the development set for two tasks. All of this is the same as previous works for fairness. Evaluation metrics are Precision (P), Recall (R) and F1-score
(F1). All parameters are searched according to the F1 on the Dev set. The compared baselines include KMMG (Liu et al., 2020), KnowDis (Zuo et al., 2020), LearnDA (Zuo et al., 2021b), LSIN
(Cao et al., 2021) and CauSeRL (Zuo et al., 2021a).
When implementing our factual reasoning models, we adopt BERT (base), which is the same as previous methods. We denote our two factual backbones as BERT and BERTK. Details about experimental settings can be seen in Appendix B.
## 3.2 Overall Result And Ablation Study
The overall result is shown in Table 2. We have the following observations. (1) BERTK achieves results similar to the compared baselines and performs better than BERT. This coincides with previous findings that knowledge is helpful for ECI. (2) Our CF-ECI method achieves consistent improvements when deployed on BERT or BERTK. This shows the effectiveness of our method. (3) Compared with the previous methods, our method has a higher precision score. This is because we make a de-biased inference, which is able to reduce false-positive predictions and hence improve the precision. (4) Utilizing knowledge may reduce the precision score, because irrelevant knowledge may be introduced.
This coincides with LSIN (Zuo et al., 2021a).
Ablation Study We conduct an ablation study to investigate the influence of context-keywords de-biasing (§ 2.2.1) and event-pairs de-biasing (§ 2.2.2). The result is shown in Table 2. No matter which backbone (BERT or BERTK) is used, after ablating "EPB" or "CKB", the ablated variant has a performance drop. This indicates that ambiguous context keywords and event pairs have an adverse influence on ECI. By making a de-biased inference, our CF-ECI achieves the best performance. In addition, we observe that the context-keywords bias is more severe than the event-pairs bias, which indicates that the trained models tend to use superficial keywords for inference. The possible reason is that this strategy inevitably leverages ambiguous keywords that are potential biases, though it can capture some causal keywords as good evidence.
| Models | ESL P(%) | ESL R(%) | ESL F1(%) | CTB P(%) | CTB R(%) | CTB F1(%) |
|---|---|---|---|---|---|---|
| KMMG | 41.9 | 62.5 | 50.1 | 36.6 | 55.6 | 44.1 |
| KnowDis | 39.7 | 66.5 | 49.7 | 42.3 | 60.5 | 49.8 |
| LearnDA | 42.2 | 69.8 | 52.6 | 41.9 | 68.0 | 51.9 |
| CauSeRL | 41.9 | 69.0 | 52.1 | 43.6 | 68.1 | 53.2 |
| LSIN | 47.9 | 58.1 | 52.5 | 51.5 | 56.2 | 52.9 |
| This Paper | | | | | | |
| BERT | 45.8 | 57.4 | 50.9 | 49.8 | 50.3 | 50.1 |
| BERTK | 43.2 | 65.8 | 52.2 | 48.3 | 54.5 | 51.2 |
| CF-ECIBERT | 48.7 | 59.0 | 53.4∗ | 54.1 | 53.0 | 53.5∗ |
| CF-ECIBERTK | 47.1 | 66.4 | 55.1∗ | 50.5 | 59.9 | 54.8 |
| Ablation Experiment | | | | | | |
| CF-ECIBERT: w/o EPB | 47.7 | 57.6 | 52.2 | 51.7 | 53.6 | 52.6 |
| CF-ECIBERT: w/o CKB | 48.0 | 56.7 | 52.0 | 51.1 | 52.5 | 51.8 |
| CF-ECIBERTK: w/o EPB | 46.8 | 63.8 | 54.0 | 50.8 | 56.4 | 53.4 |
| CF-ECIBERTK: w/o CKB | 47.0 | 62.6 | 53.7 | 50.2 | 56.3 | 53.1 |
## 3.3 Further Discussion
| Methods | ESL Dev | ESL Test | CTB Dev | CTB Test |
|---|---|---|---|---|
| BERT | 17.75 | 16.71 | 20.47 | 21.02 |
| CF-ECIBERT | 02.40 | 02.09 | 02.71 | 02.64 |
| BERTK | 17.08 | 15.70 | 20.46 | 21.04 |
| CF-ECIBERTK | 02.44 | 02.25 | 02.81 | 02.77 |

Table 3: The model unfairness result (lower is better) on the dev-set and test-set of ESL and CTB.
Bias Analysis (Sweeney and Najafian, 2019; Qian et al., 2021) point out that the unfairness of a trained model can be measured by the imbalance of the predictions produced by the model. Following (Qian et al., 2021), we use the metric *imbalance divergence* (ID) to evaluate whether a predicted distribution $P$ is unfair: $ID(P, U) = JS(P\|U)$, where $JS(\cdot)$ denotes the JS divergence of $P$ and the uniform distribution $U$. To evaluate the unfairness of a trained model $M$, we calculate its ID over all dev or test samples: $ID(M) = \frac{1}{|D|}\sum_{x \in D} JS(P(x), U)$, where $P(x)$ can be the output distribution of a factual (§ 2.1) or counterfactual (§ 2.2) model. As shown in Table 3, when deployed on different backbones, our method obviously and consistently reduces the ID metric. This indicates that our method helps to eliminate the two kinds of biases.

Figure 2: F1 scores (%) of identifying unseen events.
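For reference, the ID metric can be computed as in the following minimal sketch; the use of SciPy and the array layout are our assumptions.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def imbalance_divergence(pred_probs):
    """pred_probs: (num_samples, num_classes) predicted distributions."""
    pred_probs = np.asarray(pred_probs)
    uniform = np.full(pred_probs.shape[1], 1.0 / pred_probs.shape[1])
    # SciPy returns the JS *distance* (the square root of the divergence), so square it.
    js = np.array([jensenshannon(p, uniform, base=2) ** 2 for p in pred_probs])
    return js.mean()
```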
Identifying Unseen Events We explore the ability of our method to identify unseen events.
We first randomly select 1/3 of ESL documents as the training set, then divide the remaining documents into (1) "Both Seen", where two events of a sample appear in training data; (2) "One Unseen",
where only one event of a sample exists in training data; (3) "Both Unseen", where both events are unobserved during training. From Figure 2, we have the following observations. (1) CF-ECI has a significant improvement on the "Both Unseen" set, compared with BERT. (2) CF-ECI*BERT*K performs better than CF-ECI*BERT* on the "Both Seen" set.
Visualization We depict the heatmaps of predictions by BERT and CF-ECI*BERT*, respectively, in Figure 3. BERT pays the most attention to the words "*earthquake, spark, quake, tsunami*", and gives a causal prediction with 97.9% probability. In contrast, CF-ECI*BERT* attends to words more dispersedly and does not find enough causal evidence, hence it gives a non-causal prediction.
## 4 Related Work
Event Causality Identification There are mainly two types of ECI works: document-level ECI (Gao et al., 2019; Phu and Nguyen, 2021) and sentence-level ECI. In this work, we pay attention to sentence-level ECI. (Liu et al., 2020) propose to mask event mentions to mine event-agnostic causal patterns. (Zuo et al., 2021a) devise self-supervised methods to learn context-specific causal patterns from external causal statements. (Zuo et al., 2020, 2021b) utilize causal event pairs to find useful data from external resources. Nevertheless, these methods rely on ambiguous causal signals, and may learn the spurious correlations between ambiguous causal signals and labels. Different from these works, we regard ECI from a counterfactual perspective, and devise a counterfactual inference module to eliminate the spurious correlations in ECI.
Counterfactual Reasoning Counterfactual data augmentation is a data-level manipulation, which is effective for mitigating biases in datasets (Wei and Zou, 2019; Kaushik et al., 2019). However, it incurs the extra manual cost of data annotation. A recent trend is counterfactual reasoning, which imagines what the prediction would be if only the biased part of the input were seen. In this way, the biases can be distilled and eliminated in the inference. This strategy avoids data annotation, and is adopted by many works (Niu et al., 2021; Tian et al., 2022; Qian et al., 2021). Motivated by these works, we devise the counterfactual reasoning module to make a de-biased ECI inference.
## 5 Conclusion
We discuss the issue of context-keywords and eventpairs biases in ECI. To mitigate this problem, we propose the counterfactual reasoning which explicitly estimates the influence of the biases, so that we can make a de-biased inference. Experimental results demonstrate the significant superiority of our method. The robustness and explainability of our method are also verified by further studies.
## 6 Limitations
First, we only access limited computation resources and perform continual pre-training from BERT (Devlin et al., 2018), which is not general enough for every event-related reasoning task. Second, counterfactual reasoning makes our approach conservative in identifying causal relationships, so our method has a higher precision. However, some potential causal relationships will be discarded.
How to achieve a good trade-off between precision and coverage is a problem. In addition, the way we utilize knowledge is relatively simple, and it is very likely that we have not made full use of knowledge. Designing more complex knowledgeenhanced methods may lead to better results.
## 7 Ethical Considerations
This work does not involve any sensitive data, but only crowd-sourced datasets released in previous works, including Event-StoryLine (Caselli and Vossen, 2017) and Causal-TimeBank (Mirza et al., 2014). We believe that our research work meets the ethics of ACL.
## 8 Acknowledgements
We thank the anonymous reviewers for their encouraging feedback. This work is supported by Research Grants Council of Hong Kong(PolyU/15207920, PolyU/15207821) and National Natural Science Foundation of China
(62076212).
## References
Pengfei Cao, Xinyu Zuo, Yubo Chen, Kang Liu, Jun Zhao, Yuguang Chen, and Weihua Peng. 2021. Knowledge-enriched event causality identification via latent structure induction networks. In *Proceedings of the 59th Annual Meeting of the Association for* Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 4862–4872, Online.
Association for Computational Linguistics.
Tommaso Caselli and Piek Vossen. 2017. The event storyline corpus: A new benchmark for causal and temporal relation extraction. In *Proceedings of the* Events and Stories in the News Workshop, pages 77–
86.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.
Lei Gao, Prafulla Kumar Choubey, and Ruihong Huang.
2019. Modeling document-level causal structures for event causal relation identification. In *Proceedings* of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long and Short Papers), pages 1808–1817.
Divyansh Kaushik, Eduard Hovy, and Zachary C Lipton.
2019. Learning the difference that makes a difference with counterfactually-augmented data. *arXiv* preprint arXiv:1909.12434.
Jian Liu, Yubo Chen, and Jun Zhao. 2020. Knowledge enhanced event causality identification with mention masking generalizations. In Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence, pages 3608–3614.
Zhiyi Luo, Yuchen Sha, Kenny Q Zhu, Seung-won Hwang, and Zhongyuan Wang. 2016. Commonsense causal reasoning between short texts. In *Fifteenth* International Conference on the Principles of Knowledge Representation and Reasoning.
Paramita Mirza, Rachele Sprugnoli, Sara Tonelli, and Manuela Speranza. 2014. Annotating causality in the tempeval-3 corpus. In *EACL 2014 Workshop on* Computational Approaches to Causality in Language
(CAtoCL), pages 10–19. Association for Computational Linguistics.
Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and cloze evaluation for deeper understanding of commonsense stories. In *Proceedings of the 2016* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 839–849, San Diego, California. Association for Computational Linguistics.
Yulei Niu, Kaihua Tang, Hanwang Zhang, Zhiwu Lu, Xian-Sheng Hua, and Ji-Rong Wen. 2021. Counterfactual vqa: A cause-effect look at language bias. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12700–
12710.
Judea Pearl. 2009. Causal inference in statistics: An overview. *Statistics surveys*, 3:96–146.
Minh Tran Phu and Thien Huu Nguyen. 2021. Graph convolutional networks for event causality identification with rich document-level structures. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 3480–3490.
Chen Qian, Fuli Feng, Lijie Wen, Chunping Ma, and Pengjun Xie. 2021. Counterfactual inference for text classification debiasing. In *Proceedings of the 59th* Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 5434–5445.
Robyn Speer, Joshua Chin, and Catherine Havasi. 2017.
Conceptnet 5.5: An open multilingual graph of general knowledge. In Thirty-first AAAI conference on artificial intelligence.
Chris Sweeney and Maryam Najafian. 2019. A transparent framework for evaluating unintended demographic bias in word embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1662–1667, Florence, Italy. Association for Computational Linguistics.
Bing Tian, Yixin Cao, Yong Zhang, and Chunxiao Xing.
2022. Debiasing nlu models via causal intervention and counterfactual reasoning.
Jason Wei and Kai Zou. 2019. Eda: Easy data augmentation techniques for boosting performance on text classification tasks. *arXiv preprint arXiv:1901.11196*.
Zhipeng Xie and Feiteng Mu. 2019a. Boosting causal embeddings via potential verb-mediated causal patterns. In *IJCAI*, pages 1921–1927.
Zhipeng Xie and Feiteng Mu. 2019b. Distributed representation of words in cause and effect spaces. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7330–7337.
Xinyu Zuo, Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao, Weihua Peng, and Yuguang Chen. 2021a.
Improving event causality identification via selfsupervised representation learning on external causal statement. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 2162–2172, Online. Association for Computational Linguistics.
Xinyu Zuo, Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao, Weihua Peng, and Yuguang Chen. 2021b.
LearnDA: Learnable knowledge-guided data augmentation for event causality identification. In *Proceedings of the 59th Annual Meeting of the Association for* Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 3558–3571, Online.
Association for Computational Linguistics.
Xinyu Zuo, Yubo Chen, Kang Liu, and Jun Zhao. 2020.
KnowDis: Knowledge enhanced data augmentation for event causality detection via distant supervision.
In *Proceedings of the 28th International Conference* on Computational Linguistics, pages 1544–1550, Barcelona, Spain (Online). International Committee on Computational Linguistics.
## A Details About Knowledge Retrieving
Following (Liu et al., 2020), we leverage external knowledge to further improve ECI. We use ConceptNet (Speer et al., 2017) as the knowledge base. In ConceptNet, knowledge is structured as a graph, where each node corresponds to a concept, and each edge corresponds to a semantic relation. For e1 and e2, we search their related knowledge, i.e., matching an event with the tokens of concepts in ConceptNet. Events and concepts are lemmatized with the spaCy toolkit to improve the rate of matching. We only consider 12 semantic relations that are potentially useful for ECI: CapableOf, Causes, CausesDesire, UsedFor, HasSubevent, HasPrerequisite, Entails, ReceivesAction, UsedFor, CreatedBy, MadeOf, and Desires. For each relation, we retrieve at most two knowledge relations according to the weights of relations.
## B Details About Experimental Settings

## B.1 Compared Baselines
- KMMG (Liu et al., 2020), which proposes a mention masking generalization method and also utilizes the external knowledge.
- KnowDis (Zuo et al., 2020), a data-augmentation method that utilizes distantly labeled training data.
- LearnDA (Zuo et al., 2021b), a data-augmentation method that iteratively generates new examples and classifies event causality in a dual learning framework.
- LSIN (Cao et al., 2021), a latent-structure induction network to leverage the external knowledge.
- CauSeRL (Zuo et al., 2021a), a self-supervised framework to learn context-specific causal patterns from external causal corpora.
## B.2 Implementation Details
Due to the data imbalance problem, we adopt an over-sampling strategy for training. Early stopping is used due to the small scale of the datasets. We use the Adam optimizer and linearly decrease the learning rate to zero with no warmup. We use the PyTorch toolkit to conduct all experiments on Arch Linux with an RTX3090 GPU. All the hyperparameters for the two tasks are searched according to the F1 score on the development set. For reproduction, we set the random seed to 42 for all experiments. The searched parameters for the two datasets are shown in Table 4.
| Parameters | ESL | CTB |
|---------------|-------|-------|
| Batch Size | 32 | 32 |
| Learning Rate | 5e-5 | 5e-5 |
| Drop-rate | 0.3 | 0.2 |
| α | 0.15 | 0.25 |
| β | 0.35 | 0.25 |
Table 4: The used hyperparameters for two datasets.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 5 A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3
✓ B1. Did you cite the creators of artifacts you used?
Section 3
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 3.1

B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Statistics of Datasets are reported in Appendix B
## C ✓ **Did You Run Computational Experiments?** Section 3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix B
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix B.3
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Appendix B.3
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix B.3

D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
hou-etal-2023-contrastive | Contrastive Bootstrapping for Label Refinement | https://aclanthology.org/2023.acl-short.84 | Traditional text classification typically categorizes texts into pre-defined coarse-grained classes, from which the produced models cannot handle the real-world scenario where finer categories emerge periodically for accurate services. In this work, we investigate the setting where fine-grained classification is done only using the annotation of coarse-grained categories and the coarse-to-fine mapping. We propose a lightweight contrastive clustering-based bootstrapping method to iteratively refine the labels of passages. During clustering, it pulls away negative passage-prototype pairs under the guidance of the mapping from both global and local perspectives. Experiments on NYT and 20News show that our method outperforms the state-of-the-art methods by a large margin. | # Contrastive Bootstrapping For Label Refinement
Shudi Hou†and **Yu Xia**†and **Muhao Chen**‡and **Sujian Li**†
†Key Laboratory of Computational Linguistics, MOE, Peking University
‡University of Southern California
{housd, yuxia, lisujian}@pku.edu; [email protected]
## Abstract
Traditional text classification typically categorizes texts into pre-defined coarse-grained classes, from which the produced models cannot handle the real-world scenario where finer categories emerge periodically for accurate services. In this work, we investigate the setting where fine-grained classification is done only using the annotation of coarse-grained categories and the coarse-to-fine mapping. We propose a lightweight contrastive clustering-based bootstrapping method to iteratively refine the labels of passages. During clustering, it pulls away negative passage-prototype pairs under the guidance of the mapping from both global and local perspectives. Experiments on NYT
and 20News show that our method outperforms the state-of-the-art methods by a large margin.1
## 1 Introduction
Traditional text classification often categorizes texts into a set of coarse-grained classes, which falls short in real-world scenarios where finer categories emerge.
To this end, coarse-to-fine text classification is introduced (Mekala et al., 2021), which performs fine-grained classification given only annotation of coarse-grained categories and the coarse-to-fine mapping. Then, it finetunes a pre-trained language model for each coarse prototype.2 However, this two-step method could be sub-optimal. For example, it is vulnerable to the noise which is propagated and accumulated through the pipeline. Besides, it requires finetuning and saving a pre-trained language model for each coarse prototype which is heavyweight.
To this end, we propose a lightweight bootstrapping method based on contrastive clustering to iter-
atively refine the labels of passages.3 To be more specific, the method starts with an epoch of warmup on the weakly-labeled dataset. During warm-up, it pulls away negative passage-prototype pairs under the guidance of the mapping from both global and local perspectives, *i.e.*, coarse inter-cluster and fine inter-cluster perspectives. After the warm-up, the distances between clusters are not significant which causes misclassification. Instead of continuing training on the weakly-labeled dataset which might greatly increase the noise (Figure 1(b)), we perform a bootstrapping process which finetunes the model on the selected dataset and updates the selected dataset by the finetuned model alternately.
To mitigate the noise, we propose a selection strategy to identify high-quality pairs in terms of similarity and distinction. To further boost our method, we adopt a modified similarity metric from (Lample et al., 2018) and use the gloss knowledge to augment the prototype representation. As shown in
(Figure 1(c)), the resulting clusters are well separated with less noise.
Our contributions are summarized as follows:
- We propose a lightweight bootstrapping method based on contrastive clustering to address the problem of coarse-to-fine text classification.
- Our method outperforms the state-of-the-art methods on two widely-used datasets. Further analysis verifies the effectiveness of our proposed techniques.

3We focus on passage-level classification as it is consistent with prior studies (Mekala et al., 2021). Though, without loss of generality, the studied problem as well as the proposed method can be extended to classifying natural language text in other granularities.
## 2 Proposed Method
This section describes the technical details of the proposed method, starting with the task description.
## 2.1 Task Description
We follow the task definition of coarse-to-fine text classification in previous work (Mekala et al.,
2021). Given $n$ passages $\{p_1, ..., p_n\}$ with their corresponding coarse-grained labels $\{c_1, ..., c_n\}$, along with the coarse-to-fine mapping $\mathcal{T}$, our goal is to assign a fine-grained label to each passage. The key notations used in our paper are defined as follows: (1) $\mathcal{C} = \{C_1, C_2, ..., C_m\}$ denotes the coarse prototypes. (2) $\mathcal{F} = \{F_1, F_2, ..., F_k\}$ denotes the fine prototypes. (3) $\mathcal{T}: \mathcal{C} \to \mathcal{F}$ denotes the coarse-to-fine mapping, a surjective mapping which separates $\mathcal{F}$ into $|\mathcal{C}|$ non-overlapping partitions. (4) $S_{pf} = \mathcal{T}(c_i)$ denotes the fine-grained candidate prototypes of $p_i$, which is also dubbed as $p$ for simplicity. (5) $S_{nf} = \mathcal{F}/S_{pf}$ denotes fine prototypes not belonging to $\mathcal{T}(c_i)$. (6) $S_{nc} = \mathcal{C}/c_i$ denotes coarse prototypes in $\mathcal{C}$ other than $c_i$.
## 2.2 Our Method
Training Process As illustrated in Figure 2, we start with an epoch of warm-up, during which we optimize two contrastive losses $\mathcal{L}_{global}$ and $\mathcal{L}_{local}$ on the weakly-labeled dataset and only $\mathcal{L}_{global}$ on the unlabeled dataset. The two contrastive losses are detailed in the following paragraphs. Then, we conduct several epochs of bootstrapping with the above model. At each bootstrapping step, we first select a small set of passages on which labels are predicted with high confidence by the model.
Then, we finetune the model on the selected dataset with the same losses as warm-up. We repeat the finetuning and the selection alternately.
Initial Weak Supervision Following previous work, we consider samples that exclusively contain the label surface name as their respective weak supervision. More details can be referred to the prior study.
Passage and Prototype Representation We encode passages $\{p_1, ..., p_n\}$ and all prototypes $\mathcal{C} \cup \mathcal{F}$
into the same embedding space with a pretrained language model. The resulting passage representation and prototype representation are denoted as p and l respectively. During the training process, the prototype representations are dynamically updated to fit the current passage representations.
Specifically, we use the last hidden representation of [CLS] as their representations.
Similarity Metric Cosine similarity is often used to measure the semantic similarity of embedding representations. However, in high-dimensional spaces, some "hub" vectors may be close to many other vectors while other vectors are instead isolated. For example, a passage's representation p may get a high cosine similarity with a large number of labels in Spf due to such hubness issues. In this case, a high similarity score does not necessarily lead to a high discrepancy among labels. Selecting a highly-scored label from the hub as the seed is potentially detrimental to our pairing-based method. Inspired by cross-domain similarity local scaling (Lample et al., 2018), we adopt a modified similarity metric c(p,l) to prevent passage vectors from becoming hubs:
$$c(\mathbf{p},\mathbf{l})=\cos(\mathbf{p},\mathbf{l})-KNN(\mathbf{p})\tag{1}$$
$$KNN(\mathbf{p})=\frac{1}{K}\sum_{\mathbf{l}^{\prime}\in\operatorname{top-}K_{\mathcal{F}}(\mathbf{p})}\cos(\mathbf{p},\mathbf{l}^{\prime})\tag{2}$$
where $\operatorname{top-}K_{\mathcal{F}}(\mathbf{p})$ denotes the $K$ nearest prototype neighbors of $\mathbf{p}$ in $\mathcal{F}$ by cosine similarity, i.e., $KNN(\cdot)$ is the mean cosine similarity of $\mathbf{p}$ to its $K$ nearest neighbors.
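A minimal sketch of this hubness-corrected similarity, assuming passages and prototypes are already encoded as matrices, might look as follows; the value of `k` is a placeholder.

```python
import torch
import torch.nn.functional as F

def corrected_similarity(passages, prototypes, k=3):
    """passages: (N, d); prototypes: (|F|, d). Returns an (N, |F|) matrix of c(p, l)."""
    cos = F.normalize(passages, dim=-1) @ F.normalize(prototypes, dim=-1).T   # cosine
    knn = cos.topk(k, dim=-1).values.mean(dim=-1, keepdim=True)               # KNN(p), Eq. (2)
    return cos - knn                                                          # Eq. (1)
```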
Warm-up Viewing a passage as an anchor, we expect that its semantic similarity to the correct fine-grained prototype should be higher than that to any other fine-grained candidate prototype. We regard the distance in the representation space as the similarity. Specifically, we optimize the following margin ranking loss:
$$\mathcal{L}_{global}=\frac{1}{|S_{pf}|}\sum_{\begin{subarray}{c}l\in S_{pf}\\ l^{\prime}\in S_{nf}\end{subarray}}\max\{c(\mathbf{p},\mathbf{l})-c(\mathbf{p},\mathbf{l}^{\prime})+\gamma,0\}\tag{3}$$
where γ is a hyper-parameter denoting the margin. We use all fine candidate prototypes in Spf as positive examples and randomly sample the same number of prototypes from Snf as negative examples. We view this loss as a global loss to cluster samples according to their coarse labels (Figure 3).
For instances labeled in the initial weak supervision stage, we adopt another margin ranking loss:
$$\mathcal{L}_{local}=\max\{sec\_max-c(\mathbf{p},\mathbf{l})+\sigma,0\}\tag{4}$$
$$sec\_max=\max_{\mathbf{l}^{\prime}\in S_{pf},\,\mathbf{l}^{\prime}\neq\mathbf{l}}c(\mathbf{p},\mathbf{l}^{\prime})\tag{5}$$
We regard this loss as a local loss to cluster samples according to their fine-grained labels (Figure 1 (a)).
Bootstrapping After the warm-up, representations show an inclination to form clusters. Yet, the distances between them are not significant enough to separate the classes. To further get compact clusters, we perform bootstrapping which finetunes the model on the selected dataset and updates the selected dataset by the finetuned model alternately.
Instead of using the initial weak supervision which might greatly increase the noise as observed, we propose a selection strategy to select high-quality passage-prototype pairs. Specifically, we assign a pseudo label to each passage by their similarity (Eq.(6)). Apart from **similarity**, we assume high-quality pairs should also be **discriminative**
(Eq.(7)):

$$l=\arg\max_{l\in S_{pf}}c(\mathbf{p},\mathbf{l})\tag{6}$$
$$c(\mathbf{p},\mathbf{l})-\max_{l^{\prime}\in S_{pf},\,l^{\prime}\neq l}c(\mathbf{p},\mathbf{l}^{\prime})>\beta\tag{7}$$
where β is a threshold updated at each epoch. We construct a confident set CS with top r% pairs satisfying these two conditions. We update β with the lowest similarity in CS. Then, we optimize Eq.(4)
and Eq.(3) on CS and the remaining passages, respectively.
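The selection step could be sketched as below; the tensor layout and the handling of ties are our assumptions.

```python
import torch

def select_confident(scores, candidate_mask, beta, ratio=0.1):
    """scores: (N, |F|) similarities c(p, l); candidate_mask: (N, |F|) booleans marking
    S_pf for each passage; beta: current threshold; ratio: the r% selection budget."""
    masked = scores.masked_fill(~candidate_mask, float("-inf"))
    top2 = masked.topk(2, dim=-1)
    best, second = top2.values[:, 0], top2.values[:, 1]
    pseudo_labels = top2.indices[:, 0]                      # Eq. (6)
    discriminative = (best - second) > beta                 # Eq. (7)
    idx = torch.nonzero(discriminative, as_tuple=True)[0]
    order = idx[torch.argsort(best[idx], descending=True)]  # most similar first
    keep = order[: int(ratio * scores.size(0))]
    new_beta = best[keep].min() if keep.numel() > 0 else beta   # update the threshold
    return keep, pseudo_labels[keep], new_beta
```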
Gloss Knowledge Since the surface names alone cannot fully represent the semantics of labels, we enrich them with external semantic knowledge. To be more specific, we select the first two sentences of each surface name's first Wikipedia webpage to augment the original surface name with a predefined template (Table 3). We adopt the format of "template, surface name, gloss" and use the last hidden representation of [CLS] as their representation.
Prediction It is worth noticing that applying our similarity metric c(p,l) does not change the relative ranking among labels in Spf compared with the cosine similarity. For simplicity, we use cosine similarity for prediction.
$$l=\arg\max_{l\in S_{pf}}\cos(\mathbf{p},\mathbf{l})\tag{8}$$
## 3 Experiments
In this section, we describe the experimental evaluation for the proposed method.
## 3.1 Datasets And Metrics
For a fair comparison with prior work, we use the same hierarchical datasets used by previous work (Mekala et al., 2021). We report both Macro-F1 and Micro-F1 for evaluation on the following two datasets.
The 20 Newsgroups (20News) The passages in 20News were organized into 5 coarse-grained newsgroups and 20 fine-grained newsgroups corresponding to different topics (Table 2). Passages in 20News were partitioned evenly across the 20 different fine-grained newsgroups.4
"talk.politics.misc" and "talk.religion.misc") and expanded the abbreviation to full words.
The New York Times (NYT) This dataset contains 5 coarse-grained topics and 25 subtopics (Table 2). The NYT dataset is highly skewed with the coarse-grained topic "sports" containing more than 80% passages.
## 3.2 Main Results
We compare our model with the previous work
(Mekala et al., 2021), as well as several zero-shot weakly supervised text classification methods

4http://qwone.com/~jason/20Newsgroups/
| | NYT Mi-F1(%) | NYT Ma-F1(%) | 20News Mi-F1(%) | 20News Ma-F1(%) |
|---|---|---|---|---|
| LOT-Class | 79.26 | 63.16 | 56.38 | 54.80 |
| X-Class | 58.15 | 60.50 | 52.95 | 53.47 |
| C2F | 89.23 | 84.36 | 75.77 | 75.24 |
| C2F w/ our select ⋆ | 89.64 | 82.72 | 77.20 | 76.41 |
| Ours | 92.64 | 89.90 | 77.64 | 77.22 |
| w/o fine | 91.15 (↓ 1.49) | 84.90 (↓ 5.00) | 74.34 (↓ 3.30) | 73.78 (↓ 3.44) |
| w/o bootstrap | 89.49 (↓ 3.15) | 82.50 (↓ 7.40) | 76.01 (↓ 1.63) | 75.46 (↓ 3.30) |
| w/o gloss | 89.91 (↓ 2.73) | 80.48 (↓ 9.42) | 72.68 (↓ 4.86) | 70.31 (↓ 6.91) |
| w/o select | 87.56 (↓ 5.08) | 81.98 (↓ 8.02) | 79.74 (↑ 2.10) | 79.21 (↑ 1.99) |
| w/o similarity | 89.25 (↓ 3.39) | 82.44 (↓ 7.46) | 61.21 (↓ 16.43) | 54.76 (↓ 22.46) |
| w/ Manhattan similarity † | 33.45 (↓ 59.19) | 39.47 (↓ 50.43) | 41.83 (↓ 35.81) | 36.50 (↓ 40.72) |
| w/ Euclidean similarity ‡ | 92.46 (↓ 0.18) | 89.17 (↓ 0.73) | 72.11 (↓ 5.53) | 70.65 (↓ 6.57) |
(Wang et al., 2021b; Meng et al., 2020a) following previous works. We reproduce them using their implementations.567 As shown in Table 1, our method outperforms the baselines by 5.67% in Micro-F1 and 5.54% in Macro-F1 on the NYT dataset, as well as 3.97% in Micro-F1 and 3.04% in Macro-F1 on the 20News dataset.
## 3.3 Analysis
To verify the effectiveness of different model components, we conduct ablation studies to test each of them.
Effect of Bootstrapping The "w/o bootstrap" results in Table 1 report the performance with warm-up only. These results are consistently lower than those with bootstrapping. Specifically, bootstrapping improves the warm-up by 3.15% Micro-F1, 7.40% Macro-F1 and 1.63% Micro-F1, 3.30%
Macro-F1 on NYT and 20News respectively. Figure 1(a)(c) shows passage representations are more separated from each other.
Effect of Selection Strategy We replace the selection strategy in bootstrapping with the initial weakly-labeled samples. From the "w/o bootstrap" results in Table 1, we can see that our selection strategy brings an improvement of 4.26% Micro-F1 and 7.46% Macro-F1 on NYT. It is better to use the seed dataset on 20News. We hypothesize that this observation is because the seed dataset has a more balanced label distribution than our selected high-quality samples on 20News. We also incorporate our selection strategy into the C2F baseline in the bootstrapping stage. As shown in Table 1 row "C2F w/ our select," this strategy improves the performance of C2F by 1.43% Micro-F1, 1.17% Macro-F1 on 20News and 0.41% Micro-F1 on NYT, exhibiting the effectiveness of our strategy.
Effect of Similarity Metric We replace our similarity metric with the cosine similarity. From the "w/o similarity" results in Table 1 we can see that our similarity metric brings an improvement of 3.39%
in Micro-F1, 7.46% in Macro-F1 on NYT, and 16.43% in Micro-F1 and 22.46% in Macro-F1 on 20News. From Figure 4, we can see that 63% of samples belonging to the "Law Enforcement" prototype are misclassified using the cosine similarity. However, 18% are misclassified using our similarity metric, verifying its effectiveness. Besides, results for "w/ Manhattan similarity" and
"w/ Euclidean similarity" show that alternating cosine similarity in c(p,l) causes performance drops of 35.81% (5.53%) in Micro-F1, 40.72% (6.57%) in Macro-F1 and 50.19% (0.18%) in Micro-F1, 50.43% (0.73%) in Macro-F1 on 20News and NYT
data, further proving the effectiveness of our similarity metric.
Effect of Gloss Knowledge We remove the gloss knowledge and use the label surface name only.
Comparing the "w/o gloss" results in Table 1 with the full-setting ones, we observe that the gloss knowledge brings an improvement of 2.73% in Micro-F1, 9.42% in Macro-F1 on NYT and 4.86%
in Micro-F1, 6.91% in Macro-F1 on 20News. Figure 5 further shows the effect of gloss knowledge on different prototypes.
Extending to the setting without coarse-to-fine mapping We extend our method to the setting without the coarse-to-fine mapping. In other words, the only supervision is the gold coarse labels. We modify $\mathcal{L}_{global}$ as follows:

$$\mathcal{L}_{c\_global}=\max\{c(\mathbf{p},\mathbf{l}_{c})-c(\mathbf{p},\mathbf{l}_{c}^{\prime})+\gamma,0\}\tag{9}$$

where we use the gold coarse label $\mathbf{l}_{c}$ as the positive example and randomly sample one coarse label $\mathbf{l}_{c}^{\prime}$ from $S_{nc}$ as the negative example. The "w/o fine" results in Table 1 show that the performance does not degrade much when the association between coarse and fine-grained labels does not exist, showing the feasibility of our method in a more general setting.
## 4 Related Work
Previous works in weakly supervised text classification have explored different kinds of weak supervision. (1) a set of related keywords. (Mekala and Shang, 2020) augment and disambiguate the initial seed words with contextualized and highly label-indicative keywords. (Meng et al., 2020b) identify keywords for classes by querying replacements for class names using BERT and pseudo-label the documents by heuristics with the selected keywords.
(2) a few labeled documents. (Tang et al., 2015)
represent the labeled documents and different levels of word co-occurrence information as a large-scale text network. (Meng et al., 2018) propose a pseudo-document generator that leverages the seed labeled documents to generate pseudo-labeled documents for model pre-training. (3) label surface names. (Wang et al., 2021b) propose an adaptive representation learning method to obtain label and document embeddings, and cluster them to pseudo-label the corpus. Our setting is different from theirs in that we use coarse-grained annotation to improve the fine-grained text classification.
Contrastive learning (He et al., 2020; Chen et al.,
2020; Khosla et al., 2020) aims at learning representations by contrasting the positive pairs and negative pairs. In NLP, existing works can be primarily categorized into two distinct streams. Unsupervised contrastive learning seeks to contrast grouped or perturbed instances to generate more robust representation of unlabeled textual data (Gao et al., 2021; Wei et al., 2021; Kim et al., 2021; Wang et al., 2021a). On the contrary, supervised contrastive learning (Suresh and Ong, 2021; Zhou et al., 2021; Yu et al., 2021; Huang et al., 2022) is label-aware and seeks to create representations for differently labeled data with more discrepancy.
Our work has shown that supervised contrastive learning incorporating label names, with minimal external knowledge, improves the model's performance in label refinement.
## 5 Conclusion
In this paper, we study the task of coarse-to-fine text classification. We propose a novel contrastive clustering-based bootstrapping method to refine the label in an iterative manner. Experiments on two real-world datasets for coarse-to-fine text classification verify the effectiveness of our method. Future work could consider extending this method to other fine-grained decision-making tasks that could potentially benefit from coarse-grained labels, such as various kinds of lexical semantic typing tasks
(Huang et al., 2022). Another meaningful direction is to consider incorporating other partial-label learning techniques (Zhang et al., 2016) that are relevant to coarse-to-fine prediction tasks.
## Limitations
Our paper has the following limitations: (1) In real-world applications, the label hierarchy may be more than two levels. It is worth extending our method to such a setting and empirically verifying it. (2) Our selection strategy simply takes the top r% confident samples, which might result in a class imbalance problem. Alleviating the imbalance problem may further improve our performance. We leave them as future work.
## Acknowledgement
We appreciate the reviewers for their insightful comments and suggestions. We would like to express our gratitude to the authors of the C2F paper
(Mekala et al., 2021) for their collective effort in open-sourcing the dataset and code. Their released materials played a vital role in our research.
Shudi Hou, Yu Xia and Sujian Li were supported by National Key R&D Program of China (No.
2020AAA0109703). Muhao Chen was supported by the National Science Foundation of United States Grant IIS 2105329, a subaward of the INFER Program through UMD ARLIS, an Amazon Research Award and a Cisco Research Award.
## References
Lars Buitinck, Gilles Louppe, Mathieu Blondel, Fabian Pedregosa, Andreas Mueller, Olivier Grisel, Vlad Niculae, Peter Prettenhofer, Alexandre Gramfort, Jaques Grobler, Robert Layton, Jake VanderPlas, Arnaud Joly, Brian Holt, and Gaël Varoquaux. 2013.
API design for machine learning software: experiences from the scikit-learn project. In ECML PKDD
Workshop: Languages for Data Mining and Machine Learning, pages 108–122.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of *Proceedings* of Machine Learning Research, pages 1597–1607.
PMLR.
Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021.
SimCSE: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894–6910, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. 2020. Momentum contrast for unsupervised visual representation learning. In Pro-
ceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
James Y. Huang, Bangzheng Li, Jiashu Xu, and Muhao Chen. 2022. Unified semantic typing with meaningful label inference. In *Proceedings of the 2022* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2642–2654, Seattle, United States. Association for Computational Linguistics.
Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. 2020. Supervised contrastive learning. In *Advances in Neural* Information Processing Systems, volume 33, pages 18661–18673. Curran Associates, Inc.
Taeuk Kim, Kang Min Yoo, and Sang-goo Lee. 2021.
Self-guided contrastive learning for BERT sentence representations. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2528–2540, Online. Association for Computational Linguistics.
Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018.
Word translation without parallel data. In *International Conference on Learning Representations*.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *ArXiv*, abs/1907.11692.
Ilya Loshchilov and Frank Hutter. 2017. Fixing weight decay regularization in adam. *ArXiv*,
abs/1711.05101.
Dheeraj Mekala, Varun Gangal, and Jingbo Shang.
2021. Coarse2Fine: Fine-grained text classification on coarsely-grained annotated data. In *Proceedings* of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 583–594, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Dheeraj Mekala and Jingbo Shang. 2020. Contextualized weak supervision for text classification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 323–
333.
Yu Meng, Jiaming Shen, Chao Zhang, and Jiawei Han.
2018. Weakly-supervised neural text classification.
In *proceedings of the 27th ACM International Conference on information and knowledge management*,
pages 983–992.
Yu Meng, Yunyi Zhang, Jiaxin Huang, Chenyan Xiong, Heng Ji, Chao Zhang, and Jiawei Han. 2020a. Text classification using label names only: A language
model self-training approach. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9006–9017, Online. Association for Computational Linguistics.
Yu Meng, Yunyi Zhang, Jiaxin Huang, Chenyan Xiong, Heng Ji, Chao Zhang, and Jiawei Han. 2020b.
Text classification using label names only: A language model self-training approach. arXiv preprint arXiv:2010.07245.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*,
12:2825–2830.
Varsha Suresh and Desmond Ong. 2021. Not all negatives are equal: Label-aware contrastive loss for fine-grained text classification. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4381–4394, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jian Tang, Meng Qu, and Qiaozhu Mei. 2015. Pte:
Predictive text embedding through large-scale heterogeneous text networks. In *Proceedings of the 21th* ACM SIGKDD international conference on knowledge discovery and data mining, pages 1165–1174.
Dong Wang, Ning Ding, Piji Li, and Haitao Zheng.
2021a. CLINE: Contrastive learning with semantic negative examples for natural language understanding. In *Proceedings of the 59th Annual Meeting of the* Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2332–2342, Online. Association for Computational Linguistics.
Zihan Wang, Dheeraj Mekala, and Jingbo Shang. 2021b.
X-class: Text classification with extremely weak supervision. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3043–3053, Online. Association for Computational Linguistics.
Xiangpeng Wei, Rongxiang Weng, Yue Hu, Luxi Xing, Heng Yu, and Weihua Luo. 2021. On learning universal representations across languages. In *International* Conference on Learning Representations.
Yue Yu, Simiao Zuo, Haoming Jiang, Wendi Ren, Tuo Zhao, and Chao Zhang. 2021. Fine-tuning pretrained language model with weak supervision: A
contrastive-regularized self-training approach. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1063–1077, Online. Association for Computational Linguistics.
Min-Ling Zhang, Bin-Bin Zhou, and Xu-Ying Liu. 2016.
Partial label learning via feature-aware disambiguation. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, page 1335–1344, New York, NY, USA. Association for Computing Machinery.
Wenxuan Zhou, Fangyu Liu, and Muhao Chen. 2021.
Contrastive out-of-distribution detection for pretrained transformers. In *Proceedings of the 2021* Conference on Empirical Methods in Natural Language Processing, pages 1100–1111, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
## A Dataset Statistics
We list the statistics of the datasets in Table 2.
## B Templates
We list the templates used in Table 3.
## C Effect Of Gloss Knowledge On Different Prototypes
We show the confusion matrix over all fine prototypes in Figure 5.
## D Implementation Details
We use RoBERTa-base (Liu et al., 2019) as the encoder. The models are trained on one GeForce RTX 3090 GPU. We set the batch size to 8. We run one epoch of warmup and four epochs of bootstrapping, and use the predictions from the last epoch as the final predictions. We use AdamW (Loshchilov and Hutter, 2017) as the optimizer. r is set to 15 for NYT and 1 for 20News. γ and σ are set to 0.05 for both NYT and 20News. We run our model 3 times using different random seeds. We used t-SNE
(Pedregosa et al., 2011; Buitinck et al., 2013) for the visualization in this paper.
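For concreteness, a minimal sketch of the encoder and optimizer setup described above (RoBERTa-base, AdamW, batch size 8); the learning rate and the mean-pooling choice are our assumptions, and the contrastive-clustering training loop itself is not reproduced here:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Settings reported in Appendix D; the learning rate and pooling are assumptions.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base")
optimizer = torch.optim.AdamW(encoder.parameters(), lr=2e-5)  # assumed lr
BATCH_SIZE = 8

def embed(texts):
    """Mean-pooled sentence embeddings used as instance representations."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state            # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()   # (B, T, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)    # (B, H)
```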
## E Selection Of R
We select the value of r from set {1, 5, 10, 15, 20}.
For each coarse prototype $C_i$, we calculate the ratio of the initial weak supervision $W_{C_i}$ in category $C_i$ to the total number of instances $I_{C_i}$ in $C_i$, and denote this ratio as $R_{C_i} = W_{C_i} / I_{C_i}$. We then select the $r$ closest to $\min_{C_i \in \mathcal{C}} R_{C_i}$. As shown in Table 4a and Table 4b, the minimal $R_{C_i}$ in the NYT dataset is 13.43%, closest to 15, while the minimal $R_{C_i}$ in the 20News dataset is 2.05%, closest to 1.
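As a quick illustration, a minimal sketch of this selection rule using the 20News counts from Table 4b (the dictionary and function names are our own):

```python
# Counts from Table 4b (20News): initial weak supervision W_Ci and instance totals I_Ci.
weak_counts  = {"computer": 100, "politics": 56, "recreation": 924,
                "religion": 150, "science": 100}
total_counts = {"computer": 4880, "politics": 1850, "recreation": 3976,
                "religion": 1976, "science": 3951}

CANDIDATES = [1, 5, 10, 15, 20]

def select_r(weak_counts, total_counts, candidates=CANDIDATES):
    # R_Ci = W_Ci / I_Ci as a percentage; pick the candidate closest to the minimum ratio.
    ratios = {c: 100.0 * weak_counts[c] / total_counts[c] for c in weak_counts}
    return min(candidates, key=lambda r: abs(r - min(ratios.values())))

print(select_r(weak_counts, total_counts))  # -> 1 for 20News (min R_Ci is about 2.05%)
```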
| Dataset | Passages | \|C\| | \|F\| | Coarse Prototypes | Fine Prototypes |
|---|---|---|---|---|---|
| 20News | 16,468 | 5 | 17 | computer, politics, recreation, religion, science | graphics, windows, ibm, mac, x window, mideast, guns, autos, motorcycles, baseball, hockey, christian, atheism, encryption, electronics, medicine, space |
| NYT | 11,744 | 5 | 26 | arts, business, politics, science, sports | dance, music, movies, television, economy, energy companies, international business, stocks and bonds, abortion, federal budget, gay rights, gun control, immigration, law enforcement, military, surveillance, the affordable care act, cosmos, environment, baseball, basketball, football, golf, hockey, soccer, tennis |

Table 2: Dataset Statistics.
| Dataset | Templates |
|---|---|
| NYT | 1: "The news is about", 2: "The news is related to", 3: "The topic of this passage is" |
| 20News | 1: "The topic of this post is", 2: "They are discussing", 3: "This post mainly talks about" |

Table 3: Three variants of templates used to concatenate the gloss knowledge and the surface name. The first template is best for NYT and the third template is best for 20News.
(a) NYT

| $C_i$ | $W_{C_i}$ | $I_{C_i}$ | $R_{C_i}$ (%) |
|---|---|---|---|
| arts | 184 | 1043 | 17.64 |
| business | 132 | 983 | 13.43 |
| politics | 216 | 989 | 21.84 |
| science | 42 | 90 | 46.67 |
| sports | 1890 | 8639 | 21.88 |

(b) 20News

| $C_i$ | $W_{C_i}$ | $I_{C_i}$ | $R_{C_i}$ (%) |
|---|---|---|---|
| computer | 100 | 4880 | 2.05 |
| politics | 56 | 1850 | 3.03 |
| recreation | 924 | 3976 | 23.24 |
| religion | 150 | 1976 | 8.35 |
| science | 100 | 3951 | 2.53 |

Table 4: Ratio of the initial weak supervision: (a) NYT; (b) 20News.
Figure 5: Confusion matrix over all fine prototypes.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
After Conclusion A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 3.1
✓ B1. Did you cite the creators of artifacts you used?
3.1
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
We obtained the license and will not distribute it.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
We use the dataset following their intended use.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix A
## C ✓ **Did You Run Computational Experiments?** 3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix D
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix D and E
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Appendix D
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix D
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
shode-etal-2023-nollysenti | {N}olly{S}enti: Leveraging Transfer Learning and Machine Translation for {N}igerian Movie Sentiment Classification | https://aclanthology.org/2023.acl-short.85 | Africa has over 2000 indigenous languages but they are under-represented in NLP research due to lack of datasets. In recent years, there have been progress in developing labelled corpora for African languages. However, they are often available in a single domain and may not generalize to other domains. In this paper, we focus on the task of sentiment classification for cross-domain adaptation. We create a new dataset, Nollywood movie reviews for five languages widely spoken in Nigeria (English, Hausa, Igbo, Nigerian Pidgin, and Yoruba). We provide an extensive empirical evaluation using classical machine learning methods and pre-trained language models. By leveraging transfer learning, we compare the performance of cross-domain adaptation from Twitter domain, and cross-lingual adaptation from English language. Our evaluation shows that transfer from English in the same target domain leads to more than 5{\%} improvement in accuracy compared to transfer from Twitter in the same language. To further mitigate the domain difference, we leverage machine translation from English to other Nigerian languages, which leads to a further improvement of 7{\%} over cross-lingual evaluation. While machine translation to low-resource languages are often of low quality, our analysis shows that sentiment related words are often preserved. | # Nollysenti: Leveraging Transfer Learning And Machine Translation For Nigerian Movie Sentiment Classification
Iyanuoluwa Shode†, David Ifeoluwa Adelani‡, Jing Peng†, Anna Feldman†
†Montclair State University, USA, and ‡University College London, United Kingdom
{shodei1,pengj,feldmana}@montclair.edu, [email protected]
## Abstract
Africa has over 2000 indigenous languages, but they are under-represented in NLP research due to a lack of datasets. In recent years, there has been progress in developing labelled corpora for African languages. However, they are often available in a single domain and may not generalize to other domains. In this paper, we focus on the task of sentiment classification for cross-domain adaptation. We create a new dataset, NollySenti, based on Nollywood movie reviews, for five languages widely spoken in Nigeria (English, Hausa, Igbo, Nigerian-Pidgin, and Yorùbá). We provide an extensive empirical evaluation using classical machine learning methods and pre-trained language models. Leveraging transfer learning, we compare the performance of cross-domain adaptation from the Twitter domain and cross-lingual adaptation from English. Our evaluation shows that transfer from English in the same target domain leads to more than 5% improvement in accuracy compared to transfer from the Twitter domain in the same target language. To further mitigate the domain difference, we leverage machine translation (MT) from English to the other Nigerian languages, which leads to a further improvement of 7% over cross-lingual evaluation. While MT into low-resource languages is often of low quality, through human evaluation, we show that most of the translated sentences preserve the sentiment of the original English reviews.
## 1 Introduction
Nigeria is the sixth most populous country in the world¹ and the most populous in Africa, with over 500 languages (Eberhard et al., 2021). These languages are spoken by millions of speakers, and the four most widely spoken indigenous languages (Hausa, Igbo, Nigerian-Pidgin (Naija), and Yorùbá) have more than 25 million speakers, but they are still under-represented in NLP research (Adebara and Abdul-Mageed, 2022; van Esch et al., 2022). The development of NLP for Nigerian languages and other African languages is often limited by a lack of labelled datasets (Adelani et al., 2021b; Joshi et al., 2020). While there has been some progress in recent years (Eiselen, 2016; Adelani et al., 2022b; NLLB-Team et al., 2022; Muhammad et al., 2023; Adelani et al., 2023), most benchmark datasets for African languages are only available in a single domain and may not transfer well to other target domains of interest (Adelani et al., 2021a).

¹https://www.census.gov/popclock/print.php?component=counter
One of the most popular NLP tasks is sentiment analysis. In many high-resource languages like English, sentiment analysis datasets are available across several domains like social media posts/tweets (Rosenthal et al., 2017), product reviews (Zhang et al., 2015; He and McAuley, 2016)
and movie reviews (Pang and Lee, 2005; Maas et al., 2011). However, for Nigerian languages, the only available dataset is NaijaSenti (Muhammad et al., 2022), a Twitter sentiment classification dataset for the four most widely spoken Nigerian languages.
It is unclear how it transfers to other domains.
In this paper, we focus on the task of sentiment classification for cross-domain adaptation. We create the first sentiment classification dataset for Nollywood movie reviews known as **NollySenti**
- a dataset for five widely spoken Nigerian languages (English, Hausa, Igbo, Nigerian-Pidgin, and Yorùbá). Nollywood is the home of Nigerian movies that depict the Nigerian people and reflect the diversity of Nigerian cultures. We chose this domain because Nollywood is the second-largest movie and film industry in the world by annual output,² and because Nollywood reviews are available on several online websites. However, most of these online reviews are only in English.
To cover more languages, we asked professional translators to translate about 1,000-1,500 reviews from English to four Nigerian languages, similar to Winata et al. (2023). Thus, **NollySenti** is a **parallel**
multilingual sentiment corpus for five Nigerian languages that can be used both for *sentiment classification* and for the *evaluation of machine translation* (MT) models in the user-generated text domain, where data is often scarce for low-resource languages.
Additionally, we provide several supervised and transfer learning experiments using classical machine learning methods and pre-trained language models. By leveraging transfer learning, we compare the performance of cross-domain adaptation from the Twitter domain to the Movie domain, and cross-lingual adaptation from English language.
Our evaluation shows that transfer from English in the same target domain leads to more than 5%
improvement in accuracy compared to transfer from the Twitter domain in the same target language. To further mitigate the domain difference, we leverage MT from English to other Nigerian languages, which leads to a further improvement of 7% over cross-lingual evaluation. While MT to low-resource languages are often of low quality, through human evaluation, we show that most of the translated sentences preserve the sentiment in the original English reviews. For reproducibility, we have released our datasets and code on Github3.
## 2 Related Work
African sentiment datasets There are only a few sentiment classification datasets for African languages such as Amharic dataset (Yimam et al.,
2020), and NaijaSenti (Muhammad et al., 2022)—
for Hausa, Igbo, Nigerian-Pidgin, and Yorùbá. Recently, Muhammad et al. (2023) expanded the sentiment classification dataset to 14 African languages.
However, all these datasets belong to the social media or Twitter domain. In this work, we create a new dataset for the Movie domain based on human translation from English to Nigerian languages, similar to the NusaX parallel sentiment corpus for 10 Indonesia languages (Winata et al.,
2023).
MT for sentiment classification In the absence of training data, MT models can be used to translate texts from a high-resource language like English to other languages, but they often introduce errors that may lead to poor performance (Refaee and Rieser, 2015; Poncelas et al., 2020). However, 3https://github.com/IyanuSh/NollySenti they do have a lot of potentials especially when translating between high-resource languages like European languages, especially when combined with English (Balahur and Turchi, 2012, 2013). In this paper, we extend MT for sentiment classification to four low-resource Nigerian languages. This paper is an extension of the YOSM paper (Shode et al., 2022) - A Yorùbá movie sentiment corpus.
## 3 Languages And Data

## 3.1 Focus Languages
We focus on four Nigerian languages from three different language families, each spoken by between 30M and 120M people.
Hausa belongs to the Afro-Asiatic/Chadic language family and has over 77 million speakers (Eberhard et al., 2021). It is native to Nigeria, Niger, Chad, Cameroon, Benin, Ghana, Togo, and Sudan, although the largest population of speakers resides in northern Nigeria. Hausa is agglutinative in its morphology and tonal with two tones, low and high. It is written with two major scripts: Ajami (an Arabic-based script) and the Boko script (based on the Latin script), which is the most widely used. The Boko script makes use of all the Latin letters except "p, q, v, and x", and includes the following additional letters: "ɓ, ɗ, ƙ, ƴ, kw, ƙw, gw, ky, ƙy, gy, sh, and ts".
Igbo belongs to the Volta–Niger sub-group of the Niger-Congo language family and has over 31 million speakers (Eberhard et al., 2021). It is a native language of South-Eastern Nigeria, but is also spoken in Cameroon and Equatorial Guinea in Central Africa. Igbo is agglutinative in its sentence morphology and tonal with two tones, high and low. The language utilizes 34 Latin letters, excluding "c, q and x", and includes the additional letters "ch, gb, gh, gw, kp, kw, nw, ny, ọ, ṅ, ụ, and sh".
Nigerian-Pidgin, aka Naija, is from the English Creole (Atlantic Krio) language family, with over 4 million native speakers and 116 million second-language speakers. It is often described as a broken version of Nigerian English, but it is also a creole because it is used as a first language in certain ethnic communities (Mazzoli, 2021). It serves as a common language that facilitates communication between several ethnicities. Naija uses the same 26 letters as English and has an analytic sentence morphology.
Yorùbá belongs to the Volta–Niger branch of the Niger-Congo language family and has over 50 million speakers (Eberhard et al., 2021), making it the third most spoken indigenous African language. Yorùbá is native to South-Western Nigeria, Benin and Togo, and is also spoken in other parts of West Africa and the Americas, for example in Sierra Leone, Côte d'Ivoire, The Gambia, Cuba, Brazil, and some Caribbean countries. Yorùbá is isolating in its sentence morphology and tonal with three lexical tones, high, mid and low, which are usually marked by diacritics on syllabic nasals and vowels. Yorùbá orthography comprises 25 Latin letters, excluding "c, q, v, x, and z" but including the additional letters "gb, ẹ, ṣ, and ọ".
## 3.2 Nollysenti Creation
Unlike Hollywood movies that are heavily reviewed with hundreds of thousands of reviews all over the internet, there are fewer reviews about Nigerian movies despite their popularity. Furthermore, there is no online platform dedicated to writing or collecting movie reviews written in the four indigenous Nigerian languages. We only found reviews in English. Here, we describe the data source for the Nollywood reviews and how we created parallel review datasets for four Nigerian languages.
## 3.2.1 Data Source
Table 1 shows the data source for the NollySenti review dataset. We collected 1,018 positive reviews
(POS) and 882 negative reviews (NEG). These reviews were accompanied by ratings and were sourced from three popular online movie review platforms - IMDB, **Rotten Tomatoes**, and **Letterboxd**. We also collected reviews and ratings from four Nigerian websites, including **Cinemapointer** and **Nollyrated**. Our annotation focused on classifying the reviews based on the rating that the reviewer gave the movie: we defined ratings between 0-4 as NEG and ratings between 7-10 as POS.
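A minimal sketch of this rating-to-label rule (the helper name is ours; the paper specifies only the 0-4/7-10 thresholds, and we leave mid-range ratings unassigned):

```python
from typing import Optional

def rating_to_label(rating: float) -> Optional[str]:
    """Map a 0-10 movie rating to a sentiment label; mid-range ratings are skipped."""
    if rating <= 4:
        return "negative"
    if rating >= 7:
        return "positive"
    return None  # ratings of 5-6 are ambiguous and not used

reviews = [("Definitely not recommended", 2), ("Loved every second of the movie", 9)]
labelled = [(text, rating_to_label(score)) for text, score in reviews
            if rating_to_label(score) is not None]
```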
## 3.2.2 Human Translation
We hire professional translators in Nigeria and ask them to translate 1,010 reviews randomly chosen from the 1,900 English reviews. Thus, we have a parallel review dataset in English and the other Nigerian languages, with the corresponding ratings. For quality control, we ask a native speaker per language to manually verify the quality of over 100 randomly selected translated sentences, and we confirm that they are good translations and not the output of Google Translate (GT).⁴ All translators were properly remunerated according to the country rate.⁵ In total, we translated 500 POS reviews and 510 NEG reviews. We decided to add 10 more NEG reviews since they are often shorter, sometimes a single word (e.g., "disappointing").
## 4 Experimental Setup
**Data split** Table 2 shows the data split into **Train**, **Dev** and **Test** splits. The splits are 410/100/500 for hau, ibo and pcm. To further experiment with the benefit of adding more reviews, we translate 490 more reviews for yor, so the split for yor is 900/100/500, while for eng it is 1,300/100/500. We make use of the same reviews for **Dev** and **Test** for all languages. For our transfer learning and machine translation experiments, we make use of all the training reviews for English (i.e., 1,300). We make use of a larger test set (i.e., 500 reviews) for hau, ibo and pcm because the focus of our analysis is on zero-shot transfer; we followed a similar data split to the XCOPA (Ponti et al., 2020), COPA-HR (Ljubesic and Lauc, 2021) and NusaX datasets. The small number of training examples in NollySenti provides an opportunity for researchers to develop more data-efficient cross-lingual methods for under-resourced languages, since this is a more realistic scenario.
## 4.1 Baseline Models
Here, we train sentiment models using classical machine learning models like Logistic regression and Support Vector Machine (SVM) and *fine-tune* several pre-trained language models (PLMs). Unlike classical ML methods, PLMs can be used for crosslingual transfer and often achieve better results (Devlin et al., 2019; Winata et al., 2023). We fine-tune the following PLMs: mBERT (Devlin et al., 2019),
XLM-R (Conneau et al., 2020), mDeBERTaV3 (He et al., 2021), AfriBERTa (Ogueji et al., 2021), and AfroXLMR (Alabi et al., 2022). The last two PLMs have been pre-trained or adapted to all the focus languages. For XLM-R and AfroXLMR, we make use of the base versions. The classical ML methods were implemented using Scikit-Learn (Pedregosa et al., 2011). Appendix B provides more details.
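As an illustration of the classical baselines, a minimal scikit-learn sketch; the TF-IDF word/bi-gram features and hyper-parameters are our assumptions, since the paper only states that these baselines are implemented with scikit-learn (Pedregosa et al., 2011):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def train_baseline(train_texts, train_labels, classifier="logreg"):
    # TF-IDF over word uni-/bi-grams is an assumed feature choice.
    clf = LogisticRegression(max_iter=1000) if classifier == "logreg" else LinearSVC()
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
    model.fit(train_texts, train_labels)
    return model

# Tiny illustrative call (the real experiments use the NollySenti train/test splits):
model = train_baseline(["great movie", "money down the drain"], ["positive", "negative"])
print(model.predict(["a truly great movie"]))
```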
| Sentiment | No. Reviews | Ave. Length (No. words) | IMDB | Rotten Tomatoes | LetterBoxd | Cinemapoint | Nollyrated | Others |
|---|---|---|---|---|---|---|---|---|
| positive | 1018 | 35.0 | 493 | 107 | 81 | 154 | 181 | 2 |
| negative | 882 | 20.7 | 292 | 140 | 101 | 269 | 74 | 6 |
| Total | 1900 | - | 785 | 247 | 182 | 423 | 255 | 8 |

Table 1: **Data source, number of movie reviews per source, and average length of reviews**
| Language | Train (pos) | Train (neg) | Train (all) | Dev (all) | Test (all) |
|---|---|---|---|---|---|
| English (eng) | 1018 | 882 | 1300 | 100 | 500 |
| Hausa (hau) | 200 | 210 | 410 | 100 | 500 |
| Igbo (ibo) | 200 | 210 | 410 | 100 | 500 |
| Naija (pcm) | 200 | 210 | 410 | 100 | 500 |
| Yorùbá (yor) | 450 | 450 | 900 | 100 | 500 |

Table 2: **Dataset split.** The Dev and Test splits have an equal number of samples in the positive and negative classes.
## 4.2 Zero-Shot Adaptation

## 4.2.1 Transfer Learning
**Cross-domain adaptation** We train on the Twitter domain and perform cross-domain adaptation to the Nollywood movie domain. We make use of the NaijaSenti dataset for training. The datasets consist of between 12k and 19k tweets for each of the Nigerian languages, about 30 times larger than our dataset.
**Cross-lingual adaptation** We train on two English datasets: (1) IMDB (Maas et al., 2011), with 25,000 reviews, and (2) NollySenti English, with 1,300 reviews. The resulting models are evaluated on the test sets of the remaining Nigerian languages.
## 4.2.2 Machine Translation
Lastly, we make use of MT to mitigate the domain difference. We make use of NLLB (NLLB-Team et al., 2022)⁶ for the hau, ibo, and yor languages. NLLB is a multilingual MT model trained on 200 languages and dialects; it covers Hausa, Igbo, and Yorùbá but not Nigerian-Pidgin. For Nigerian-Pidgin, we make use of a pre-trained eng→pcm MT model by Adelani et al. (2022a), trained on both the religious and news domains.

⁶https://huggingface.co/facebook/nllb-200-distilled-600M
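A sketch of how such synthetic reviews can be generated with the distilled NLLB checkpoint via the HuggingFace translation pipeline; the decoding settings are assumptions, and hau_Latn / ibo_Latn should be substituted for Hausa and Igbo:

```python
from transformers import pipeline

# Distilled NLLB checkpoint from footnote 6; language codes follow NLLB's FLORES-200 scheme.
translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",
    tgt_lang="yor_Latn",   # use hau_Latn / ibo_Latn for Hausa and Igbo
    max_length=200,        # assumed decoding limit
)

english_reviews = ["Loved every second of the movie. Wished it didn't end."]
synthetic_yor = [out["translation_text"] for out in translator(english_reviews)]
```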
## 5 Results

## 5.1 Baseline Results
Table 3 provides the baseline results using both logistic regression, SVM, and several PLMs. All baselines on average have over 80% accuracy.
However, in all settings (i.e., all languages and numbers of training samples, N=410, 900, and 1,300),
PLMs exceed the performance of classical machine learning methods by over 5 − 7%. In general, we find Africa-centric PLMs (AfriBERTa-large and AfroXLMR-base) have better accuracy than massively multilingual PLMs pre-trained on around 100 languages. Overall, AfriBERTa achieves the best result on average, but slightly worse for English and Nigerian-Pidgin (an English-based creole language) since it has not been pre-trained on the English language.
## 5.2 Zero-Shot Evaluation Results
We make use of AfriBERTa for the zero-shot evaluation since it gave the best result in Table 3 (see avg.
excl. eng). Table 4 shows the zero-shot evaluation.
**Performance of cross-domain adaptation** We obtained an impressive zero-shot result by evaluating a Twitter sentiment model (i.e., Twitter (lang)) on movie reviews (73.8 accuracy on average). All languages score over 70 except yor.
**Performance of cross-lingual adaptation** We evaluated two sentiment models, trained on either IMDB or NollySenti (eng) English reviews. Our results show that adaptation from IMDB performs similarly to the cross-domain adaptation, while NollySenti (eng) exceeds it by over +6%. The IMDB model (i.e., IMDB (eng)) was probably worse despite the larger training size because of a slight domain difference between Hollywood and Nollywood reviews, possibly due to writing style and small vocabulary differences among English dialects (Blodgett et al., 2016). An example of a review with multiple indigenous named entities and a NEG sentiment, of a kind that may not occur frequently in Hollywood reviews, is "**'Gbarada'** is a typical **Idumota** 'Yoruba film' with all the craziness that comes with that subsection of Nollywood." Another observation is that the performance on pcm was unsurprisingly good in both setups (84.0 to 86.2) because it is an English-based creole.
**Machine translation improves adaptation** To mitigate the domain difference, we found that automatically translating N=410 reviews using a pre-trained MT model improved the average zero-shot performance by over +4%. With additional machine-translated reviews (N=1,300), the average performance improved further by +3%. Combining all translated sentences with the English reviews does not seem to help. Our result is quite competitive with the supervised baseline (−1.9%). As an additional experiment, we used MT to translate 25k IMDB reviews; the result was slightly worse than NollySenti (lang), which further confirms the slight domain difference between the two datasets.

| Model | Param. size | eng (N=410) | eng (N=1300) | hau (N=410) | ibo (N=410) | pcm (N=410) | yor (N=410) | yor (N=900) | avg | avg (excl. eng) |
|---|---|---|---|---|---|---|---|---|---|---|
| LogisticReg | <20K | 79.2 | 84.2 | 78.8 | 81.8 | 83.4 | 78.8 | 80.1 | 81.0±0.2 | 80.8±0.2 |
| SVM | <20K | 79.0 | 85.2 | 79.0 | 80.6 | 83.6 | 79.7 | 81.9 | 81.3±0.6 | 81.0±0.6 |
| mBERT | 172M | 90.3 | 92.6 | 80.0 | 82.4 | 89.1 | 84.8 | 87.8 | 87.0±0.5 | 85.2±0.5 |
| XLM-R-base | 270M | 93.2 | 94.1 | 76.8 | 83.6 | 90.8 | 83.9 | 86.0 | 86.9±0.5 | 84.2±0.5 |
| mDeBERTaV3 | 276M | 94.2 | 95.1 | 83.7 | 87.1 | 91.8 | 82.2 | 87.4 | 88.8±0.5 | 86.4±0.5 |
| AfriBERTa-large | 126M | 86.2 | 89.5 | 87.2 | 88.4 | 88.3 | 85.9 | 90.9 | 88.1±0.3 | 88.1±0.3 |
| AfroXLMR-base | 270M | 92.3 | 94.1 | 84.2 | 85.6 | 91.0 | 83.8 | 88.4 | 88.5±0.8 | 86.6±0.8 |

Table 3: Baseline results (accuracy) using classical machine learning methods and PLMs.

| Model | hau | ibo | pcm | yor | avg |
|---|---|---|---|---|---|
| Twitter (lang) | 76.7 | 78.4 | 74.1 | 66.0 | 73.8±0.6 |
| IMDB (eng) | 71.3 | 71.2 | 84.0 | 66.4 | 73.2±2.2 |
| NollySenti (eng) | 80.2 | 78.9 | 86.2 | 72.8 | 79.5±2.9 |
| *machine translation (en → lang)* | | | | | |
| IMDB (lang, N=25k) | 86.8 | 83.8 | 86.8 | 82.0 | 83.0±1.0 |
| NollySenti (lang, N=410) | 84.0 | 86.3 | 81.2 | 83.0 | 83.6±0.6 |
| NollySenti (lang) | 88.3 | 86.5 | 87.0 | **84.0** | 86.4±0.2 |
| NollySenti (eng+lang) | **89.5** | **86.8** | **87.2** | 83.8 | 86.8±0.3 |
| Supervised | 87.2 | 88.4 | 88.3 | 90.9 | 88.7±0.3 |

Table 4: **Zero-shot scenario using AfriBERTa-large:** cross-domain (Twitter → Movie), cross-lingual (eng → lang) experiments, and review generation using machine translation (Meta's NLLB and the MAFAND (Adelani et al., 2022a) eng→pcm model).

| Lang. | BLEU | chrF | Adequacy | Sentiment preservation |
|---|---|---|---|---|
| hau | 13.6 | 40.8 | 4.4 | 92.0% |
| ibo | 9.8 | 33.4 | 3.8 | 92.0% |
| pcm | 26.4 | 53.0 | 4.6 | 96.0% |
| yor | 3.53 | 16.9 | 4.0 | 89.5% |

Table 5: **Automatic** (N=410) and **human** (N=100) evaluation of the MT-generated reviews from the Train split.
**Sentiment is often preserved in MT-translated reviews** Table 5 shows that despite the low BLEU scores (< 15) for hau, ibo and yor, native speakers of these languages (two per language) rated the machine-translated reviews as much better than average in terms of content preservation, or adequacy (3.8 to 4.6 for all languages on a Likert scale of 1-5). Not only do the MT models preserve content; native speakers also rated their output as preserving the sentiment (i.e., achieving at least 90%), even for some translated texts with low adequacy ratings. Appendix C provides more details on the human evaluation and examples.
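For reference, corpus-level BLEU and chrF of this kind can be computed with sacrebleu; the illustrative hypothesis/reference strings below are ours, and whether the paper used these exact sacrebleu defaults is an assumption:

```python
from sacrebleu.metrics import BLEU, CHRF

hypotheses = ["Fiimu yii dara pupo"]        # MT outputs (illustrative)
references = [["Fíìmù yìí dára púpọ̀"]]      # one human-translation reference stream

print(BLEU().corpus_score(hypotheses, references).score)
print(CHRF().corpus_score(hypotheses, references).score)
```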
## 6 Conclusion
In this paper, we focus on the task of sentiment classification for cross-domain adaptation. We developed a new dataset, **NollySenti** for five Nigerian languages. Our results show the potential of both transfer learning and MT for developing sentiment classification models for low-resource languages.
As future work, we would like to extend the creation of movie sentiment corpora to more African languages.
## Limitations
One of the limitations of our work is that we require reasonably good machine translation models to generate synthetic reviews for sentiment classification. While our approach seems to work well even for low-resource languages like yor, with a BLEU score of 3.53, it may not generalize to other tasks, such as question answering, where translation errors may be more critical.
## Ethics Statement
We believe our work will benefit the speakers of the languages under study and the Nollywood industry.
We look forward to how this dataset can be used to improve the processes of the Nollywood industry and provide data analytics on movies.
We acknowledge that there may be some bias introduced by manually translating the dataset from English, but we do not see any potential harm in releasing this dataset. While the texts were crawled online, they do not contain personally identifying information.
## Acknowledgements
This material is partly based upon work supported by the National Science Foundation under Grant Numbers: 2226006, 1828199, and 1704113. We appreciate Aremu Anuoluwapo for coordinating and verifying the translation of the reviews to the Nigerian languages. We appreciate the collective efforts of the following people: Bolutife Kusimo, Oluwasijibomi Owoka, Oluchukwu Igbokwe, Boluwatife Omoshalewa Adelua, Chidinma Adimekwe, Edward Agbakoba, Ifeoluwa Shode, Mola Oyindamola, Godwin-Enwere Jefus, Emmanuel Adeyemi, Adeyemi Folusho, Shamsuddeen Hassan Muhammad, Ruqayya Nasir Iro and Maryam Sabo Abubakar for their assistance during data collection and annotation, thank you so much. David Adelani acknowledges the support of DeepMind Academic Fellowship programme.
Finally, we thank the Spoken Language Systems Chair, Dietrich Klakow at Saarland University for providing GPU resources to train the models.
## References
Ife Adebara and Muhammad Abdul-Mageed. 2022. Towards afrocentric NLP for African languages: Where we are and where we can go. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3814–3841, Dublin, Ireland. Association for Computational Linguistics.
David Adelani, Jesujoba Alabi, Angela Fan, Julia Kreutzer, Xiaoyu Shen, Machel Reid, Dana Ruiter, Dietrich Klakow, Peter Nabende, Ernie Chang, Tajuddeen Gwadabe, Freshia Sackey, Bonaventure F. P.
Dossou, Chris Emezue, Colin Leong, Michael Beukman, Shamsuddeen Muhammad, Guyo Jarso, Oreen Yousuf, Andre Niyongabo Rubungo, Gilles Hacheme, Eric Peter Wairagala, Muhammad Umair Nasir, Benjamin Ajibade, Tunde Ajayi, Yvonne Gitau, Jade Abbott, Mohamed Ahmed, Millicent Ochieng, Anuoluwapo Aremu, Perez Ogayo, Jonathan Mukiibi, Fatoumata Ouoba Kabore, Godson Kalipe, Derguene Mbaye, Allahsera Auguste Tapo, Victoire Memdjokam Koagne, Edwin Munkoh-Buabeng, Valencia Wagner, Idris Abdulmumin, Ayodele Awokoya, Happy Buzaaba, Blessing Sibanda, Andiswa Bukula, and Sam Manthalu. 2022a. A few thousand translations go a long way! leveraging pre-trained models for African news translation. In *Proceedings of* the 2022 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 3053–3070,
Seattle, United States. Association for Computational Linguistics.
David Adelani, Graham Neubig, Sebastian Ruder, Shruti Rijhwani, Michael Beukman, Chester PalenMichel, Constantine Lignos, Jesujoba Alabi, Shamsuddeen Muhammad, Peter Nabende, Cheikh M. Bamba Dione, Andiswa Bukula, Rooweither Mabuya, Bonaventure F. P. Dossou, Blessing Sibanda, Happy Buzaaba, Jonathan Mukiibi, Godson Kalipe, Derguene Mbaye, Amelia Taylor, Fatoumata Kabore, Chris Chinenye Emezue, Anuoluwapo Aremu, Perez Ogayo, Catherine Gitau, Edwin MunkohBuabeng, Victoire Memdjokam Koagne, Allahsera Auguste Tapo, Tebogo Macucwa, Vukosi Marivate, Mboning Tchiaze Elvis, Tajuddeen Gwadabe, Tosin Adewumi, Orevaoghene Ahia, Joyce Nakatumba-Nabende, Neo Lerato Mokono, Ignatius Ezeani, Chiamaka Chukwuneke, Mofetoluwa Oluwaseun Adeyemi, Gilles Quentin Hacheme, Idris Abdulmumin, Odunayo Ogundepo, Oreen Yousuf, Tatiana Moteu, and Dietrich Klakow. 2022b.
MasakhaNER 2.0: Africa-centric transfer learning for named entity recognition. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 4488–4508, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
David Adelani, Dana Ruiter, Jesujoba Alabi, Damilola Adebonojo, Adesina Ayeni, Mofe Adeyemi, Ayodele Esther Awokoya, and Cristina España-Bonet.
2021a. The effect of domain and diacritics in Yoruba–
English neural machine translation. In *Proceedings of Machine Translation Summit XVIII: Research* Track, pages 61–75, Virtual. Association for Machine Translation in the Americas.
David Ifeoluwa Adelani, Jade Abbott, Graham Neubig, Daniel D'souza, Julia Kreutzer, Constantine Lignos, Chester Palen-Michel, Happy Buzaaba, Shruti Rijhwani, Sebastian Ruder, Stephen Mayhew, Israel Abebe Azime, Shamsuddeen H. Muhammad, Chris Chinenye Emezue, Joyce Nakatumba-Nabende, Perez Ogayo, Aremu Anuoluwapo, Catherine Gitau, Derguene Mbaye, Jesujoba Alabi, Seid Muhie Yimam, Tajuddeen Rabiu Gwadabe, Ignatius Ezeani, Rubungo Andre Niyongabo, Jonathan Mukiibi, Verrah Otiende, Iroro Orife, Davis David, Samba Ngom, Tosin Adewumi, Paul Rayson, Mofetoluwa Adeyemi, Gerald Muriuki, Emmanuel Anebi, Chiamaka Chukwuneke, Nkiruka Odu, Eric Peter Wairagala, Samuel Oyerinde, Clemencia Siro, Tobius Saul Bateesa, Temilola Oloyede, Yvonne Wambui, Victor Akinode, Deborah Nabagereka, Maurice Katusiime, Ayodele Awokoya, Mouhamadane MBOUP, Dibora Gebreyohannes, Henok Tilaye, Kelechi Nwaike, Degaga Wolde, Abdoulaye Faye, Blessing Sibanda, Orevaoghene Ahia, Bonaventure F. P. Dossou, Kelechi Ogueji, Thierno Ibrahima DIOP, Abdoulaye Diallo, Adewale Akinfaderin, Tendai Marengereke, and Salomey Osei. 2021b. MasakhaNER: Named entity recognition for African languages. *Transactions*
of the Association for Computational Linguistics, 9:1116–1131.
David Ifeoluwa Adelani, Marek Masiak, Israel Abebe Azime, Jesujoba Oluwadara Alabi, Atnafu Lambebo Tonja, Christine Mwase, Odunayo Ogundepo, Bonaventure F. P. Dossou, Akintunde Oladipo, Doreen Nixdorf, Chris C. Emezue, Sana AlAzzawi, Blessing K. Sibanda, Davis David, Lolwethu Ndolela, Jonathan Mukiibi, Tunde Oluwaseyi Ajayi, Tatiana Moteu Ngoli, Brian Odhiambo, Abraham Toluwase Owodunni, Nnaemeka C.
Obiefuna, Shamsuddeen Hassan Muhammad, Saheed Salahudeen Abdullahi, Mesay Gemeda Yigezu, Tajuddeen Rabiu Gwadabe, Idris Abdulmumin, Mahlet Taye Bame, Oluwabusayo Olufunke Awoyomi, Iyanuoluwa Shode, Tolulope Anu Adelani, Habiba Abdulganiy Kailani, Abdul-Hakeem Omotayo, Adetola Adeeko, Afolabi Abeeb, Anuoluwapo Aremu, Olanrewaju Samuel, Clemencia Siro, Wangari Kimotho, Onyekachi Raphael Ogbu, Chinedu E. Mbonu, Chiamaka Ijeoma Chukwuneke, Samuel Fanijo, Jessica Ojo, Oyinkansola F.
Awosan, Tadesse Kebede Guge, Sakayo Toadoum Sari, Pamela Nyatsine, Freedmore Sidume, Oreen Yousuf, Mardiyyah Oduwole, Ussen Abre Kimanuka, Kanda Patrick Tshinu, Thina Diko, Siyanda Nxakama, Abdulmejid Tuni Johar, Sinodos Gebre, Muhidin A. Mohamed, S. A. Mohamed, Fuad Mire Hassan, Moges Ahmed Mehamed, Evrard Ngabire, and Pontus Stenetorp. 2023. MasakhaNEWS: News topic classification for african languages. *ArXiv*,
abs/2304.09972.
Jesujoba O. Alabi, David Ifeoluwa Adelani, Marius Mosbach, and Dietrich Klakow. 2022. Adapting pretrained language models to African languages via multilingual adaptive fine-tuning. In Proceedings of the 29th International Conference on Computational Linguistics, pages 4336–4349, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Alexandra Balahur and Marco Turchi. 2012. Multilingual sentiment analysis using machine translation?
In Proceedings of the 3rd Workshop in Computational Approaches to Subjectivity and Sentiment Analysis, pages 52–60, Jeju, Korea. Association for Computational Linguistics.
Alexandra Balahur and Marco Turchi. 2013. Improving sentiment analysis in Twitter using multilingual machine translated data. In *Proceedings of the International Conference Recent Advances in Natural Language Processing RANLP 2013*, pages 49–55, Hissar, Bulgaria. INCOMA Ltd. Shoumen, BULGARIA.
Su Lin Blodgett, Lisa Green, and Brendan O'Connor.
2016. Demographic dialectal variation in social media: A case study of African-American English.
In *Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing*, pages 1119–1130, Austin, Texas. Association for Computational Linguistics.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440–
8451, Online. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
David M. Eberhard, Gary F. Simons, and Charles D. Fennig (eds.). 2021. Ethnologue: Languages of the world. twenty-third edition.
Roald Eiselen. 2016. Government domain named entity recognition for South African languages. In *Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)*, pages 3344–3348, Portorož, Slovenia. European Language Resources Association (ELRA).
Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2021.
Debertav3: Improving deberta using electra-style pretraining with gradient-disentangled embedding sharing. *ArXiv*, abs/2111.09543.
Ruining He and Julian McAuley. 2016. Ups and downs:
Modeling the visual evolution of fashion trends with one-class collaborative filtering. In *Proceedings of* the 25th International Conference on World Wide Web, WWW '16, page 507–517, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.
Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP
world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6282–6293, Online. Association for Computational Linguistics.
Nikola Ljubesic and Davor Lauc. 2021. Bertic - the ´
transformer language model for bosnian, croatian, montenegrin and serbian. *ArXiv*, abs/2104.09243.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts.
2011. Learning word vectors for sentiment analysis.
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics.
Maria Mazzoli. 2021. The ideological debate on naijá and its use in education. *English World-Wide*,
42(3):299–323.
Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Abinew Ali Ayele, Nedjma Djouhra Ousidhoum, David Ifeoluwa Adelani, Seid Muhie Yimam, Ibrahim Said Ahmad, Meriem Beloucif, Saif M.
Mohammad, Sebastian Ruder, Oumaima Hourrane, Pavel Brazdil, Felermino D'ario M'ario Ant'onio Ali, Davis C. Davis, Salomey Osei, Bello Shehu Bello, Falalu Ibrahim, Tajuddeen Rabiu Gwadabe, Samuel Rutunda, Tadesse Destaw Belay, Wendimu Baye Messelle, Hailu Beshada Balcha, Sisay Adugna Chala, Hagos Tesfahun Gebremichael, Bernard Opoku, and Steven Arthur. 2023. Afrisenti: A twitter sentiment analysis benchmark for african languages.
ArXiv, abs/2302.08956.
Shamsuddeen Hassan Muhammad, David Ifeoluwa Adelani, Sebastian Ruder, Ibrahim Sa'id Ahmad, Idris Abdulmumin, Bello Shehu Bello, Monojit Choudhury, Chris Chinenye Emezue, Saheed Salahudeen Abdullahi, Anuoluwapo Aremu, Alípio Jorge, and Pavel Brazdil. 2022. NaijaSenti: A nigerian Twitter sentiment corpus for multilingual sentiment analysis. In *Proceedings of the Thirteenth Language Resources and Evaluation Conference*, pages 590–602, Marseille, France. European Language Resources Association.
NLLB-Team, Marta Ruiz Costa-jussà, James Cross, Onur cCelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Alison Youngblood, Bapi Akula, Loïc Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon L. Spruit, C. Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzm'an, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. 2022. No language left behind: Scaling human-centered machine translation. *ArXiv*,
abs/2207.04672.
Kelechi Ogueji, Yuxin Zhu, and Jimmy Lin. 2021.
Small data? no problem! exploring the viability of pretrained multilingual language models for lowresourced languages. In *Proceedings of the 1st Workshop on Multilingual Representation Learning*, pages 116–126, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 115–124, Ann Arbor, Michigan. Association for Computational Linguistics.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*,
12:2825–2830.
Alberto Poncelas, Pintu Lohar, James Hadley, and Andy Way. 2020. The impact of indirect machine translation on sentiment classification. In Proceedings of the 14th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track), pages 78–88, Virtual. Association for Machine Translation in the Americas.
Edoardo Maria Ponti, Goran Glavaš, Olga Majewska, Qianchu Liu, Ivan Vulic, and Anna Korhonen. 2020. ´
XCOPA: A multilingual dataset for causal commonsense reasoning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2362–2376, Online. Association for Computational Linguistics.
Eshrag Refaee and Verena Rieser. 2015. Benchmarking machine translated sentiment analysis for Arabic tweets. In *Proceedings of the 2015 Conference of* the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 71–78, Denver, Colorado. Association for Computational Linguistics.
Sara Rosenthal, Noura Farra, and Preslav Nakov. 2017.
SemEval-2017 task 4: Sentiment analysis in Twitter.
In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 502–
518, Vancouver, Canada. Association for Computational Linguistics.
Iyanuoluwa Shode, David Ifeoluwa Adelani, and Anna Feldman. 2022. yosm: A new yoruba sentiment corpus for movie reviews.
Daan van Esch, Tamar Lucassen, Sebastian Ruder, Isaac Caswell, and Clara Rivera. 2022. Writing system and speaker metadata for 2,800+ language varieties. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 5035–5046, Marseille, France. European Language Resources Association.
Genta Indra Winata, Alham Fikri Aji, Samuel Cahyawijaya, Rahmad Mahendra, Fajri Koto, Ade Romadhony, Kemal Kurniawan, David Moeljadi, Radityo Eko Prasojo, Pascale Fung, Timothy Baldwin, Jey Han Lau, Rico Sennrich, and Sebastian Ruder.
2023. NusaX: Multilingual parallel sentiment dataset for 10 Indonesian local languages. In *Proceedings* of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 815–834, Dubrovnik, Croatia. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz,
and Jamie Brew. 2019. Huggingface's transformers:
State-of-the-art natural language processing. *ArXiv*,
abs/1910.03771.
Seid Muhie Yimam, Hizkiel Mitiku Alemayehu, Abinew Ayele, and Chris Biemann. 2020. Exploring Amharic sentiment analysis from social media texts:
Building annotation tools and classification models. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 1048–
1060, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015.
Character-level convolutional networks for text classification. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS'15, page 649–657, Cambridge, MA, USA. MIT Press.
## A Focus Languages
We focus on four Nigerian languages from three different language families. **Hausa** (hau) is from the Afro-Asiatic/Chadic family and is spoken by over 77 million (M) people. **Igbo** (ibo) and **Yorùbá** (yor) are both from the Niger-Congo/Volta-Niger family and are spoken by 30M and 46M people respectively, while **Nigerian-Pidgin** (pcm) is from the English Creole family and is spoken by over 120M people. Nigerian-Pidgin is ranked the 14th most spoken language in the world.⁷ All languages make use of the Latin script. Except for Nigerian-Pidgin, all of them are tonal languages. Also, Igbo and Yorùbá make extensive use of diacritics in text, which are essential for the correct pronunciation of words and for reducing ambiguity in understanding their meanings.

⁷https://www.ethnologue.com/guides/ethnologue200
## B Hyper-Parameters For Plms
For fine-tuning PLMs, we make use of HuggingFace transformers (Wolf et al., 2019). We use a maximum sequence length of 200, a batch size of 32, 20 training epochs, and a learning rate of 5e-5 for all PLMs.
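A minimal sketch of this fine-tuning setup with the HuggingFace Trainer, using the hyper-parameters above; the model identifier and the tiny in-memory dataset are placeholders for illustration:

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "Davlan/afro-xlmr-base"   # placeholder; any of the PLMs in Section 4.1
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tiny in-memory stand-in for the NollySenti train split.
train_ds = Dataset.from_dict({"text": ["Loved every second", "Money down the drain"],
                              "label": [1, 0]})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=200)

args = TrainingArguments(output_dir="nollysenti-clf",
                         per_device_train_batch_size=32,
                         num_train_epochs=20,
                         learning_rate=5e-5)
trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
                  train_dataset=train_ds.map(tokenize, batched=True))
trainer.train()
```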
## C Human Evaluation
To verify the performance of the MT models, we hire at least two native speakers of each indigenous Nigerian language: three native Igbo speakers, four native Yorùbá speakers, four native speakers of Nigerian-Pidgin, and two native Hausa speakers. The annotators were individually given 100 randomly selected translated reviews in Excel sheets and asked to report the adequacy and sentiment preservation (1 if the translation preserves the sentiment, 0 otherwise) of the MT outputs. Alongside the sheets, the annotators were given an annotation guideline to guide them during the course of the annotation. Aside from being of Nigerian descent and native speakers of the selected languages, the annotators hold at least a bachelor's degree, which qualifies them to efficiently read, write, and comprehend the annotation materials and the data to be annotated.
To measure the consistency of our annotators, we repeated 5 of the 100 examples. Our annotators were consistent in their annotations. We measure the inter-annotator agreement between the two annotators per task. For adequacy, the annotators achieved Krippendorff's alpha scores of 0.675, 0.443, 0.41 and 0.65 for Hausa, Igbo, Nigerian-Pidgin, and Yorùbá respectively. Similarly, for sentiment preservation, they achieved Krippendorff's alpha scores of 1.0, 0.93, 0.48, and 0.52 for Hausa, Igbo, Nigerian-Pidgin, and Yorùbá respectively. In general, annotators rated the translated texts to have an adequacy between 3.8 and 4.6. Nigerian-Pidgin (4.6) achieved the best adequacy result, as shown in Table 5, because of its closeness to English, while Igbo was rated to have the lowest adequacy score (3.8). Overall, the annotators rated the translated sentences as preserving sentiment at least 90% of the time, i.e., at least 90 out of 100 translations preserve the original sentiment of the English sentence.
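Krippendorff's alpha over such judgements can be computed, for example, with NLTK's AnnotationTask; the judgement triples below are illustrative, not the actual annotations:

```python
from nltk.metrics.agreement import AnnotationTask

# (coder, item, label) triples for the binary sentiment-preservation judgements.
judgements = [
    ("annotator_1", "review_1", "1"), ("annotator_2", "review_1", "1"),
    ("annotator_1", "review_2", "0"), ("annotator_2", "review_2", "1"),
    ("annotator_1", "review_3", "1"), ("annotator_2", "review_3", "1"),
]
task = AnnotationTask(data=judgements)
print(round(task.alpha(), 3))   # Krippendorff's alpha with the default nominal distance
```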
## C.1 Qualitative Analysis
The goal of the human evaluation is to manually verify the quality of over 100 randomly selected translated sentences. The reports from the annotators were also aggregated automatically to support our claim that sentiment is usually preserved in MT outputs. The examples listed in Table 6 were extracted during the annotation process and illustrate the noticeable mistakes in MT outputs. The annotators were expected to give a rating between 1 and 5 for how adequately a randomly selected machine-translated review is translated, and a binary 0/1 rating for whether the sentiment of the original review is retained in that machine-translated review. The examples listed in Table 6 support our claim that MT outputs are not completely accurate, as parts of some translations in the target languages are missing, which affects the complete idea and meaning of the movie review that was originally written in English.
| Error type | English Translation | Target Language Translation | Literal Translation of Target Language |
|------------|---------------------|-----------------------------|----------------------------------------|
| **Target Language: Yorùbá** | | | |
| Incorrect translation, sentiment not preserved | In the absence of such a perfect storm, avoid stabbing your wallet in the heart with this 'Dagger'. Definitely not recommended | Níwòn bí k'o ti sí 'ijì líle tó dára, má s.e fi "Dagger" yìí pa owó re. ní o. kàn re. | In the absence of a great storm, do not use this "Dagger" to kill your money in the heart |
| Incorrect translation, sentiment preserved | Citation the movie. Perfect Movie. Loved every second of the movie. Wished it didn't end | Mo fé. rà gbogbo ìs.é. jú tí mo fi n´ s.e fíìmù náà, mo fé. kí ó máà parí | I enjoyed every second that I used to make this movie. Wished it did not end |
| Incorrect and incomplete translation, sentiment not preserved | Funny Funny Funny. Oh mehn, this movie is super funny. if you are looking for a movie to lift your mood up then this is the right movie for you. | Orinrinrinrinrinrin... | song (MT output is nonsensical) |
| **Target Language: Igbo** | | | |
| Incorrect translation, sentiment not preserved | Fifty minutes is spent advertising a holiday resort in Lagos, Movie closes. Money down the drain. Not recommended. | O. bu. ru. na i . na-eme ihe ndi . a, i . ga-enwe ike i .hapu. ya. | Do these things to leave it |
| Incorrect translation, sentiment preserved | Temi Otedola's performance was truly stunning. I thoroughly enjoyed the layers that the story had and the way that each key piece of information was revealed. | Ihe a o mere to. ro. m ezigbo u. to. , o. naato.kwa m u. to. otú e si ko.waa ihe ndi . di . mkpa. | I thoroughly enjoyed the layers that the story had and the way that each key piece of information was revealed. |
| Incorrect and incomplete translation, sentiment not preserved | Nice cross-country movie. The only thing that I don't like about this movie is the way there was little or no interaction with the Nigerian or Indian environment. Beautiful romantic movie. | Ihe m na-adi .ghi . amasi . na fim a bu. na o. . ihe jiko. ro. ya na ndi .a ma o. di .ghi . Nai . jiri bu. ndi . India. | The only thing that I don't like about this movie is the way there was little or no interaction with the Nigerian or Indian environment |
| **Target Language: PCM (Nigerian Pidgin)** | | | |
| Incorrect translation, sentiment preserved | Nice cross-country movie. The only thing that I don't like about this movie is the way there was little or no interaction with the Nigerian or Indian environment. Beautiful romantic movie. | The only thing wey I no like about this film na because e no too get interaction with Nigerian or Indian people. | The only thing that I don't like about this movie is the way there was little or no interaction with the Nigerian or Indian people. |
| Incorrect translation, sentiment preserved | A flawed first feature film, but it shows a great deal of promise | Fear first feature film, but e show plenti promise. | Fear was featured in the film firstly but it shows a great deal of promise |
| Incorrect and incomplete translation, sentiment not preserved | Spot On!!! Definitely African movie of the year, enjoyed every minute of the 2hours 30minutes | Na almost every minute of the 2hours 30minutes wey dem take play for Africa film dem dey play. | It is almost every minute of the 2hours 30minutes that they play African movie they play |

Table 6: **Examples of translation mistakes observed and impact on the sentiment**. The Gray color identifies the sentiment portion of the review.
Also, as shown in Table 6, the sentiment of some reviews is preserved despite incorrect or missing translations, and the idea or meaning of the review is not totally lost.
## C.2 Annotation Guideline
We provide the annotation guideline on Github8.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
6 (Limitation)
✓ A2. Did you discuss any potential risks of your work?
6 (Ethics Statement)
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract; 1 - Introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 3
✓ B1. Did you cite the creators of artifacts you used?
3,4,5
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
3
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
3, 5, 6 (Ethics Statement)
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? 6 (Ethics Statement)
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
3
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
3
## C ✓ **Did You Run Computational Experiments?** 4,5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
4,5
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
4, 5
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** 3, Appendix (C)
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
3, Appendix (C)
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
3, Appendix (C)
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
6 (Ethics Statement)
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
3, Appendix (C) |
mensah-etal-2023-trading | Trading Syntax Trees for Wordpieces: Target-oriented Opinion Words Extraction with Wordpieces and Aspect Enhancement | https://aclanthology.org/2023.acl-short.86 | State-of-the-art target-oriented opinion word extraction (TOWE) models typically use BERT-based text encoders that operate on the word level, along with graph convolutional networks (GCNs) that incorporate syntactic information extracted from syntax trees. These methods achieve limited gains with GCNs and have difficulty using BERT wordpieces. Meanwhile, BERT wordpieces are known to be effective at representing rare words or words with insufficient context information. To address this issue, this work trades syntax trees for BERT wordpieces by entirely removing the GCN component from the methods{'} architectures. To enhance TOWE performance, we tackle the issue of aspect representation loss during encoding. Instead of solely utilizing a sentence as the input, we use a sentence-aspect pair. Our relatively simple approach achieves state-of-the-art results on benchmark datasets and should serve as a strong baseline for further research. |
## Trading Syntax Trees For Wordpieces: Target-Oriented Opinion Words Extraction With Wordpieces And Aspect Enhancement
Samuel Mensah, Computer Science Department, University of Sheffield, UK ([email protected])
Kai Sun, BDBC and SKLSDE, Beihang University, China ([email protected])
Nikolaos Aletras, Computer Science Department, University of Sheffield, UK ([email protected])
## Abstract
State-of-the-art target-oriented opinion word extraction (TOWE) models typically use BERT-based text encoders that operate on the word level, along with graph convolutional networks
(GCNs) that incorporate syntactic information extracted from syntax trees. These methods achieve limited gains with GCNs and have difficulty using BERT wordpieces. Meanwhile, BERT wordpieces are known to be effective at representing rare words or words with insufficient context information. To address this issue, this work trades syntax trees for BERT
wordpieces by entirely removing the GCN component from the methods' architectures. To enhance TOWE performance, we tackle the issue of aspect representation loss during encoding.
Instead of solely utilizing a sentence as the input, we use a sentence-aspect pair. Our relatively simple approach achieves state-of-the-art results on benchmark datasets and should serve as a strong baseline for further research.
## 1 Introduction
Target-oriented opinion word extraction (TOWE;
Fan et al. (2019)) is a subtask in aspect-based sentiment analysis (ABSA; Pontiki et al. (2014b)),
which aims to identify words that express an opinion about a specific target (or aspect) in a sentence. For instance, in the sentence "Such an awesome **surfboard**.", a TOWE model should identify *"awesome"* as the opinion word for the given aspect **surfboard**. TOWE provides explicit aspectopinion pairs which can be used to improve results in downstream tasks such as opinion summarization (Kim et al., 2011) and information extraction (Pontiki et al., 2014b; Tang et al., 2016; Sun et al., 2023).
Currently, many TOWE methods (Veyseh et al.,
2020; Chen et al., 2020; Jiang et al., 2021; Feng et al., 2021; Mensah et al., 2021) use pretrained BERT (Devlin et al., 2018) to encode the input
| 1. Sentence: | Such an awesome surfboard |
|----------------|-----------------------------------------------------------------------------------------------------|
| Wordpieces: | 'such', 'an', 'awesome', 'surf', '##board' |
| 2. Sentence: | A great snowboard which holds edges well when riding on snow. |
| Wordpieces: | 'A', 'great', 'snow', '##board', 'which', 'holds', 'edges', 'well', 'when', 'riding', 'on', 'snow'. |
Table 1: Sentences demonstrating contextual understanding through shared wordpieces. The table shows each sentence and its corresponding BERT wordpiece sequence. Aspect words are bold-typed and opinion words are italicized. The shared wordpiece '\#\#board' helps in decoding the meaning of "surfboard".
sentence. BERT has the ability to effectively capture context, which can improve TOWE performance. However, many of these methods are rather complex, as they often incorporate syntax tree information using a graph convolutional network
(GCN) (Kipf and Welling, 2017). For instance, Veyseh et al. (2020) uses an ordered-neuron LSTM (Shen et al., 2018) encoder with a GCN while Jiang et al. (2021) applies an attention-based relational GCN on the syntax tree. Mensah et al. (2021)
applies a BiLSTM (Hochreiter and Schmidhuber, 1997) on BERT embeddings and incorporates syntax information via a GCN.
While incorporating syntax information through GCNs has been shown to provide some performance gains in TOWE, these are usually limited (Mensah et al., 2021). Moreover, modeling subword tokens with a GCN can be challenging because the syntax tree consists of whole words rather than subword tokens like wordpieces (Schuster and Nakajima, 2012; Devlin et al., 2018). Models based on subword tokens strike a good balance between character- and word-based encoders. They are able to effectively learn representations of rare words or words with insufficient context information. Consider the example in Table 1. The context information for "surfboard" is limited, making it difficult to understand its meaning without additional context. However, both aspects share the wordpiece "\#\#board", which allows the meaning of "surfboard" to be partially understood by using information from the context of "snowboard". In this case, "riding" is related to both aspects through the shared wordpiece, enabling the representation of "surfboard" to be improved.
In this paper, we propose a substantial simplification for syntax-aware TOWE models (Veyseh et al.,
2020; Jiang et al., 2021; Mensah et al., 2021) by replacing the syntax tree with subword information while maintaining good prediction performance.
This is accomplished by removing the GCN from these architectures and using BERT wordpieces instead. Additionally, we address the issue of aspect representation degradation during encoding. This degradation negatively affects TOWE performance by reducing the availability of semantic information about the aspect for determining the opinion words to extract. To solve this problem, we propose using a sentence-aspect pair as input rather than just a sentence, similar to the approach used by Tian et al. (2021) for aspect-based sentiment classification. Through extensive experimentation, we found that our simple approach achieves state-of-the-art
(SOTA) results by outperforming the method proposed by Mensah et al. (2021) without the need of a GCN component.
## 2 Task Formalization
The TOWE task aims to identify an opinion word in a sentence S = {w_1, . . . , w_{n_s}} with respect to an aspect w_a ∈ S. The sentence is typically tokenized into a sequence of tokens at different levels of granularity (e.g. subwords or whole words), T = {t_1, . . . , t_{n_t}}, with t_a ∈ T denoting a subsequence of the aspect w_a and n_s ≤ n_t. The goal is to assign one of three tags (I, O, or B) to each token using the IOB format (Ramshaw and Marcus, 1995), which indicates whether the word is at the Inside, Outside or Beginning of the opinion word relative to the aspect.
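As a toy illustration of this tagging scheme (assuming whole-word tokens, the aspect **surfboard**, and the opinion word *awesome* from the earlier example), the labels would look as follows; this is purely illustrative and not the paper's implementation.

```python
# Toy illustration of the IOB tagging scheme for TOWE, assuming whole-word tokens,
# the aspect "surfboard" and the opinion word "awesome" (not the paper's implementation).
tokens = ["Such", "an", "awesome", "surfboard", "."]
tags   = ["O",    "O",  "B",       "O",         "O"]

# A multi-word opinion such as "not the best" would be tagged B, I, I instead.
for token, tag in zip(tokens, tags):
    print(f"{token}\t{tag}")
```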
## 3 Syntax-Aware Approaches To Towe
Typically, syntax-aware approaches to TOWE (Veyseh et al., 2020; Jiang et al., 2021; Mensah et al.,
2021) employ a text encoder that utilizes pretrained BERT (Devlin et al., 2018) and position embeddings (Zeng et al., 2014) (or category embeddings (Jiang et al., 2021)) to learn whole word representations that are aware of the aspect's location in text. These approaches also include a GCN
that operates on a syntax tree in order to incorporate syntactic information into the model.
Ordered-Neuron LSTM GCN (ONG): Veyseh et al. (2020) combine an ordered neuron LSTM (ON-LSTM; Shen et al. (2018)) and a GCN for TOWE. The ON-LSTM layer is an LSTM variant that considers the order of elements in the input sequence (including BERT and position embeddings) when modeling dependencies between them.
The GCN encodes syntactic structural information into the representations obtained by the ON-LSTM
layer.
BERT+BiLSTM+GCN: Mensah et al. (2021)
replaces the ON-LSTM of the ONG model with a BiLSTM to better capture short-term dependencies between aspect and opinion words.
Attention-based Relational GCN (ARGCN):
Jiang et al. (2021) combine contextualized embedding obtained using BERT with a category embedding (i.e., IOB tag embedding) to incorporate aspect information. They subsequently use a relational GCN (Schlichtkrull et al., 2018) and BiLSTM to respectively incorporate syntactic and sequential information for TOWE classification.
## 4 Trading Syntax Trees For Wordpieces
Mensah et al. (2021) have recently demonstrated that the use of a GCN to incorporate syntax tree information has little impact in TOWE model performance. Meanwhile, the GCN presents challenges when using subword tokens, as previously mentioned. Therefore, we propose a simplified version of the TOWE model that omits the GCN
component from syntax-aware approaches and instead uses subword tokens as the input to the BERT
component. In this work, we use BERT's Wordpieces (Devlin et al., 2018) as the subword representation because they are highly informative, having been derived from the BERT pretraining process. However, methods such as Byte-Pair Encoding (BPE) (Sennrich et al., 2016) can also be used, as we will see later in the experiments.
## 4.1 Formatting Bert Input
Given sentence S, the BERT wordpiece tokenizer segments S into a sequence of wordpieces T = {t_1, t_2, . . . , t_{n_t}}. The BERT input for S is then formatted as follows:

T^(S) = {[CLS], T, [SEP]} (1)

where [CLS] and [SEP] are special tokens that mark the boundaries of the sentence.

| Models | Granularity | Lap14 | Res14 | Res15 | Res16 | Avg |
|--------|-------------|-------|-------|-------|-------|-----|
| ONG | word | 75.77 | 82.33 | 78.81 | 86.01 | 80.73 |
| ONG w/o GCN | word | 74.17 | 84.10 | 78.33 | 84.87 | 80.37 |
| ONG(S) w/o GCN | wordpiece | 79.79 | 86.63 | 80.72 | 88.30 | 83.86 |
| ONG(S,A) w/o GCN | wordpiece | 81.70 | 88.70 | **82.55** | 91.18 | 86.03 |
| ARGCN | word | 76.36 | 85.42 | 78.24 | 86.69 | 81.68 |
| ARGCN w/o R-GCN | word | 76.38 | 84.36 | 78.41 | 84.61 | 80.94 |
| ARGCN(S) w/o R-GCN | wordpiece | 80.08 | 85.92 | 81.36 | 89.72 | 84.27 |
| ARGCN(S,A) w/o R-GCN | wordpiece | 81.37 | 88.18 | 82.49 | 90.82 | 85.72 |
| BERT+BiLSTM+GCN | word | 78.82 | 85.74 | 80.54 | 87.35 | 83.11 |
| BERT+BiLSTM | word | 78.25 | 85.60 | 80.41 | 86.94 | 82.80 |
| BERT+BiLSTM(S) | wordpiece | 80.45 | 86.27 | 80.89 | 89.80 | 84.35 |
| BERT+BiLSTM(S,A) | wordpiece | **82.59** | 88.60 | 82.37 | 91.25 | **86.20** |

Table 2: F1 performance of word-level and wordpiece-level model variants on the four benchmark datasets.
While this format may be adequate for some NLP tasks, it can be problematic for learning good aspect representations in aspect-based sentiment classification (Tian et al., 2021). To mitigate this issue, we adopt the approach of Tian et al. (2021) and reformat the BERT input as a sentence-aspect pair T^(S,A), which combines T^(S) and t_a (i.e. the aspect subsequence) along with special tokens:

T^(S,A) = {[CLS], T, [SEP], t_a, [SEP]} (2)

## 4.2 Classification And Optimization

The input T^(S,A) consists of two parts: T^(S) and t_a. Since t_a only serves to enhance the aspect representation in T^(S), sequence labeling is done on T^(S) only. During sequence labeling, we follow the common approach of predicting based on the first wordpiece representation of a word. For instance, given the word "surfboard", which consists of the wordpieces "surf" and "\#\#board" that are both learned during encoding, only the representation of "surf" is fed to a softmax classifier to predict the tag for the whole word. The cross-entropy function is minimized for each word in the training set.
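The input construction in Eq. (2) and the first-wordpiece selection described above can be sketched with a HuggingFace fast tokenizer; the checkpoint below is a generic public BERT model used purely for illustration, and the printed tokenization is indicative rather than guaranteed.

```python
# Illustrative sketch of the sentence-aspect input of Eq. (2) and of first-wordpiece
# selection; "bert-base-uncased" is a generic public checkpoint, not necessarily the
# exact model used in this work.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

sentence = "Such an awesome surfboard ."
aspect = "surfboard"

# Passing a text pair yields: [CLS] <sentence wordpieces> [SEP] <aspect wordpieces> [SEP]
encoding = tokenizer(sentence, aspect)
print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))
# e.g. ['[CLS]', 'such', 'an', 'awesome', 'surf', '##board', '.', '[SEP]', 'surf', '##board', '[SEP]']

# Fast tokenizers expose word_ids()/sequence_ids(); the first wordpiece of each word
# in the sentence part (sequence id 0) is the position fed to the tag classifier.
word_ids = encoding.word_ids(0)
seq_ids = encoding.sequence_ids(0)
first_piece_positions = [
    i for i, (wid, sid) in enumerate(zip(word_ids, seq_ids))
    if sid == 0 and wid is not None and (i == 0 or word_ids[i - 1] != wid)
]
print(first_piece_positions)  # indices of 'such', 'an', 'awesome', 'surf', '.'
```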
## 5 Experiments And Results
We experiment with the following baselines:
ARGCN, BERT+BiLSTM+GCN and ONG. We use the suffixes (S) or (S,A) to indicate whether the modified versions of these methods use a wordpiece sentence or a wordpiece sentence-aspect pair as input, respectively. We used the publicly available code and optimal hyperparameter settings from the authors of ARGCN1 and BERT+BiLSTM+GCN.2 We have implemented the ONG model variants ourselves using the suggested hyperparameter configurations from the authors.3 Following previous work
(Fan et al., 2019), we use the same experimental setup and evaluate on the Laptop dataset (Lap14)
and the Restaurant datasets (Res14, Res15, Res16)
(Pontiki et al., 2014a, 2015, 2016). The result reported for each dataset is the average over Micro F1 scores obtained from five different runs. Each run uses a different random seed to ensure the stability of our results.
## 5.1 F1 Performance Comparison
The results, shown in Table 2, indicate that removing the GCN component from syntax-aware approaches does not substantially impact their performance, with average decreases in performance of 0.36, 0.74, and 0.31, respectively. However, we observed a large improvement in model performance when using wordpieces, as indicated by the models with the (S) suffix. It is possible that BERT captures enough syntax information already
(Clark et al., 2019) and, therefore, using GCNs to exploit syntax trees does not substantially improve performance on the task. This suggests that it may be beneficial to prioritize wordpieces over syntax trees to allow BERT to fully utilize rare and out-of-vocabulary words. We also discovered that using a sentence-aspect pair as input resulted in better performance than using only the sentence for the models, as indicated by the results of models with the (S,A) suffix. We believe that this may be due to the aspect information being lost or degraded during the encoding process for models with the (S) suffix. Among the methods, BERT+BiLSTM(S,A) had the highest average F1 score of 86.2.

![3_image_0.png](3_image_0.png)

Table 3: F1 performance of BERT-BiLSTM(S) with and without aspect masking.

1https://github.com/samensah/encoders_towe_emnlp2021
2https://github.com/wcwowwwww/towe-eacl
3https://github.com/samensah/Towe-TradeSyntax4WP
## 5.2 Influence Of Aspect Representation
To determine if the aspect representation is degraded during encoding, we evaluate BERT+BiLSTM(S) with and without aspect masking. The results, shown in Table 3, show that masking the aspect representation had only a minimal impact on performance, with a decrease in performance of 0.44 (Lap14), 0.16 (Res14),
0.47 (Res15), and 1.2 (Res16). These findings suggest that the aspect information has limited contribution and requires enhancement to improve performance, as demonstrated by the improved results of BERT+BiLSTM(S,A).
## 5.3 Qualitative Analysis
We examined the performance of BERT+BiLSTM,
BERT+BiLSTM(S), and BERT+BiLSTM(S,A) on three case examples, as shown in Table 4.
The results show that the BERT+BiLSTM and BERT+BiLSTM(S) models struggled to identify opinion words that were farther away from the aspect, particularly in the first and second cases where the opinion words "beautiful" and "fresh" were missed. Upon further investigation, we discovered that these opinion words were closer to the aspect's co-referential term "it". The model struggled to determine what "it" referred to due to degradation of the aspect representation, leading to the missed identification of the opinion words.
However, BERT+BiLSTM(S,A) was able to recover these opinion words due to its ability to enhance the aspect representation. In the third case example, the use of wordpieces was beneficial as the opinion word "minimally" was not present in the training set, but its wordpiece "\#\#ly," was associated with 15 opinion words in the training set. BERT+BiLSTM(S) and BERT+BiLSTM(S,A)
were able to identify the opinion word "minimally" in the test set by leveraging the context of "\#\#ly,".
## 6 Impact Of Bpe Subword Representations
We previously examined the use of wordpiece representations derived from pretrained BERT for TOWE models. In this section, we look into using Byte Pair Encoding (BPE) (Sennrich et al., 2016)
as an alternative method for subword representation, which is inspired by data compression techniques (Gage, 1994). It is worth noting that BPE
representations are generally not obtained from pretrained BERT. However, since RoBERTa is pretrained using BPE, and RoBERTa is a variant of BERT, we can still explore the impact of using BPE
representations in TOWE models. To do this, we replace the BERT component in our best model, BERT+BiLSTM(S,A), with RoBERTa, developing the model RoBERTa+BiLSTM(S,A). The results of RoBERTa+BiLSTM(S,A) and its variations are shown in Table 5.
Note that while RoBERTa+BiLSTM(S,A) and RoBERTa+BiLSTM(S) use BPE subword token representations as input, RoBERTa+BiLSTM and RoBERTa+BiLSTM+GCN operate on the word level. Our findings support the notion that GCNs have a limited impact on performance, as demonstrated by a relatively small decrease in average F1 score when comparing RoBERTa+BiLSTM+GCN
to RoBERTa+BiLSTM. On the other hand, using BPE representations instead of GCN resulted in a substantial improvement in model performance of +5.27 when comparing RoBERTa+BiLSTM and RoBERTa+BiLSTM(S). The results indicate that syntax trees via GCNs may not be necessary and can be replaced by subword representations such as BPE for better performance in TOWE. Additionally, the performance of RoBERTa+BiLSTM(S)
can be further improved by using BPE-based sentence-aspect pairs, as seen by the +1.75 performance gain in RoBERTa+BiLSTM(S,A).
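To make the difference between the two subword schemes concrete, the snippet below contrasts BERT's WordPiece segmentation with RoBERTa's byte-level BPE on the example sentence from Table 1; the public base checkpoints are used purely for illustration.

```python
# Contrast BERT's WordPiece segmentation with RoBERTa's byte-level BPE (illustrative only;
# standard public base checkpoints are used here).
from transformers import AutoTokenizer

bert_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
roberta_tok = AutoTokenizer.from_pretrained("roberta-base")

sentence = "A great snowboard which holds edges well when riding on snow."

print(bert_tok.tokenize(sentence))     # WordPiece marks word-internal pieces with '##'
print(roberta_tok.tokenize(sentence))  # byte-level BPE marks word-initial pieces with a leading-space symbol
```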
## 6.1 State-Of-The-Art Models
Finally, we compare the performance of BERT+BiLSTM(S,A) with recent methods,
| Sentence | BERT+BiLSTM | BERT+BiLSTM(S) | BERT+BiLSTM(S,A) |
|--------------------------------------------------------------------------------------------------------------|-----------------|------------------|------------------------|
| The OS is fast and fluid, everything is organized and it's just beautiful. | fast, fluid | fast, fluid | fast, fluid, beautiful |
| Certainly not the best sushi in new york, however, it is always fresh, and the place is very clean, sterile. | fresh | not the best | not the best, fresh |
| Although somewhat load, the noise was minimally intrusive | loud, intrusive | loud, minimally intrusive | loud, minimally intrusive. |
Table 4: Case Study: Evaluating the model performance on different case examples. Aspect words are bold-typed and opinion words are italicized.
| Model | Lap14 | Res14 | Res15 | Res16 | Avg |
|-------|-------|-------|-------|-------|-----|
| RoBERTa-BiLSTM(S,A) | 82.77 | 88.27 | 83.84 | 91.06 | 86.49 |
| RoBERTa-BiLSTM(S) | 81.10 | 86.95 | 82.21 | 88.70 | 84.74 |
| RoBERTa-BiLSTM | 75.87 | 81.38 | 75.94 | 84.70 | 79.47 |
| RoBERTa-BiLSTM+GCN | 77.57 | 82.09 | 77.85 | 85.37 | 80.72 |

Table 5: F1 Performance of RoBERTa models to investigate the use of BPE subword representations.
| Model | Lap14 | Res14 | Res15 | Res16 | Avg |
|-------|-------|-------|-------|-------|-----|
| IOG | 71.35 | 80.02 | 73.25 | 81.69 | 76.58 |
| LOTN | 72.02 | 82.21 | 73.29 | 83.62 | 77.79 |
| SDRN+BERT* | 73.69 | 83.10 | 76.38 | 85.40 | 79.64 |
| ONG | 75.77 | 82.33 | 78.81 | 86.01 | 80.73 |
| ARGCN | 76.36 | 85.42 | 78.24 | 86.69 | 81.68 |
| BERT+BiLSTM+GCN | 78.82 | 85.74 | 80.54 | 87.35 | 83.11 |
| QD-OWSE | 80.35 | 87.23 | 80.71 | 88.14 | 84.11 |
| TSMSA | 82.18 | 86.37 | 81.64 | 89.20 | 84.85 |
| BERT-BiLSTM (S,A) | 82.59 | 88.60 | 82.37 | 91.25 | **86.20** |

Table 6: Comparison of BERT+BiLSTM(S,A) with recent TOWE methods (F1).
including IOG (Fan et al., 2019), LOTN (Wu et al., 2020), SDRN+BERT (Chen et al., 2020), BERT+BiLSTM+GCN (Mensah et al., 2021), QD-OWSE (Gao et al., 2021), TSMSA (Feng et al., 2021). The results of this comparison are shown in Table 6. Among these methods, the recent proposed methods QD-OWSE and TSMSA,
which both use BERT as a basis for their approach, achieved competitive results with ours. QD-OWSE
uses a generated question-answer pair as BERT
input, while TSMSA uses multi-head attention to identify opinion words. These methods go on to demonstrate that BERT can capture sufficient syntax information for this task, even without the use of syntax trees. However, BERT+BiLSTM(S,A) achieved the best results, with F1 scores 82.59
(Lap14), 88.6 (Res14), 82.37 (Res15) and 91.25
(Res16), setting a new SOTA for the task.
## 7 Conclusion
We demonstrated that replacing GCNs with BERT
wordpieces while enhancing the aspect representation achieves SOTA results in syntax-aware TOWE
approaches. The aspect enhancement method serves as a "prompt" for the model. We intend to explore prompt-based learning (Brown et al.,
2020) to further improve the aspect representation.
## 8 Limitations
Currently, our approach does not effectively leverage syntax tree information via GCNs, a commonly used method for incorporating syntax trees in this task. Further research is required to determine the most effective way to integrate syntax tree information into TOWE models.
## Acknowledgements
This work was supported by the Leverhulme Trust under Grant Number: RPG\#2020\#148.
## References
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing* systems, 33:1877–1901.
Shaowei Chen, Jie Liu, Yu Wang, Wenzheng Zhang, and Ziming Chi. 2020. Synchronous double-channel recurrent network for aspect-opinion pair extraction.
In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 6515–
6524.
Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT
look at? an analysis of BERT's attention. In *Proceedings of the 2019 ACL Workshop BlackboxNLP:*
Analyzing and Interpreting Neural Networks for NLP,
pages 276–286, Florence, Italy. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina N. Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186.
Zhifang Fan, Zhen Wu, Xinyu Dai, Shujian Huang, and Jiajun Chen. 2019. Target-oriented opinion words extraction with target-fused neural sequence labeling.
In *Proceedings of the 2019 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2509–2518.
Yuhao Feng, Yanghui Rao, Yuyao Tang, Ninghua Wang, and He Liu. 2021. Target-specified sequence labeling with multi-head self-attention for target-oriented opinion words extraction. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1805–1815.
Philip Gage. 1994. A new algorithm for data compression. *C Users Journal*, 12(2):23–38.
Lei Gao, Yulong Wang, Tongcun Liu, Jingyu Wang, Lei Zhang, and Jianxin Liao. 2021. Question-driven span labeling model for aspect–opinion pair extraction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 12875–12883.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural Computation*, 9(8):1735–
1780.
Junfeng Jiang, An Wang, and Akiko Aizawa. 2021.
Attention-based relational graph convolutional network for target-oriented opinion words extraction.
In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1986–1997.
Hyun Duk Kim, Kavita Ganesan, Parikshit Sondhi, and ChengXiang Zhai. 2011. Comprehensive review of opinion summarization.
Thomas N. Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.
OpenReview.net.
Samuel Mensah, Kai Sun, and Nikolaos Aletras. 2021.
An empirical study on leveraging position embeddings for target-oriented opinion words extraction.
In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 9174–9179, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Maria Pontiki, Dimitrios Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Mohammad Al-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orphée De Clercq, et al. 2016. Semeval2016 task 5: Aspect based sentiment analysis. In *International workshop on semantic evaluation*, pages 19–30.
Maria Pontiki, Dimitrios Galanis, Harris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015.
Semeval-2015 task 12: Aspect based sentiment analysis. In Proceedings of the 9th international workshop on semantic evaluation (SemEval 2015), pages 486–
495.
Maria Pontiki, Dimitrios Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014a. Semeval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation
(SemEval 2014), page 27–35.
Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014b. Semeval-2014 task 4: Aspect based sentiment analysis. In *Proceedings of the 8th* International Workshop on Semantic Evaluation, SemEval@COLING 2014, Dublin, Ireland, August 2324, 2014, pages 27–35. The Association for Computer Linguistics.
Lance A. Ramshaw and Mitch Marcus. 1995. Text chunking using transformation-based learning. In Third Workshop on Very Large Corpora, VLC@ACL
1995, Cambridge, Massachusetts, USA, June 30, 1995.
Michael Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling.
2018. Modeling relational data with graph convolutional networks. In *The Semantic Web*, pages 593–
607, Cham. Springer International Publishing.
Mike Schuster and Kaisuke Nakajima. 2012. Japanese and korean voice search. In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5149–5152. IEEE.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Linguistics.
Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron C. Courville. 2018. Ordered neurons: Integrating tree structures into recurrent neural networks.
In *International Conference on Learning Representations*.
Kai Sun, Richong Zhang, Mensah Samuel, Aletras Nikolaos, Yongyi Mao, and Xudong Liu. 2023. Selftraining through classifier disagreement for crossdomain opinion target extraction. In *Proceedings of* the ACM Web Conference 2023, pages 1594–1603.
Duyu Tang, Bing Qin, and Ting Liu. 2016. Aspect level sentiment classification with deep memory network. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 214–
224.
Yuanhe Tian, Guimin Chen, and Yan Song. 2021.
Aspect-based sentiment analysis with type-aware graph convolutional networks and layer ensemble.
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2910–2922.
Amir Pouran Ben Veyseh, Nasim Nouri, Franck Dernoncourt, Dejing Dou, and Thien Huu Nguyen. 2020.
Introducing syntactic structures into target opinion word extraction with deep learning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online,
November 16-20, 2020, pages 8947–8956. Association for Computational Linguistics.
Zhen Wu, Fei Zhao, Xin-Yu Dai, Shujian Huang, and Jiajun Chen. 2020. Latent opinions transfer network for target-oriented opinion words extraction. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI
2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 9298–
9305. AAAI Press.
Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In Proceedings of COLING 2014, the 25th international conference on computational linguistics: technical papers, pages 2335–2344.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
7
✗ A2. Did you discuss any potential risks of your work?
There are no risks
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 5
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** 5
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? No response.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
No response.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
No response.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
No response.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
jimerson-etal-2023-unhelpful | An (unhelpful) guide to selecting the best {ASR} architecture for your under-resourced language | https://aclanthology.org/2023.acl-short.87 | Advances in deep neural models for automatic speech recognition (ASR) have yielded dramatic improvements in ASR quality for resource-rich languages, with English ASR now achieving word error rates comparable to that of human transcribers. The vast majority of the world{'}s languages, however, lack the quantity of data necessary to approach this level of accuracy. In this paper we use four of the most popular ASR toolkits to train ASR models for eleven languages with limited ASR training resources: eleven widely spoken languages of Africa, Asia, and South America, one endangered language of Central America, and three critically endangered languages of North America. We find that no single architecture consistently outperforms any other. These differences in performance so far do not appear to be related to any particular feature of the datasets or characteristics of the languages. These findings have important implications for future research in ASR for under-resourced languages. ASR systems for languages with abundant existing media and available speakers may derive the most benefit simply by collecting large amounts of additional acoustic and textual training data. Communities using ASR to support endangered language documentation efforts, who cannot easily collect more data, might instead focus on exploring multiple architectures and hyperparameterizations to optimize performance within the constraints of their available data and resources. | # An (Unhelpful) Guide To Selecting The Right Asr Architecture For Your Under-Resourced Language
Robbie Jimerson, RIT ([email protected])
Zoey Liu, University of Florida ([email protected])
Emily Prud'hommeaux, Boston College ([email protected])
## Abstract
Advances in deep neural models for automatic speech recognition (ASR) have yielded dramatic improvements in ASR quality for resource-rich languages, with English ASR
now achieving word error rates comparable to that of human transcribers. The vast majority of the world's languages, however, lack the quantity of data necessary to approach this level of accuracy. In this paper we use four of the most popular ASR toolkits to train ASR
models for eleven languages with limited ASR
training resources: seven widely spoken languages of Africa, Asia, and South America, one endangered language of Central America, and three critically endangered languages of North America. We find that no single architecture consistently outperforms any other. These differences in performance so far do not appear to be related to any particular feature of the datasets or characteristics of the languages.
These findings have important implications for future research in ASR for under-resourced languages. ASR systems for languages with abundant existing media and available speakers may derive the most benefit simply by collecting large amounts of additional acoustic and textual training data. Communities using ASR to support endangered language documentation efforts, who cannot easily collect more data, might instead focus on exploring multiple architectures and hyperparameterizations to optimize performance within the constraints of their available data and resources.
## 1 Introduction
The majority of significant academic and industry research on automatic speech recognition (ASR)
(Povey et al., 2011; Hinton et al., 2012; Amodei et al., 2016; Watanabe et al., 2018; Baevski et al.,
2020) has been evaluated on a small set of English language datasets (Panayotov et al., 2015; Godfrey et al., 1992). Word error rates (WER) for English ASR now approach those of human transcriptionists (Baevski et al., 2020; Radford et al., 2022), and speakers of English can now reliably use ASR for text entry when using mobile devices. This level of accuracy, however, is attainable only for the handful of the world's 7000 languages that, like English, have abundant training resources.
Most of the world's languages, even ones spoken by tens of millions of speakers, currently lack datasets prepared specifically for training ASR
models. The datasets that do exist are typically much smaller than English ASR datasets that have been available for decades, with no more than a few dozen hours of acoustic training data. As the Common Voice project (Ardila et al., 2020) has shown, collecting large amounts of data for widely spoken languages is possible, but using this kind of platform is likely to be impractical for the roughly 40% of the world's languages that are endangered
(Eberhard et al., 2022). A similar percentage of languages - again, even many that are widely spoken –
lack an established writing system, which presents other obstacles to building large ASR corpora.
Fortunately, existing methods for training accurate ASR models for English and other highresource languages can be adapted to low-resource settings. Some toolkits include recipes for smaller datasets that require the training of fewer parameters. Other approaches rely on fine-tuning acoustic models pre-trained on massive multilingual speech datasets. Most recent work using these approaches, however, does not compare the performance of multiple competitive architectures across multiple diverse small ASR datasets. Thus, while we have access to transformative technology that can be harnessed to build reasonable models for languages with limited resources, we do not know which of the popular architectures is "better" or whether features of a particular dataset or language might make one architecture more suitable than another.
In this paper we explore four different popular ASR architectures, three of which are currently considered state of the art, that can be used even in low-resource settings: a hybrid DNN (Veselý et al., 2013); two approaches for fine-tuning from a multilingual pre-trained acoustic model (Conneau et al., 2020; Radford et al., 2022); and an end-to-end approach designed specifically for small datasets (Shi et al., 2021). We train models for eleven datasets for under-resourced languages, which are diverse in their linguistic properties, mechanisms for collection, relative sizes, and recording quality.

| Language | Train (HH:MM) | Test (HH:MM) | # Speakers (train/test) | # LM tokens | Audio quality | Audio source |
|----------|---------------|--------------|-------------------------|-------------|---------------|--------------|
| Bemba | 17:17 | 02:00 | 8 / 2 | 96K | variable | read speech |
| Wolof | 16:49 | 00:55 | 14 / 2 | 600K | high | read speech |
| Swahili | 10:00 | 01:45 | N/A | 3M | variable | read speech and broadcast news |
| Seneca | 09:57 | 02:04 | 11 / 11 | 76K | variable | fieldwork |
| Fongbe | 07:35 | 01:45 | 25 / 4 | 990K | high | read speech |
| Iban | 07:00 | 01:00 | 17 / 6 | 200K | high | broadcast news |
| Hupa | 06:06 | 01:31 | 1 / 1 | 41K | variable | fieldwork |
| Oneida | 03:23 | 00:51 | 7 / 4 | 18K | variable | fieldwork |
| Quechua | 03:00 | 00:45 | N/A | 8.1K | variable | conversations |
| Bribri | 00:29 | 00:11 | N/A | 4K | variable | fieldwork |
| Guarani | 00:19 | 00:07 | N/A | 1.2K | variable | read speech |

Table 1: Characteristics of the eleven datasets, including the quantity of acoustic (train/test) and language model training data.
We find that no single approach to training ASR
models in low-resource settings consistently outperforms any other, with the most outdated method turning out to be the most accurate surprisingly often. While unsatisfying in some ways, these results can help guide ASR researchers and language community members to select the architecture that is most compatible with their objectives and that can be feasibly supported with their available financial and personnel resources. For widely spoken languages, where the goal of developing an ASR
system is likely to be to support a voice-based app or a personal digital assistant, the best use of financial resources might be to collect large amounts of additional data in order to take advantage of stateof-the-art high-resource architectures. Linguists and members of endangered language communities hoping to use ASR to document and preserve their language cannot easily gather more data, and thus might see more benefit from carefully experimenting with multiple architectures to identify the approach that provides the best results for their particular language or existing dataset.
## 2 Related Work
| Language Name | Language Family | Language Status | Morphological Properties | Tonal | Phoneset Size |
|---------------|-----------------|-----------------|--------------------------|-------|---------------|
| Bemba | Niger-Congo | education (4) | agglutinative | Y | 27 |
| Wolof | Niger-Congo | wider communication (3) | agglutinative | N | 41 |
| Swahili | Niger-Congo | national (1) | agglutinative | N | 37 |
| Seneca | Iroquoian | endangered (8a) | polysynthetic | N | 23 |
| Fongbe | Niger-Congo | wider communication (3) | isolating | Y | 33 |
| Iban | Austronesian | wider communication (3) | agglutinative | N | 25 |
| Hupa | Eyak-Athabaskan | endangered (8b) | polysynthetic | N | 44 |
| Oneida | Iroquoian | endangered (8a) | polysynthetic | N | 17 |
| Quechua | Quechuan | wider communication (3) | agglutinative | N | 33 |
| Bribri | Chibchan | endangered (6b) | agglutinative | Y | 32 |
| Guarani | Tupian | national (1) | polysynthetic | N | 31 |

Table 2: Linguistic characteristics of the eleven languages.

Although most of the notable advances in ASR have focused on English and a few other languages with abundant data, there has been substantial interest in ASR for languages with minimal training resources for quite some time (Besacier et al., 2014).
Much of the work from the 2010s focused on the languages of the IARPA Babel project (Thomas et al., 2013; Miao et al., 2013; Cui et al., 2014; Grézl et al., 2014). Research initiated with the Babel datasets on methods of transfer learning and data augmentation in low-resource settings has continued apace (Khare et al., 2021; Vanderreydt et al.,
2022; Guillaume et al., 2022b). With the success of the Kaldi toolkit, researchers began to collect and freely distribute their own Kaldi-ready datasets for under-resourced and endangered languages, several of which are explored in this paper (Gauthier et al., 2016; Laleye et al., 2016; Gelas et al., 2012; Juan et al., 2015; Pulugundla et al., 2018). More recent work has explored training monolingual endto-end models with substantially larger datasets than those used here (Shi et al., 2021), as well as transfer learning and fine-tuning from pretrained multilingual (Guillaume et al., 2022a; Sikasote and Anastasopoulos, 2022) or English models (Thai et al., 2020).
## 3 Datasets
Five of the datasets explored here are freely available datasets built by researchers, sometimes in collaboration with speech communities, specifically for training ASR models for widely spoken but under-resourced languages of the global South: Bemba (Sikasote and Anastasopoulos, 2022), Fongbe
(Laleye et al., 2016), Wolof (Gauthier et al., 2016),
Swahili (Gelas et al., 2012), and Iban (Juan et al., 2014, 2015). Three datasets (Quechua, Bribri, Guarani) were created from existing recordings for the 2022 AmericasNLP Workshop Shared Task 1.
The remaining datasets for three endangered languages of North America (Hupa, Oneida, and Seneca) were created using existing linguistic and community fieldwork recordings available to the authors through the affiliation of one of the authors with one of these communities and the generosity of the community elders.
While nearly any recorded speech can be transcribed and used to train an ASR system, a common approach for building a new ASR dataset is to ask speakers of the language to read aloud provided texts, which obviates the laborious task of transcription. With this strategy, speakers are often recorded in a studio or similarly controlled environment, resulting in more consistent recording quality.
Alternatively, datasets can be created from existing audio data such as radio broadcasts or linguistic fieldwork recordings. Such recordings are often already transcribed but need to be segmented and time-aligned with the transcripts, which must often be done by hand. Table 1 provides details about these sorts of characteristics of the datasets, as well as information about the quantity of the training data for the acoustic and language models.

1http://turing.iimas.unam.mx/americasnlp/2022_st.html
Information about the linguistic characteristics of the eleven languages is provided in Table 2.
Seven of these languages are widely spoken by millions of people, and some have institutional or government recognition; one is endangered with around 7,000 speakers; and three are critically endangered with very few (perhaps only one, in the case of Hupa) first-language speakers and no more than a hundred second language learners. A diverse set of morphological, phonological, and phonetic features and properties are represented among these languages, and we note that they are all quite different typologically from most high-resource languages, including not only English and Chinese but also the major European languages.
## 4 Asr Architectures
The goal of this work is to explore whether any one of several popular and state-of-the-art ASR
architectures is especially well suited for building models with small amounts of training data. We train models on the the eleven datasets described in Section 3 using four different architectures:
- A hybrid DNN (Veselý et al., 2013) implemented within the Kaldi toolkit (Povey et al., 2011), following Karel's DNN recipe2, which uses a variety of feature optimizations including RBM pretraining, frame cross-entropy training, and MBR sequence-discriminative training. Decoding was performed with a trigram language model.
- A transducer-based end-to-end model for small datasets within ESPnet2 (Watanabe et al., 2018), following the recipe for Yoloxochitl Mixtec (Shi et al., 2021).
- Fine-tuning from a multilingual acoustic model using Wav2Vec2 XLSR-53 (Conneau et al., 2020), decoding both with and without a trigram language model and using the parameterizations specified in the Hugging Face Wav2Vec XLSR-53 tutorial.3
- Fine-tuning from the medium multilingual acoustic model with Whisper (Radford et al., 2022), using the parameterizations specified in the Hugging Face Whisper tutorial.4

2https://kaldi-asr.org/doc/dnn1.html
3https://huggingface.co/blog/fine-tune-xlsr-wav2vec2
4https://huggingface.co/blog/fine-tune-whisper

Training and testing were carried out on a university high-performance computing cluster. Training times ranged between 2 and 24 hours depending on the architecture and dataset.
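As a rough sketch of the fine-tuning approach, training a CTC head on top of the XLSR-53 checkpoint follows the cited Hugging Face tutorial; the vocabulary file, data pipeline, and hyperparameter values below are placeholders and not the exact configuration used in our experiments.

```python
# Rough sketch of CTC fine-tuning from the multilingual XLSR-53 checkpoint, following the
# Hugging Face tutorial cited above. The vocabulary file, data pipeline (omitted) and
# hyperparameter values are placeholders, not the exact configuration used in our experiments.
from transformers import (Wav2Vec2CTCTokenizer, Wav2Vec2FeatureExtractor, Wav2Vec2Processor,
                          Wav2Vec2ForCTC, TrainingArguments, Trainer)

tokenizer = Wav2Vec2CTCTokenizer("vocab.json", unk_token="[UNK]", pad_token="[PAD]",
                                 word_delimiter_token="|")
feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000,
                                             padding_value=0.0, do_normalize=True,
                                             return_attention_mask=True)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)

model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53",
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),  # new CTC head sized to the target language's character set
)
model.freeze_feature_encoder()  # keep the convolutional feature encoder frozen during fine-tuning

args = TrainingArguments(output_dir="xlsr53-finetuned", per_device_train_batch_size=8,
                         num_train_epochs=30, learning_rate=3e-4, fp16=True)

# trainer = Trainer(model=model, args=args, train_dataset=train_set, eval_dataset=dev_set,
#                   data_collator=ctc_data_collator, tokenizer=processor.feature_extractor)
# trainer.train()
```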
## 5 Results
Figure 1 shows the word error rates (WER) for four of the five approaches (Kaldi DNN, Wav2Vec XLSR with and without a language model (LM),
and Whisper) when trained and tested on each of the eleven datasets. Note that prior baselines reported in the papers associated with the datasets for Wolof, Swahili, Fongbe, Hupa, and Iban, using non-s.o.t.a. architectures, and Bemba, using a slightly different configuration of Wav2Vec XLSR,
are lower than the best reported architecture here. No prior WER results have been reported for the Oneida, Quechua, Bribri, and Guarani datasets.
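WER here is the standard word error rate, i.e. the number of word substitutions, deletions, and insertions needed to turn the hypothesis into the reference, divided by the reference length; the toy snippet below shows how it can be computed with the jiwer package and is not our evaluation script.

```python
# Toy illustration of how word error rate is computed (not the evaluation script used here).
import jiwer

reference = "no single architecture consistently outperforms any other"
hypothesis = "no single architecture consistently outperforms an other"

print(jiwer.wer(reference, hypothesis))  # 1 substitution / 7 reference words ~= 0.14
```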
We observe a large variation in WER across languages, which should not be surprising given the great variability in the quantity of training data, the type and audio quality of data collected, and the linguistic features of these languages. Datasets of less than 3 hours had consistently high WERs, but across the other datasets, there does not appear to be a clear relationship between amount of audio training data and WER. Though not shown in Figure 1, ESPnet yielded the worst performance by far for all languages, with only Wolof, the second largest dataset, achieving a WER below 65%.
Again, this is not surprising given that this ESPnet recipe (Shi et al., 2021) was proposed for a much larger 60-hour indigenous language dataset.
More interestingly, we see no consistent ranking of the remaining four approaches across the eleven datasets. Using an LM during decoding with Wav2Vec XLSR always yields some improvement in WER over not using an LM, but the differences are often quite small. Notably, Swahili, which has the largest LM, sees only a tiny reduction in WER
when that LM is used during decoding. The Kaldi hybrid DNN, despite being outdated, outperforms more than one of its state of the art rivals for Seneca, Fongbe, Iban, and Quechua. Whisper is dramatically better than other models for Wolof and Hupa, but substantially worse for Fongbe and Quechua.

![4_image_0.png](4_image_0.png)

Figure 1: Word error rates (WER) for the four approaches on each of the eleven datasets.
Though closely related and typologically similar, Seneca and Oneida show very different patterns, as do Fongbe and Wolof, two related languages with datasets recorded under similar conditions. The WER for Swahili is relatively stable across architectures, while WER is quite variable for Wolof, Hupa, Fongbe, and Oneida.
The rankings do not appear to be related to the method of speech collection (read vs. spontaneous)
or the consistency of audio quality. In addition, whether or not a language is tonal, like Bemba, Fongbe, and Bribri, does not appear to predict the relative rankings of the four architectures.
We do note, however, two potential patterns, which merit further investigation with a larger set of languages. First, Fongbe, the only language of the eleven with isolating morphology (i.e., limited affixation), is one of only two languages where Whisper yielded the highest WER of the four systems. Second, the three languages with the largest phonesets, Wolof, Swahili, and Hupa, yielded the same relative ranking, with Whisper performing the best and Kaldi the worst. Although there is certainly not enough information here to draw conclusions, it is plausible that the design of a particular training architecture or the content of the pretrained models could render a system more appropriate for a language with a particular linguistic property.
## 6 Conclusions
Under-resourced language communities, whether large or small, need to know how to invest their limited resources when developing an ASR system for their language. Our findings suggest, unfortunately, that there are no obvious or simple guidelines to follow. Our future work will expand the set of languages explored here in order to establish connections between expected model performance and linguistic features and dataset characteristics. We also plan to explore the impact of language model size and domain on ASR accuracy and the relationship between language model and morphology.
## Limitations
One limitation of this work is that we have included results for only eleven languages. Training ASR
models, even on small datasets, requires significant computing and financial resources. A second limitation is that there are not many freely available and well-prepared ASR datasets that are readily compatible with all four ASR architectures. We sought to select a diverse set of languages and datasets with varying features in order to provide, we hope, a reasonable snapshot of how the state of the art performs in low-resource settings.
## Ethics Statement
The Hupa, Oneida, and Seneca datasets were recorded with the approval of participating universities' IRBs and with the enthusiastic cooperation of the elders and other linguistic consultants.
The datasets for the remaining languages were downloaded from public Web pages. The Bribri dataset, like those of other endangered languages, was created using linguistic fieldwork recordings.
Of the others, some were collected by recruiting participants to read text (Wolof, Fongbe, Bemba, Guarani); others consist of transcribed radio and television broadcasts (Iban, Quechua); and the Swahili dataset includes both types of data. While the participants who provided recordings by reading text presumably gave consent for their voices to be used for ASR research, it is unlikely that speakers recorded in the course of a radio or television broadcast provided consent explicitly for their voices to be used in an ASR dataset. We expect, however, given that members of the speech community participated in these data collection projects, that ethical concerns were carefully considered.
## Acknowledgements
We are grateful for the continued support from the Hupa and Seneca indigenous communities. We would like to especially thank Mrs. Verdena Parker, of the Hoopa Valley Tribe, and Mrs. Sandy Dowdy, of the Seneca Nation of Indians, for their generous and valuable input and support. This material is based upon work supported by the National Science Foundation under Grant \#2127309 to the Computing Research Association and Grant \#1761562.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or the Computing Research Association.
## References
Dario Amodei, Sundaram Ananthanarayanan, Rishita Anubhai, Jingliang Bai, Eric Battenberg, et al. 2016.
Deep speech 2 : End-to-end speech recognition in english and mandarin. In Proceedings of The 33rd International Conference on Machine Learning, volume 48, pages 173–182.
Rosana Ardila, Megan Branson, Kelly Davis, Michael Kohler, Josh Meyer, Michael Henretty, Reuben Morais, Lindsay Saunders, Francis Tyers, and Gregor Weber. 2020. Common voice: A massivelymultilingual speech corpus. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4218–4222.
Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations.
In *Advances in Neural Information Processing Systems*, volume 33, pages 12449–12460. Curran Associates, Inc.
Laurent Besacier, Etienne Barnard, Alexey Karpov, and Tanja Schultz. 2014. Automatic speech recognition for under-resourced languages: A survey. Speech Communication, 56:85–100.
Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, and Michael Auli. 2020.
Unsupervised cross-lingual representation learning for speech recognition. arXiv preprint arXiv:2006.13979.
Xiaodong Cui, Brian Kingsbury, Jia Cui, Bhuvana Ramabhadran, Andrew Rosenberg, Mohammad Sadegh Rasooli, Owen Rambow, Nizar Habash, and Vaibhava Goel. 2014. Improving deep neural network acoustic modeling for audio corpus indexing under the IARPA Babel program. In Fifteenth Annual Conference of the International Speech Communication Association.
David M Eberhard, Gary F. Simons, and Charles D.
Fennig. 2022. *Ethnologue: Languages of the World.*
Twenty-fifth edition. SIL International.
Elodie Gauthier, Laurent Besacier, Sylvie Voisin, Michael Melese, and Uriel Pascal Elingui. 2016. Collecting resources in sub-Saharan African languages for automatic speech recognition: a case study of Wolof. In Proceedings of the Tenth International Conference on Language Resources and Evaluation
(LREC'16), pages 3863–3867, Portorož, Slovenia.
European Language Resources Association (ELRA).
Hadrien Gelas, Laurent Besacier, and Francois Pellegrino. 2012. Developments of Swahili resources for an automatic speech recognition system. In SLTU
- Workshop on Spoken Language Technologies for Under-Resourced Languages, Cape-Town, Afrique Du Sud.
John J Godfrey, Edward C Holliman, and Jane McDaniel. 1992. Switchboard: Telephone speech corpus for research and development. In Acoustics, Speech, and Signal Processing, IEEE International Conference on, volume 1, pages 517–520. IEEE Computer Society.
Frantisek Grézl, Martin Karafiát, and Karel Vesely. 2014.
Adaptation of multilingual stacked bottle-neck neural network structure for new language. In *Acoustics, Speech and Signal Processing (ICASSP), 2014* IEEE International Conference on, pages 7654–7658.
IEEE.
Séverine Guillaume, Guillaume Wisniewski, Cécile Macaire, Guillaume Jacques, Alexis Michaud, Benjamin Galliot, Maximin Coavoux, Solange Rossato, Minh-Châu Nguyên, and Maxime Fily. 2022a. Finetuning pre-trained models for automatic speech recognition, experiments on a fieldwork corpus of japhug
(trans-himalayan family). In *Proceedings of the Fifth* Workshop on the Use of Computational Methods in the Study of Endangered Languages, pages 170–178, Dublin, Ireland. Association for Computational Linguistics.
Séverine Guillaume, Guillaume Wisniewski, Benjamin Galliot, Minh-Châu Nguyên, Maxime Fily, Guillaume Jacques, and Alexis Michaud. 2022b. Plugging a neural phoneme recognizer into a simple language model: a workflow for low-resource setting.
In *Proc. Interspeech 2022*, pages 4905–4909.
Harald Hammarström, Robert Forkel, Martin Haspelmath, and Sebastian Bank. 2022. *Glottolog 4.7*. Max Planck Institute for Evolutionary Anthropology.
Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N
Sainath, et al. 2012. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. *IEEE Signal processing* magazine, 29(6):82–97.
Sarah Samson Juan, Laurent Besacier, Benjamin Lecouteux, and Mohamed Dyab. 2015. Using resources from a closely-related language to develop asr for a very under-resourced language: A case study for iban. In *Proceedings of INTERSPEECH*, Dresden, Germany.
Sarah Samson Juan, Laurent Besacier, and Solange Rossato. 2014. Semi-supervised G2P bootstrapping and its application to ASR for a very under-resourced language: Iban. In Proceedings of Workshop for Spoken Language Technology for Under-resourced
(SLTU).
Shreya Khare, Ashish Mittal, Anuj Diwan, Sunita Sarawagi, Preethi Jyothi, and Samarth Bharadwaj.
2021. Low Resource ASR: The Surprising Effectiveness of High Resource Transliteration. In *Proc.*
Interspeech 2021, pages 1529–1533.
Frejus A. A. Laleye, Laurent Besacier, Eugene C. Ezin, and Cina Motamed. 2016. First Automatic Fongbe Continuous Speech Recognition System: Development of Acoustic Models and Language Models. In Federated Conference on Computer Science and Information Systems.
Yajie Miao, Florian Metze, and Shourabh Rawat. 2013.
Deep maxout networks for low-resource speech recognition. In *Automatic Speech Recognition and* Understanding (ASRU), 2013 IEEE Workshop on, pages 398–403. IEEE.
Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: an ASR
corpus based on public domain audio books. In *2015* IEEE international conference on acoustics, speech and signal processing (ICASSP), pages 5206–5210.
IEEE.
Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, et al. 2011. The Kaldi speech recognition toolkit. In IEEE 2011 workshop on automatic speech recognition and understanding, CONF. IEEE Signal Processing Society.
Bhargav Pulugundla, Murali Karthick Baskar, Santosh Kesiraju, Ekaterina Egorova, Martin Karafiát, Lukáš Burget, and Jan Černocký. 2018. BUT System for
Low Resource Indian Language ASR. In The Annual Conference of the International Speech Communication Association (Interspeech), pages 3182–3186.
Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2022.
Robust speech recognition via large-scale weak supervision. *arXiv preprint arXiv:2212.04356*.
Jiatong Shi, Jonathan D. Amith, Rey Castillo García, Esteban Guadalupe Sierra, Kevin Duh, and Shinji Watanabe. 2021. Leveraging end-to-end ASR for endangered language documentation: An empirical study on yolóxochitl Mixtec. In *Proceedings of the* 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1134–1145, Online. Association for Computational Linguistics.
Claytone Sikasote and Antonios Anastasopoulos. 2022.
BembaSpeech: A speech recognition corpus for the Bemba language. In *Proceedings of the Thirteenth* Language Resources and Evaluation Conference, pages 7277–7283, Marseille, France. European Language Resources Association.
Bao Thai, Robert Jimerson, Raymond Ptucha, and Emily Prud'hommeaux. 2020. Fully convolutional asr for less-resourced endangered languages. In Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages
(SLTU) and Collaboration and Computing for UnderResourced Languages (CCURL), pages 126–130.
Samuel Thomas, Michael L Seltzer, Kenneth Church, and Hynek Hermansky. 2013. Deep neural network features and semi-supervised training for low resource speech recognition. In *Acoustics, Speech* and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 6704–6708. IEEE.
Geoffroy Vanderreydt, François REMY, and Kris Demuynck. 2022. Transfer Learning from MultiLingual Speech Translation Benefits Low-Resource Speech Recognition. In *Proc. Interspeech 2022*,
pages 3053–3057.
Karel Veselý, Arnab Ghoshal, Lukáš Burget, and Daniel
Povey. 2013. Sequence-discriminative training of deep neural networks. In *Interspeech*, pages 2345–
2349.
Shinji Watanabe, Takaaki Hori, Shigeki Karita, Tomoki Hayashi, Jiro Nishitoba, Yuya Unno, Nelson Enrique Yalta Soplin, Jahn Heymann, Matthew Wiesner, Nanxin Chen, Adithya Renduchintala, and Tsubasa Ochiai. 2018. ESPnet: End-to-End Speech Processing Toolkit. In *Proceedings of Interspeech*, pages 2207–2211.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Second to last section.
✓ A2. Did you discuss any potential risks of your work?
Ethics section, last section.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
✓ B1. Did you cite the creators of artifacts you used?
If by "artifacts" you mean "datasets", then yes, they are all cited when they are first mentioned.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
In the Ethics section we mention that we downloaded some datasets that are publicly available. We also discuss the artifacts that we used that are not publicly available but were shared by indigenous communities with the authors.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Ethics section.
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Our own data from indigenous communities was collected under our IRBs. The other data was downloaded from OpenSLR. We explain in the Ethics section that we assume that data was collected ethically but we cannot confirm it ourselves.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Extensively in the data section and ethics sections of our paper.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
## C ✓ **Did You Run Computational Experiments?**
Section 4, I think.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Vaguely in section 4.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4, 5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
We use speech data that was collected and transcribed as part of earlier projects, some by us and some by other groups.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Not applicable. Left blank.
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Ethics section
✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Ethics D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
orlikowski-etal-2023-ecological | The Ecological Fallacy in Annotation: Modeling Human Label Variation goes beyond Sociodemographics | https://aclanthology.org/2023.acl-short.88 | Many NLP tasks exhibit human label variation, where different annotators give different labels to the same texts. This variation is known to depend, at least in part, on the sociodemographics of annotators. Recent research aims to model individual annotator behaviour rather than predicting aggregated labels, and we would expect that sociodemographic information is useful for these models. On the other hand, the ecological fallacy states that aggregate group behaviour, such as the behaviour of the average female annotator, does not necessarily explain individual behaviour. To account for sociodemographics in models of individual annotator behaviour, we introduce group-specific layers to multi-annotator models. In a series of experiments for toxic content detection, we find that explicitly accounting for sociodemographic attributes in this way does not significantly improve model performance. This result shows that individual annotation behaviour depends on much more than just sociodemographics. |

# The Ecological Fallacy In Annotation: Modelling Human Label Variation Goes Beyond Sociodemographics

Matthias Orlikowski1, Paul Röttger2, Philipp Cimiano1, and Dirk Hovy3
1Bielefeld University 2University of Oxford 3Computing Sciences Department, Bocconi University, Milan, Italy
## Abstract
Many NLP tasks exhibit human label variation, where different annotators give different labels to the same texts. This variation is known to depend, at least in part, on the sociodemographics of annotators. Recent research aims to model individual annotator behaviour rather than predicting aggregated labels, and we would expect that sociodemographic information is useful for these models. On the other hand, the ecological fallacy states that aggregate group behaviour, such as the behaviour of the *average* female annotator, does not necessarily explain individual behaviour. To account for sociodemographics in models of individual annotator behaviour, we introduce group-specific layers to multi-annotator models. In a series of experiments for toxic content detection, we find that explicitly accounting for sociodemographic attributes in this way does not significantly improve model performance. This result shows that individual annotation behaviour depends on much more than just sociodemographics.
## 1 **Introduction**
Different annotators will not necessarily assign the same labels to the same texts, resulting in human label variation (Plank, 2022). Previous work finds that this variation depends at least in part on the sociodemographics of annotators, such as their age and gender (Binns et al., 2017; Al Kuwatly et al., 2020; Excell and Al Moubayed, 2021; Shen and Rose, 2021). These results are particularly pronounced for subjective tasks like toxic content detection (Sap et al., 2019; Kumar et al., 2021; Sap et al., 2022; Goyal et al., 2022). Since human label variation is relevant to a wide range of NLP
tasks, recent research has begun to model individual annotator behaviour, rather than predicting aggregated labels (Davani et al., 2022; Gordon et al.,
2022). In this setting, we would expect sociodemographic attributes to help explain annotator decisions. Therefore, we investigate **whether explicitly accounting for the sociodemographic attributes of annotators leads to better predictions of their annotation behaviour**.1

![0_image_0.png](0_image_0.png)

Figure 1: Group-specific layers representing annotator sociodemographics in multi-annotator models.
There is a risk of misreading these efforts as an example of the *ecological fallacy*: aggregate group behaviour does not necessarily explain individual behaviour (Robinson, 1950; Freedman, 2015). For example, while on average, white annotators may be more likely to label African-American Vernacular English as toxic (Sap et al., 2019), that does not mean it is true for *every* white annotator individually. However, it is exactly this distinction that we target when discussing the relevance of sociodemographic groups in models of individual annotator behaviour. Likewise, we do not assume that prior work commits ecological fallacies, even if a less nuanced reading might suggest it.
Davani et al. (2022) introduce a simple multi-annotator model, where each annotator is modelled with a separate classification head. We expand their model with *group-specific* layers, which are activated for each annotator based on their sociodemographic attributes. We compare the two model setups to a control setup where we randomise group assignments. All comparisons use annotator-level toxicity data from Kumar et al. (2021). We find that explicitly accounting for sociodemographic attributes does not significantly improve model performance. This result suggests that human label variation happens at a more individual level than sociodemographics, and that annotator decisions are even more complex.

1 Code to run our experiments and analyses is available at https://github.com/morlikowski/ecological-fallacy
Contributions 1) We introduce group-specific layers to model groups of annotators with shared attributes in multi-annotator models. 2) We evaluate the effect of group-specific layers for toxic content detection, and show that explicitly accounting for sociodemographic attributes does not significantly improve performance, thus highlighting the risk of the ecological fallacy in annotator modelling.
As a corollary, we show that multi-annotator models can be applied to many times more annotators than in prior work.
## 2 **Related Work**
Sociodemographics in Annotation Behaviour A growing body of research studies how annotator sociodemographics relate to their annotation decisions, for tasks ranging from natural language inference (Biester et al., 2022) to the detection of racist (Larimore et al., 2021) or generally toxic
(Sap et al., 2022) language. Goyal et al. (2022),
for example, find that annotators from certain sociodemographic groups (e.g., LGBTQ people) tend to find content attacking their own groups (e.g.,
homophobic content) to be more toxic. This motivates our research into explicitly accounting for sociodemographics to model annotation behaviour.
However, the link between sociodemographics and behaviour is not uncontested. Biester et al. (2022),
for example, do not find significant differences in annotation behaviour between annotators of different genders for four different tasks.

Predicting Annotators' Decisions on Text Different from analyses of annotation behaviour, a recent line of research attempts to learn models based on individual annotations (Plank et al., 2014; Jamison and Gurevych, 2015; Akhtar et al., 2020; Fornaciari et al., 2021; Cercas Curry et al., 2021).
These models are motivated by the concern that aggregating labels into a single "truth" is too simplistic for many tasks (Uma et al., 2021; Basile et al., 2021) and might introduce uneven representation of perspectives (Prabhakaran et al., 2021; Abercrombie et al., 2022).
A particular way of learning from disaggregated labels are models that predict individual annotator decisions for an example. Our work builds directly on such a model, multi-annotator models (Davani et al., 2022), which we describe in more detail separately (§4). Gordon et al. (2022) present a model which also predicts individual annotations and allows a user to interactively aggregate them based on "a jury" inspired by the US judicial system.
Their work is similar to ours in central aspects as they explicitly model annotators' sociodemographics and use the same dataset as we do (Kumar et al.,
2021). Different from our work, they frame the task as a regression problem and develop a model based on recommender systems. While they also explore ecological fallacies, they focus on usage risks of their system and countermeasures. In contrast, we consider the issue of the ecological fallacy in modelling annotation behaviour more generally.
We compare our findings to their results (§6).
## 3 **Data**
We use a sample of the Kumar et al. (2021) dataset for our experiments. The full dataset contains 107,620 English comments from Twitter, Reddit, and 4Chan, annotated for toxicity by 17,280 annotators. The annotation process encouraged annotator subjectivity (Röttger et al., 2022) which is a desired feature for modelling annotator behaviour.
For each annotator, there is extensive sociodemographic information, collected with a survey. Annotations are given as ratings on a five-point scale which we convert to binary annotations by mapping ratings of 2 to 4 to *toxic*, and ratings 0 and 1 to *non-toxic*.
We randomly sample comments from the dataset until we reach annotations from more than 5,000 annotators. We then add all other annotations by these annotators. This approach maximizes the number of examples while controlling the number of annotators in our sample.
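A minimal sketch of this preprocessing with pandas is shown below. The file path and column names (comment_id, worker_id, rating) are illustrative and do not necessarily match the released data format.

```python
# Sketch of the preprocessing described above: binarize the five-point ratings and
# sample comments until the annotator pool exceeds 5,000 annotators, then keep all
# annotations by those annotators. Paths and column names are placeholders.
import pandas as pd

annotations = pd.read_json("toxicity_ratings.jsonl", lines=True)  # placeholder path

# Ratings 2-4 -> toxic (1), ratings 0-1 -> non-toxic (0).
annotations["toxic"] = (annotations["rating"] >= 2).astype(int)

shuffled_comments = pd.Series(annotations["comment_id"].unique()).sample(frac=1.0, random_state=0)
annotators = set()
for comment_id in shuffled_comments:
    annotators.update(annotations.loc[annotations["comment_id"] == comment_id, "worker_id"])
    if len(annotators) > 5000:
        break

# Keep *all* annotations by the sampled annotators, not only the sampled comments.
sample = annotations[annotations["worker_id"].isin(annotators)]
```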
Our final sample contains 111,780 annotations from 5,002 annotators on 22,360 comments with 20 to 120 annotations per annotator (mean 22.35).
Most comments have five annotations. 20 comments have four because we removed any underage annotators before sampling. In total 78,357 annotations (70.10%) are toxic, and 33,423 annotations
(29.90%) are non-toxic.
We focus on four sociodemographic attributes:
gender, age, education, and sexual orientation. Group sizes vary by attribute. For gender, 2,450 annotators (48.98%) identify as female, 2,116
(42.30%) as male, 23 (0.46%) as non-binary (rest in residual categories, full statistics in A.1).
## 4 **Experiments**
We compare three models. The **baseline** model is the multi-annotator model by Davani et al. (2022).
We use their multi-task variant: For each annotator, there is a separate classification layer trained on annotations from that annotator. All annotator layers share a pre-trained language model used to encode the input. We use RoBERTa (Liu et al.,
2019) for this, motivated by computational constraints. The other models in our experiments build on this baseline model.
For the **sociodemographic** models, we add group-specific layers based on sociodemographic attributes of the annotators. A single attribute, e.g., age, implies several groups, e.g., *ages 25-*
34, *ages 35-44*. We add the group-specific layers between the pre-trained model and the annotator layers. Each group of annotators shares a separate group-specific layer. We implement group-specific layers as fully-connected, linear layers, each learning a feature transformation applied for one group of annotators.
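For illustration, a minimal PyTorch sketch of this architecture follows. The first-token pooling, the single-annotator forward pass, and the absence of dropout are simplifications for the example rather than a faithful reproduction of our implementation (see the released code for details).

```python
# Sketch of a multi-annotator model with group-specific layers (Figure 1):
# a shared encoder, one linear layer per sociodemographic group, and one
# classification head per annotator. Sizes follow Appendix A.3 (hidden size 768,
# two labels); everything else is simplified for illustration.
import torch.nn as nn
from transformers import AutoModel

class GroupedMultiAnnotatorModel(nn.Module):
    def __init__(self, n_annotators, n_groups, annotator_to_group,
                 encoder_name="roberta-base", hidden_size=768, n_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)  # shared pre-trained encoder
        self.group_layers = nn.ModuleList(
            [nn.Linear(hidden_size, hidden_size) for _ in range(n_groups)])
        self.annotator_heads = nn.ModuleList(
            [nn.Linear(hidden_size, n_labels) for _ in range(n_annotators)])
        self.annotator_to_group = annotator_to_group  # e.g. {annotator_id: group_id}

    def forward(self, input_ids, attention_mask, annotator_id):
        # First-token representation as a simple pooled encoding of the comment.
        pooled = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state[:, 0]
        features = self.group_layers[self.annotator_to_group[annotator_id]](pooled)
        return self.annotator_heads[annotator_id](features)  # annotator-specific logits
```

The baseline corresponds to dropping the group-specific transformation and feeding the pooled encoding directly to the annotator heads.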
Finally, for the **random** models, we shuffle the assignment of annotators to groups from the sociodemographic model, retaining the relative group sizes. In other words, the probability of each annotator staying in the same group or being reassigned to another group corresponds to the relative size of each group. This approach keeps the model architecture constant while removing the connection between actual sociodemographic attributes and group assignment. It allows us to distinguish the effects of additional parameters, which groupspecific layers add in comparison to the baseline, from the effects of sociodemographic information.
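A minimal sketch of this shuffling step, assuming the annotator-to-group mapping is a dictionary as in the sketch above:

```python
# Permute the existing group labels across annotators: every group keeps exactly
# its original size, and each annotator's chance of landing in a group is
# proportional to that group's relative size.
import random

def randomize_groups(annotator_to_group, seed=0):
    annotators = list(annotator_to_group.keys())
    groups = list(annotator_to_group.values())
    random.Random(seed).shuffle(groups)
    return dict(zip(annotators, groups))
```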
## 4.1 **Evaluation Setup**
We evaluate all models on individual annotations from gender, age, education, and sexual orientation groups. This setup is comparable to the "individual label" evaluations in Davani et al. (2022) and Gordon et al. (2022), but with scores calculated per group of annotators. We measure performance in macro-average F1, to weigh each class equally.
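A sketch of this per-group scoring; the DataFrame layout and column names (gold, prediction, and a column holding the annotator's group) are illustrative.

```python
# Macro F1 per sociodemographic group over individual annotator-level predictions.
from sklearn.metrics import f1_score

def group_macro_f1(results, group_column="gender"):
    # results: one row per (comment, annotator) pair with gold and predicted labels.
    return {group: f1_score(rows["gold"], rows["prediction"], average="macro")
            for group, rows in results.groupby(group_column)}
```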
Cross-Validation As there is no standard split available for our dataset, we perform three iterations of a four-fold cross-validation with different seeds (training details in Appendix A.3). We choose four folds, so that even very small groups have more than a hundred annotations in each test set. Across folds, the numbers of annotations per sociodemographic group are similar (see Appendix A.4). We construct test sets that only contain comments unseen by the annotators in the training set.
We also ensure that all test sets have similar proportions of toxic or non-toxic comments (assigned by the majority of annotators) to address the class imbalance in the dataset (70.62% toxic, see §3).
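A sketch of how such splits can be produced with scikit-learn, splitting at the comment level and stratifying by the comments' majority labels; the DataFrame layout follows the illustrative sampling sketch in Section 3.

```python
# Comment-level folds: every annotation of a comment falls into the same fold,
# and folds are stratified by the comment's majority label to keep similar
# toxic/non-toxic proportions across test sets.
import pandas as pd
from sklearn.model_selection import StratifiedKFold

def comment_level_folds(annotations: pd.DataFrame, seed: int, n_splits: int = 4):
    majority = (annotations.groupby("comment_id")["toxic"].mean()
                .ge(0.5).astype(int).reset_index(name="majority_toxic"))
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in skf.split(majority["comment_id"], majority["majority_toxic"]):
        train_ids = set(majority.iloc[train_idx]["comment_id"])
        test_ids = set(majority.iloc[test_idx]["comment_id"])
        yield (annotations[annotations["comment_id"].isin(train_ids)],
               annotations[annotations["comment_id"].isin(test_ids)])

# Three cross-validation runs, one fixed seed per run (Appendix A.3):
# for seed in (2803636207, 165043843, 2923262358):
#     for train_df, test_df in comment_level_folds(annotations, seed):
#         ...  # train one multi-annotator model per fold and evaluate per group
```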
Statistical Significance We test for statistical significance of our results from multiple runs of k-fold cross-validation via replicability analysis
(Dror et al., 2017). We report the number of significant folds and the Bonferroni-corrected count (Dror et al., 2018) in Appendix A.2. We compute the p-values for each fold via a paired bootstrap-sampling test with BooStSa (Fornaciari et al., 2022). We set the significance level α = 0.05, draw 1000 bootstrap samples per fold, and use a sample size of 50% of the respective test set.
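For intuition, a generic paired bootstrap test for one fold might look as follows; we used the BooStSa implementation in practice, and this sketch does not reproduce its interface.

```python
# Paired bootstrap test for a macro-F1 difference between two systems on one fold,
# following the standard recipe described by Dror et al. (2018). This is an
# illustrative re-implementation, not the BooStSa API.
import numpy as np
from sklearn.metrics import f1_score

def paired_bootstrap_p(y_true, pred_a, pred_b, n_samples=1000, sample_frac=0.5, seed=0):
    rng = np.random.default_rng(seed)
    y_true, pred_a, pred_b = map(np.asarray, (y_true, pred_a, pred_b))
    observed = (f1_score(y_true, pred_a, average="macro")
                - f1_score(y_true, pred_b, average="macro"))
    size = int(sample_frac * len(y_true))
    exceed = 0
    for _ in range(n_samples):
        idx = rng.integers(0, len(y_true), size=size)  # resample with replacement
        delta = (f1_score(y_true[idx], pred_a[idx], average="macro")
                 - f1_score(y_true[idx], pred_b[idx], average="macro"))
        if delta > 2 * observed:  # system A "wins by luck" on this resample
            exceed += 1
    return exceed / n_samples  # significant at alpha = 0.05 if the returned p < 0.05
```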
Remarks on Groups Annotators from different groups of the same attribute will in most cases not have annotated the same examples. Therefore, comparisons between models are only meaningful within each group.
The groups modeled via group-specific layers and those in the result tables are always the same.
For example, if we report scores for gender groups, then the sociodemographic and randomized models are also based on gender groups. In the following, we focus on a subset of groups, omitting, e.g., "Prefer not to say" (see Appendix A.5).
## 5 **Results**
Table 1 shows the results for gender, age, education, and sexual orientation. A naive majority class baseline that predicts all input to be toxic performs worse than all other models by a large margin
(exact results in Appendix A.5).
Sociodemographics vs. Baseline Across attributes, the average scores of the sociodemographic model and the baseline are similar. The sociodemographic model often has a slightly higher average macro F1 than the baseline, but no statistically significant gains. Where average performance is better by several points, as for homosexual annotators, this gain is offset by a large variance in performance (a consequence of small group sizes).
Sociodemographics vs. Random We also do not find significant performance differences between sociodemographic group-layer models and the corresponding random group assignment models. For most groups, the randomized models achieve the highest average scores, but differences to the sociodemographic model are never statistically significant.
Table 1: Macro F1 (mean ± standard deviation over cross-validation folds) per annotator group for the baseline, sociodemographic (Soc-Dem.), and random models.

| Group | Baseline | Soc-Dem. | Random |
|---|---|---|---|
| **Gender** | | | |
| Male | 68.00±0.49 | 67.66±0.46 | 67.63±0.53 |
| Female | 62.23±0.53 | 62.25±1.19 | 62.41±0.92 |
| Nonbinary | 56.33±6.00 | 56.80±7.24 | 58.00±7.49 |
| **Age** | | | |
| 18 - 24 | 59.39±1.58 | 60.44±1.05 | 60.52±1.37 |
| 25 - 34 | 66.72±0.56 | 66.63±0.83 | 66.92±0.51 |
| 35 - 44 | 64.50±0.59 | 64.94±1.33 | 65.24±0.89 |
| 45 - 54 | 65.68±0.66 | 65.88±1.39 | 65.98±0.83 |
| 55 - 64 | 64.37±1.22 | 64.94±1.66 | 64.84±1.30 |
| 65 or older | 63.34±2.07 | 64.70±2.21 | 62.77±2.39 |
| **Education** | | | |
| Associate degree | 60.69±1.44 | 60.54±2.35 | 60.78±1.62 |
| Bachelor's degree | 66.16±0.51 | 66.23±0.82 | 66.80±0.54 |
| Doctoral degree | 61.93±3.82 | 63.79±5.03 | 63.27±3.67 |
| High school | 60.53±1.39 | 60.47±2.22 | 60.55±1.87 |
| Below high school | 58.28±4.68 | 62.12±4.90 | 60.17±4.25 |
| Master's degree | 69.71±0.86 | 69.58±0.93 | 69.45±0.96 |
| Professional degree | 66.75±2.37 | 67.84±3.32 | 68.62±2.84 |
| College, no degree | 58.65±1.19 | 59.40±1.79 | 59.99±2.19 |
| **Sexuality** | | | |
| Bisexual | 71.83±1.14 | 71.42±1.51 | 69.46±1.95 |
| Heterosexual | 63.25±0.39 | 63.32±1.21 | 63.82±0.55 |
| Homosexual | 64.43±1.75 | 66.11±2.20 | 65.12±1.94 |
## 6 **Discussion**
We do not find strong evidence that explicitly modelling sociodemographics helps to predict annotation behaviour with multi-annotator models. These results might seem counter-intuitive, given the evidence of systematic annotation differences between sociodemographic groups (see §2). This discrepancy, however, echoes the issue highlighted by ecological fallacies (Robinson, 1950): Not every annotator will be a perfect representative of their group, so we will not necessarily learn additional information based on their group identity. This seems especially true if we already have access to individual behaviour (i.e., individual annotations).
In contrast to Davani et al. (2022), we made sociodemographic information explicit in our experiments, as one of the factors influencing annotation behaviour. Group-specific layers can be seen as an inductive bias putting emphasis on the sociodemographic relations between annotators. However, there are potentially many other factors influencing annotation behaviour (e.g., attitudes, moral values, cognitive biases, psychological traits). In light of our results, it seems plausible that multi-annotator models learn about these factors implicitly as part of predicting individual behaviour, so that making one factor explicit does not change prediction quality, at least in the case of sociodemographics.
Still, we also know that generally group attributes can help predict individual decisions, i.e.,
as base rates or priors. To avoid ecological fallacies in modelling annotation, we therefore need to better understand when and how modelling sociodemographic information is useful in predicting an individual annotator's decisions. For example, we have only evaluated group-specific layers for single attributes. In contrast, social scientists have long adopted the idea of intersectionality (Crenshaw, 1989), which also informs research on fairness in machine learning (Wang et al., 2022). Intersectionality means that the effect of interactions between sociodemographic attributes enables specific experiences that are not captured by the attributes in isolation. For example, identifying as a man means something different depending on the person's education. Groups derived from single attributes might simply be too coarse to improve classifiers learnt from individual labels, as in multi-annotator models.
The dataset we use (Kumar et al., 2021) has many characteristics which are ideal for our study
(see §3). However, it uses a broad notion of toxicity, in contrast to other studies of toxic language
(Larimore et al., 2021; Sap et al., 2022), which match content and analysed groups. When modeling the groups frequently referenced in the datasets themselves, we would expect greater benefits from group-specific layers. Similar to us, Biester et al.
(2022), who do not find significant differences between annotators of different genders, also do so in a more general setting.
We can only partially compare to Gordon et al.
(2022), despite using the same dataset. In addition to differences in approach (see §2), our and their work also differ in their research questions and thus experimental conditions. Gordon et al.
(2022) compare their full model (group and individual) against using *group* information alone.
We compare our full model (group and individual)
against using *individual* information alone. So it is unclear if their model would benefit from group information in comparison to individual-level information alone. While they find an improvement from group information, it is only in comparison to a baseline that predicts aggregated rather than individual labels. Additionally, the composition of test sets sampled from the full dataset differs between the studies: Gordon et al. (2022) use a test set of 5,000 comments, while we use 22,360 comments in a four-fold cross-validation. We leave an explicit comparison to future work.
Group-specific layers (§4) are a natural extension of annotator-specific classification layers in multi-annotator models. However, other architectures to predict annotator-level labels use different ways to represent sociodemographic information, e.g., via embeddings in a recommender system
(Gordon et al., 2022). Future work could explore additional representations of annotator attributes
(e.g., as part of the input, either textual or as separate features) and other approaches to modelling the relation of individual labeling decisions and attributes (e.g., probabilistic graphical models).
## 7 **Conclusion**
We ask how relevant modelling explicit sociodemographic information is in learning from individual annotators. Our experiments with group-specific layers for four sociodemographic attributes on social media data with toxicity annotations (Kumar et al., 2021) show no significant benefit of modelling sociodemographic groups in multi-annotator models. As the ecological fallacy highlights, it is plausible that these models learn no additional information from group membership beyond the individual variation they already capture.
However, our results do not refute the usefulness of sociodemographic attributes in modelling annotation, but underscore the importance of their judicious use. Different tasks and model architectures will likely benefit to different extents. Ultimately, annotation behaviour is driven by complex factors and we will need to consider more than annotators' sociodemographics.
## Acknowledgements
We thank Deepak Kumar for providing access to the disaggregated dataset and his continued support. We also thank Aida Mostafazadeh Davani for providing information on implementation details of multi-annotator models. Members of MilaNLP (Bocconi) and the Semantic Computing Group (Bielefeld) provided feedback on earlier versions of this paper, for which we thank them again.
This work has in part been funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 949944, INTEGRATOR). Likewise, this work has in part been funded by the VolkswagenStiftung as part of the "3B Bots Building Bridges" project.
## Limitations
While the dataset by Kumar et al. (2021) enabled us to test models for a range of often overlooked groups (e.g., non-binary or bisexual annotators),
we ultimately modelled only four specific attributes
(gender, age, education, sexual orientation). There are likely to be more factors that could play a role.
Additionally, annotators in the Kumar et al. (2021)
dataset are exclusively from the United States of America, so that results do not necessarily hold for other countries or cultures (Hovy and Yang, 2021).
Specifically perceptions of harmful content online are known to vary across countries (Jiang et al.,
2021).
We used only the (Kumar et al., 2021) dataset.
This is mainly due to our strict criteria regarding dataset size and availability of annotator-level labels and sociodemographic information. These characteristics were a prerequisite for our experiments across different attributes with sufficient numbers of annotators. Most datasets which include annotator-level labels and sociodemographic information contain much smaller numbers of annotators and attributes. Nevertheless, with the *Measuring Hate Speech Corpus* there is at least one additional dataset (Sachdeva et al., 2022) with comparable characteristics that could be used in future experiments. Also, additional small-scale, more focused experiments could use datasets like Sap et al.
(2022) or *HS-Brexit* (Akhtar et al., 2021) which was annotated by 6 annotators, each from one of two sociodemographic groups.
We do not study the aggregation of individual predictions or evaluate against majority labels, as these are not directly relevant to our investigation of sociodemographic attributes in models of annotation behaviour. Consequently, we cannot derive a conclusion about performance in those settings from our results. This is a noteworthy limitation, because part of the experiments introducing multi-annotator models in Davani et al. (2022) compare labels aggregated from multi-annotator models against predictions from a standard classifier
(directly trained on aggregated labels).
For computational reasons, our experiments use a comparatively small pre-trained language model
(RoBERTa, Liu et al. 2019). Thus, results might differ with larger models.
## Ethics Statement
As sociodemographic attributes are sensitive information, we do not infer attributes, but build on a self-reported, IRB-reviewed dataset (Kumar et al.,
2021). We also see potential for a discussion of
"privacy by design" in modelling human label variation based on our results: There can be circumstances in which knowing more about annotators is not relevant, and indeed might lead to violations of privacy.
As multi-annotator models attempt to capture the preferences of individual annotators, there are valid concerns around privacy and anonymity. As discussed in Davani et al. (2022), increasing the annotator count can be one option to reduce privacy risks. We show it is feasible to learn a model for a large number of individual annotators (5002 vs.
18 and 82 in their work). But a prerequisite for improved privacy is to apply effective aggregation on top of individual predictions, which we do not study in the present work.
## References
Gavin Abercrombie, Valerio Basile, Sara Tonelli, Verena Rieser, and Alexandra Uma, editors. 2022. *Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022*. European Language Resources Association, Marseille, France.
Sohail Akhtar, Valerio Basile, and Viviana Patti. 2020.
Modeling annotator perspective and polarized opinions to improve hate speech detection. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, volume 8, pages 151–154.
Sohail Akhtar, Valerio Basile, and Viviana Patti. 2021.
Whose opinions matter? perspective-aware models to identify opinions of hate speech victims in abusive language detection. Preprint arXiv:2106.15896.
Hala Al Kuwatly, Maximilian Wich, and Georg Groh.
2020. Identifying and measuring annotator bias based on annotators' demographic characteristics. In Proceedings of the Fourth Workshop on Online Abuse and Harms, pages 184–190, Online. Association for Computational Linguistics.
Valerio Basile, Michael Fell, Tommaso Fornaciari, Dirk Hovy, Silviu Paun, Barbara Plank, Massimo Poesio, and Alexandra Uma. 2021. We need to consider disagreement in evaluation. In Proceedings of the 1st Workshop on Benchmarking: Past, Present and Future, pages 15–21, Online. Association for Computational Linguistics.
Laura Biester, Vanita Sharma, Ashkan Kazemi, Naihao Deng, Steven Wilson, and Rada Mihalcea. 2022. Analyzing the effects of annotator gender across NLP
tasks. In *Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022*, pages 10–19, Marseille, France. European Language Resources Association.
Reuben Binns, Michael Veale, Max Van Kleek, and Nigel Shadbolt. 2017. Like trainer, like bot? inheritance of bias in algorithmic content moderation.
In *Social Informatics*, Lecture Notes in Computer Science, pages 405–415. Springer International Publishing.
Amanda Cercas Curry, Gavin Abercrombie, and Verena Rieser. 2021. ConvAbuse: Data, analysis, and benchmarks for nuanced abuse detection in conversational AI. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 7388–7403, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Kimberle Crenshaw. 1989. Demarginalizing the intersection of race and sex: A black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics. *University of Chicago Legal Forum*,
1989(1):Article 8.
Aida Mostafazadeh Davani, Mark Díaz, and Vinodkumar Prabhakaran. 2022. Dealing with disagreements:
Looking beyond the majority vote in subjective annotations. *Transactions of the Association for Computational Linguistics*, 10:92–110.
Rotem Dror, Gili Baumer, Marina Bogomolov, and Roi Reichart. 2017. Replicability analysis for natural language processing: Testing significance with multiple datasets. *Transactions of the Association for* Computational Linguistics, 5:471–486.
Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker's guide to testing statistical significance in natural language processing.
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1383–1392, Melbourne, Australia. Association for Computational Linguistics.
Elizabeth Excell and Noura Al Moubayed. 2021. Towards equal gender representation in the annotations of toxic language detection. In Proceedings of the 3rd Workshop on Gender Bias in Natural Language Processing, pages 55–65, Online. Association for Computational Linguistics.
Tommaso Fornaciari, Alexandra Uma, Silviu Paun, Barbara Plank, Dirk Hovy, and Massimo Poesio. 2021.
Beyond black & white: Leveraging annotator disagreement via soft-label multi-task learning. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2591–2597, Online. Association for Computational Linguistics.
Tommaso Fornaciari, Alexandra Uma, Massimo Poesio, and Dirk Hovy. 2022. Hard and soft evaluation of NLP models with BOOtSTrap SAmpling - BooStSa.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 127–134, Dublin, Ireland. Association for Computational Linguistics.
David A. Freedman. 2015. Ecological inference. In James D. Wright, editor, International Encyclopedia of the Social & Behavioral Sciences (Second Edition), pages 868–870. Elsevier.
Mitchell L. Gordon, Michelle S. Lam, Joon Sung Park, Kayur Patel, Jeff Hancock, Tatsunori Hashimoto, and Michael S. Bernstein. 2022. Jury learning: Integrating dissenting voices into machine learning models.
In *Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems*, CHI '22, pages 1–19. Association for Computing Machinery.
Nitesh Goyal, Ian D. Kivlichan, Rachel Rosen, and Lucy Vasserman. 2022. Is your toxicity my toxicity? exploring the impact of rater identity on toxicity annotation. Proceedings of the ACM on Human-Computer Interaction, 6:1–28.
Dirk Hovy and Diyi Yang. 2021. The importance of modeling social factors of language: Theory and practice. In *Proceedings of the 2021 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 588–602, Online. Association for Computational Linguistics.
Emily Jamison and Iryna Gurevych. 2015. Noise or additional information? leveraging crowdsource annotation item agreement for natural language tasks.
In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pages 291–297, Lisbon, Portugal. Association for Computational Linguistics.
Jialun Aaron Jiang, Morgan Klaus Scheuerman, Casey Fiesler, and Jed R. Brubaker. 2021. Understanding international perceptions of the severity of harmful content online. *PLOS ONE*, 16(8).
Deepak Kumar, Patrick Gage Kelley, Sunny Consolvo, Joshua Mason, Elie Bursztein, Zakir Durumeric, Kurt Thomas, and Michael Bailey. 2021. Designing toxic content classification for a diversity of perspectives.
In *Seventeenth Symposium on Usable Privacy and* Security (SOUPS 2021), pages 299–318. USENIX
Association.
Savannah Larimore, Ian Kennedy, Breon Haskett, and Alina Arseniev-Koehler. 2021. Reconsidering annotator disagreement about racist language: Noise or signal? In *Proceedings of the Ninth International* Workshop on Natural Language Processing for Social Media, pages 81–90, Online. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. Preprint arXiv:1907.11692.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*,
12:2825–2830.
Barbara Plank. 2022. The "problem" of human label variation: On ground truth in data, modeling and evaluation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10671–10682, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Barbara Plank, Dirk Hovy, and Anders Søgaard. 2014.
Learning part-of-speech taggers with inter-annotator agreement loss. In *Proceedings of the 14th Conference of the European Chapter of the Association for* Computational Linguistics, pages 742–751, Gothenburg, Sweden. Association for Computational Linguistics.
Vinodkumar Prabhakaran, Aida Mostafazadeh Davani, and Mark Diaz. 2021. On releasing annotator-level labels and information in datasets. In *Proceedings of* the Joint 15th Linguistic Annotation Workshop (LAW)
and 3rd Designing Meaning Representations (DMR)
Workshop, pages 133–138, Punta Cana, Dominican Republic. Association for Computational Linguistics.
W. S. Robinson. 1950. Ecological correlations and the behavior of individuals. *American Sociological Review*, 15(3):351–357.
Paul Röttger, Bertie Vidgen, Dirk Hovy, and Janet Pierrehumbert. 2022. Two contrasting data annotation paradigms for subjective NLP tasks. In *Proceedings* of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 175–190, Seattle, United States. Association for Computational Linguistics.
Pratik Sachdeva, Renata Barreto, Geoff Bacon, Alexander Sahn, Claudia von Vacano, and Chris Kennedy.
2022. The measuring hate speech corpus: Leveraging rasch measurement theory for data perspectivism.
In Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022, pages 83–94, Marseille, France. European Language Resources Association.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. In 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing @ NeurIPS 2019.
Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The risk of racial bias in hate speech detection. In *Proceedings of the 57th* Annual Meeting of the Association for Computational Linguistics, pages 1668–1678, Florence, Italy. Association for Computational Linguistics.
Maarten Sap, Swabha Swayamdipta, Laura Vianna, Xuhui Zhou, Yejin Choi, and Noah A. Smith. 2022.
Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5884–5906, Seattle, United States. Association for Computational Linguistics.
Qinlan Shen and Carolyn Rose. 2021. What sounds
"right" to me? experiential factors in the perception of political ideology. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1762–1771, Online. Association for Computational Linguistics.
Alexandra N. Uma, Tommaso Fornaciari, Dirk Hovy, Silviu Paun, Barbara Plank, and Massimo Poesio. 2021. Learning from disagreement: A survey.
Journal of Artificial Intelligence Research, 72:1385–
1470.
Angelina Wang, Vikram V Ramaswamy, and Olga Russakovsky. 2022. Towards intersectionality in machine learning: Including more identities, handling underrepresentation, and performing evaluation. In 2022 ACM Conference on Fairness, Accountability, and Transparency, FAccT '22, pages 336–349. Association for Computing Machinery.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
## A **Appendix**

## A.1 **Annotator Sociodemographics in Sample**
In the Kumar et al. (2021) dataset, sociodemographic attributes are given for each individual annotation - not once per annotator. For some annotators, conflicting attribute values exist (e.g.,
two different age groups). As the data collection spanned several months (Kumar et al., 2021),
these value changes can in principle be reasonable
(e.g., because an annotator got older, finished a degree, changed sexual preference or gender identity).
However, as reasonable changes cannot easily be discerned from erroneous input, we disambiguate values based on a heuristic: If an annotator reports several values for an attribute, we assume the most frequent value to be valid. In cases of no clear most frequent value, we set the attribute to "Prefer not to say". Thus, the main results do not contain annotators with ambiguous attributes.
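A minimal sketch of this heuristic:

```python
# Pick the most frequent self-reported value per annotator; if there is no unique
# most frequent value, fall back to "Prefer not to say".
from collections import Counter

def disambiguate(values):
    counts = Counter(values).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return "Prefer not to say"
    return counts[0][0]

# disambiguate(["25 - 34", "25 - 34", "35 - 44"])  -> "25 - 34"
# disambiguate(["Male", "Female"])                 -> "Prefer not to say"
```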
## A.2 **Significance Tests**

Tables 3 and 4 report the results of a replicability analysis (Dror et al., 2017) testing for significant differences in macro F1, based on scores from three runs of four-fold cross-validation.
Table 3 shows results for a comparison of the sociodemographic models against the *baseline* models. Table 4 shows results for a comparison of the sociodemographic models against the *randomized* assignment models. The Bonferroni correction for the corrected count of significant folds k̂_Bonferroni is used to account for the fact that we have overlapping test sets from multiple runs of four-fold cross-validation.
## A.3 **Training Details, Hyperparameters and Computational Resources**

We implement models and the training loop using the Hugging Face Transformers library (version 4.19.2, Wolf et al. 2020). Maximum sequence length is 512 tokens, with truncation and padding to the maximum length. We train for 3 epochs with a batch size of 8 and an initial learning rate of 0.00001. Otherwise, we used default parameters. We found results to particularly depend on the learning rate, with higher or lower values leading to worse results.
We use a weighted loss function. Label weights are calculated per annotator on the training set of each fold. Label weights, evaluation scores and the four-fold dataset splits (StratifiedKFold) are calculated using the scikit-learn library (version 1.0.2, Pedregosa et al. 2011). The folds are based on a fixed random seed per iteration: 2803636207, 165043843, 2923262358.
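A sketch of the per-annotator weight computation; the "balanced" weighting heuristic and the column names are assumptions for illustration.

```python
# Per-annotator class weights on a fold's training annotations, e.g. for a weighted
# cross-entropy loss per annotator head. The "balanced" heuristic is an assumption.
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

def per_annotator_weights(train_df):
    weights = {}
    for annotator_id, rows in train_df.groupby("worker_id"):
        labels = rows["toxic"].to_numpy()
        classes = np.unique(labels)
        weights[annotator_id] = dict(
            zip(classes, compute_class_weight("balanced", classes=classes, y=labels)))
    return weights
```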
Table 2 shows how many annotators the sample contains. Counts are given per group of the four attributes gender, age, education and sexuality.

| Group | Number of Annotators |
|---|---|
| **Gender** | |
| Female | 2450 |
| Male | 2116 |
| Prefer not to say | 412 |
| Nonbinary | 23 |
| Other | 1 |
| **Age** | |
| 18 - 24 | 489 |
| 25 - 34 | 1861 |
| 35 - 44 | 1115 |
| 45 - 54 | 529 |
| 55 - 64 | 321 |
| 65 or older | 119 |
| Prefer not to say | 568 |
| **Sexuality** | |
| Heterosexual | 4018 |
| Bisexual | 469 |
| Prefer not to say | 346 |
| Homosexual | 134 |
| Other | 35 |
| **Education** | |
| Bachelor's degree | 1879 |
| College, no degree | 861 |
| Prefer not to say | 647 |
| Master's degree | 642 |
| Associate degree | 460 |
| High school | 363 |
| Professional degree | 68 |
| Doctoral degree | 51 |
| Below high school | 25 |
| Other | 6 |
The majority of parameters in our model belong to the pre-trained language model shared between all group-specific and annotator-specific layers. Specifically, RoBERTa (Liu et al., 2019) in the roberta-base variant has 125 Million parameters.
We keep the pre-trained model's default output dimensionality of 768, so that each group-specific layer adds 768 × 768 + 768 = 590,592 parameters and each annotator layer adds 768 × 2 + 2 = 1,538 parameters.
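These counts can be checked with standard linear layers, assuming the group- and annotator-specific layers are plain affine maps (which is what the arithmetic above implies):

```python
import torch.nn as nn

group_layer = nn.Linear(768, 768)     # 768 * 768 weights + 768 biases
annotator_layer = nn.Linear(768, 2)   # 768 * 2 weights + 2 biases

print(sum(p.numel() for p in group_layer.parameters()))      # 590592
print(sum(p.numel() for p in annotator_layer.parameters()))  # 1538
```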
All experiments ran on a single GPU (GeForce GTX 1080 Ti, 12GB GPU RAM). Per fold, training and evaluation together take about three and a half hours in our setting. Three runs of four-fold cross-validation (12 folds) thus take around 42 hours (1.75 days).
![8_image_0.png](8_image_0.png)
![8_image_1.png](8_image_1.png)
| Gender | k̂_count | k̂_Bonf. |
|---|---|---|
| Nonbinary | 1 | 0 |

| Age | k̂_count | k̂_Bonf. |
|---|---|---|
| 18 - 24 | 2 | 0 |
| 25 - 34 | 2 | 0 |
| 35 - 44 | 1 | 0 |
| 45 - 54 | 0 | 0 |
| 55 - 64 | 1 | 0 |
| 65 or older | 1 | 0 |

| Sexuality | k̂_count | k̂_Bonf. |
|---|---|---|
| Bisexual | 2 | 0 |
| Heterosexual | 4 | 2 |
| Homosexual | 1 | 0 |

| Education | k̂_count | k̂_Bonf. |
|---|---|---|
| Associate degree | 0 | 0 |
| Bachelor's degree | 1 | 0 |
| Doctoral degree | 2 | 0 |
| High school | 0 | 0 |
| Below high school | 0 | 0 |
| Master's degree | 0 | 0 |
| Professional degree | 0 | 0 |
| College, no degree | 2 | 2 |
With four attributes and three trainable models, the combined run time of the reported experiments is estimated to be 21 days. Including preliminary experiments, which mostly were not full runs of k-fold cross-validation and also utilized DistilBERT (Sanh et al., 2019) with slightly faster run times, the total run time is many times higher.
There is no discernible difference in experiment run times between multi-annotator models with or without groups or different numbers of groups.
## A.4 **Number Of Annotations Per Group Across All Test Sets**
Table 5 contains the number of annotations we have per group across the total of 12 folds (from three runs of four-fold cross-validation). This number of annotations is the effective test set size per group.
As the numbers do not vary substantially, performance on each fold is equally representative for all groups.
![9_image_1.png](9_image_1.png)
| Gender | k̂_count | k̂_Bonf. |
|---|---|---|
| Female | 2 | 2 |
| Male | 1 | 0 |
| Nonbinary | 1 | 0 |

| Age | k̂_count | k̂_Bonf. |
|---|---|---|
| 18 - 24 | 1 | 0 |
| 25 - 34 | 0 | 0 |
| 35 - 44 | 1 | 0 |
| 45 - 54 | 1 | 0 |
| 55 - 64 | 3 | 0 |
| 65 or older | 1 | 0 |

| Sexuality | k̂_count | k̂_Bonf. |
|---|---|---|
| Bisexual | 6 | 2 |
| Heterosexual | 1 | 1 |
| Homosexual | 0 | 0 |

| Education | k̂_count | k̂_Bonf. |
|---|---|---|
| Associate degree | 2 | 0 |
| Bachelor's degree | 1 | 0 |
| Doctoral degree | 0 | 0 |
| High school | 2 | 0 |
| Below high school | 2 | 0 |
| Master's degree | 0 | 0 |
| Professional degree | 0 | 0 |
| College, no degree | 1 | 1 |
![9_image_0.png](9_image_0.png)
## A.5 **Full Results**
Table 6 shows the full results of the experiments (see Section 4), including results for all residual categories and a naive baseline which always predicts *toxic*.
| Gender | Number of Annotations | Min | Max |
|---|---|---|---|
| Female | 13555±86.44 | 13383.0 | 13664.0 |
| Male | 11925±61.65 | 11843.0 | 12062.0 |
| Nonbinary | 115±6.03 | 104.0 | 122.0 |
| Other | 5±1.95 | 2.0 | 8.0 |
| Prefer not to say | 2345±51.19 | 2281.0 | 2453.0 |

| Age | Number of Annotations | Min | Max |
|---|---|---|---|
| 18 - 24 | 2615±50.88 | 2521 | 2697 |
| 25 - 34 | 10315±61.45 | 10244 | 10457 |
| 35 - 44 | 6250±51.06 | 6179 | 6324 |
| 45 - 54 | 3025±47.23 | 2929 | 3083 |
| 55 - 64 | 1865±25.48 | 1831 | 1903 |
| 65 or older | 675±19.31 | 643 | 704 |
| Prefer not to say | 3200±55.28 | 3131 | 3289 |

| Sexuality | Number of Annotations | Min | Max |
|---|---|---|---|
| Bisexual | 2445±39.26 | 2383 | 2501 |
| Heterosexual | 22630±63.00 | 22507 | 22726 |
| Homosexual | 725±26.57 | 670 | 759 |
| Other | 190±7.91 | 173 | 201 |
| Prefer not to say | 1955±35.39 | 1878 | 2009 |

| Education | Number of Annotations | Min | Max |
|---|---|---|---|
| Associate degree | 2605±47.59 | 2516 | 2697 |
| Bachelor's degree | 10510±84.79 | 10348 | 10700 |
| Doctoral degree | 305±18.83 | 270 | 332 |
| High school | 2080±37.01 | 2015 | 2139 |
| Below high school | 165±11.17 | 144 | 184 |
| Master's degree | 3515±48.08 | 3425 | 3580 |
| Other | 30±3.44 | 25 | 36 |
| Prefer not to say | 3690±52.92 | 3603 | 3808 |
| Professional degree | 380±17.87 | 352 | 411 |
| College, no degree | 4665±71.36 | 4539 | 4776 |
| Gender | Majority Baseline | Baseline | Soc-Dem. | Random |
|---|---|---|---|---|
| Female | 41.79±0.12 | 62.23±0.53 | 62.25±1.19 | 62.41±0.92 |
| Male | 40.53±0.11 | 68.00±0.49 | 67.66±0.46 | 67.63±0.53 |
| Nonbinary | 44.69±1.39 | 56.33±6.00 | 56.80±7.24 | 58.00±7.49 |
| Other | 45.50±4.69 | 48.56±10.78 | 50.53±14.63 | 43.66±7.25 |
| Prefer not to say | 41.05±0.36 | 64.54±1.13 | 65.05±1.52 | 65.08±1.86 |

| Age | Majority Baseline | Baseline | Soc-Dem. | Random |
|---|---|---|---|---|
| 18 - 24 | 42.49±0.28 | 59.39±1.58 | 60.44±1.05 | 60.52±1.37 |
| 25 - 34 | 40.49±0.09 | 66.72±0.56 | 66.63±0.83 | 66.92±0.51 |
| 35 - 44 | 41.87±0.15 | 64.50±0.59 | 64.94±1.33 | 65.24±0.89 |
| 45 - 54 | 40.63±0.26 | 65.68±0.66 | 65.88±1.39 | 65.98±0.83 |
| 55 - 64 | 41.65±0.39 | 64.37±1.22 | 64.94±1.66 | 64.84±1.30 |
| 65 or older | 41.46±0.54 | 63.34±2.07 | 64.70±2.21 | 62.77±2.39 |
| Prefer not to say | 41.37±0.32 | 63.99±1.32 | 65.24±1.18 | 64.73±1.33 |

| Education | Majority Baseline | Baseline | Soc-Dem. | Random |
|---|---|---|---|---|
| Associate degree | 43.16±0.19 | 60.69±1.44 | 60.54±2.35 | 60.78±1.62 |
| Bachelor's degree | 40.38±0.10 | 66.16±0.51 | 66.23±0.82 | 66.80±0.54 |
| Doctoral degree | 43.34±0.94 | 61.93±3.82 | 63.79±5.03 | 63.27±3.67 |
| High school | 43.02±0.26 | 60.53±1.39 | 60.47±2.22 | 60.55±1.87 |
| Below high school | 43.10±1.44 | 58.28±4.68 | 62.12±4.90 | 60.17±4.25 |
| Master's degree | 37.55±0.32 | 69.71±0.86 | 69.58±0.93 | 69.45±0.96 |
| Other | 42.95±2.31 | 56.56±10.88 | 57.59±9.86 | 57.71±12.28 |
| Prefer not to say | 40.97±0.27 | 65.07±1.16 | 65.69±1.05 | 65.74±1.09 |
| Professional degree | 40.43±0.80 | 66.75±2.37 | 67.84±3.32 | 68.62±2.84 |
| College, no degree | 43.61±0.18 | 58.65±1.19 | 59.40±1.79 | 59.99±2.19 |

| Sexuality | Majority Baseline | Baseline | Soc-Dem. | Random |
|---|---|---|---|---|
| Bisexual | 34.69±0.50 | 71.83±1.14 | 71.42±1.51 | 69.46±1.95 |
| Heterosexual | 41.99±0.06 | 63.25±0.39 | 63.32±1.21 | 63.82±0.55 |
| Homosexual | 41.15±0.41 | 64.43±1.75 | 66.11±2.20 | 65.12±1.94 |
| Other | 43.53±0.78 | 57.55±3.79 | 60.57±4.51 | 58.69±4.72 |
| Prefer not to say | 39.12±0.24 | 67.80±1.56 | 67.27±1.52 | 67.46±1.11 |
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations, 8
✓ A2. Did you discuss any potential risks of your work?
Ethics Statement, 9
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 3, Appendix A.3
✓ B1. Did you cite the creators of artifacts you used?
3, Appendix A.3
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Clear from context, citations
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Clear from context, citations
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
3, Ethics Statement 9
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
3, Appendix A.1
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
3, 4, Appendix A.4
## C ✓ **Did You Run Computational Experiments?** 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A.3
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
No response.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
No response.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix A.3

## D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
bhargava-penn-2023-decomposed | Decomposed scoring of {CCG} dependencies | https://aclanthology.org/2023.acl-short.89 | In statistical parsing with CCG, the standard evaluation method is based on predicate-argument structure and evaluates dependencies labelled in part by lexical categories. When a predicate has multiple argument slots that can be filled, the same lexical category is used for the label of multiple dependencies. In this paper, we show that this evaluation can result in disproportionate penalization of supertagging errors and obfuscate the truly erroneous dependencies. Enabled by the compositional nature of CCG lexical categories, we propose *decomposed scoring* based on subcategorial labels to address this. To evaluate our scoring method, we engage fellow categorial grammar researchers in two English-language judgement tasks: (1) directly ranking the outputs of the standard and experimental scoring methods; and (2) determining which of two sentences has the better parse in cases where the two scoring methods disagree on their ranks. Overall, the judges prefer decomposed scoring in each task; but there is substantial disagreement among the judges in 24{\%} of the given cases, pointing to potential issues with parser evaluations in general. |
## Decomposed Scoring Of Ccg Dependencies
Aditya Bhargava and **Gerald Penn**
Department of Computer Science, University of Toronto, Toronto, ON, Canada M5S 3G4
{aditya, gpenn}@cs.toronto.edu
## Abstract
In statistical parsing with ccg, the standard evaluation method is based on predicate-argument structure and evaluates dependencies labelled in part by lexical categories. When a predicate has multiple argument slots that can be filled, the same lexical category is used for the label of multiple dependencies. In this paper, we show that this evaluation can result in disproportionate penalization of supertagging errors and obfuscate the truly erroneous dependencies. Enabled by the compositional nature of ccg lexical categories, we propose *decomposed scoring* based on subcategorial labels to address this.
To evaluate our scoring method, we engage fellow categorial grammar researchers in two English-language judgement tasks: (1) directly ranking the outputs of the standard and experimental scoring methods; and (2) determining which of two sentences has the better parse in cases where the two scoring methods disagree on their ranks. Overall, the judges prefer decomposed scoring in each task; but there is substantial disagreement among the judges in 24% of the given cases, pointing to potential issues with parser evaluations in general.
## 1 Introduction
With a suitably designed architecture, combinatory categorial grammar (ccg) supertaggers can learn to better maintain syntagmatic consistency, adjusting for their own errors to keep the sentence parsable
(Vaswani et al., 2016; Bhargava and Penn, 2020). This kind of adjustment, however, comes at the expense of its evaluated word accuracy, which is the prevailing evaluation measure for supertagging.
The standard, final *parser* evaluation is no kinder in such cases: it examines induced bilexical dependencies, but these are labelled (in part) by the lexical category assigned to the head word (Clark and Hockenmaier, 2002). If the category is incorrect, its outgoing dependencies are considered incorrect. (The code for decomposed ccg scoring is available online at https://www.cs.toronto.edu/~aditya/ccgds.)
While other areas of nlp such as natural language generation and machine translation have recently warmed to efforts to validate their intrinsic evaluations against human judgements (e.g., Novikova et al., 2017; Reiter, 2018), this has not been the case so far with statistical parsing. This is likely because evaluating the quality of a syntactic parse requires grammatical expertise.
In this paper, we examine ccg parser evaluation and identify a number of cases where the standard ccg scoring method gives undesirable results. We address these shortcomings by introducing *decomposed scoring*. To evaluate our new method, we elicit judgements from categorial grammar (cg) experts in two pairwise selection tasks using Englishlanguage data from CCGbank (Hockenmaier and Steedman, 2007). In the first, intrinsic task, the judges are directly asked which of the two scoring methods they prefer for a given sentence. We find that they prefer decomposed scoring in 90% of the cases presented. In the second, extrinsic task, judges are given two *different* sentences, each with an erroneous parse, where the scoring methods disagree about which parse should be ranked higher. Here, we find that the judges do not have majority agreement in 24% of the cases presented; but where they do reach majority consensus, they agree with decomposed scoring in 62% of cases. The high disagreement raises important questions about statistical parser evaluations, which we discuss.
## 2 Background
Following Clark and Hockenmaier (2002), the standard evaluation measure for CCGbank-based ccg parsers is F₁ over bilexical dependencies. Each dependency represents a predicate-argument relationship as indicated by the corresponding lexical categories. A (labelled) ccg dependency is defined as a 4-tuple d = (ℎp, ℎa, c, s) where:
(Figure 1: gold and predicted ccg dependencies for the running example "I believe in the system"; the gold analysis uses np, (s\np)/pp, pp/np, np/n, n, while the parser analyzes *in* as an adjunct with category ((s\np)\(s\np))/np instead of the complement analysis with (s\np)/pp and pp/np.)
- ℎp is the head word token of the predicate;
- ℎa is the head word token of the argument;
- c is the lexical category of the predicate; and
- s is the predicate's argument slot number that is filled by the argument.
Given an input sentence, the set of corresponding ground-truth dependencies 𝒟G from CCGbank, and a candidate set of dependencies 𝒟C from a parser, the candidate dependencies are evaluated according to the F₁ score between the two sets:
$$\mathrm{F_{1}}({\mathcal{D}}_{\mathrm{G}},{\mathcal{D}}_{\mathrm{C}})={\frac{2|{\mathcal{D}}_{\mathrm{G}}\cap{\mathcal{D}}_{\mathrm{C}}|}{|{\mathcal{D}}_{\mathrm{G}}|+|{\mathcal{D}}_{\mathrm{C}}|}}$$
A dependency d ∈ 𝒟G ∪ 𝒟C is considered correct if and only if d ∈ 𝒟G ∩ 𝒟C. In computing |𝒟G ∩ 𝒟C|, individual dependency elements are compared for equality. Formally, for a given dependency d = (ℎp, ℎa, c, s), let dh = (ℎp, ℎa), dc = c, and ds = s. Then 𝒟G ∩ 𝒟C is:
$${\mathcal{D}}_{\mathrm{G}}\cap{\mathcal{D}}_{\mathrm{C}}=\left\{(g,c)\left|\begin{array}{l}{{g\in{\mathcal{D}}_{\mathrm{G}}\wedge c\in{\mathcal{D}}_{\mathrm{C}}}}\\ {{\wedge\,g_{\mathrm{h}}=c_{\mathrm{h}}}}\\ {{\wedge\,g_{\mathrm{c}}=c_{\mathrm{c}}}}\\ {{\wedge\,g_{\mathrm{s}}=c_{\mathrm{s}}}}\end{array}\right.\right\}\quad(1)$$
This dependency-based measure directly evaluates the parser's ability to produce the intended predicate-argument structure. Models that analyze sentences with different derivations than the one provided by the corpus will not be penalized unless the derivation alters the semantics—i.e., this evaluation is invariant to spurious ambiguities.
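As a concrete reference point, the standard scoring reduces to exact set intersection over dependency tuples. A minimal sketch follows, with dependencies as 4-tuples of head-word indices, category strings and slot numbers; the word indices for the running example are our own illustration.

```python
def dependency_f1(gold, cand):
    """Labelled dependency F1: a dependency (h_p, h_a, c, s) counts as correct
    only if all four elements match exactly (Equation 1)."""
    correct = len(set(gold) & set(cand))
    return 2 * correct / (len(gold) + len(cand))

# "I(1) believe(2) in(3) the(4) system(5)": gold vs. erroneous adjunct analysis.
gold = {(2, 1, r"(s\np)/pp", 1), (2, 3, r"(s\np)/pp", 2),
        (3, 5, r"pp/np", 1), (4, 5, r"np/n", 1)}
cand = {(2, 1, r"s\np", 1), (3, 2, r"((s\np)\(s\np))/np", 2),
        (3, 5, r"((s\np)\(s\np))/np", 3), (4, 5, r"np/n", 1)}
print(dependency_f1(gold, cand))  # only the np/n dependency matches exactly -> 0.25
```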
## 3 Decomposed Dependency Scoring
In this paper, we do not take issue with the treatment of the head words ℎp and ℎa, but rather with that of the lexical category, c, and argument slot number, s. We alter how ccg dependency labels are compared so that a dependency's correctness no longer requires the entire lexical category to be correct, and we allow (judicious) flexibility in valid values for the slot number.
As these modifications are dependent on subcategorial decompositions of the lexical category labels, we term the overall approach **decomposed scoring**.
## 3.1 Subcategorial Labelling
Requiring predicted lexical categories to be *fully* equal to the ground truth can cause errors to be over-penalized. In the example shown in Figure 1, the parser makes a complement-adjunct confusion error (a common parser pathology): the complement *in* is mistakenly analyzed as an adjunct. While this error is directly indicated by the erroneous dependencies between the verb and its complement, the standard scoring method "delocalizes" the error, marking 75% of the dependencies as erroneous.
To address this, we propose **subcategorial labelling** of ccg dependencies: instead of the entire lexical category, only the subcategory corresponding to the argument slot is used for comparison. To define this more formally, we first define a function argₙ(x) that extracts the subcategory for argument slot n from category x:
$$\arg_{n}(x)={\begin{cases}x&{\mathrm{if}}\,n={\mathrm{arity}}(x)=0,\\ z&{\mathrm{if}}\,x=(r|z)\wedge n={\mathrm{arity}}(x),\\ \arg_{n}(r)&{\mathrm{if}}\,x=(r|z)\wedge n<{\mathrm{arity}}(x),\end{cases}}$$
where arity(x) is the number of arguments that x takes before yielding its target category and | is a variable that ranges over the categorial slash operators {/, \}. For example, arg₁(s/(s\np)) = s\np.
Thus, subcategorial labelling replaces $g_\mathrm{c} = c_\mathrm{c}$ from Equation 1 with $\arg_{g_\mathrm{s}}(g_\mathrm{c}) = \arg_{c_\mathrm{s}}(c_\mathrm{c})$. Returning to our example, subcategorial labelling allows the verb-subject dependency (from *believe* to I) to be marked correct, as shown in Figure 2.
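A small sketch of argₙ and the resulting comparison, assuming lexical categories have already been parsed into nested (result, slash, argument) triples; the category parser itself is omitted.

```python
# Atomic categories are strings ("np", "s", ...); complex categories are
# (result, slash, argument) triples, e.g. s\np -> ("s", "\\", "np").

def arity(x):
    return 0 if isinstance(x, str) else 1 + arity(x[0])

def arg_n(x, n):
    # arg_n(x): the subcategory filling argument slot n of category x.
    if n == arity(x) == 0:
        return x
    result, _slash, z = x
    return z if n == arity(x) else arg_n(result, n)

s_np = ("s", "\\", "np")          # s\np
gold_cat = (s_np, "/", "pp")      # (s\np)/pp  (gold category for "believe")
pred_cat = s_np                   # s\np       (the /pp argument was dropped)

# The verb-subject dependency fills slot 1 in both categories and its
# subcategory (np) agrees, so it is marked correct under subcategorial labelling.
print(arg_n(gold_cat, 1) == arg_n(pred_cat, 1))  # True
```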
## 3.2 Subcategorial Alignment
On its own, subcategorial labelling is insufficient in some situations. In particular, it is ineffective when the argument slot numbers differ between two dependencies that are being compared, as is the case for the prepositional complement dependency in Figure 2 (from *in* to *system*). While subcategorial labelling indicates that the argument subcategory is
![2_image_0.png](2_image_0.png)
correct, the differing slot numbers mean that the dependency is still considered incorrect, even though the parser found the correct syntactic relationship.
We thus propose **subcategorial alignment**: we allow the proposed slot number to be considered correct if there exists a *plausible alignment* (to be defined shortly) between its corresponding argument slot in the candidate lexical category and the correct argument slot in the ground-truth lexical category.
In order to establish such an alignment, we first decompose the (full) lexical categories for the given dependencies into a linear representation consisting of its target category and its "directed" argument subcategories such that the distinction between left and right arguments is maintained. We call these linear representations **functorial sequences**. More formally, the functorial sequence fs(x) of category x is defined as:
$$\operatorname{fs}(x)={\begin{cases}[x]&{\mathrm{if~arity}}(x)=0,\\ \operatorname{fs}(y)\oplus[z]&{\mathrm{if~}}x=y|z,\end{cases}}$$
where ⊕ denotes list concatenation. For example, pp/np and ((s\np)\(s\np))/np decompose into the functorial sequences [pp, /np] and [s, \np, \(s\np),
/np], respectively.
Next, we compute the Levenshtein distance between the two functorial sequences and then backtrack, gathering the set of optimal paths. A **plausible alignment** is then any match state (i.e., zero-cost substitution) on any optimal path.
Formally, let ℳ(x, y) denote the set of all match states (i, j) in any Levenshtein alignment between functorial sequences fs(x) and fs(y), where i indexes over fs(x) and j indexes over fs(y). Thus, in Equation 1, subcategorial alignment replaces $g_\mathrm{s} = c_\mathrm{s}$ with $(g_\mathrm{s}, c_\mathrm{s}) \in \mathcal{M}(g_\mathrm{c}, c_\mathrm{c})$.
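Under the same nested-triple category representation as in the previous sketch, a minimal sketch of functorial sequences and plausible alignments follows. Match states on optimal paths are recovered from forward and backward edit-distance tables rather than by explicit path enumeration, and the slot-to-position correspondence (slot k sits at position k of the functorial sequence, with the target at position 0) follows from the definitions above.

```python
def fs(x):
    # Functorial sequence: [target, directed_arg_1, ..., directed_arg_arity],
    # keeping the slash so left and right arguments stay distinct.
    if isinstance(x, str):
        return [x]
    result, slash, z = x
    return fs(result) + [(slash, z)]

def match_states(a, b):
    # All match states (i, j) that lie on at least one optimal Levenshtein path.
    n, m = len(a), len(b)
    D = [[0] * (m + 1) for _ in range(n + 1)]   # prefix edit distances
    R = [[0] * (m + 1) for _ in range(n + 1)]   # suffix edit distances
    for i in range(n + 1):
        for j in range(m + 1):
            if i == 0 or j == 0:
                D[i][j] = i + j
            else:
                D[i][j] = min(D[i - 1][j] + 1, D[i][j - 1] + 1,
                              D[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    for i in range(n, -1, -1):
        for j in range(m, -1, -1):
            if i == n or j == m:
                R[i][j] = (n - i) + (m - j)
            else:
                R[i][j] = min(R[i + 1][j] + 1, R[i][j + 1] + 1,
                              R[i + 1][j + 1] + (a[i] != b[j]))
    total = D[n][m]
    return {(i, j) for i in range(n) for j in range(m)
            if a[i] == b[j] and D[i][j] + R[i + 1][j + 1] == total}

s_np = ("s", "\\", "np")
pp_np = ("pp", "/", "np")                        # pp/np (gold category for "in")
adjunct = ((s_np, "\\", s_np), "/", "np")        # ((s\np)\(s\np))/np (predicted)
M = match_states(fs(pp_np), fs(adjunct))
# Gold slot 1 and predicted slot 3 are plausibly aligned at the /np arguments:
print((1, 3) in M)  # True
```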
Our use of Levenshtein alignments is motivated by the view of many supertagging errors as insertions or deletions of categorial arguments; for example, in Figure 1, we see the /pp argument as having
![2_image_1.png](2_image_1.png)
been deleted in the predicted category for *believe*.
Levenshtein alignment is robust here as subcategorial alignment is only relevant when the other elements of the dependency tuple are correct.¹ As well, the alignment's monotonicity prevents swapped arguments (e.g., swapped direct and indirect objects)
from mistakenly being marked correct.²
Returning to the complement-adjunct confusion example, the only plausible alignment that exists between pp/np and ((s\np)\(s\np))/np is at the /np directed subcategories. The dependencies from *in* to *system* are thus marked correct since there is a plausible alignment between the corresponding argument slots. As shown in Figure 3, this leaves only the dependencies that directly indicate the complement/adjunct relations as erroneous.
Other than complement-adjunct confusion, subcategorial alignment is also useful for prepositional phrase attachment errors, another common parser pathology. Refer to Appendix A.1 for an example.
## 3.3 Root Node Inclusion
Our final modification is the inclusion of root nodes.
The choice of root is relevant in *de dicto–de re* distinctions (*inter alia*), as shown in Figure 4. Despite the error in the parse, the standard ccg evaluation assigns a perfect F₁ score since it does not include root nodes (and corresponding dependencies).
Importantly, including root nodes also addresses a potential pathology of directly using subcategorial labels. If the only error is in the choice of spanning category for the sentence, subcategorial labelling without a root can result in a perfect F₁ score since
![3_image_0.png](3_image_0.png)
the root category does not fill any argument slots.
Including a root dependency that specifies the correct top-level category entirely addresses this. Refer to Appendix A.2 for an example.
## 4 Evaluating Decomposed Scoring
Thus far, we have argued for decomposed scoring on the basis of examples that show cases where decomposed scoring is able to more precisely isolate and penalize parsing errors. Our next aim is to determine the extent to which this capability holds true over a larger set of parser outputs when evaluated by expert judges in a systematic manner.
From here, we refer to the standard ccg evaluation as F₁ and to decomposed scoring as DF₁.
## 4.1 Intrinsic And Extrinsic Evaluation Tasks
The evaluation is split into two judgement tasks.
The first task gauges whether cg researchers agree that DF₁ is better able to isolate parsing errors than F₁. Judges are given sentences with corresponding dependency sets where F₁ and DF₁ disagree about the correctness of at least one dependency. Each sentence is presented as a pair of dependency figures similar to those presented above (e.g., Figures 1 and 3 are one such pair, though all figures in the judgement tasks include root nodes). For each sentence, the judges are asked which of the two methods better isolates the error made by the parser, as judged against the ground truth. As the first task directly compares F₁ and DF₁ on common sentences, we consider it to be an **intrinsic** evaluation.
The second task uses F₁ and DF₁ *in situ* as scoring methods; we therefore consider it to be an **extrinsic** evaluation. Since there is no objectively correct score for partially correct dependency sets, whether the score assigned by DF₁ to a given dependency set is better than that assigned by F₁ to the same set can only be evaluated in relative terms. At the extreme, if DF₁ were to yield a different score than F₁ for each dependency set of interest but the two induced the same preorder over the dependency sets, the two methods would not be meaningfully different.
The second task therefore examines pairwise rank inversions: pairs of sentences where F₁ and DF₁ disagree on which sentence's dependency set is better. Here the judges are given pairs of *different* sentences and then asked to select the sentence that they believe has the better parser-generated dependency set.³ The question underlying the extrinsic task is thus whether using DF₁ instead of F₁ results in sentence rankings that better match those that would be assigned by human judges.
For both tasks, the instructions include wording directing the judges to consider semantics in their evaluations. This maintains the assumption inherent in evaluations based on predicate-argument structure that parsing errors are significant in proportion to their effects on semantics.
## 4.1.1 Task Administration
The tasks are administered sequentially, with the second task being given to each judge after completion of the first. For each task, each judge is given a unique link to an online interface that describes notational conventions, specifies the task they are to complete, presents the data, and records their responses. No time limit is imposed, and previous judgements can be changed at any time.
Prior to having external judges complete the tasks, we conducted an internal pilot study; consult Appendix B for details. Here we focus on details and results of the main study only.
We use three parsers to generate parse predictions for sentences from the CCGbank test set: EasyCCG (Lewis and Steedman, 2014), C&C (Clark and Curran, 2007), and DepCCG (Yoshikawa et al., 2017).
| Parser | Dev F₁ | Dev DF₁ | Test F₁ | Test DF₁ |
|---|---|---|---|---|
| C&C | 83.4 | 88.5 | 84.2 | 88.9 |
| EasyCCG | 82.6 | 88.0 | 83.1 | 88.1 |
| DepCCG | 89.9 | 93.3 | 89.8 | 93.0 |
Table 1: F₁ and DF₁ scores of three parsers on CCGbank.
Table 1 shows the performance of the three parsers according to both F₁ and DF₁. Verbatim copies of all data and instructions as given to each judge are available in the supplementary materials of this paper, including the judges' responses. Refer to Appendix C for further details of the data generation and sampling procedures.
## 4.1.2 Judge Recruitment And Compensation
To decrease the likelihood of bias towards DF₁ for the main study, we do not provide any judgements ourselves; nor do we ask members of our institution to do so. Instead, we sought unaffiliated cg researchers to provide their expert judgements: four judges were recruited via professional connections.
Two judges were compensated at a flat rate and the remaining two were seconded by their employer. All four judges have peer-reviewed publications at relevant cg research venues and are fluent in English.
## 5 Results And Discussion
In the first task, the judges ruled strongly in favour of DF₁, agreeing with it in 18/20 cases. This is statistically significant (binomial test, p ≈ 2.0 × 10⁻⁴).
We therefore conclude that DF₁ is better than F₁ at identifying the ultimate error in the parser's output.
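The reported p-value can be checked with a standard binomial test against the uniform-random null; the one-sided alternative is our assumption:

```python
from scipy.stats import binomtest

# 18 of 20 judgements in favour of DF1 under a fair-coin null hypothesis.
print(binomtest(18, n=20, p=0.5, alternative="greater").pvalue)  # ~2.0e-4
```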
In the second task, we find some disagreement among judges: for 11/45 sentence pairs, two judges agreed with DF₁, while the remaining two disagreed with it, leading to a tie. But of the remaining 34 sentence pairs, a majority of judges agreed with DF₁ on 21 pairs. Even with the ties, this is also statistically significant (p ≈ 0.02); we thus conclude that DF₁ is preferable to F₁.
Disagreement among judges in the second task merits discussion. Judges are taken to be providing ground truth, so tied pairs are cases where a ground truth judgement is unavailable. What does this mean for decomposed scoring, and for ccg parser evaluations more generally?
Concerning decomposed scoring, the first task's results empirically validate the utility of DF₁ over F₁. DF₁ prevents obfuscation of erroneous dependencies, improving the granularity of the evaluation measure. In addition, there is the possibility of helping with the *training* of statistical parsers: when a dependency's only crime is sharing a lexical category with an erroneous one, training a parser to learn that both are errors may cause it to learn to avoid correct dependencies.
Moreover, we examined the sentence pairs in the second task ourselves and found that even when we judged DF₁'s rank inversion to be incorrect, undoing DF₁'s changes would not address the issue; the severity of parsing errors varies and is in part modulated by semantic intricacies. We thus expect that the disagreements among judges are due at least in part to underlying differences in opinions about sentence meaning and/or salience.
Turning to the broader issue of ccg parser evaluations, the lack of definitive ground truth for many inter-sentence comparisons implies limitations for inter-parser comparisons. Imagine a case where parser A claims to outperform parser B, and closer inspection reveals that A and B differ in their outputs on only two sentences. For one of these sentences, A's output has a higher F₁ than does B's; for the other, the opposite is true, but the difference in F₁ is smaller. Now, the claim that A outperforms B
becomes a claim that the parse that A produces for the first sentence is better than the parse that B produces for the second. And yet, as indicated by the judges' disagreements in the second task, it is not always possible to make these kinds of judgements.
## 6 Conclusion
We have found that the standard ccg evaluation method's choice of dependency labels is prone to amplifying minor errors. We proposed decomposed scoring and validated it by consulting experts. From their judgements, we conclude that decomposed scoring is better at isolating parser errors and is, overall, a better choice than the standard scoring method. Disagreement in the second task, however, is a source of concern and suggests potential issues for ccg parser evaluations. Given the frequentlysmall deltas between modern parsers, this is worth investigating further.
## 7 Limitations
While we used multiple parsers to avoid biasing the evaluation towards one parser, all parsers used are relatively high-performing parsers—all have labelled F₁ scores above 0.8 on the CCGbank development and test sets. This evaluation is thus biased towards especially difficult sentences, since those will be the ones where good parsers produce errors. While we found no correlation between parser scores and judge disagreement, at least suggesting that the judgements were not a function of parse quality, poorer parsers (or good parsers on novel domains) may make different kinds of errors than those that appeared in our sample. It is unclear how F₁ and DF₁ would compare under such circumstances; understanding this better remains an open area of research.
The relatively high disagreement among judges in the second task (24% of sentence pairs) is concerning, but it should be noted that the sentence pairs were sampled from a set of disagreements between two different *scoring methods*. The extent to which this is a problem in practice is unclear, as judge agreement may not be as low on outputs from different *parsers* evaluated by the *same* scoring method—but it could also be lower.
Although the dependency-based evaluations discussed in this paper are standard for CCGbank-based statistical ccg parser evaluations, the reliance on extra resources (namely, the **generate** program and markup files from C&C) makes it somewhat unique. Because of this, the extent to which decomposed scoring, or the ideas behind it, would be useful for other evaluation scenarios (such as for other corpora, including ccg corpora from other languages) is unclear.
## Acknowledgements
We thank Umut Özge, Jakob Prange, Laura Rimell, and Miloš Stanojević for participating in our judgement task, Jinman Zhao and Timothy Fowler for their judgements and feedback during our pilot study, and our anonymous reviewers for their feedback. We gratefully acknowledge Google DeepMind for seconding research staff in support of this project.
## References
Aditya Bhargava and Gerald Penn. 2020. Supertagging with ccg primitives. In *Proceedings of the 5th* Workshop on Representation Learning for nlp, pages 194–204, Online. Association for Computational Linguistics.
Stephen Clark and James R. Curran. 2007. Wide-coverage efficient statistical parsing with ccg and
log-linear models. *Computational Linguistics*,
33(4):493–552.
Stephen Clark, Darren Foong, Luana Bulat, and Wenduan Xu. 2015. The Java version of the C&C parser version 0.95. Technical report, University of Cambridge Computer Laboratory.
Stephen Clark and Julia Hockenmaier. 2002. Evaluating a wide-coverage ccg parser. In *LREC Beyond PARSEVAL Workshop*, pages 60–66, Las Palmas, Spain.
Julia Hockenmaier and Mark Steedman. 2007. CCGbank: A corpus of ccg derivations and dependency structures extracted from the Penn Treebank. *Computational Linguistics*, 33(3):355–396.
Mike Lewis and Mark Steedman. 2014. A* ccg parsing with a supertag-factored model. In *Proceedings of* the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 990–1000, Doha, Qatar. Association for Computational Linguistics.
Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for nlg. In *Proceedings of* the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2241–2252, Copenhagen, Denmark. Association for Computational Linguistics.
Ehud Reiter. 2018. A structured review of the validity of bleu. *Computational Linguistics*, 44(3):393–401.
Ashish Vaswani, Yonatan Bisk, Kenji Sagae, and Ryan Musa. 2016. Supertagging with lstms. In *Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 232–237, San Diego, California. Association for Computational Linguistics.
Masashi Yoshikawa, Hiroshi Noji, and Yuji Matsumoto.
2017. A* ccg parsing with a supertag and dependency factored model. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 277–287, Vancouver, Canada. Association for Computational Linguistics.
## A Additional Examples A.1 Prepositional Phrase Attachment
As is the case for complement-adjunct confusion errors, prepositional phrase attachment errors are affected by argument slot renumbering, particularly when the analysis is as an adjunct, but there is ambiguity in whether the adjunct is modifying a noun or a verb. Since nouns and verbs have different arities, any arguments for their corresponding adjuncts will end up with different slot numbers, as shown in Figure 5.
![6_image_0.png](6_image_0.png)
![6_image_1.png](6_image_1.png)
![6_image_2.png](6_image_2.png)
Decomposed scoring again localizes the error directly to the adjuncts and their (adjunctival) arguments, as shown in Figure 6.
## A.2 Root Category Pathology
In Figure 7, the standard ccg evaluation substantially over-penalizes the very minor error in the root category (s[ng] vs. s[b]).
However, without including root nodes, subcategorial labelling suffers from a pathology in such cases. As shown in Figure 8, the subcategorial labelling (alone) results in all dependencies being
![6_image_3.png](6_image_3.png)
marked correct and thus a perfect F₁ score, despite the error in the parse.
Fortunately, as shown in Figure 9, adding the root node entirely solves this issue.
## B Pilot Study
Before our main study, we first conducted a pilot study to confirm the details of our tasks. Two members of our research lab served as pilot judges: one professor (the second author of this paper) and one Ph.D. student (uninvolved with the work in this paper),
![7_image_0.png](7_image_0.png)
both of whom have peer-reviewed publications in relevant cg research venues.
For both tasks in the pilot study, all judges were given the same data for evaluation. The first task included 10 annotation items (sentences) while the second task included 20 annotation items (sentence pairs). See Appendix C for details of the data generation and sampling procedures.
## B.1 Results
In the first task, opinion among the judges was unanimous: for each sentence, both judges agreed that the labelling and error assignment provided by DF₁
was better at identifying the ultimate error in the parser's output. For the null hypothesis of judges that make their selections at (uniform) random, the binomial test indicates that this degree of agreement in favour of DF₁ is extremely unlikely, with
≈ 9.5 × 10−7; we therefore reject it.
In the second task, the judges agreed with each other only half the time (¹⁰⁄∕20 pairs). Out of the ten cases where they agreed with each other, however, they agreed with the DF₁ ranking nine times. We can again reject the null hypothesis of judges choosing at (uniform) random: the binomial test indicates that such high agreement in favour of DF₁ would have
≈ 0.04.
## B.2 Changes For Main Study
The results of the pilot study led us to make the following alterations for the main study:
- Given the complete agreement between judges in the first task, we treated judges as interchangeable for the first task in the main study.
Each judge was thus given a *different* sample of sentences, allowing more sentences to be covered.
- To account for the high level of inter-judge disagreement in the second task, we increased the number of sentence pairs in the task to 45; the number of sentences in the first task was reduced to five per judge in order to make better use of the judges' time. As with the pilot study, each judge was given the same set of sentences.
As well, since each parser was tuned on the CCGbank development set, we used the CCGbank test set (i.e., section 23) to sample the sentences for the main study. Remaining task and administration details were the same as for the pilot study.
## C Data Generation And Sampling
To generate the data for both tasks, we started with the CCGbank development set (i.e., section 00).
As the dependency figures can easily become very wide, we excluded all sentences where the ground truth has more than 20 dependencies (including the root dependency). This ensured that the figures fit legibly on most displays. Next, we ran off-theshelf parsers on the remaining sentences to produce predicted parses. For the pilot study, we used EasyCCG Lewis and Steedman (2014) only. For the main study, we prevented the results from being biased towards a single parser by using three different parsers: EasyCCG, C&C (Clark and Curran, 2007), and DepCCG (Yoshikawa et al., 2017).
When sampling data for the two tasks, the parsers were alternated for selection per sampled item so that each parser was evenly represented in the data given to the judges (15 sentence pairs each for the second task). We converted all parser outputs into dependency sets using the **generate** program from C&C; we used updated markup files from the Java version of C&C (Clark et al., 2015). Sentential categories were extracted as needed from the parsers' outputs, after which each sentence was scored with both F₁ and DF₁. From here, the process diverged for the first and second tasks.
For the first (intrinsic) task, we kept only those sentences that met the relevant criterion: F₁ and DF₁ must disagree about at least one dependency. From these, we sampled sentences uniformly and without replacement to yield the set of sentences to be presented to judges for the first task (10 sentences for the pilot study and 5 for the main study). Although F₁ does not evaluate the root dependency, we found that omitting the root node from one of the two dependency figures in each pair made for a visually conspicuous absence; instead, we kept the root node for both cases, labelled the dependency accordingly, and never marked the root dependency as erroneous for the F₁ diagrams.
For the second (extrinsic) task, we first gathered all sentence pairs where F₁ and DF₁ disagreed on the relative ranks of the two pair elements. In order to avoid differences in scale, we then removed all pairs where the two sentences in the pair did not have the same number of dependencies in their ground truths. Next, in order to keep the task as simple as possible, we disallowed ties and therefore removed all pairs where at least one of F₁ or DF₁ assigned the same score to both elements in a pair. From these, sentence pairs were sampled uniformly and without replacement to yield the set of sentence pairs to be presented to judges for the second task (20 for the pilot study and 45 for the main study).
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 7 A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4
✓ B1. Did you cite the creators of artifacts you used?
Section 1
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 4 and Appendix C
## C ✗ **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? No response.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
No response.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
No response.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
No response.
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Section 4 And Appendices B And C
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Supplementary materials
✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Most details provided in Section 4.1.2. Payment details omitted as it qualifies as an exception under our university's ethics protocol as collegial review.
✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Did not discuss in paper but participants were clear on this when recruited.
✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Exempt
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Section 4.1.2 |
raunak-etal-2023-gpts | Do {GPT}s Produce Less Literal Translations? | https://aclanthology.org/2023.acl-short.90 | Large Language Models (LLMs) such as GPT-3 have emerged as general-purpose language models capable of addressing many natural language generation or understanding tasks. On the task of Machine Translation (MT), multiple works have investigated few-shot prompting mechanisms to elicit better translations from LLMs. However, there has been relatively little investigation on how such translations differ qualitatively from the translations generated by standard Neural Machine Translation (NMT) models. In this work, we investigate these differences in terms of the literalness of translations produced by the two systems. Using literalness measures involving word alignment and monotonicity, we find that translations out of English (E-X) from GPTs tend to be less literal, while exhibiting similar or better scores on MT quality metrics. We demonstrate that this finding is borne out in human evaluations as well. We then show that these differences are especially pronounced when translating sentences that contain idiomatic expressions. |
# Do GPTs Produce Less Literal Translations?
Vikas Raunak, Arul Menezes, Matt Post, Hany Hassan Awadalla
Microsoft Azure AI
Redmond, Washington
{viraunak,arulm,mpost,hanyh}@microsoft.com
## Abstract
Large Language Models (LLMs) such as GPT3 have emerged as general-purpose language models capable of addressing many natural language generation or understanding tasks. On the task of Machine Translation (MT), multiple works have investigated few-shot prompting mechanisms to elicit better translations from LLMs. However, there has been relatively little investigation on how such translations differ qualitatively from the translations generated by standard Neural Machine Translation (NMT)
models. In this work, we investigate these differences in terms of the literalness of translations produced by the two systems. Using literalness measures involving word alignment and monotonicity, we find that translations out of English (E→X) from GPTs tend to be less literal, while exhibiting similar or better scores on MT quality metrics. We demonstrate that this finding is borne out in human evaluations as well. We then show that these differences are especially pronounced when translating sentences that contain idiomatic expressions.
## 1 Introduction
Despite training only on a language-modeling objective, with no *explicit* supervision on aligned parallel data (Briakou et al., 2023), LLMs such as GPT-3 or PaLM (Brown et al., 2020; Chowdhery et al., 2022) achieve close to state-of-the-art translation performance under few-shot prompting
(Vilar et al., 2022; Hendy et al., 2023). Work investigating the output of these models has noted that the gains in performance are not visible when using older surface-based metrics such as BLEU
(Papineni et al., 2002a), which typically show large losses against NMT systems. This raises a question: How do these LLM translations differ *qualitatively* from those of traditional NMT systems?
We explore this question using the property of translation *literalness*. Machine translation systems have long been noted for their tendency to produce
| source | He survived by the skin of his teeth . |
|----------|------------------------------------------|
| NMT | Il a survécu par la peau de ses dents . |
| GPT-3 | Il a survécu de justesse . |
Table 1: An example where GPT-3 produces a more natural (non-literal) translation of an English idiom. When word-aligning these sentences, the source word *skin* remains unaligned for the GPT-3 translation.
overly-literal translations (Dankers et al., 2022b),
and we have observed anecdotally that LLMs seem less susceptible to this problem (Table 1). We investigate whether these observations can be validated quantitatively. First, we use measures based on word alignment and monotonicity to quantify whether LLMs produce less literal translations than NMT systems, and ground these numbers in human evaluation (§ 2). Next, we look specifically at idioms, comparing how literally they are translated under both natural and synthetic data settings (§ 3).
Our investigations focus on the translation between English and German, Chinese, and Russian, three typologically diverse languages. Our findings are summarized as follows: (1) We find that translations from two LLMs from the GPT series of LLMs are indeed generally less literal than those of their NMT counterparts when translating out of English, and (2) that this is particularly true in the case of sentences with idiomatic expressions.
## 2 Quantifying Translation Literalness
We compare the state-of-the-art NMT systems against the most capable publicly-accessible GPT models (at the time of writing) across measures designed to capture differences in translation literalness. We conduct both automatic metric-based as well as human evaluations. We explain the evaluation and experimental details below.
Datasets We use the official WMT21 En-De, De-En, En-Ru and Ru-En News Translation test sets
| System | Source | Translation |
|----------|----------------------------------------------------------|-----------------------------------------------------------------------|
| MS | Time is running out for Iran nuclear deal, Germany says, | Die Zeit für das Atomabkommen mit dem Iran läuft ab, sagt Deutschland |
| GPT | Time is running out for Iran nuclear deal, Germany says, | Deutschland sagt, die Zeit für das iranische Atomabkommen läuft ab. |
| MS | You're welcome, one moment please. | Sie sind willkommen, einen Moment bitte. |
| GPT | You're welcome, one moment please. | Bitte sehr, einen Moment bitte. |
Table 2: Translation examples with different Non-Monotonicity (NM) and Unaligned Source Word (USW) scores for MS-Translator (lower) and text-davinci-003 translations (higher) from the WMT-22 En-De test set, for illustration.
for evaluation (Barrault et al., 2021).
Measures of Quality We use COMET-QE1(Rei et al., 2020) as the Quality Estimation (QE) measure (Fomicheva et al., 2020) to quantify the fluency and adequacy of translations. Using QE as a metric presents the advantage that it precludes the presence of any reference bias, which has been shown to be detrimental in estimating the LLM output quality in related sequence transduction tasks
(Goyal et al., 2022). On the other hand, COMETQE as a metric suffers from an apparent blindness to copy errors (i.e., cases in which the model produces output in the source language) (He et al.,
2022). To mitigate this, we apply a language identifier (Joulin et al., 2017) on the translation output and set the translation to null if the translation language is the same as the source language. Therefore, we name this metric COMET-QE + LID.
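A sketch of the LID filtering step, assuming a pre-trained fastText language-identification model; the model file name, the helper name, and the exact handling of a "null" translation are our assumptions:

```python
import fasttext

lid_model = fasttext.load_model("lid.176.bin")  # pre-trained fastText LID model

def null_out_copies(source_lang, translations):
    """Replace translations that are still in the source language with an empty
    string before scoring with COMET-QE (COMET-QE + LID)."""
    filtered = []
    for hyp in translations:
        label = lid_model.predict(hyp.replace("\n", " "))[0][0]  # e.g. '__label__en'
        filtered.append("" if label == f"__label__{source_lang}" else hyp)
    return filtered

print(null_out_copies("en", ["This was copied, not translated.",
                             "Das ist eine Übersetzung."]))
```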
Measures of Translation Literalness There do not exist any known metrics with high correlation geared towards quantifying translation literalness.
We propose and consider two automatic measures at the corpus-level:
1. *Unaligned Source Words (USW)*: Two translations with very similar fluency and adequacy could be differentiated in terms of their literalness by computing word to word alignment between the source and the translation, then measuring the number of source words left unaligned. When controlled for quality, a less literal translation is likely to contain more unaligned source words (as suggested in Figure 1).
2. *Translation Non-Monotonicity (NM)*: Another measure of literalness is how closely the translation tracks the word order in the source. We use the non-monotonicity metric proposed in Schioppa et al. (2021), which computes the deviation from the diagonal in the word to word alignment as the non-monotonicity measure.
¹ wmt20-comet-qe-da
This can also be interpreted as (normalized)
alignment crossings, which has been shown to correlate with translation non-literalness
(Schaeffer and Carl, 2014).
We use the multilingual-BERT-based awesome-aligner (Devlin et al., 2019; Dou and Neubig, 2021)
to obtain the word to word alignments between the source and the translation. Table 2 presents an illustration of translations with different USW and NM scores2, obtained from different systems.
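A per-sentence sketch of both measures, given a word alignment as a set of (source index, target index) links. The corpus-level aggregation and the exact non-monotonicity formula of Schioppa et al. (2021) are not reproduced here; the sketch uses the normalized alignment-crossings reading mentioned above, and the example alignment is hypothetical.

```python
from itertools import combinations

def unaligned_source_words(src_len, alignment):
    """Count source tokens that receive no alignment link (USW)."""
    aligned = {i for i, _ in alignment}
    return src_len - len(aligned)

def non_monotonicity(alignment):
    """Fraction of alignment link pairs that cross each other
    (one reading of normalized alignment crossings)."""
    pairs = list(combinations(sorted(alignment), 2))
    if not pairs:
        return 0.0
    crossings = sum((i1 - i2) * (j1 - j2) < 0 for (i1, j1), (i2, j2) in pairs)
    return crossings / len(pairs)

# "You're welcome , one moment please ." vs. "Bitte sehr , einen Moment bitte ."
alignment = {(1, 0), (2, 2), (3, 3), (4, 4), (6, 6)}  # hypothetical aligner output
print(unaligned_source_words(7, alignment), non_monotonicity(alignment))  # 2 0.0
```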
Systems Under Evaluation We experiment with the below four systems (NMT and LLMs):
1. WMT-21-SOTA: The Facebook multilingual system (Tran et al., 2021) won the WMT-21 News Translation task (Barrault et al., 2021),
and thereby represents the strongest NMT system on the WMT'21 test sets.
2. Microsoft-Translator: MS-Translator is one of the strongest publicly available commercial NMT systems (Raunak et al., 2022).
3. text-davinci-002: The text-davinci-002 model is an instruction fine-tuned model in the GPT
family (Brown et al., 2020). It represents one of the strongest publicly-accessible LLMs
(Liang et al., 2022).
4. text-davinci-003: The text-davinci-003 model further improves upon text-davinci-002 for many tasks3(Liang et al., 2022).
For both the GPT models, we randomly select eight samples from the corresponding WMT-21 development set, and use these in the prompt as demonstrations for obtaining all translations from GPTs.
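The exact prompt format is not given in this excerpt; a generic sketch of such an eight-shot translation prompt (the template, field names and defaults are our assumptions) could look like this:

```python
def build_translation_prompt(demonstrations, source_sentence,
                             src_name="English", tgt_name="German"):
    """demonstrations: eight (source, translation) pairs sampled from the WMT-21 dev set."""
    lines = []
    for src, tgt in demonstrations:
        lines.append(f"{src_name}: {src}")
        lines.append(f"{tgt_name}: {tgt}")
    lines.append(f"{src_name}: {source_sentence}")
    lines.append(f"{tgt_name}:")
    return "\n".join(lines)
```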
Results We compare the performance of the four systems on the WMT-21 test sets. Figure 1 shows the results of this comparison. A key observation is that while the GPT based translations achieve superior COMET-QE+LID scores than Microsoft Translator across the language pairs (except En-Ru), they 2Metrics: https://github.com/vyraun/literalness 3LLMs: https://beta.openai.com/docs/models/
![2_image_0.png](2_image_0.png)
also consistently obtain considerably higher number of unaligned source words. This result holds for the comparison between the WMT-21-SOTA and GPT systems as well. Further, GPT translations also consistently show higher non-monotonicity for E→X translations. However, this is not the case for translations into English, wherein the multilingual WMT-21-SOTA system obtains very close non-monotonicity measurements. The combined interpretation of these measurements *suggests* that GPTs do produce less literal E→X translations.
Human Evaluation We verify the conclusion from the results in Figure 1 by conducting a human evaluation of translation literalness on 6 WMT-22 language pairs: En-De, En-Ru, En-Zh and De-En, Ru-En, Zh-En. For each language pair, we randomly sample 100 source-translation pairs, with translations obtained from MS-Translator (a strong commercial NMT system) and text-davinci-003
(a strong commercial LLM) (Hendy et al., 2023).
We use zero-shot text-davinci-003 translations for the human evaluation in order to eliminate any bias introduced by specific demonstration examples. In each case, we ask a human annotator
(bilingual speaker for Zh-En, target-language native plus bilingual speaker otherwise) to annotate 100 translations from both GPT and MS-Translator and select which of the two translations is more literal. The human annotation interface is described in Appendix A. The results in Table 3 show that the annotators rate the GPT translations as less literal.
| Lang-Pair | MS-Translator | Davinci-003 | Equal | Diff |
|-----------|---------------|-------------|-------|------|
| En-De     | 52            | 32          | 16    | +20  |
| En-Zh     | 42            | 32          | 24    | +10  |
| En-Ru     | 41            | 37          | 22    | +4   |
| De-En     | 48            | 26          | 26    | +12  |
| Zh-En     | 42            | 38          | 20    | +4   |
| Ru-En     | 52            | 28          | 20    | +24  |

Table 3: Human evaluation of translation literalness on the WMT-22 test sets: number of translations (out of 100 per language pair) judged more literal for each system, number judged equally literal, and the difference.
Experiments on Best WMT-22 NMT Systems Further, we also experiment with the WMT-Best systems on the WMT-22 General Machine Translation task (Kocmi et al., 2022). We evaluate USW
and NM on De-En, Ja-En, En-Zh and Zh-En, since on each of these language pairs, text-davinci-003's few-shot performance is very close to that of the WMT-Best system as per COMET-22 (Rei et al.,
2022), based on the evaluation done in Hendy et al.
(2023). We report our results in Table 4, which shows our prior findings replicated across the language pairs. For example, text-davinci-003, despite obtaining a 0.2 to 0.6 *higher* COMET-22 score than the best WMT systems on these language pairs, consistently obtains a *higher* USW
score and a higher NM score in all but one comparison (NM for De-En). Note that the NM score differences for Chinese and Japanese are larger in magnitude owing to alignment deviations measured over character-level alignments. Further, we refer the reader to Hendy et al. (2023) for similar USW and NM comparisons of translations from text-davinci-003 and MS-Translator.
| Language Pair | USW Diff | NM Diff |
|-----------------|------------|-----------|
| En-Zh | + 4.93 | + 12.94 |
| De-En | + 1.04 | - 0.10 |
| Zh-En | + 4.93 | + 13.06 |
| Ja-En | + 6.10 | + 11.13 |
Table 4: USW and NM score differences of text-davinci-003 relative to WMT-Best on the WMT-22 test sets.
| MT System | C-QE ↑ | USW ↓ | NM ↓ |
|------------------|----------|---------|--------|
| MS-Translator | 21.46 | 13.70 | 9.63 |
| WMT'21 SOTA | 23.25 | 14.47 | 10.21 |
| text-davinci-002 | 23.67 | 18.08 | 11.39 |
Table 5: Natural Idiomatic Sentences: Combined Results over MAGPIE, EPIE, PIE (5,712 sentences).
## 3 Effects On Figurative Compositionality
In this section, we explore whether the less literal nature of E→X translations produced by GPT
models could be leveraged to generate higher-quality translations for certain inputs. We refer to the phenomenon of composing the non-compositional meanings of idioms (Dankers et al., 2022a) with the meanings of the compositional constituents within a sentence as *figurative compositionality*. A model exhibiting greater figurative compositionality would thereby be able to abstract the meaning of an idiomatic expression in the source sentence and express it in the target language non-literally, either through a non-literal (paraphrased) expression of the idiom's meaning or through an equivalent idiom in the target language. Note that greater non-literalness does not imply better figurative compositionality: non-literalness in a translation could also arise from variations unrelated to the *desired* figurative translation.
## 3.1 Translation With Idiomatic Datasets
In this section, we quantify the differences in the translation of sentences with idioms between traditional NMT systems and a GPT model. To our knowledge, there are no English-centric parallel corpora dedicated to sentences with idioms; we therefore experiment with monolingual (English) sentences containing idioms. The translations are generated with the same prompt as in Section 2. The datasets with natural idiomatic sentences are enumerated below (a filtering sketch follows the list):
- *MAGPIE* (Haagsma et al., 2020) contains a set of sentences annotated with their idiomaticity, alongside a confidence score. We use the sentences from the news domain that are marked as idiomatic with 100% annotator confidence (totalling 3,666 sentences).
- *EPIE* (Saxena and Paul, 2020) contains idioms and example sentences demonstrating their usage. We use the sentences available for static idioms (totalling 1,046 sentences).
- The *PIE dataset* (Zhou et al., 2021) contains idioms along with their usage. We randomly sample 1K sentences from the corpus.
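The sketch below illustrates the kind of filtering applied to MAGPIE; the JSON field names (`label`, `confidence`, `genre`, `context`) are assumptions about the released format and may need to be adapted to the actual files.

```python
import json

def load_magpie_news_idioms(path: str):
    """Keep MAGPIE sentences from the news domain that are marked as idiomatic
    with 100% annotator confidence. Field names are assumptions about the
    JSON-lines release and may need to be adapted."""
    kept = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if (record.get("label") == "i"                  # idiomatic usage
                    and record.get("confidence", 0.0) >= 1.0
                    and "news" in str(record.get("genre", "")).lower()):
                kept.append(record.get("context"))          # the example sentence
    return kept
```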
Results The results are presented in Table 5. We find that text-davinci-002 produces better-quality translations than the WMT'21 SOTA system, with a greater number of unaligned source words as well as higher non-monotonicity.
Further Analysis Note that directly attributing the gain in translation quality to better translation of idioms specifically is challenging. Further, similarity-based quality metrics such as COMET-QE might themselves penalize non-literalness, even though they are less likely to do so than surface-level metrics such as BLEU or ChrF (Papineni et al., 2002b; Popović, 2015). Therefore, while a natural monolingual dataset presents a useful testbed for investigating figurative compositionality abilities, an explicit comparison of figurative compositionality between the systems is very difficult. Hence, we also conduct experiments on synthetic data, where we explicitly control the fine-grained attributes of the input sentences. We do this by allocating most of the variation among the input sentences to certain constituent expressions during synthetic data generation.
## 3.2 Synthetic Experiments
For our next experiments, we generate synthetic English sentences, each containing expressions of specific *type(s)*: (i) names, (ii) random descriptive phrases, and (iii) idioms. We prompt text-davinci-002 in a zero-shot manner, asking it to generate a sentence with different *instantiations* of each of these types (details are in Appendix B). We then translate these sentences using the different systems, in order to investigate the relative effects on our literalness metrics between systems and across types. In each of the control experiments, we translate the synthetic English sentences to German.
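The zero-shot prompts for generating these sentences follow the templates shown in Appendix B (Figures 3-5); the small helper below is an illustrative way to instantiate them and is not part of the original pipeline.

```python
TEMPLATE = ("Q: Generate a sentence containing the {etype}: {expression}, "
            "in the form of a news article sentence.\nA:")

def make_prompt(expression: str, etype: str) -> str:
    """etype is one of 'idiom', 'entity', 'phrase' (cf. Appendix B)."""
    return TEMPLATE.format(etype=etype, expression=expression)

prompts = [
    make_prompt("a shot in the dark", "idiom"),
    make_prompt("Jessica Alba", "entity"),
    make_prompt("large cake on plate", "phrase"),
]
# Each prompt is sent to text-davinci-002 zero-shot; the generated sentences
# are then translated into German by the systems under comparison.
```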
| Expression | C-QE ↑ | USW ↓ | NM ↓ |
|----------------|----------|---------|--------|
| Random Phrases | -2.45 | +1.62 | +0.14 |
| Named Entities | -1.50 | +0.81 | +0.39 |
| Idioms | +5.90 | +2.82 | +1.95 |
Table 6: Synthetic sentences with Idioms vs Synthetic sentences containing other expressions: The difference between GPT (text-davinci-002) performance and NMT
performance (Microsoft Translator) is reported.
Synthetic Dataset 1 As described, we generate sentences containing expressions of the three types, namely, named entities (e.g., *Jessica Alba*), random descriptive phrases (e.g., *large cake on plate*)
and idioms (e.g., *a shot in the dark*). Expression sources as well as further data generation details are presented in Appendix B. Results are in Table 6.
| Num Idioms | 1     | 2     | 3     | 4     |
|------------|-------|-------|-------|-------|
| USW        | 17.58 | 18.39 | 18.28 | 18.99 |

Table 7: Synthetic sentences with multiple idioms (1-4): increasing the number of idioms increases the number of unaligned source words in text-davinci-002 translations.
Synthetic Dataset 2 We generate sentences containing *multiple* idioms (varying from 1 to 4). The prompts & examples are presented in appendix B.
The results are presented in Table 7.
Results Table 6 shows that the percentage of unaligned source words is highest in the case of idioms, followed by random descriptive phrases &
named entities. The results are consistent with the hypothesis that the explored GPT models produce less literal E→X translations, since named entities or descriptive phrases in a sentence would admit more literal translations as acceptable, unlike sentences with idioms. Davinci-002 obtains a much higher COMET-QE score in the case of translations of sentences with idioms, yet obtains a higher percentage of unaligned source words. Similarly, the difference in non-monotonicity scores is also considerably higher for the case of idioms. These results provide some evidence that the improved results of the GPT model, together with the *lower* literalness numbers, stem from correct translation of idiomatic expressions. Table 7 shows that this effect only increases with the number of idioms.
## 4 Discussion
In our experiments conducted across different NMT systems and GPT models, we find evidence that GPTs produce translations with greater nonliteralness for E→X in general. There could be a number of potential causes for this; we list two plausible hypotheses below:
Parallel Data Bias NMT models are trained on parallel data, which often contains very literal web-collected outputs. Some of this may even be the output of previous-generation MT systems, which is widespread and hard to detect. In addition, even high-quality target text in parallel data always contains artifacts that distinguish it from text originally written in that language, i.e., the 'translationese' effect (Gellerstam, 2005). These factors likely contribute to making NMT translations comparatively more literal.
Language Modeling Bias Translation capability in GPTs arises in the absence of any *explicit* supervision for the task during the pre-training stage. Therefore, the computational mechanism that GPTs leverage for producing translations might be different from NMT models, imparting them greater abstractive abilities. This could have some measurable manifestation in the translations produced, e.g., in the literalness of the translations.
Differences in E→**X and X**→E In E→X, we consistently find that GPT translations of similar quality are less literal and in the X→E direction, we observe a few anomalies. For X→E, in Figure 1, in all but one comparison (WMT-21-SOTA
vs GPTs for De-En) GPTs obtain higher measures for non-literalness. On the other hand, we did not see anomalies in the trend for E→X directions.
Variations in Experimental Setup We also experimented with a variant of USW and NM that excludes alignments involving stopwords. Each of our findings remains the same, with relatively minor changes in magnitude but no changes in system rankings. Similarly, we observed a greater tendency towards non-literalness in GPT translations in both few-shot and zero-shot settings, when compared across a range of NMT systems.
## 5 Summary And Conclusion
We investigated how the translations obtained through LLMs from the GPT family are qualitatively different by quantifying the property of translation literalness. We find that for E→X translations, there is a greater tendency towards nonliteralness in GPT translations. In particular, this tendency becomes evident in GPT systems' ability to figuratively translate idioms.
## 6 Acknowledgements
We thank Hitokazu Matsushita for help in conducting human evaluations.
## 7 Limitations
Measurement of translation literalness is neither well studied nor well understood. We rely on a combined interpretation of multiple measurements to investigate our hypothesis and its implications.
This limits the extent to which we can make strong claims, since in the absence of a highly correlated metric for translation literalness, it is hard to compare systems. We could only claim that our investigation indicates the presence of a tendency towards non-literalness in GPT translations, but a stronger result would have been preferred to further disambiguate the translation characteristics. Further, we only compare GPT translations in the standard zero-shot and few-shot settings and it is quite conceivable that more specific & verbose instructions could steer the LLMs to produce translations with different characteristics.
## References
Loic Barrault, Ondrej Bojar, Fethi Bougares, Rajen Chatterjee, Marta R. Costa-jussa, Christian Federmann, Mark Fishel, Alexander Fraser, Markus Freitag, Yvette Graham, Roman Grundkiewicz, Paco Guzman, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Tom Kocmi, Andre Martins, Makoto Morishita, and Christof Monz, editors. 2021. Proceedings of the Sixth Conference on Machine Translation. Association for Computational Linguistics, Online.
Eleftheria Briakou, Colin Cherry, and George Foster.
2023. Searching for needles in a haystack: On the role of incidental bilingualism in palm's translation capability.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In *Advances in Neural Information Processing Systems*,
volume 33, pages 1877–1901. Curran Associates, Inc.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul
Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways.
Verna Dankers, Elia Bruni, and Dieuwke Hupkes. 2022a.
The paradox of the compositionality of natural language: A neural machine translation case study. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 4154–4175, Dublin, Ireland. Association for Computational Linguistics.
Verna Dankers, Christopher Lucas, and Ivan Titov.
2022b. Can transformer be too compositional?
analysing idiom processing in neural machine translation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3608–3626, Dublin, Ireland. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Zi-Yi Dou and Graham Neubig. 2021. Word alignment by fine-tuning embeddings on parallel corpora. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2112–2128, Online.
Association for Computational Linguistics.
Marina Fomicheva, Shuo Sun, Lisa Yankovskaya, Frédéric Blain, Francisco Guzmán, Mark Fishel, Nikolaos Aletras, Vishrav Chaudhary, and Lucia Specia. 2020. Unsupervised quality estimation for neural machine translation. *Transactions of the Association* for Computational Linguistics, 8:539–555.
Martin Gellerstam. 2005. Chapter 13. Fingerprints in Translation, pages 201–213. Multilingual Matters, Bristol, Blue Ridge Summit.
Tanya Goyal, Junyi Jessy Li, and Greg Durrett. 2022.
News summarization and evaluation in the era of gpt-3. *arXiv preprint arXiv:2209.12356*.
Hessel Haagsma, Johan Bos, and Malvina Nissim. 2020.
MAGPIE: A large corpus of potentially idiomatic expressions. In *Proceedings of the Twelfth Language Resources and Evaluation Conference*, pages 279–287, Marseille, France. European Language Resources Association.
Tianxing He, Jingyu Zhang, Tianle Wang, Sachin Kumar, Kyunghyun Cho, James Glass, and Yulia Tsvetkov. 2022. On the blind spots of model-based evaluation metrics for text generation. arXiv preprint arXiv:2212.10020.
Amr Hendy, Mohamed Abdelrehim, Amr Sharaf, Vikas Raunak, Mohamed Gabr, Hitokazu Matsushita, Young Jin Kim, Mohamed Afify, and Hany Hassan Awadalla. 2023. How good are gpt models at machine translation? a comprehensive evaluation. *arXiv* preprint arXiv:2302.09210.
Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 427–431, Valencia, Spain. Association for Computational Linguistics.
Tom Kocmi, Rachel Bawden, Ondřej Bojar, Anton Dvorkovich, Christian Federmann, Mark Fishel, Thamme Gowda, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Rebecca Knowles, Philipp Koehn, Christof Monz, Makoto Morishita, Masaaki Nagata, Toshiaki Nakazawa, Michal Novák, Martin Popel, and Maja Popović. 2022. Findings of the 2022 conference on machine translation (WMT22). In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 1–45, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. 2022. Holistic evaluation of language models. *arXiv preprint arXiv:2211.09110*.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002a. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002b. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Maja Popović. 2015. chrF: character n-gram F-score for automatic MT evaluation. In *Proceedings of the Tenth Workshop on Statistical Machine Translation*, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics.
Vikas Raunak, Matt Post, and Arul Menezes. 2022.
Salted: A framework for salient long-tail translation error detection.
Ricardo Rei, José G. C. de Souza, Duarte Alves, Chrysoula Zerva, Ana C Farinha, Taisiya Glushkova, Alon Lavie, Luisa Coheur, and André F. T. Martins.
2022. COMET-22: Unbabel-IST 2022 submission for the metrics shared task. In Proceedings of the
Seventh Conference on Machine Translation (WMT),
pages 578–585, Abu Dhabi, United Arab Emirates
(Hybrid). Association for Computational Linguistics.
Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT
evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics.
Prateek Saxena and Soma Paul. 2020. Epie dataset: A
corpus for possible idiomatic expressions.
Moritz Schaeffer and Michael Carl. 2014. Measuring the cognitive effort of literal translation processes.
In *Proceedings of the EACL 2014 Workshop on Humans and Computer-assisted Translation*, pages 29–
37, Gothenburg, Sweden. Association for Computational Linguistics.
Andrea Schioppa, David Vilar, Artem Sokolov, and Katja Filippova. 2021. Controlling machine translation for multiple attributes with additive interventions.
In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 6676–6696, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Simone Tedeschi and Roberto Navigli. 2022. MultiNERD: A multilingual, multi-genre and fine-grained dataset for named entity recognition (and disambiguation). In *Findings of the Association for Computational Linguistics: NAACL 2022*, pages 801–812, Seattle, United States. Association for Computational Linguistics.
Chau Tran, Shruti Bhosale, James Cross, Philipp Koehn, Sergey Edunov, and Angela Fan. 2021. Facebook AI's WMT21 news translation task submission. In Proceedings of the Sixth Conference on Machine Translation, pages 205–215, Online. Association for Computational Linguistics.
David Vilar, Markus Freitag, Colin Cherry, Jiaming Luo, Viresh Ratnakar, and George Foster. 2022. Prompting palm for translation: Assessing strategies and performance.
Chenyun Wu, Zhe Lin, Scott Cohen, Trung Bui, and Subhransu Maji. 2020. Phrasecut: Language-based image segmentation in the wild. In *Proceedings of* the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10216–10225.
Jianing Zhou, Hongyu Gong, and Suma Bhat. 2021.
PIE: A parallel idiomatic expression corpus for idiomatic sentence generation and paraphrasing. In Proceedings of the 17th Workshop on Multiword Expressions (MWE 2021), pages 33–48, Online. Association for Computational Linguistics.
Figure 2: Annotation Interface Screenshot for Human Evaluation. The translations are randomized between the systems under evaluation to prevent biased evaluation.
## A Human Annotation Interface
We use the annotation interface in Figure 2, in which the annotators are asked to rate the two translations. The bilingual and native speaker annotators were recruited in-house.
## B Synthetic Dataset Details
Synthetic Dataset 1: For each of the three expression types, 100 synthetic sentences are generated.
Figures 3, 4, 5 present examples. The sources of the named entities and descriptive phrase expressions are MultiNERD (Tedeschi and Navigli, 2022) and PhraseCut (Wu et al., 2020) datasets respectively.
Prompt: Q: Generate a sentence containing the idiom: **a short fuse**, in the form of a news article sentence. \n A:
Output: The man was known to have *a short* fuse*, and often exploded into fits of anger without* warning.
Figure 3: **Synthetic Dataset 1**: Example (**Idiom**)
Prompt: Q: Generate a sentence containing the entity: **Wolfgang Amadeus Mozart**, in the form of a news article sentence. \n A:
Output: **Wolfgang Amadeus Mozart***, a world* renowned composer, died today at the age of 35.
Figure 4: **Synthetic Dataset 1**: Example (**Entity**)
Synthetic Dataset 2: Figure 6 presents an example of the prompt as well as for a generated synthetic sentence containing two idioms.
Prompt: Q: Generate a sentence containing the phrase: **white chair**, in the form of a news article sentence. \n A:
Output: The **white chair** *was found to be comfortable by the majority of the participants in the* study.
Figure 5: **Synthetic Dataset 1**: Example (**Phrase**)
Prompt: Q: Generate a sentence using the two idioms: off the wall, **claim to fame** in the form of a news article sentence. \n A:
Output: The company's **off-the-wall** *marketing* campaign was its *claim to fame*.
Figure 6: **Synthetic Dataset 2**: Example (2 **Idioms**)
## C Alignments And Literalness
Schaeffer and Carl (2014) find that more alignment crossings (which are measured by the non-monotonicity metric) between the source and the translation correspond to the extra cognitive effort (measured using the gaze time of human translators) required to process non-literal translations. This links alignment crossings (the non-monotonicity measure is a normalized alignment-crossing count) with greater non-literalness.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 5 A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Section 3, 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 3, 4

The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3, 4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 3, 4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 3, 4 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Appendix B
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Appendix B
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Appendix B
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
stammbach-etal-2023-environmental | Environmental Claim Detection | https://aclanthology.org/2023.acl-short.91 | To transition to a green economy, environmental claims made by companies must be reliable, comparable, and verifiable. To analyze such claims at scale, automated methods are needed to detect them in the first place. However, there exist no datasets or models for this. Thus, this paper introduces the task of environmental claim detection. To accompany the task, we release an expert-annotated dataset and models trained on this dataset. We preview one potential application of such models: We detect environmental claims made in quarterly earning calls and find that the number of environmental claims has steadily increased since the Paris Agreement in 2015. | # Environmental Claim Detection
Dominik Stammbach, ETH Zurich, [email protected]
Nicolas Webersinke, FAU Erlangen-Nuremberg, [email protected]
Julia Anna Bingler, Council on Economic Policies, [email protected]
Mathias Kraus, FAU Erlangen-Nuremberg, [email protected]
Markus Leippold, University of Zurich, [email protected]
## Abstract
To transition to a green economy, environmental claims made by companies must be reliable, comparable, and verifiable. To analyze such claims at scale, automated methods are needed to detect them in the first place. However, there exist no datasets or models for this.
Thus, this paper introduces the task of environmental claim detection. To accompany the task, we release an expert-annotated dataset and models trained on this dataset. We preview one potential application of such models: We detect environmental claims made in quarterly earning calls and find that the number of environmental claims has steadily increased since the Paris Agreement in 2015.
## 1 Introduction
In the face of climate change, we witness a transition towards a more sustainable and green economy. This change is driven by changes in regulation, public opinion, and investor attitudes. For example, global assets managed under a sustainability label are on track to exceed $53 trillion by 2025, more than a third of total assets under management. However, unfortunately, the boom has been accompanied by rampant greenwashing, with companies boasting about their environmental credentials.1 Because of this surge in environmental claims and to protect consumers, initiatives on substantiating green claims are developed.2 Due to an ever-growing amount of text, there is a need for automated methods to detect environmental claims. Detecting such claims at scale can assist policy-makers, regulators, journalists, activists, the research community, and an informed public in analyzing and scrutinizing environmental claims made by companies and facilitating the transition to a green economy.
1See, e.g., The Economist, May 22nd, 2021.
2For example an EU initiative on green claims:
https://ec.europa.eu/environment/eussd/smgp/ initiative_on_green_claims.htm
Environmental claim: A total population of 6148 is getting the benefit of safe potable drinking water due to this initiative.
Environmental claim: Hydro has also started working on several initiatives to reduce direct CO2 emission in primary aluminium production.
Negative example: Generally, first of all, our Transmission department is very busy, both gas and electric transmission, I should say, meeting the needs of our on-network customers.
Negative example: Teams are thus focused on a shared objective in terms of growth and value creation.
Figure 1: Environmental Claims and Negative Examples from our dataset.
Thus, we introduce the task of environmental claim detection. Environmental claim detection is a sentence-level classification task with the goal of predicting whether a sentence contains an environmental claim or not. Often, environmental claims are made in a clear and concise matter on a sentence level, with the intention to convey to a consumer or stakeholder that a company or product is environmentally friendly.
To facilitate future research on environmental claim detection, we release an expert-annotated dataset containing real-world environmental claims and models which can be used by practitioners. For constructing the dataset, we were inspired by the European Commission (EC), which defines such claims as follows: *Environmental claims refer to* the practice of suggesting or otherwise creating the impression (in the context of a commercial communication, marketing or advertising) that a product or a service is environmentally friendly (i.e., it has a positive impact on the environment) or is less damaging to the environment than competing goods or services.
While such claims can be truthful and made in good faith, boasting about environmental credentials can also be monetized (de Freitas Netto et al., 2020). For example, consumers are willing to spend more money on environmentally friendly products (Nielsen Media Research, 2015). The Commission states that if environmental claims are too vague, unclear, or misleading, we are confronted with an instance of "greenwashing" (this definition is given in the same Commission Staff Working Document).

3 From the Commission Staff Working Document, Guidance on the implementation/application of Directive 2005/29/EC on Unfair Commercial practices, Brussels, 3 December 2009 SEC(2009) 1666. See section 2.5 on misleading environmental claims.
We situate environmental claim detection at the intersection of claim detection (e.g., Arslan et al.,
2020) and pledge detection (Subramanian et al.,
2019; Fornaciari et al., 2021). An environmental claim is typically made to increase the environmental reputation of a firm or a product. We show that models trained on the current claim and pledge detection datasets perform poorly at detecting environmental claims, hence the need for this new dataset. We make our dataset, code and models publicly available.4 Lastly, we envision computerassisted detection of greenwashing in future work, i.e., the automatic determination if an environmental claim is false, too vague, non-verifiable, or misleading. To make progress on automated greenwashing detection, it is mandatory to first detect environmental claims at scale.
## 2 Related Work
This work is part of an ongoing effort at the intersection of environmental and climate changerelated topics and natural language processing
(Stede and Patz, 2021). Resulting datasets and methods can help regulators, policy-makers, journalists, the research community, activists, and an informed public investigate such topics at scale with the help of computer assistance. Methods include ClimateBERT (Webersinke et al., 2021), and ClimateGPT (Vaghefi et al., 2022), two language models pre-trained on climate-related text. NLP tasks and datasets include climate change topic detection (Varini et al., 2020) and detecting media stance on global warming (Luo et al., 2020).
Duong et al. (2022) collect climate change opinions at scale from social platforms, Al-Rawi et al.
(2021) analyze fake news Tweets around climate change. In a similar direction, Coan et al. (2021)
analyze contrarian claims about climate change and
(Piskorski et al., 2022) explore data augmentation techniques for climate change denial classification.
4 We host all code, data and models on https://github.com/dominiksinsaarland/environmental_claims. The dataset can also be accessed as a Hugging Face dataset, and our model is available on the Hugging Face model hub.
| split | # examples | mean length | claims (%) |
|---------|--------------|---------------|--------------|
| train | 2117 | 24.4 | 0.25 |
| dev | 265 | 24.2 | 0.25 |
| test | 265 | 24.9 | 0.25 |
| all | 2647 | 24.5 | 0.25 |
Table 1: Dataset Statistics

Further, there exists work on claim verification of climate change-related claims (Diggelmann et al., 2020), detecting media stance on global warming
(Luo et al., 2020), collecting climate change opinions at scale from social platforms (Duong et al.,
2022), and finally, the analysis of regulatory disclosures (Friederich et al., 2021; Kölbel et al., 2022).
In this broader context of applying NLP methods to climate change-related topics, we situate environmental claim detection at the intersection of claim spotting and pledge detection, covering the domain of text produced by companies with the goal of boosting their environmental credentials.
Claim spotting is the task of finding fact-check worthy claims (Arslan et al., 2020; Atanasova et al.,
2018; Barron-Cedeno et al., 2020). Pledge detection aims to detect pledges made in, for example, political campaigns (Subramanian et al., 2019; Fornaciari et al., 2021). Environmental claims state an environmental benefit (claim) or convey the intention (pledge) for a material impact, i.e., some environmental benefit, which pleases the audience
(consumers or stakeholders) of the claim.
## 3 Dataset
Our dataset contains environmental claims made by listed companies. We collected text from sustainability reports, earning calls, and annual reports of listed companies and annotated 3'000 sentences. After discarding tied annotations, our resulting dataset contains 2'647 examples.5 We provide dataset statistics in Table 1 and a text length histogram in Appendix Figure 4.
The dataset is annotated by 16 domain experts.6
| model | pr | rc | F1 | acc | pr | rc | F1 | acc | pr | rc | F1 | acc |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Majority baseline | 0.0 | 0.0 | 0.0 | 74.9 | 0.0 | 0.0 | 0.0 | 74.7 | 0.0 | 0.0 | 0.0 | 75.1 |
| Random baseline | 26.2 | 53.2 | 35.1 | 50.5 | 27.9 | 58.2 | 37.7 | 51.3 | 26.2 | 46.6 | 33.5 | 53.5 |
| ClaimBuster RoBERTa | 27.9 | 62.6 | 38.6 | 49.9 | 27.3 | 52.7 | 35.9 | 47.5 | 25.3 | 51.4 | 33.9 | 45.7 |
| Pledge Detection RoBERTa | 26.2 | 31.7 | 28.7 | 60.4 | 27.6 | 28.4 | 28.0 | 59.2 | 24.1 | 29.2 | 26.4 | 55.8 |
| TF-IDF SVM | 71.1 | 65.9 | 68.4 | 84.7 | 67.7 | 63.6 | 65.6 | 83.4 | 68.1 | 70.1 | 69.1 | 84.2 |
| Character n-gram SVM | 76.8 | 63.6 | 69.6 | 86.0 | 69.2 | 68.2 | 68.7 | 84.5 | 75.0 | 67.2 | 70.9 | 86.0 |
| DistilBERT | 79.9 | 89.0 | 84.2 | 91.6 | 77.5 | **93.9** | 84.9 | 91.7 | 74.4 | **95.5** | 83.7 | 90.6 |
| ClimateBERT | 80.1 | 90.1 | 84.8 | 91.9 | 76.9 | 90.9 | 83.3 | 90.9 | 76.5 | 92.5 | 83.8 | 90.9 |
| RoBERTabase | 77.8 | **91.3** | 84.0 | 91.3 | 74.7 | **93.9** | 83.2 | 90.6 | 73.3 | 94.0 | 82.4 | 89.8 |
| RoBERTalarge | **83.1** | 90.1 | 86.4 | 92.9 | 80.5 | 93.9 | 86.7 | 92.8 | **78.5** | 92.5 | **84.9** | **91.7** |

Table 2: Environmental claim detection results. Each block of four columns reports precision (pr), recall (rc), F1 and accuracy (acc); the blocks correspond to 5-fold cross-validation (left), the development set (middle), and the test set (right).
The authors drafted annotation guidelines in an iterative process and added examples of clear and borderline environmental claims to the guidelines.
In Appendix B, we list the complete guidelines available to the annotators, along with examples and rationales that the authors discussed in pilot annotation rounds.
To extract the sentences annotated in our dataset, we use a preliminary model to sample candidate sentences from various text sources produced by firms. Furthermore, we randomly sample sentences from different clusters obtained with k-means to increase the coverage of the domain. We describe the sampling process of the dataset in detail in Appendix A and provide further information on the data sources in Appendix C.
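As an illustration of the diversity sampling step, candidate sentences can be clustered with k-means over sentence embeddings and sampled per cluster. The embedding model, the number of clusters, and the per-cluster sample size below are assumptions, not the exact configuration used for the dataset.

```python
import random
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

sentences = ["..."]   # candidate sentences from reports and earning calls

embedder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed embedding model
embeddings = embedder.encode(sentences)

kmeans = KMeans(n_clusters=min(20, len(sentences)), random_state=0).fit(embeddings)

by_cluster = {}
for sentence, cluster in zip(sentences, kmeans.labels_):
    by_cluster.setdefault(cluster, []).append(sentence)

# draw a few sentences per cluster to increase domain coverage for annotation
sampled = [s for members in by_cluster.values()
           for s in random.sample(members, min(5, len(members)))]
```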
While we do not release a large-scale dataset, this is the result of a conscious decision to prioritize quality over quantity. We employed domain experts to annotate the data, which results in costly annotations. In Appendix D, we show that the performance of models converges after being trained on more than 60% of the training set, and we find diminishing marginal utility of including more sentences. Hence our decision to stop annotation here and release an annotated dataset with 2'647 examples.
We assigned each sentence to four annotators.
The annotations are aggregated by majority vote.
60% of the 3'000 samples were decided unanimously by the annotators, and 88.3% of the annotations made were part of a majority decision. 353 sentences received tied annotations (11.7% of the samples), and we discarded these examples from the dataset. The overall inter-annotator agreement measured in Krippendorff's alpha is 0.47, indicating moderate agreement.
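A minimal sketch of the aggregation step (majority vote over four binary annotations, discarding 2-2 ties) is shown below; the variable names and toy labels are illustrative.

```python
from collections import Counter

def aggregate(labels):
    """Majority vote over binary annotations; return None for ties."""
    top_label, top_count = Counter(labels).most_common(1)[0]
    if top_count * 2 == len(labels):   # e.g. a 2-2 split among four annotators
        return None
    return top_label

annotations = {
    "sent_1": [1, 1, 1, 0],   # kept, labelled as an environmental claim
    "sent_2": [1, 0, 1, 0],   # tie -> discarded
}

dataset = {}
for sentence, labels in annotations.items():
    label = aggregate(labels)
    if label is not None:
        dataset[sentence] = label

print(dataset)   # {'sent_1': 1}
```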
## 4 Experiments
We conduct two types of experiments: (1) We examine the performance of various models on our dataset, among them pre-trained claim and pledge detection models and fine-tuned environmental claim detection transformer models (e.g., Devlin et al., 2019; Liu et al., 2019; Sanh et al., 2019; Webersinke et al., 2021). (2) We apply our models to text produced by listed companies, which leads to a small case study demonstrating one of the intended use cases of the dataset.
## 4.1 Environmental Claim Detection Models
We report various metrics on a 5-fold crossvalidation split of the whole dataset, the development, and the test set in Table 2. We present two poorly performing baselines: *majority*, where we assign the not-a-claim label to all examples, and *random*, where we randomly assign one of the two labels to each example. Next, we fine-tune a RoBERTabase model on the ClaimBuster dataset
(Arslan et al., 2020), and use this model to detect environmental claims in the dataset.7 While achieving rather high recall, the model does not cope well with the domain shift and fails to detect environmental claims reliably. Similar findings hold for a RoBERTabase model trained on a Pledge Detection dataset (Subramanian et al., 2019).8 These results highlight the need for a dedicated dataset.
Furthermore, we train two SVM models. The first one uses tf-idf bag-of-word features, the second is based on character n-gram features. Both models achieve an acceptable F1 score between 65% and 71% on all dataset splits. These results indicate that environment-related keywords or n-grams are somewhat predictive of whether a sentence is an environmental claim or not. However, all transformer models explored in this study outperform the SVMs, hence the presence of environmental keywords alone is not sufficient for predicting such claims. Especially for recall, we find a large gap of up to 25 percentage points between transformer and SVM models. We interpret this gap as evidence that not all environmental claims contain distinguishing environmental keywords.

7 We train the model to distinguish fact-check-worthy claims vs. all other claims. The model works exceptionally well on the ClaimBuster test set with a micro-F1 of 97.9% and a macro-F1 of 97.0%.

8 The model achieves a 67% F1 score and 78% accuracy on a held-out split of the Pledge Detection dataset but also fails to adapt to detecting environmental claims.
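A minimal sketch of the two SVM baselines is shown below; the n-gram range and other hyperparameters are assumptions and may differ from the configuration used in our experiments.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# tf-idf bag-of-words SVM
tfidf_svm = make_pipeline(TfidfVectorizer(), LinearSVC())

# character n-gram SVM (the n-gram range is an assumption)
char_svm = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)), LinearSVC())

# toy training data in the style of Figure 1
train_sentences = [
    "Hydro has started several initiatives to reduce direct CO2 emissions.",
    "Teams are thus focused on a shared objective in terms of growth.",
]
train_labels = [1, 0]   # 1 = environmental claim, 0 = not a claim

for model in (tfidf_svm, char_svm):
    model.fit(train_sentences, train_labels)

print(tfidf_svm.predict(["We aim to cut emissions by 50% by 2030."]))
```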
Lastly, we fine-tune various transformer models
(Liu et al., 2019; Sanh et al., 2019; Webersinke et al., 2021). They all achieve an F1 score higher than 82% on all dataset splits, a vast performance increase compared to the other models examined so far. We observe only minor differences between these models. The biggest model, RoBERTalarge, achieves the best scores overall, followed by ClimateBERT, a DistilBERT-like language model further pre-trained on over 1.6 million climate-related paragraphs. Hence, further pre-training on climate-related text seems beneficial for detecting environmental claims.
For training our models, we use Hugging Face
(Wolf et al., 2020) and standard RoBERTa hyperparameters. We use the Adam optimizer with a learning rate of 2e-5, a batch size of 16, and train models for 3 epochs. To minimize the compute and environmental footprint of our experiments, and because results were consistent over different dataset splits, we did not explore other hyperparameters in more detail and report only results of single runs.
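A minimal fine-tuning sketch with these hyperparameters is shown below; the CSV file names and the column names ('text', 'label') are placeholders for the released splits, and 'distilroberta-base' stands in for any of the evaluated checkpoints.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Placeholder file names; each split contains a sentence and a binary label.
dataset = load_dataset("csv", data_files={"train": "train.csv", "validation": "dev.csv"})

model_name = "distilroberta-base"   # stand-in for any of the evaluated checkpoints
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="environmental-claim-detection",
    learning_rate=2e-5,                 # hyperparameters as reported above
    per_device_train_batch_size=16,
    num_train_epochs=3,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"])
trainer.train()
```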
## 4.2 Earning Calls
We use our trained model to detect environmental claims in corporate earning calls between 2012 and 2020. These are conference calls between the management of a publicly traded company, analysts, investors, and the media to discuss the company's financial results and other topics for a given reporting period (mainly quarterly). The conference calls consist of different segments, of which the segment with questions and answers is the most interesting for our purposes. Therefore, we focus on the management responses, which consist of 12 million sentences from 3,361 unique companies. All earnings conference call transcripts are obtained from Refinitiv Company Events Coverage. Due to the size of the data and computational constraints, we use our ClimateBERT model fine-tuned to detect environmental claims instead of the RoBERTalarge model.
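Scoring the 12 million earning-call sentences then amounts to batched inference with the fine-tuned classifier. The sketch below uses the transformers pipeline API; the model identifier is a placeholder for the released classifier, and the positive label name depends on the model configuration.

```python
from transformers import pipeline

# "path/to/environmental-claims-model" is a placeholder for the fine-tuned
# classifier released with this paper (see footnote 4).
classifier = pipeline("text-classification", model="path/to/environmental-claims-model")

sentences = [
    "We have started several initiatives to reduce direct CO2 emissions.",
    "Our Transmission department is very busy meeting on-network demand.",
]
predictions = classifier(sentences)

# The positive label name depends on the model config (e.g. "LABEL_1" or "claim").
share_claims = sum(p["label"] in ("LABEL_1", "claim") for p in predictions) / len(sentences)
print(share_claims)
```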
We would expect that the number of environmental claims made by corporations and business leaders has steadily increased since the Paris Agreement in 2015. In Figure 2, we find that this is indeed the case. The number of environmental claims is not only increasing, but the increase is also accelerating: in 2019, the share of environmental claims is twice as high as in 2015. Not only is the number of environmental claims made in earning calls increasing, the share of companies making such claims has also increased by 33%, and in 2019, one in ten companies makes at least one environmental claim in the answer sections of an earning call.
left), and the most important words for environmental claims (on the right). It is evident that the sentences classified as claims contain more environmental-related keywords; We see that these keywords cover different environmental aspects, e.g., recycling and waste, carbon and emissions, renewables, water, etc. In Appendix Table 6, we additionally list the 5 highest and lowest scoring sentences based on our model. Our model effectively identifies environmental claims as the predominant category at the upper end of the distribution, whereas it appears that such claims are absent in the lower end of the distribution.
This small case study illustrates one of the intended use cases of our dataset and the associated models: We present a tool that allows us to detect environmental claims at scale. Having access to environmental claims at scale makes it possible to analyze and scrutinize them in future work.
## 5 Conclusion
The vast and ever-growing volume of corporate disclosures, regulatory filings, and statements in the news calls for an algorithmic approach to detect environmental claims made by companies at scale. Thus, we introduce the NLP task of detecting environmental claims, a dataset containing such claims and associated models which can detect these claims in the wild. Our dataset is annotated by domain experts and thus of high quality. We describe the dataset and its construction process and present various models for detecting environmental claims in our dataset and a small case study.
We envision several directions for future work.
First, we plan to investigate "greenwashing", the practice of making a false, vague, unclear, or misleading environmental claim. To make progress
on this front, it is mandatory that we can detect environmental claims in the first place. Second, models trained on detecting environmental claims have merits of their own, as previewed in our case study. We plan to explore more such applications in detail, e.g., analyzing annual reports and TCFD9 reports at scale. For example, it would be interesting to see in which sections of TCFD reports firms make environmental claims. Lastly, we expect an increase of contributions at the intersection of environmental topics, climate change, and NLP in the near future. This work contributes to such efforts.
## Limitations
We find several limitations in this work. First, we acknowledge that the technical novelty of this work is limited: We introduce a sequence classification task, and we investigate rather standard models in our experiment section (i.e., state-of-the-art transformer language models). Nevertheless, we believe that there is a gap in the literature for the task presented in this work, hence our introduction of the environmental claim detection task, the dataset, and models.
Second, we collect data from sustainability reports, earning calls, and annual reports. However, this does not cover the universe of text where environmental claims are made, e.g., company websites and product descriptions. Also, environmental claims can be made about environmental improvements on a wide range of topics such as carbon emissions, water pollution, and recycling, among others. We discussed creating different datasets, where each dataset is dedicated to one specific issue. However, we leave this to future work. Third, sometimes it is necessary to have access to more context to determine whether a sentence is an environmental claim. We discussed whether it would be beneficial to annotate whole paragraphs instead. However, the trade-off would be exploding annotation work and costs, hence our decision to introduce environmental claims as a sentence-level classification task (and we specifically asked annotators to reject ambiguous cases as environmental claims). Nevertheless, given an unlimited budget, we would have pursued annotating whole paragraphs instead (or annotating all environmental claims in a paragraph).

9 Task Force on Climate-Related Financial Disclosures
Our data sources, e.g., sustainability reports, are mostly published by European and US-listed companies, which is reflected in our dataset. We crawled these reports from the SEC10, hence our dataset contains mostly claims made by (a) big firms and (b) firms from developed countries. It is conceivable that smaller firms and firms from nondeveloped countries make different environmental claims, and models trained on our dataset might not be suitable to detect these claims.
Moreover, our work is subject to all concerns raised in the Ethics Statement below. We find it important to keep all these perspectives in mind when reading and discussing our work.
## Ethics Statement
Intended Use: This dataset will benefit journalists, activists, the research community, and an informed public analyzing environmental claims made by listed companies at scale. Also, we see this as a first step towards algorithmic greenwashing detection using NLP methods. It might also be useful to policy-makers and regulators in both the financial sector and the legal domain. Next, we hope companies are inspired by our work to produce more carefully drafted environmental claims. To conclude, we envision that the dataset and related models bring a large positive impact by encouraging truly environmentally friendly actions and less verbose boasting about environmental credentials.
Misuse Potential: Although we believe the intended use of this research is largely positive, there exists the potential for misuse. For example, it is possible that for-profit corporations will exploit AI models trained on this dataset while drafting
environmental claims.
Model Bias: Although the performance of NLP
models usually achieves an F1 score of above 80%,
it is widely known that ML models suffer from picking up spurious correlations from data. Furthermore, it has been shown that large pre-trained language models such as ClimateBERT suffer from inherent biases present in the pre-training data leading to biased models - and we believe our models presented in this work also suffer from these biases.
Data Privacy: The data used in this study are mostly public textual data provided by companies and public databases. There is no user-related data or private data involved.
Annotator Salary: We paid standard research assistant salaries of around $30 per hour, which is common practice at the University of Zurich. We were upfront in disclosing to annotators that their annotations will lead to a dataset and models which can automatically detect environmental claims. We found that this goal motivated annotators. We speculate (and hope) annotators interpreted the dataset creation process and the goal of releasing the resulting dataset and models as an AI4Good application.
The feedback was overwhelmingly positive, and many annotators have asked whether it is possible to participate in follow-up annotation work related to greenwashing detection.
## References
Ahmed Al-Rawi, Derrick O'Keefe, Oumar Kane, and Aimé-Jules Bizimana. 2021. Twitter's fake news discourses around climate change and global warming.
Front. Commun., 6.
Fatma Arslan, Naeemul Hassan, Chengkai Li, and Mark Tremayne. 2020. A benchmark dataset of checkworthy factual claims. *Proceedings of the International AAAI Conference on Web and Social Media*,
14(1):821–829.
Pepa Atanasova, Alberto Barron-Cedeno, Tamer Elsayed, Reem Suwaileh, Wajdi Zaghouani, Spas Kyuchukov, Giovanni Da San Martino, and Preslav Nakov. 2018. Overview of the clef-2018 checkthat!
lab on automatic identification and verification of political claims. task 1: Check-worthiness.
Alberto Barron-Cedeno, Tamer Elsayed, Preslav Nakov, Giovanni Da San Martino, Maram Hasanain, Reem Suwaileh, Fatima Haouari, Nikolay Babulkov, Bayan Hamdan, Alex Nikolov, Shaden Shaar, and Zien Sheikh Ali. 2020. Overview of checkthat! 2020:
Automatic identification and verification of claims in social media.
Travis G. Coan, Constantine Boussalis, John Cook, and Mirjam O. Nanko. 2021. Computer-assisted classification of contrarian claims about climate change.
Scientific Reports, 11(1):22320.
Sebastião Vieira de Freitas Netto, Marcos Felipe Falcão Sobral, Ana Regina Bezerra Ribeiro, and Gleibson Robert da Luz Soares. 2020. Concepts and forms of greenwashing: a systematic review. *Environmental* Sciences Europe, 32(1):19.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Thomas Diggelmann, Jordan Boyd-Graber, Jannis Bulian, Massimiliano Ciaramita, and Markus Leippold. 2020. Climate-fever: A dataset for verification of real-world climate claims. *arXiv preprint* arXiv:2012.00614.
Cuc Duong, Qian Liu, Rui Mao, and Erik Cambria.
2022. Saving earth one tweet at a time through the lens of artificial intelligence. In *2022 International* Joint Conference on Neural Networks (IJCNN), pages 1–9.
Tommaso Fornaciari, Dirk Hovy, Elin Naurin, Julia Runeson, Robert Thomson, and Pankaj Adhikari.
2021. "we will reduce taxes" - identifying election pledges with language models. In Findings of the Association for Computational Linguistics: ACLIJCNLP 2021, pages 3406–3419, Online. Association for Computational Linguistics.
David Friederich, Lynn H. Kaack, Alexandra Luccioni, and Bjarne Steffen. 2021. Automated Identification of Climate Risk Disclosures in Annual Corporate Reports. Papers 2108.01415, arXiv.org.
Daniel Hershcovich, Nicolas Webersinke, Mathias Kraus, Julia Anna Bingler, and Markus Leippold.
2022. Towards climate awareness in nlp research.
Julian F Kölbel, Markus Leippold, Jordy Rillaerts, and Qian Wang. 2022. Ask bert: How regulatory disclosure of transition and physical climate risks affects the cds term structure. *Journal of Financial Econometrics* (forthcoming).
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692.
Yiwei Luo, Dallas Card, and Dan Jurafsky. 2020. Detecting stance in media on global warming. In Findings of the Association for Computational Linguistics:
EMNLP 2020, pages 3296–3315, Online. Association for Computational Linguistics.
Nielsen Media Research. 2015. https://nielseniq.com/global/en/insights/analysis/2015/the-sustainability-imperative-2/. Accessed: 2022-07-04.
Jakub Piskorski, Nikolaos Nikolaidis, Nicolas Stefanovitch, Bonka Kotseva, Irene Vianini, Sopho Kharazi, and Jens P. Linge. 2022. Exploring data augmentation for classification of climate change denial: Preliminary study. In *Proceedings of Text2Story*
- Fifth Workshop on Narrative Extraction From Texts held in conjunction with the 44th European Conference on Information Retrieval (ECIR 2022), Stavanger, Norway, April 10, 2022, volume 3117 of CEUR Workshop Proceedings, pages 97–109. CEURWS.org.
Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. *ArXiv*,
abs/1910.01108.
Manfred Stede and Ronny Patz. 2021. The climate change debate and natural language processing. In Proceedings of the 1st Workshop on NLP for Positive Impact, pages 8–18, Online. Association for Computational Linguistics.
Shivashankar Subramanian, Trevor Cohn, and Timothy Baldwin. 2019. Deep ordinal regression for pledge specificity prediction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 1729–1740, Hong Kong, China. Association for Computational Linguistics.
Saeid Vaghefi, Veruska Muccione, Christian Huggel, Hamed Khashehchi, and Markus Leippold. 2022.
Deep climate change: A dataset and adaptive domain pre-trained language models for climate change related tasks. In NeurIPS 2022 Workshop on Tackling Climate Change with Machine Learning.
Francesco Varini, Jordan Boyd-Graber, Massimiliano Ciaramita, and Markus Leippold. 2020. Climatext:
A dataset for climate change topic detection. In *Tackling Climate Change with Machine Learning workshop at NeurIPS 2020*. NeurIPS.
Nicolas Webersinke, Mathias Kraus, Julia Anna Bingler, and Markus Leippold. 2021. Climatebert: A
pretrained language model for climate-related text.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
![8_image_0.png](8_image_0.png)
## A Sample Selection
The basis for selecting samples is documents from four domains of text produced by companies.
We consider TCFD reports, which firms voluntarily self-disclose about their environmental impact and which are not legally binding. Furthermore, we consider annual reports, comprehensive reports about the activities conducted by a firm in a given year. We also consider corporate earnings calls (only the answer sections), which are conference calls between the management of a public company, analysts, investors, and the media to discuss the company's financial results and other business-relevant topics during a given reporting period. Earnings conference call transcripts are obtained from Refinitiv Company Events Coverage (formerly Thomson Reuters StreetEvents). Lastly, we include the language data on environmental risks, targets, and performance from the CDP disclosure questionnaire responses from 2021. We denote the universe of these documents by Dlarge. In Table 3, we show how many sentences we have from each of these sources (first row) and the distribution of these sources in our final dataset (second row).
| Share | TCFD Reports | Annual Reports | CDP | Earning Calls | N |
|----------|--------------|----------------|------|---------------|--------|
| All data | 0.07 | 0.20 | 0.00 | 0.73 | 16 Mio |
| Dataset | 0.21 | 0.41 | 0.01 | 0.37 | 2'647 |

Table 3: Data distribution over different sources (in %), and sentence distribution in our dataset over different sources (in %). The last column indicates the number of overall sentences.
In pilot studies, we decided to keep only sentences with more than 10 and fewer than 40 words.
Shorter sentences are rarely environmental claims, but rather a mix of section titles, filler sentences, and table descriptions. Longer sentences are usually the result of a preprocessing failure.
A random selection of sentences from these documents would lead to a high number of sentences not related to the environment and is thus impracticable. We also decided against using a keyword search to pre-filter Dlarge for two reasons. If we use a keyword set that is too narrow, we might introduce dataset artifacts. On the other hand, if we use a set that is too loose, we might again end up with too many non-climate-related sentences, which is likewise impracticable.
As a remedy, we start with a handpicked selection of 250 environmental claims used in a recent marketing study about greenwashing in French investment funds by 2DII, an independent, non-profit think tank working to align financial markets and regulations with the Paris Agreement goals. We also consider 200 non-environmental claims as negative samples, randomly sampled from company websites. The authors translated them to English (if necessary) and loosely annotated these sentences to double-check their quality and to help come up with annotation guidelines. However, these 450 sentences do not appear in the final version of the dataset. Next, we train a preliminary RoBERTa-base model on this dataset and use this trained model to compute the likelihood of each sentence from Dlarge being an environmental claim. Using this likelihood, we use the following strategy to select samples with a high chance of being environmental claims, samples with a low chance of being environmental claims, and samples that are semantically similar to known claims yet scored very differently by our base transformer model (a code sketch of this procedure follows the list):
1. First, 300 samples were selected that are adjacent to our starting selection of 250 environmental claims in SBERT embedding space (Reimers and Gurevych, 2019), but to which the base transformer model assigned a low score of being an environmental claim.
2. Then, 1500 samples with a score greater than 0.7 from our preliminary transformer model are selected.
3. Next, 500 samples with a score between 0.2 and 0.5 from our preliminary transformer model are selected.
4. Then, we selected 200 samples with a score
lower than 0.2 from our preliminary transformer model.
5. Finally, all encoded samples from SBERT are clustered into 2000 clusters using k-means.
The largest clusters, from which no sample was selected in steps 1-4, are then represented by a random sample from the cluster. This way we increase the coverage of the whole domain by our selected samples. We selected 500 samples with that strategy.
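For concreteness, the selection procedure can be sketched as follows. This is an illustration rather than released code: the SBERT model name, the `score_claims` helper (standing in for our preliminary RoBERTa-base classifier), and the step-1 score threshold are assumptions made for the sketch.

```python
# Sketch of the sample selection strategy over the candidate sentences from D_large.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
sbert = SentenceTransformer("all-MiniLM-L6-v2")                  # any SBERT encoder
emb = sbert.encode(candidates, normalize_embeddings=True)        # candidate sentences
seed_emb = sbert.encode(seed_claims, normalize_embeddings=True)  # 250 starting claims
scores = np.asarray(score_claims(candidates))  # P(claim) from the preliminary classifier

selected = set()

# Step 1: 300 sentences close to the seed claims in SBERT space but scored low.
low = np.flatnonzero(scores < 0.2)                               # threshold assumed
closest = low[np.argsort(-(emb[low] @ seed_emb.T).max(axis=1))]
selected.update(closest[:300].tolist())

# Steps 2-4: score-stratified samples.
def pick(mask, n):
    idx = np.flatnonzero(mask)
    return rng.choice(idx, size=min(n, len(idx)), replace=False).tolist()

selected.update(pick(scores > 0.7, 1500))
selected.update(pick((scores > 0.2) & (scores < 0.5), 500))
selected.update(pick(scores < 0.2, 200))

# Step 5: one random sentence from each of the 500 largest k-means clusters
# that are not represented yet.
labels = KMeans(n_clusters=2000, n_init=10, random_state=0).fit_predict(emb)
covered = {labels[i] for i in selected}
sizes = np.bincount(labels, minlength=2000)
for c in [c for c in np.argsort(-sizes) if c not in covered][:500]:
    selected.add(int(rng.choice(np.flatnonzero(labels == c))))
```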
While we tried to maximize domain coverage using this sampling procedure, given the limited annotation budget, it is likely that we missed lots of utterances of environmental claims. Also, the sample is somewhat biased toward our preliminary model, which we used to sample environmental claims from. Moreover, we did not include all domains of text produced by listed companies. For example, company websites and advertisements are not included in our universe of documents.
## B Annotation Guidelines
Your task is to label sentences. The information we need is whether they are environmental claims (yes or no).
A broad definition for such a claim is given by the European Commission: Environmental claims refer to the practice of suggesting or otherwise creating the impression [...] that a product or a service is environmentally friendly (i.e., it has a *positive* impact on the environment) or is *less damaging* to the environment than competing goods or services
[...]
In our case, claims relate to **products, services**
OR specific corporate environmental performance.
## General Annotation Procedure/Principles:
- You will be presented with a sentence and have to decide whether the sentence contains an **explicit** environmental claim.
- Do not rely on implicit assumptions when you decide on the label. Base your decision on the information that is available within the sentence.
- However, if a sentence contains an abbreviation, you could search online for the meaning of the abbreviation before assigning the label.
- In case a sentence is too technical/complicated and thus not easily understandable, it usually does not suggest to the average consumer that a product or a service is environmentally friendly and thus can be rejected.
- Likewise, if a sentence is not specific about having an environmental impact for a product or service, it can be rejected.
- Final goal: We will train a classifier on these annotations and apply it to massive amounts of financial text to explore which companies/sectors at which time make how many environmental claims. Does the number of environmental claims correlate with sectors/companies reducing their environmental footprint?
- The annotation task is not trivial in most cases.
Borderline decisions are often the case. If you are uncertain about your decisions, copy-paste the sentence and add an explanatory note to the sentence. We will then cross-check it in case needed.
In Tables 4 and 5, we show examples that were discussed within the author team.
We presented each sentence in our sample to four annotators to determine a label. In the case of a clear majority of the annotators for a sentence
(4:0, 3:1, 1:3, or 0:4), the sentence is annotated as such. In case of no majority (2:2), the sentence is discarded and excluded from our final dataset. The rationale behind this is that annotating a sentence as *positive* accuses its writer of claiming something.
This accusation should be agreed on by the majority of readers (in dubio pro reo - in doubt, rule for the accused).
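The aggregation rule itself is simple; a minimal sketch is shown below (`annotations` is assumed to be a list of (sentence, votes) pairs, and the names are ours, not from a released codebase):

```python
# Four yes/no annotations per sentence; 2:2 ties are discarded.
def majority_label(votes):              # votes: e.g. [True, True, False, True]
    yes = sum(votes)
    if yes >= 3:
        return True                     # 4:0 or 3:1 in favour of "environmental claim"
    if yes <= 1:
        return False                    # 0:4 or 1:3 against
    return None                         # 2:2 -> excluded from the final dataset

labelled = [(sent, majority_label(votes)) for sent, votes in annotations]
dataset = [(sent, label) for sent, label in labelled if label is not None]
```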
## C Data Sources
We crawled TCFD and annual reports from the SEC (the U.S. Securities and Exchange Commission), specifically from www.annualreports.com and www.responsibilityreports.com. Given that sustainability reports are mostly published by European and US firms, global coverage in our sample is not even but skewed toward firms in developed countries. For the reports we collected, we show the distribution of countries in Figure 5a and of industries in Figure 5b. For the earnings calls data, we show a distribution over sectors in Figure 5c.
![10_image_0.png](10_image_0.png)
![10_image_2.png](10_image_2.png)
## D Dataset Size
Figure 6 shows that model performance as a function of dataset size converges quickly. We fine-tune a ClimateBERT model on different subsets of the training data, e.g. on 10%, on 20%, etc. In Figure 6, we find diminishing marginal utility after having fine-tuned a model on more than 60% of the dataset.
![10_image_1.png](10_image_1.png)
Hence, we believe that our dataset is sufficient in size and we do not expect model performance to increase drastically anymore if we were to annotate more data points.
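The convergence experiment boils down to the following loop. This is a sketch rather than our exact configuration: the checkpoint name and hyperparameters are placeholders, `train_dataset`/`dev_dataset` are assumed to be `datasets.Dataset` objects with `text` and `label` columns, and a `compute_metrics` function would be added to report F1 instead of the evaluation loss.

```python
# Fine-tune on growing fractions of the training split and evaluate each model.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "climatebert/distilroberta-base-climate-f"  # a ClimateBERT checkpoint
tok = AutoTokenizer.from_pretrained(checkpoint)

def encode(batch):
    return tok(batch["text"], truncation=True, padding="max_length", max_length=64)

train = train_dataset.map(encode, batched=True)
dev = dev_dataset.map(encode, batched=True)

results = {}
for frac in [i / 10 for i in range(1, 11)]:
    subset = train.shuffle(seed=0).select(range(int(frac * len(train))))
    model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
    args = TrainingArguments(output_dir=f"runs/frac{frac:.1f}", num_train_epochs=3,
                             per_device_train_batch_size=16, report_to="none")
    trainer = Trainer(model=model, args=args, train_dataset=subset, eval_dataset=dev)
    trainer.train()
    results[frac] = trainer.evaluate()
```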
## E Environmental Impact
In this section, following Hershcovich et al. (2022), we describe the environmental impact of our dataset construction and experiments. All experiments were conducted on a carbon-neutral computing cluster in Switzerland, using a single Nvidia GeForce GTX 1080 Ti GPU with a TDP of 250 W. While the computing cluster we performed the experiments on is nominally carbon-neutral, there are still emissions for the production and shipping of the hardware used. Also, the energy used for our experiments could otherwise replace power produced from fossil fuels elsewhere. Therefore, we calculate emissions based on the country's energy mix.
Running the main experiments took less than 1 hour combined. Detecting environmental claims in the quarterly earning calls took an additional 3 hours. For preliminary experiments, we trained a battery of transformer models on loosely annotated data (we used scores assigned by our "best" model to sample the sentences in the dataset). This took roughly 48 hours. Also, we embedded all sentences with SBERT for two additional hours. In total, we spent about 60 hours of computation time.
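These runtimes, together with the power draw and energy mix reported in Table 7 below, are enough to reproduce the emission estimates in the climate performance card. The back-of-the-envelope sketch below uses the 0.3 kW GPU/CPU consumption figure from the card (not the GPU TDP alone):

```python
# Reproduce the CO2eq figures of Table 7 from runtime, power draw and energy mix.
power_kw = 0.3               # GPU/CPU energy consumption (Table 7, item 4)
g_co2_per_kwh = 89           # Swiss energy mix (Table 7, item 6)

all_experiments_h = 60       # total computation time reported above
final_model_h = 5 / 60       # "< 5 min" to train the final model

print(all_experiments_h * power_kw * g_co2_per_kwh / 1000)  # ~1.6 kg CO2eq (item 8)
print(final_model_h * power_kw * g_co2_per_kwh)             # ~2.2 g CO2eq (item 7)
```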
## F Funding
This paper has received funding from the Swiss National Science Foundation (SNSF) under the project (Grant Agreement No. 207800).
| Label | Sentence | Explanation |
|---------|------------|---------------|
| yes (unanimously) Farmers who operate under this scheme are required Environmental scheme with details on implementation to dedicate 10% of their land to wildlife preservation. yes (borderline) We prove our commitment to a sustainable world every day—by being a force for change where we work and live and holding ourselves and our suppliers to high standards in three vital aspects of doing business: people, product, and planet. Very generic sustainability or responsibility wording without clear reference to environmental aspects. Yet the term "sustainability" and "responsibility" includes environmental aspects. yes (borderline) Our places, which are designed to meet high sustainability standards, become part of local communities, No would be: "Our places, which are designed to become part of local communities, provide opportunities provide opportunities for skills development and employment and promote wellbeing. for skills development and employment and promote wellbeing." yes (borderline) Fast Retailing has adopted "Unlocking the Power of Clothing" for its Sustainability Statement, and through the apparel business seeks to contribute to the sustainable development of society. Very generic sustainability or responsibility wording without clear reference to environmental aspects. Yet the term "sustainability" and "responsibility" includes environmental aspects. yes (borderline) Hermès, which is currently managed by the sixth generation of family shareholders, is aware of its social responsibility and strives to give back to the world a part of what it gives to the Company. Very generic sustainability or responsibility wording without clear reference to environmental aspects. Yet the term "sustainability" and "responsibility" includes environmental aspects. yes (borderline) In 2016, UTC was placed on the CDP climate change and supplier A List, and in 2017 and 2018 received an A- and Leadership designation. Environmental initiatives and leadership. yes (borderline) Change internal behavior; Drive low-carbon investment; Identify and seize low-carbon opportunities; Stakeholder expectations. Intangible but environmentally friendly/ier processes. yes (borderline) We are looking into the Insurance Underwriting element, and have taken part in the CRO Forum's Sustainability Carbon Footprinting paper of Underwriting. Intangible but environmentally friendly/ier processes. yes (borderline) In a further demonstration of the importance we place on helping customers to live sustainably, we became signatories of the Task Force on Climate related Financial Disclosures, to provide consistent information to our stakeholders. Intangible but environmentally friendly/ier processes. yes (borderline) As for assets, DBJ Green Building certification for 18 properties, BELS certification for 33 properties, and CASBEE certification for one property have been received. Official environmental Labels yes (borderline) Our clean, safe and high-tech products and solutions enable everything from food production to space Environmentally friendly/ier products and solutions travel, improving the everyday life of people everywhere. yes (borderline) FreshPoint, our specialty produce company, addresses customers' needs for fresh, unique, organic, and local produce items. Environmentally friendly/ier products and solutions yes (borderline) WilLDAR consists of detecting methane leaks with an optical gas imaging camera and repairing those leaks within 30 days. 
Environmentally friendly/ier products and solutions yes (borderline) These products include climate metrics, Climate Value-at-Risk (VAR), carbon portfolio reporting, low carbon, and climate change indexes as well as tools to identify clean-tech and environmentally oriented companies. Environmentally friendly/ier products and solutions Table 4: Environmental Claims with Rationale in Annotation Guidelines | | |
| Label | Sentence | Explanation |
|-----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------|
| no (borderline) | We do this for 15 sustainable and impact strategies No positive impact or no link to better environmental (equities, bonds and green bonds). performance | |
| no (borderline) | We use the EcoAct ClimFIT (Climate Financial Institutions Tool) tool to measure the carbon emissions associated with the household and personal products sector. No positive impact or no link to better environmental performance | |
| no (borderline) | AUSEA is a miniaturized sensor, fitted onto a commercial drone, that can detect methane and carbon dioxide. Product with potentially positive environmental impact, but impact is not stated hence no claim | |
| no (borderline) | This will further accelerate Croda's positive impact by creating and delivering solutions to tackle some of the biggest challenges the world is facing. Unclear whether this relates to environmental positive impacts, only implicit assumptions would make it an environmental claim. | |
| no (unanimously) Hence, the Scope 2 emission is included in the Scope 1 emission which has been reported in accordance with the ISO 14064-1 requirements as verified by qualified independent assessor. Technical details, descriptions, and explanations no (unanimously) Emissions associated with processing activities are associated with the supply of these ingredients and are included in our Scope 3 supply chain emissions. Technical details, descriptions, and explanations no (unanimously) Emissions are modelled based on sector averages including linear regression and country carbon emissions intensities for GDP. Technical details, descriptions, and explanations no (unanimously) Wood products facilities also operate lumber drying kilns and other processes that can either use the steam from the boilers or, if direct fired, will commonly use natural gas. Technical details, descriptions, and explanations no (unanimously) We use the EcoAct ClimFIT (Climate Financial Institutions Tool) tool to measure the carbon emissions associated with utilities. Technical details, descriptions, and explanations no (unanimously) In the past we have conducted analysis of our portfolio Technical details, descriptions, and explanations impact on the climate, using scope 3 as a metric. no (unanimously) For that, Danone needs organic fresh milk. Sentence context would be required to understand whether it is a claim no (unanimously) UPM Biofuels is developing a new feedstock concept by growing Brassica Carinata as a sequential crop in South America. Sentence context would be required to understand whether it is a claim no (unanimously) Our key sources of emissions are the running of our environmental risk exposure description but no commitment / claim to act on reducing the risk or improving impact operations (electricity, business travel, etc), purchased goods and services (consultants, maintenance work, IT services, etc), and land leased to sheep and beef farming (to keep the grass low under our wind farms). no (unanimously) Extreme weather events and the impacts of transitioning to a low-carbon economy have the potential to environmental risk exposure description but no commitment / claim to act on reducing the risk or improving impact disrupt business activities, damage property, and otherwise affect the value of assets, and affect our customers' ability to repay loans. no (unanimously) At the date of this report, the Group owns 34 mills (29 of which produce containerboard), 245 converting plants (most of which convert containerboard into corrugated boxes), 40 recovered fibre facilities and two wood procurement operations (which together provide raw material for our mills) and 34 other production facilities carrying on other related activities. environmental risk exposure description but no commitment / claim to act on reducing the risk or improving impact | | |
Table 5: Negative Examples with Rationale in Annotation Guidelines

Environmental Claims / Negative Examples:

In support of Apple's commitment to reduce its carbon footprint by transitioning its entire supply chain to 100% renewable energy, we've transitioned our facilities in China to be powered through a series of renewable power purchase agreements.
We are looking at opportunities to expand our commitment to renewable diesel while continuing to optimize the efficiency of our fleet of traditional biodiesel plants.
So there's an annual cycle that, to some degree, dictates the pace of these enrollment campaigns.
And so when we get these biopsy data published, which we're aggressively working on, we think we will have sufficient information to begin to approach payers, including Medicare.
We plan to continue our low risk growth strategy by building our core business with rate base infrastructure, while maintaining the commitment to renewable energy initiatives and to reducing emissions.
And I guess first of all, I would say the thesis which we have at FERC here for precedent is no different than what takes place right now for the LDC companies, where the LDC
companies pay for pipeline infrastructure that's developed by a pipeline operator.
We just completed $1 billion of capital projects to expand, upgrade and modernize and improve the environmental footprint of an important industry in Russia.
But as Jon points out, the thing that they really seem to be focused on is we claim a five-year life, and they want to make sure that that's a reasonable claim on our batteries for AED Plus.
And we also announced that BHGE is committed to reduce its carbon footprint by 50% by 2030, and also net 0 by 2050.
They're critical to reimbursement, meaning you just simply can't get revenue unless you've done things like enroll it, and you have to have accurate data to get providers enrolled.
Table 6: Environmental Claims and Negative Examples Predicted in Quarterly Earning Calls Answer Sections.
| Minimum card | |
|-------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------|
| Information | Unit |
| 1. Is the resulting model publicly available? | yes |
| 2. How much time does the training of the final model take? | < 5 min |
| 3. How much time did all experiments take (incl. hyperparameter search)? | 60 hours |
| 4. What was the energy consumption (GPU/CPU)? | 0.3 kW |
| 5. At which geo-location were the computations performed? | Switzerland |
| Extended card | |
| 6. What was the energy mix at the geolocation? | 89 gCO2eq/kWh |
| 7. How much CO2eq was emitted to train the final model? | 2.2 g |
| 8. How much CO2eq was emitted for all experiments? | 1.6 kg |
| 9. What is the average CO2eq emission for the inference of one sample? | 0.0067 mg |
| 10. Which positive environmental impact can be expected from this work? | This work can help detect and evaluate environmental claims and thus have a positive impact on the environment in the future. |
| 11. Comments | - |
Table 7: Climate performance model card following (Hershcovich et al., 2022)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
in the dedicated "Limitations" section after the conclusion
✓ A2. Did you discuss any potential risks of your work?
in the dedicated "Ethics Statement" after the conclusion
✓ A3. Do the abstract and introduction summarize the paper's main claims?
view the Abstract + Introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** In Section "3. Dataset"
✓ B1. Did you cite the creators of artifacts you used?
in Section "4. Experiments", we use existing datasets to train models which we evaluate in a zero-shot setting on our newly created dataset. We cite the authors of the artifacts invovled in this process appropriately.
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
We will host our dataset and models upon publication on huggingface hub and github. We provide the license and terms for use and/or distribution of our artifacts on the huggingface hub and github, instead of mentioning this in the paper.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
We used two existing artifacts - two datasets associated with a research paper, the first one containing claims, the second containing pledges. For both artifacts, we did not find an intended use in the paper. However, we assume that it is fine to use these artifacts for follow-up research (given the datasets are associated with a research paper and the datasets are freely accessible).
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? in the dedicated "Ethics Statement" after the conclusion
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
in Appendix Section "D Data Sources"
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. in Section "3. Dataset" The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
## C ✓ **Did You Run Computational Experiments?** In Section "4. Experiments"
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
in Section "4. Experiments" and In Appendix Section "E Environmental Impact"
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? in Section "4. Experiments"
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
in Section "4. Experiments" C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** In Section "3. Dataset"
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
in Appendix "B Annotation Guidelines"
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
in Section "3. Dataset" and in the "Ethics Statement"
✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? in the "Ethics Statement"
✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
there was no need for an approval by an ethics review board for our data collection protocol
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
in Section "3. Dataset" |
cifka-liutkus-2023-black | Black-box language model explanation by context length probing | https://aclanthology.org/2023.acl-short.92 | The increasingly widespread adoption of large language models has highlighted the need for improving their explainability. We present *context length probing*, a novel explanation technique for causal language models, based on tracking the predictions of a model as a function of the length of available context, and allowing to assign *differential importance scores* to different contexts. The technique is model-agnostic and does not rely on access to model internals beyond computing token-level probabilities. We apply context length probing to large pre-trained language models and offer some initial analyses and insights, including the potential for studying long-range dependencies. The [source code](\url{https://github.com/cifkao/context-probing/}) and an [interactive demo](\url{https://cifkao.github.io/context-probing/}) of the method are available. |
## Black-Box Language Model Explanation By Context Length Probing
Ondřej Cífka and Antoine Liutkus
Zenith Team, LIRMM, CNRS UMR 5506 Inria, Université de Montpellier, France [email protected], [email protected]
## Abstract
The increasingly widespread adoption of large language models has highlighted the need for improving their explainability. We present *context length probing*, a novel explanation technique for causal language models, based on tracking the predictions of a model as a function of the length of available context, and allowing to assign *differential importance scores* to different contexts. The technique is modelagnostic and does not rely on access to model internals beyond computing token-level probabilities. We apply context length probing to large pre-trained language models and offer some initial analyses and insights, including the potential for studying long-range dependencies. The source code1and an interactive demo2 of the method are available.
## 1 Introduction
Large language models (LMs), typically based on the Transformer architecture (Vaswani et al., 2017),
have recently seen increasingly widespread adoption, yet understanding their behaviour remains a difficult challenge and an active research topic.
Notably, as the length of the context that can be accessed by LMs has grown, a question that has attracted some attention is how this influences their predictions. Some recent studies in this line of research suggest that even "long-range" LMs focus heavily on local context and largely fail to exploit distant ones (O'Connor and Andreas, 2021; Sun et al., 2021; Press et al., 2021; Sun et al., 2022). A
more nuanced understanding of how contexts of different lengths influence LMs' predictions may hence be valuable for further improving their performance, especially on tasks like long-form text generation where long-range dependencies are of critical importance.
1https://github.com/cifkao/
context-probing/
2https://cifkao.github.io/
context-probing/
Figure 1: A screenshot of a demo2 of the proposed method. After selecting a target token (here "**birds**"),
the preceding tokens are highlighted according to their
(normalized) *differential importance scores* (green =
positive, red = negative), obtained using our method.
The user can also explore the top predictions for contexts of different lengths (here the context "house, shouting about lunatics. [. . .] mortally afraid of").
In this work, we propose *context length probing*,
a simple explanation technique for *causal* (autoregressive) language models, based on tracking the predictions of the model as a function of the number of tokens available as context. Our proposal has the following advantages:
- It is conceptually simple, providing a straightforward answer to a natural question: How does the length of available context impact the prediction?
- It can be applied to a pre-trained model without retraining or fine-tuning and without training any auxiliary models.
- It does not require access to model weights, internal representations or gradients.
- It is model-agnostic, as it can be applied to any causal LM, including attentionless architectures like RNN (Mikolov et al., 2010) and CNN (Dauphin et al., 2017). The only requirement for the model is to accept arbitrary input
segments (i.e. not be limited to document prefixes).
Furthermore, we propose a way to use this technique to assign what we call differential importance scores to contexts of different lengths. This can be seen as complementary to other techniques like attention or saliency map visualization. Interestingly, contrary to those techniques, ours appears promising as a tool for studying long-range dependencies, since it can be expected to highlight important information not already covered by shorter contexts.
## 2 Related Work
A popular way to dissect Transformers is by visualizing their attention weights (e.g. Vig, 2019; Hoover et al., 2020). However, it has been argued that this does not provide reliable explanations and can be misleading (Jain and Wallace, 2019; Serrano and Smith, 2019). A more recent line of work (Elhage et al., 2021; Olsson et al., 2022) explores "mechanistic explanations",
based on reverse-engineering the computations performed by Transformers. These techniques are tied to concrete architectures, which are often "toy" versions of those used in real-world applications, e.g.
attention-only Transformers in Elhage et al.
Other options include general-purpose methods like neuron/activation interpretation (e.g. Geva et al., 2021; Goh et al., 2021; Dai et al., 2022),
saliency maps (e.g. Fong and Vedaldi, 2017; Ancona et al., 2019) and influence functions (Koh and Liang, 2017). These require access to internal representations and/or the ability to backpropagate gradients, and have some caveats of their own (Kindermans et al., 2019; Kokhlikyan et al., 2021).
More closely related to our work are studies that perform *ablation* (e.g. by shuffling, truncation or masking) on different contexts to understand their influence on predictions (O'Connor and Andreas, 2021; Sun et al., 2021; Press et al., 2021; Vafa et al., 2021). To our knowledge, all such existing works only test a few select contexts or greedily search for the most informative one; in contrast, we show that it is feasible to consider all context lengths in the range from 1 to a maximum cmax, which permits us to obtain fine-grained insights on the example level, e.g. in the form of the proposed differential importance scores. Moreover, many existing analyses (e.g. Vafa et al., 2021; O'Connor and Andreas, 2021) rely on specific training or finetuning, which is not the case with our proposal.
## 3 Method

## 3.1 Context Length Probing
A causal LM estimates the conditional probability distribution of a token given its left-hand context in a document:
$$p(x_{n+1}\mid x_{1},\ldots,x_{n}).\tag{1}$$
We are interested here in computing the probabilities conditioned on a *reduced* context of length c ∈ {1*, . . . , n*}:
$$p(x_{n+1}\mid x_{n-c+1},\ldots,x_{n}),\tag{2}$$
so that we may then study the behavior of this distribution as a function of c.
An apparent obstacle in doing so is that applying the model to an arbitrary subsequence xn−c+1*, . . . , x*n, instead of the full document x1*, . . . , x*N , may lead to inaccurate estimates of the probabilities in Eq. (2). However, we note that large LMs are not usually trained on entire documents. Instead, the training data is pre-processed by shuffling all the documents, concatenating them
(with a special token as a separator), and splitting the resulting sequence into *chunks* of a fixed length
(usually 1024 or 2048 tokens) with no particular relation to the document length. Thus, the models are effectively trained to accept sequences of tokens starting at arbitrary positions in a document and it is therefore correct to employ them as such to compute estimates of Eq. (2).³
It now remains to be detailed how to efficiently evaluate the above probabilities for all positions n and context lengths c. Specifically, for a given document x1, . . . , xN and some maximum context length cmax, we are interested in an (N − 1) × cmax × |V| tensor P, where V = {w1, . . . , w|V|} is the vocabulary, such that:
$$P_{n,c,i}=p(x_{n+1}=w_{i}\mid x_{n-c+1},\ldots,x_{n}),\tag{3}$$
with Pn,c,∗ = Pn,n−1,∗ for n ≤ c.⁴ Observe that by running the model on any segment xm, . . . , xn, we obtain all the values Pm+c−1,c,∗ for c ∈ {1, . . . , n − m + 1}. Therefore, we can fill in the tensor P by applying the model along a sliding window of size cmax, i.e. running it on N (overlapping)
3For models trained on data that is pre-processed differently, (re)training or fine-tuning with data augmentation such as random shifts may be needed in order to apply our method, analogously to Vafa et al. (2021), who use word dropout to ensure compatibility with their method.
4P*n,c,*∗ is a |V|-dimensional slice of P along the last axis.
segments of length at most cmax. See Appendix A
for an illustration and additional remarks.
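To make the procedure concrete, a minimal sketch using the Transformers library is given below. It is illustrative only: it materializes the full tensor of log-probabilities, which is feasible only for short documents and a reduced cmax, whereas our released implementation keeps the raw per-segment logits and resolves the indices lazily (see Appendix A). The variable `document` and the choice of the gpt2 checkpoint are assumptions of the sketch.

```python
# Sketch: next-token log-probabilities for every position and context length.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ids = tok(document, return_tensors="pt").input_ids[0]   # token ids of the document
N, c_max = len(ids), 128
V = model.config.vocab_size

# log_P[n, c - 1] = log-distribution over the token at position n + 1 given the
# c tokens preceding it (all indices 0-based).
log_P = torch.full((N - 1, c_max, V), float("nan"))

with torch.no_grad():
    for m in range(N - 1):                                 # sliding-window start
        seg = ids[m : m + c_max].unsqueeze(0)
        log_probs = model(seg).logits.log_softmax(-1)[0]   # (segment length, V)
        for j in range(seg.shape[1]):                      # j + 1 context tokens seen
            n = m + j                                      # predicts the token at n + 1
            if n < N - 1:
                log_P[n, j] = log_probs[j]

# Positions with fewer than c_max preceding tokens reuse their longest available
# context, mirroring the convention stated after Eq. (3).
for n in range(N - 1):
    if n + 1 < c_max:
        log_P[n, n + 1 :] = log_P[n, n]
```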
## 3.2 Metrics
Having obtained the tensor P as we have just described, we use it to study how the predictions evolve as the context length is increased from 1 to cmax. Specifically, our goal is to define a suitable metric that we can compute from P*n,c,*∗ and follow it as a function of c (for a specific n or on average).
One possibility would be to use the negative loglikelihood (NLL) loss values:
$$-\log p(x_{n+1}\mid x_{n-c+1},\ldots,x_{n}).\tag{4}$$
However, this may not be a particularly suitable metric for explainability purposes, as it depends
(only) on the probability assigned to the ground truth xn+1, while the LM outputs a probability distribution P*n,c,*∗ over the entire vocabulary, which may in fact contain many other plausible continuations. For this reason, we propose to exploit a metric defined on whole *distributions*, e.g. the Kullback-Leibler (KL) divergence. To achieve this, we choose the maximum-context predictions Pn,cmax,∗ as a reference and get:
$$\begin{split}D_{n,c}&=D_{\text{KL}}[\mathbf{P}_{n,c_{\text{max}},*}\parallel\mathbf{P}_{n,c,*}]\\ &=\sum_{i=1}^{|\mathcal{V}|}\mathbf{P}_{n,c_{\text{max}},i}\log\frac{\mathbf{P}_{n,c_{\text{max}},i}}{\mathbf{P}_{n,c,i}}.\end{split}\tag{5}$$
The rationale for (5) is to quantify the amount of information that is lost by using a shorter context c ≤ cmax. Interestingly, this metric is not related to the absolute performance of the model with maximal context, but rather to how the output *changes* if a shorter context is used.
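Given the dense log_P tensor from the sketch in Section 3.1 (an assumption carried over from that illustration), both the NLL values and the divergence of Eq. (5) reduce to a few tensor operations:

```python
import torch

# NLL of the ground-truth next token, for every position and context length.
nll = -log_P[torch.arange(N - 1), :, ids[1:]]     # shape (N - 1, c_max)

# KL divergence of Eq. (5), with the maximum-context prediction as reference.
ref = log_P[:, -1, :]                             # log P_{n, c_max, *}
D = (ref.exp().unsqueeze(1) * (ref.unsqueeze(1) - log_P)).sum(-1)   # (N - 1, c_max)
```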
## 3.3 Differential Importance Scores
We are also interested in studying how individual increments in context length affect the predictions.
We propose to quantify this as the change in the KL divergence metric (5) when a new token is introduced into the context. Specifically, for a pair of tokens xn+1 (the *target token*) and xm (the context token), we define a *differential importance score*
(∆-score for short)
$$\Delta{\mathcal{D}}_{n,m}={\mathcal{D}}_{n,n-m-1}-{\mathcal{D}}_{n,n-m}.$$
| name | #param | #layer | #head | dmodel | max len |
|----------|--------|--------|-------|--------|---------|
| gpt2 | 117 M | 12 | 12 | 768 | 1024 |
| gpt2-xl | 1.5 B | 48 | 25 | 1600 | 1024 |
| gpt-j-6B | 6.1 B | 28 | 16 | 4096 | 2048 |

Table 1: Hyperparameters of the 3 models used.

We may visualize these scores as a way to explain the LM predictions, much like is often done with attention weights, with two important differences.
First, a high ∆Dn,m should not be interpreted as meaning that xm in isolation is important for predicting xn+1, but rather that it is salient given the context that follows it (which might mean that it brings information not contained in the following context). Second, unlike attention weights, our scores need not sum up to one, and can be negative; in this regard, the proposed representation is more conceptually similar to a saliency map than to an attention map.
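Continuing the same sketch, the ∆-scores are first differences of D along the context-length axis, and their normalized magnitudes are what we aggregate later in Fig. 5 (the exact index bookkeeping follows the definition above):

```python
# Differential importance: drop in D when one more token enters the context.
delta = D[:, :-1] - D[:, 1:]                              # shape (N - 1, c_max - 1)

# Per-position normalized magnitudes, as used for the analysis in Fig. 5.
delta_norm = delta.abs() / delta.abs().sum(-1, keepdim=True)
```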
## 4 Results
We apply the proposed technique to publicly available pre-trained large Transformer language models, namely GPT-J (Wang and Komatsuzaki, 2021)
and two GPT-2 (Radford et al., 2019) variants –
see Table 1 for an overview. We use the validation set of the English LinES treebank5from Universal Dependencies (UD; Nivre et al., 2020), containing 8 documents with a total length of 20 672 tokens6 and covering fiction, an online manual, and Europarl data. We set cmax = 1023. We use the Transformers library7(Wolf et al., 2020) to load the pre-trained models and run inference. Further technical details are included in Appendix B.
## 4.1 Lm Loss By Context Length
Fig. 2 shows the cross entropy losses (NLL means)
across the whole validation dataset as a function of context length c. As expected, larger models perform better than smaller ones, which is traditionally explained by their larger capacity. A less common observation we can make thanks to this detailed representation is that the gains in performance come mostly from relatively short contexts
(8–256 tokens); this is consistent with prior works
(Sun et al., 2021; Press et al., 2021) which found 5https://universaldependencies.org/
treebanks/en_lines/index.html 6After concatenating all sentences and applying the GPT-2 tokenizer, which is used by both GPT-2 and GPT-J.
7https://github.com/huggingface/
transformers
![3_image_0.png](3_image_0.png)
![3_image_1.png](3_image_1.png)
that very long contexts bring only minimal improvement (though these focused on specific *long-range* architectures and on contexts beyond the range we investigate here).
In Fig. 3, we display the same information (loss by context length) broken down by part-of-speech
(POS) tags, for GPT-J only. For most POS tags, the behavior is similar to what we observed in Fig. 2 and the loss appears to stabilize around context lengths 16–64. However, we see a distinct behaviour for proper nouns (PROPN), which are the hardest-to-predict category for short contexts, but whose loss improves steadily with increasing c, surpassing that of regular nouns (NOUN) at c = 162 and continuing to improve beyond that point.
## 4.2 Per-Token Losses By Context Length
We have also examined token-level losses, as well as the KL divergence metric (see Section 3.2); an example plot is shown in Fig. 4 and more are found in Appendix C.1. In general, we observe that the values tend to change gradually with c; large differences are sparse, especially for large c, and can often be attributed to important pieces of information appearing in the context (e.g. "owl" and "swoop" in the context of "birds" in Fig. 4). This justifies our use of these differences as importance scores.
## 4.3 Differential Importance Scores
To facilitate the exploration of ∆-scores from Section 3.3, we have created an interactive web demo,2 which allows visualizing the scores for any of the 3 models on the validation set as shown in Fig. 1.
In Fig. 5, we display the magnitudes of the ∆-
scores - normalized for each position to sum up to 1 across all context lengths - as a function of context length. The plot suggests a power-law-like inverse relationship where increasing context length proportionally reduces the ∆-score magnitude on average. We interpret this as far-away tokens being less likely to carry information not already covered by shorter contexts. Long contexts (see inset in Fig. 5) bear less importance for larger models than for smaller ones, perhaps because the additional capacity allows relying more on shorter contexts.
In Fig. 6, we also display the mean importance score received by each POS category, by model.
We can see that proper nouns (PROPN) are substantially more informative than other categories
(which is in line with the observations in the previous section), but less so for the smallest model.
This could mean e.g. that larger models are better at memorizing named entities from training data and using them to identify the topic of the document, or simply at copying them from distant context as observed in (Sun et al., 2021).
## 5 Limitations And Future Directions
Experiments. We acknowledge the limited scope of our experiments, including only 8 (closed-domain) documents, 3 models and a single language. This is largely due to the limited availability of suitable large LMs and their high computational cost. Still, we believe that our experiments are valuable as a case study that already clearly showcases some interesting features of our methodology.
Computational cost. While we have demonstrated an efficient strategy to obtain predictions for all tokens at all possible context lengths, it still requires running the model N times for a document of length N.
For a k-fold reduction in computational cost, the technique may be modified to use a sliding window with stride k > 1 (instead of k = 1 as proposed above). See Appendix A.1 for details.
![4_image_1.png](4_image_1.png)
![4_image_0.png](4_image_0.png)
![4_image_2.png](4_image_2.png)
![4_image_3.png](4_image_3.png)
Choice of metrics. The proposed methodology allows investigating how any given metric is impacted by context, yet our study is limited to NLL
loss and the proposed KL divergence metric (the latter for defining importance scores). These may not be optimal for every purpose, and other choices should be explored depending on the application.
For example, to study sequences *generated* (sampled) from a LM, one might want to define importance scores using a metric that does depend on the generated token, e.g. its NLL loss or its ranking among all candidates. (Indeed, our web demo also supports ∆-scores defined using NLL loss values.)
## 6 Conclusion And Future Directions
We have presented *context length probing*, a novel causal LM explanation technique based on tracking the predictions of the LM as a function of context length, and enabling the assignment of *differential* importance scores (∆*-scores*). While it has some advantages over existing techniques, it answers different questions, and should thus be thought of as complementary rather than a substitute.
A particularly interesting feature of our ∆-scores is their apparent potential for discovering *longrange dependencies* (LRDs) (as they are expected to highlight information not already covered by shorter contexts, unlike e.g. attention maps).
Remarkably, our analysis suggests a power-lawlike inverse relationship between context length and importance score, seemingly questioning the importance of LRDs in language modeling. While LRDs clearly appear crucial for applications such as longform text generation, their importance may not be strongly reflected by LM performance metrics like cross entropy or perplexity. We thus believe that there is an opportunity for more specialized benchmarks of LRD modeling capabilities of different models, such as that of Sun et al. (2022), for example. These should further elucidate questions like to what extent improvements in LM performance are due to better LRD modeling, how LRDs are handled by various Transformer variants (e.g. Kitaev et al., 2020; Katharopoulos et al., 2020; Choromanski et al., 2021; Press et al., 2022), or what their importance is for different tasks.
## Acknowledgments
This work was supported by the LabEx NUMEV (ANR-10-LABX-0020) within the I-Site MUSE (ANR-16-IDEX-0006). The authors are grateful to the OPAL infrastructure from Université Côte d'Azur for providing resources and support.
## References
Marco Ancona, Cengiz Oztireli, and Markus Gross.
2019. Explaining deep neural networks with a polynomial time algorithm for Shapley value approximation. In *Proceedings of the 36th International Conference on Machine Learning*, volume 97 of *Proceedings of Machine Learning Research*, pages 272–281.
PMLR.
Krzysztof Marcin Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamás Sarlós, Peter Hawkins, Jared Quincy Davis, Afroz Mohiuddin, Lukasz Kaiser, David Benjamin Belanger, Lucy J. Colwell, and Adrian Weller. 2021.
Rethinking attention with Performers. In 9th International Conference on Learning Representations
(ICLR 2021). OpenReview.net.
Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao Chang, and Furu Wei. 2022. Knowledge neurons in pretrained transformers. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8493–
8502, Dublin, Ireland. Association for Computational Linguistics.
Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. 2017. Language modeling with gated convolutional networks. In *Proceedings of the 34th International Conference on Machine Learning*, volume 70 of *Proceedings of Machine Learning Research*, pages 933–941. PMLR.
Nelson Elhage, Neel Nanda, Catherine Olsson, Tom Henighan, Nicholas Joseph, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Nova DasSarma, Dawn Drain, Deep Ganguli, Zac Hatfield-Dodds, Danny Hernandez, Andy Jones, Jackson Kernion, Liane Lovitt, Kamal Ndousse, Dario Amodei, Tom Brown, Jack Clark, Jared Kaplan, Sam McCandlish, and Chris Olah. 2021. A
mathematical framework for Transformer circuits.
Transformer Circuits Thread.
Ruth C. Fong and Andrea Vedaldi. 2017. Interpretable explanations of black boxes by meaningful perturbation. In IEEE International Conference on Computer Vision, pages 3449–3457, Venice, Italy. IEEE Computer Society.
Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. 2021. Transformer feed-forward layers are keyvalue memories. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 5484–5495, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Gabriel Goh, Nick Cammarata, Chelsea Voss, Shan Carter, Michael Petrov, Ludwig Schubert, Alec Radford, and Chris Olah. 2021. Multimodal neurons in artificial neural networks. *Distill*.
Benjamin Hoover, Hendrik Strobelt, and Sebastian Gehrmann. 2020. exBERT: A Visual Analysis Tool
to Explore Learned Representations in Transformer Models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics:
System Demonstrations, pages 187–196, Online. Association for Computational Linguistics.
Sarthak Jain and Byron C. Wallace. 2019. Attention is not Explanation. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pages 3543–3556, Minneapolis, Minnesota.
Association for Computational Linguistics.
Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. 2020. Transformers are RNNs: Fast autoregressive Transformers with linear attention. In *Proceedings of the 37th International* Conference on Machine Learning. PMLR.
Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, and Been Kim. 2019. The (un)reliability of saliency methods. In Wojciech Samek, Grégoire Montavon, Andrea Vedaldi, Lars Kai Hansen, and Klaus-Robert Müller, editors, *Explainable AI: Interpreting, Explaining and Visualizing Deep Learning*,
volume 11700 of *Lecture Notes in Computer Science*,
pages 267–280. Springer.
Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya.
2020. Reformer: The efficient Transformer. In 8th International Conference on Learning Representations (ICLR 2020). OpenReview.net.
Pang Wei Koh and Percy Liang. 2017. Understanding black-box predictions via influence functions. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of *Proceedings* of Machine Learning Research, pages 1885–1894.
PMLR.
Narine Kokhlikyan, Vivek Miglani, Bilal Alsallakh, Miguel Martin, and Orion Reblitz-Richardson. 2021.
Investigating sanity checks for saliency maps with image and text classification. arXiv preprint arXiv:2106.07475.
Tomáš Mikolov, Martin Karafiát, Lukáš Burget, Jan Cernocký, and Sanjeev Khudanpur. 2010. ˇ Recurrent neural network based language model. In *INTERSPEECH 2010, 11th Annual Conference of the International Speech Communication Association*, pages 1045–1048, Makuhari, Chiba, Japan. ISCA.
Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Jan Hajic, Christopher D. Manning, Sampo ˇ
Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal Dependencies v2:
An evergrowing multilingual treebank collection. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4034–4043, Marseille, France. European Language Resources Association.
Joe O'Connor and Jacob Andreas. 2021. What context features can transformer language models use? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 851–864, Online. Association for Computational Linguistics.
Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, Tom Henighan, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Dawn Drain, Deep Ganguli, Zac Hatfield-Dodds, Danny Hernandez, Scott Johnston, Andy Jones, Jackson Kernion, Liane Lovitt, Kamal Ndousse, Dario Amodei, Tom Brown, Jack Clark, Jared Kaplan, Sam McCandlish, and Chris Olah. 2022. In-context learning and induction heads. *Transformer Circuits* Thread.
Ofir Press, Noah Smith, and Mike Lewis. 2022. Train short, test long: Attention with linear biases enables input length extrapolation. In *The Tenth International Conference on Learning Representations*, Virtual Event. OpenReview.net.
Ofir Press, Noah A. Smith, and Mike Lewis. 2021.
Shortformer: Better language modeling using shorter inputs. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 5493–5505, Online. Association for Computational Linguistics.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Sofia Serrano and Noah A. Smith. 2019. Is attention interpretable? In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*,
pages 2931–2951, Florence, Italy. Association for Computational Linguistics.
Simeng Sun, Kalpesh Krishna, Andrew MattarellaMicke, and Mohit Iyyer. 2021. Do long-range language models actually use long-range context? In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 807–
822, Online and Punta Cana, Dominican Republic.
Association for Computational Linguistics.
Simeng Sun, Katherine Thai, and Mohit Iyyer. 2022.
ChapterBreak: A challenge dataset for long-range language models. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3704–3714, Seattle, United States. Association for Computational Linguistics.
Keyon Vafa, Yuntian Deng, David Blei, and Alexander Rush. 2021. Rationales for sequential predictions.
In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages
10314–10332, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems, pages 5998–6008, Long Beach, CA, USA.
Jesse Vig. 2019. A multiscale visualization of attention in the transformer model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 37–42, Florence, Italy. Association for Computational Linguistics.
Ben Wang and Aran Komatsuzaki. 2021. GPT-J-6B: A
6 billion parameter autoregressive language model.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
![7_image_0.png](7_image_0.png)
## A Context Length Probing
Fig. 7 illustrates a step of context length probing. We wish to obtain the tensor P from Eq. (3), understood as a table where each cell contains the predictions (next-token logits) for a given position in the text and a given context length. By running our LM on a segment of the text, we get predictions such that for the n-th token in the segment, the effective context length is equal to n, which corresponds to a diagonal in the table. We can thus fill in the whole table by running the LM on all segments of length cmax (plus trailing segments of lengths cmax − 1*, . . . ,* 1).
Notice that this process is somewhat similar to (naïvely) running the LM in generation mode, except that at each step, the leading token is removed, preventing the use of caching to speed up the computation.
In practice, it is not necessary to explicitly construct the tensor P . Indeed, we find it more efficient to instead store the raw logits obtained by running the model on all the segments, then do the necessary index arithmetics when computing the metrics.
## A.1 Strided Context Length Probing
For a $k$-fold reduction in computational cost, we may instead use a sliding window with a stride $k > 1$, i.e. run the model only on segments starting at positions $k(n-1)+1$ for all $n \in \{1, \ldots, \lceil N/k \rceil\}$, rather than at all positions. This way, for a target token $x_{n+1}$, we obtain the predictions $p(x_{n+1} \mid x_{n-c+1}, \ldots, x_n)$ only for those context lengths $c$ such that $c \equiv n \pmod{k}$. In other words, predictions with context length $c$ are only available for tokens $x_{c+1}, x_{c+k+1}, x_{c+2k+1}, \ldots$. Consequently:
- Overall, we still cover all context lengths $1, \ldots, c_{\max}$, allowing us to perform aggregate analyses like the ones in Section 4.1.
- When analyzing the predictions for a specific target token in a document (e.g. to compute ∆-scores),
context tokens come in blocks of length k. Visualizations like the ones in Figs. 1 and 4 are still possible for all target tokens, but become less detailed, grouping every k context tokens together.
- Computation time, as well as the space needed to store the predictions, is reduced by a factor of k.
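As a small illustration of the strided variant, the helpers below compute the segment start positions and, for a given target position, the context lengths that remain available; this is a sketch under the 1-based indexing used above, with illustrative function names.

```python
# Sketch of the strided variant (1-based positions, as in the text above).
import math

def strided_segment_starts(N, k):
    """Start positions k*(m-1)+1 for m = 1, ..., ceil(N/k)."""
    return [k * (m - 1) + 1 for m in range(1, math.ceil(N / k) + 1)]

def available_context_lengths(n, c_max, k):
    """Context lengths c for which p(x_{n+1} | x_{n-c+1}, ..., x_n) is computed."""
    return [c for c in range(1, min(n, c_max) + 1) if c % k == n % k]
```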
## B Technical Details
Data. The LinES treebank is licensed under Creative Commons BY-NC-SA 4.0. We concatenated all tokens from each of the documents from the treebank, then re-tokenized them using the GPT-2 tokenizer.
We mapped the original (UD) POS tags to the GPT-tokenized dataset in such a way that every GPT token is assigned the POS tag of the first UD token it overlaps with.
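A minimal sketch of this first-overlap mapping, assuming character-offset spans are available for both the UD tokens and the GPT-2 tokens (function and argument names are illustrative):

```python
# Sketch of the first-overlap POS mapping (character-offset based).
def map_pos_tags(ud_spans, ud_tags, gpt_offsets):
    """ud_spans / gpt_offsets: (start, end) character offsets; returns one UD tag per GPT token."""
    tags = []
    for g_start, g_end in gpt_offsets:
        tag = None
        for (u_start, u_end), u_tag in zip(ud_spans, ud_tags):
            if u_start < g_end and g_start < u_end:   # the two tokens overlap
                tag = u_tag                            # first overlapping UD token wins
                break
        tags.append(tag)
    return tags
```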
Models. We used the models EleutherAI/gpt-j-6B (Apache 2.0 license), and gpt2-xl and gpt2 (MIT license), all from huggingface.co.
Computation. We parallelized the inference over 500 jobs on a compute cluster,8 each running on 8 CPU cores with at least 8 GB of RAM per core, with a batch size of 16. Each job took about 10–20 min for GPT-2 and 30–60 min for GPT-J. Additionally, computing the metrics from the logits (which take up 2 TB of disk space in float16) took between 2 and 4 h per model on a single machine with 32 CPU
cores. The total computing time was 318 core-days, including debugging and discarded runs.
## C Additional Plots

## C.1 Token-Wise Metrics As A Function Of Context Length
Figs. 8 and 9 show NLL and KL divergence (5), respectively, as a function of context length, for selected target tokens (proper nouns) from the validation set.
Figure 8: NLL losses (y axis) for 3 selected target tokens as a function of context length (x axis). Below each plot, the target token is displayed in bold, along with a context of 60 tokens. The x axis is reversed to correspond visually to left-hand context. The red dots show the 10 tokens that cause the largest drops in GPT-J cross entropy when added to the context.
(Figure 9: KL divergence (Eq. 5) for the same selected target tokens as a function of context length, analogous to Figure 8.)
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 5
✗ A2. Did you discuss any potential risks of your work?
We did not identify any risks. The usual risks related to large language models are arguably not present here since we are not proposing or training a new LM, but a way to explain it.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract, Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4
✓ B1. Did you cite the creators of artifacts you used?
Section 4
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section 4, appendix
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
For artifacts we use, no intended use is specified except for a non-commercial license (CC BY-NC-SA) for the data. We do not specifically discuss this but we are using all artifacts in a purely non-commercial research context. Any artifacts we release will be under a non-commercial license.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We use data published in peer-reviewed conference proceedings and we rely on the original authors to have taken those steps.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 4
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 4, appendix B
## C ✓ **Did You Run Computational Experiments?** Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix B
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4, appendix B
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4. Only reporting a single (deterministic) run at a time; it is made clear whenever reporting a mean metric over the dataset
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4, appendix A
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
wang-etal-2023-check | Let Me Check the Examples: Enhancing Demonstration Learning via Explicit Imitation | https://aclanthology.org/2023.acl-short.93 | Demonstration learning aims to guide the prompt prediction by providing answered demonstrations in the few shot settings. Despite achieving promising results, existing work only concatenates the answered examples as demonstrations to the prompt template (including the raw context) without any additional operation, neglecting the prompt-demonstration dependencies. Besides, prior research found that randomly replacing the labels of demonstrations marginally hurts performance, illustrating that the model could not properly learn the knowledge brought by the demonstrations. Inspired by the human learning process, in this paper, we introduce Imitation DEMOnstration learning (Imitation-Demo) to strengthen demonstration learning via explicitly imitating human review behaviour, which includes: (1) contrastive learning mechanism to concentrate on similar demonstrations.(2) demonstration-label re-prediction method to consolidate known knowledge. Experiment results show that our proposed method achieves state-of-the-art performance on 5 out of 14 classification corpus. Further studies also prove that Imitation-Demo strengthens the associations between the prompt and demonstrations, which could provide the basis for exploring how demonstration learning works. | # Let Me Check The Examples: Enhancing Demonstration Learning Via Explicit Imitation
Sirui Wang1,2∗, Kaiwen Wei2∗†, Hongzhi Zhang2, Yuntao Li2, Wei Wu2
1Department of Automation, Tsinghua University, China
2Meituan Inc., Beijing, China
{wangsirui,weikaiwen,zhanghongzhi03}@meituan.com
{liyuntao04,wuwei130}@meituan.com
## Abstract
Demonstration learning aims to guide the prompt prediction by providing answered demonstrations in the few shot settings. Despite achieving promising results, existing work only concatenates the answered examples as demonstrations to the prompt template (including the raw context) without any additional operation, neglecting the prompt-demonstration dependencies. Besides, prior research found that randomly replacing the labels of demonstrations *marginally* hurts performance, illustrating that the model could not properly learn the knowledge brought by the demonstrations. Inspired by the human learning process, in this paper, we introduce Imitation DEMOnstration learning (Imitation-Demo) to strengthen demonstration learning via explicitly imitating human review behaviour, which includes: (1) contrastive learning mechanism to concentrate on similar demonstrations.(2)
demonstration-label re-prediction method to consolidate known knowledge. Experiment results show that our proposed method achieves state-of-the-art performance on 5 out of 14 classification corpus. Further studies also prove that Imitation-Demo strengthens the associations between the prompt and demonstrations, which could provide the basis for exploring how demonstration learning works.
## 1 Introduction
Prompt-based learning typically works by modifying the input into cloze-style prompt templates and using the masked language models (MLMs) to complete the unfilled information in probabilistic. It has achieved promising performance in various NLP
tasks (Schick and Schütze, 2021; Lester et al., 2021; Hu et al., 2021), especially in low-resource settings
(Scao and Rush, 2021). A promising prompt engineering category is *demonstration learning* (Gao et al., 2021; Liu et al., 2021a), which seeks to provide a few answered samples as demonstrations to assist prompt prediction. As shown in Fig. 1 (a),
the demonstration learning method concatenates the answered demonstrations per category to the prompt, and seeks to classify the [*MASK*] token as *great*, indicating a *positive* prediction result based on a label-to-word mapping.
The intuition of demonstration learning is that samples with similar expressions or content can provide repetitive patterns (Liu et al., 2021a). However, Min et al. (2022) point out that replacing gold demonstration labels with random labels *marginally* hurts performance. This finding is counter-intuitive and illustrates that the model could not comprehensively refer to the knowledge brought by the demonstrations in an implicit way.
We attribute this problem to that existing methods simply concatenate the answered demonstrations to the prompt template without any additional operation, ignoring the dependencies between prompt and demonstrations.
To overcome this limitation, we rethink how human beings learn from demonstrations. Intuitively, when faced with a new challenging question, they typically (1) look for the most similar example to the question first, and then (2) reply to the question according to the answering steps of the retrieved example. Humans tend to strengthen the learning process through review strategies, i.e., finding a better solution to select similar examples and reanswering the questions of examples to consolidate known knowledge. Inspired by this, likewise, the interactions between the prompt and demonstrations could also be reinforced by imitating the human reviewing process for demonstration learning.
In this paper, we propose a simple-yet-effective version of demonstration learning, named **Imitation DEMO**nstration Learning (Imitation-Demo)
to explicitly strengthen the two sub-steps of demonstration learning via human-like review. Specifically, to accurately locate similar samples, we introduce a contrastive learning mechanism (Chen et al.,
2020; Robinson et al., 2021) to reorganize demonstrations by reducing the divergences of demonstration contexts among the same category while increasing those divergences between different categories. Besides, to solidify known knowledge, we leverage a demonstration-label re-prediction method to emphasize the positions of the answers in demonstrations. Even without introducing new parameters or any prediction computation, our proposed method achieves state-of-the-art performance on 5 out of 14 classification corpus. Compared to the strong baseline LM-BFF (Gao et al.,
2021), Imitation-Demo achieves 1.11 points averaged improvement on the 14 datasets. Further study also shows that Imitation-Demo strengthens the association between prompt and demonstrations, which could provide the basis for exploring how demonstration learning works.
## 2 Methodology
Demonstration Learning. As illustrated in Fig. 1 (a), the prompt template $x^{prompt}$ consists of the input sentence $x^{sent}$ and a template $x^{temp}$ containing the mask token, i.e., $x^{prompt} = [x^{sent}, x^{temp}]$. Firstly, we leverage the pre-trained SBERT (Reimers and Gurevych, 2019) to retrieve, for the $k$-th category, the demonstration (including context $x^{(k)}$ and label $y^{(k)}$) that has maximum semantic similarity to the raw prompt context. Then, the retrieved demonstrations are concatenated to the input prompt. After that, we convert the concatenated input sentence $x^{in}$ to hidden vectors $\mathbf{h}^{in}$ via the RoBERTa model (Liu et al., 2019). The model is optimized by a cross-entropy loss, and the goal of demonstration learning is to predict $y^{mask}$ at the [*MASK*] position from the hidden state of the mask $\mathbf{h}^{mask}$ via the MLM head. The whole process could be formulated as1:

$$\begin{aligned}x^{in} &= \left[x^{prompt},(x^{(1)},y^{(1)}),\ldots,(x^{(K)},y^{(K)})\right]\\ \mathbf{h}^{in} &= \mathrm{RoBERTa}(x^{in})\\ \mathcal{L}_{mask} &= \mathrm{CE}(\mathbf{h}^{mask},\hat{Y}^{mask})\\ p\left(y^{mask}\mid x^{in}\right) &= \mathrm{MLM}(\mathbf{h}^{mask})\end{aligned}\tag{1}$$
where $[\,\cdot\,,\cdot\,,\cdot\,]$ denotes concatenating the different parts with the sentence separator [SEP], $K$ is the number of categories, CE is short for cross-entropy loss, and $\hat{Y}^{mask}$ denotes the ground-truth labels from the predefined label-to-word mapping.
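As an illustration of how the demonstration-augmented input could be assembled, the sketch below retrieves the most similar answered example per class with SBERT and concatenates it to the prompt; the helper name `build_input`, the template string, and the toy examples are our own assumptions, not the authors' released code.

```python
# Minimal sketch: retrieve one demonstration per class by SBERT similarity and
# concatenate it to the prompt (illustrative code, not the authors' implementation).
from sentence_transformers import SentenceTransformer, util

sbert = SentenceTransformer("roberta-large-nli-stsb-mean-tokens")

def build_input(prompt_context, template, pool, label_words):
    """pool: list of (context, label) pairs; label_words maps label -> verbalizer word."""
    query = sbert.encode(prompt_context, convert_to_tensor=True)
    parts = [f"{prompt_context} {template}"]                  # e.g. template = "It was <mask>."
    for label, word in label_words.items():
        candidates = [c for c, l in pool if l == label]
        embs = sbert.encode(candidates, convert_to_tensor=True)
        best = util.cos_sim(query, embs).argmax().item()      # most similar context for this class
        parts.append(f"{candidates[best]} It was {word}.")    # answered demonstration
    return " </s> ".join(parts)                               # RoBERTa sentence separator

x_in = build_input(
    "A gripping and heartfelt film.", "It was <mask>.",
    pool=[("Utterly boring.", 0), ("An instant classic.", 1)],
    label_words={1: "great", 0: "terrible"},
)
```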
Demonstration Reorganization via Contrastive Learning. In demonstration learning, it is crucial to decide from which known demonstrations to select the repetitive patterns. Therefore, we introduce a contrastive learning mechanism to imitate human review behaviour by reorganizing the demonstrations based on their contexts. As shown in Fig. 1 (b)(I), we treat the demonstration contexts with the same category as the input prompt as positive samples, and the others are regarded as negative ones. By pulling in positive samples and pushing away negative samples, the model can select the most relevant sample among the given demonstrations more precisely. In the experiment, we apply mean-pooling operations on the hidden states of the positive and negative demonstration contexts $\mathbf{h}^{+}$, $\mathbf{h}^{-}$ and of the input sentence $\mathbf{h}^{in}$, obtaining the sentence representations $s^{+}$, $s^{-}$, and $s^{in}$. Inspired by Robinson et al. (2021) in computer vision, we introduce the HCL loss to ensure intra-class compactness while increasing inter-class distances:

$$\mathcal{L}_{context}=\mathrm{E}\left[-\log\frac{e^{s^{in}\cdot s^{+}}}{e^{s^{in}\cdot s^{+}}+\sum_{i=1}^{N}e^{s^{in}\cdot s_{i}^{-}}}\right]\tag{2}$$

where $\cdot$ is the dot product operation, $N$ is the number of negative contexts in the task, and $\mathrm{E}[\cdot]$ denotes taking the mean value.

1Due to the space restriction, we only briefly describe the general process of demonstration learning; please refer to Gao et al. (2021) for more details.
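A minimal sketch of this contrastive objective over mean-pooled representations is given below; it follows the InfoNCE-style form of Eq. (2), and the temperature handling (dividing the dot products by T) is a simplification of the paper's description of dividing the pooled representations by T.

```python
# Minimal sketch of the contrastive term of Eq. (2) over mean-pooled representations.
import torch

def mean_pool(hidden, mask):
    """hidden: (batch, seq, dim); mask: (batch, seq) -> (batch, dim)."""
    return (hidden * mask.unsqueeze(-1)).sum(1) / mask.sum(1, keepdim=True)

def context_contrastive_loss(s_in, s_pos, s_neg, temperature=5.0):
    """s_in, s_pos: (dim,) prompt / positive representations; s_neg: (num_negatives, dim)."""
    pos = torch.exp(s_in @ s_pos / temperature)            # e^{s_in · s_+}
    neg = torch.exp(s_neg @ s_in / temperature).sum()      # sum_i e^{s_in · s_i^-}
    return -torch.log(pos / (pos + neg))
```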
Demonstration-label Re-prediction. We further utilize a demonstration-label re-prediction method to mimic human review behaviour by recovering the labels from all the given demonstration contexts. Specifically, the target of our model is not only to identify the category of the [*MASK*] token, but also to classify the tokens located at the demonstration label positions. Take the binary classification task in Fig. 1 (b)(II) as an example: beyond predicting the class of the mask token, the model is also required to predict $y^{great}$ and $y^{terri}$ (i.e., *great* and *terrible*) based on the hidden states $\mathbf{h}^{great}$ and $\mathbf{h}^{terri}$ at the corresponding label positions. During training, the cross-entropy loss is utilized to calculate $\mathcal{L}_{great}$ and $\mathcal{L}_{terri}$ for the different demonstration labels; we then sum them up to obtain the demonstration-label re-prediction loss $\mathcal{L}_{label}$:

$$\begin{aligned}\mathcal{L}_{great} &= \mathrm{CE}(\mathbf{h}^{great},\hat{Y}^{great})\\ \mathcal{L}_{terri} &= \mathrm{CE}(\mathbf{h}^{terri},\hat{Y}^{terri})\\ \mathcal{L}_{label} &= \mathcal{L}_{great}+\mathcal{L}_{terri}\end{aligned}\tag{3}$$

where $\hat{Y}^{great}$ and $\hat{Y}^{terri}$ are the ground-truth labels at the respective demonstration label positions.
Similar contrastive learning and demonstration-label re-prediction operations can also be performed for multi-category classification tasks. The overall loss of Imitation-Demo is defined as follows:

$$\mathcal{L}=\mathcal{L}_{mask}+\alpha\,\mathcal{L}_{label}+\beta\,\mathcal{L}_{context}\tag{4}$$

where $\alpha$ and $\beta$ are weight coefficients that control the importance of the different components.
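The sketch below illustrates how the label re-prediction term of Eq. (3) and the overall objective of Eq. (4) could be computed; the default α=1 and β=5 follow Section 3, while the function names and tensor shapes are assumptions.

```python
# Sketch of the demonstration-label re-prediction term (Eq. 3) and the overall
# objective (Eq. 4); alpha=1 and beta=5 follow Section 3, everything else is assumed.
import torch.nn.functional as F

def label_reprediction_loss(token_logits, label_positions, gold_token_ids):
    """token_logits: (batch, seq, vocab); one cross-entropy term per demonstration label position."""
    terms = [F.cross_entropy(token_logits[:, pos, :], gold)
             for pos, gold in zip(label_positions, gold_token_ids)]
    return sum(terms)

def imitation_demo_loss(l_mask, l_label, l_context, alpha=1.0, beta=5.0):
    return l_mask + alpha * l_label + beta * l_context     # Eq. (4)
```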
## 3 Experiments
| Model | MRPC | SNLI | SST-2 |
|-----------------|------------|------------|------------|
| Imitation-Demo | 80.8 (3.2) | 80.0 (3.3) | 93.1 (0.5) |
| LM-BFF* | 79.7 (3.2) | 77.8 (0.6) | 92.1 (1.5) |
| Imitation-Demo* | 74.4 (9.2) | 76.0 (5.2) | 91.0 (1.3) |

Table 1: Results when using demonstrations with random labels. * denotes trained with random labels.

Experiments Settings. Following the settings in Gao et al. (2021), we evaluate on 14 classification datasets. For SNLI (Bowman et al., 2015), SST-2
(Socher et al., 2013), CoLA (Warstadt et al., 2019),
MNLI (Williams et al., 2018), QNLI (Rajpurkar et al., 2016), RTE (Dagan et al., 2005; Giampiccolo et al., 2007; Bentivogli et al., 2009), MRPC
(Dolan and Brockett, 2005), QQP (https://www.quora.com/q/quoradata/) and STS-B (Cer et al., 2017), we use the original development sets for testing. For MR (Pang and Lee, 2005), CR
(Hu and Liu, 2004), MPQA (Wiebe et al., 2005)
and Subj (Pang and Lee, 2004), we randomly sample 2,000 examples as the testing set. For SST-5
(Socher et al., 2013) and TREC (Voorhees and Tice, 2000), we use the official test sets. F1 score (F1)
is adopted as the evaluation metric for MRPC and QQP, and the other datasets utilize accuracy (acc)
as the evaluation criteria.
Parameter Settings. We implement all the baselines and our frameworks using PyTorch (Paszke et al., 2019). The pre-trained *RoBERTa-large* model and the *roberta-large-nli-stsb-mean-tokens* SBERT (Reimers and Gurevych, 2019) from huggingface (https://github.com/huggingface/transformers) are applied in the experiments. We use 16 samples per class during training for all models. In order to control the smoothness of the exponential functions when calculating the contrastive learning loss, we divide the mean-pooling results by a temperature T. A grid search is used to select the optimal hyper-parameter combination on each split. Finally, we set the coefficients α and β to 1 and 5, respectively. The temperature T is set to 5 and the batch size is 16.
The other hyper-parameters and the prompt templates are identical to the default settings in LM-BFF (Gao et al., 2021) for a fair comparison. We report the average performance of models trained on 5 different randomly sampled training and dev splits; the random seeds are fixed as 13, 32, 42, 87, and 100, respectively.
Compared Methods. (1) **Majority**, which selects the majority class of the dataset; (2) **Prompt-based zero-shot**, which uses prompt tuning in zero-shot situations; (3) "GPT-3" in-context learn-
| Method | SST-2 (acc) | SST-5 (acc) | MR (acc) | CR (acc) | MPQA (acc) | Subj (acc) | TREC (acc) |
|---|---|---|---|---|---|---|---|
| Majority | 50.9 | 23.1 | 50.0 | 50.0 | 50.0 | 50.0 | 18.8 |
| Prompt-based zero-shot | 83.6 | 35.0 | 80.8 | 79.5 | 67.6 | 51.4 | 32.0 |
| "GPT-3" in-context learning | 84.8 (1.3) | 30.6 (0.9) | 80.5 (1.7) | 87.4 (0.8) | 63.8 (2.1) | 53.6 (1.0) | 26.2 (2.4) |
| Fine-tuning | 81.4 (3.8) | 43.9 (2.0) | 76.9 (5.9) | 75.8 (3.2) | 72.0 (3.8) | 90.8 (1.8) | 88.8 (2.1) |
| P-tuning | 92.2 (0.4) | - | 86.7 (1.2) | 91.8 (1.1) | - | 90.3 (2.2) | 86.3 (4.5) |
| DART | 93.5 (0.5) | - | 88.2 (1.0) | 91.8 (0.5) | - | 90.7 (1.4) | 87.1 (3.8) |
| Li's | 92.8 (0.6) | 50.7 (2.9) | 89.4 (0.8) | 90.5 (2.2) | 83.2 (1.4) | 92.1 (0.7) | 87.2 (3.8) |
| Demo-tuning (LM-BFF) | 93.2 (0.4) | 50.1 (0.4) | 87.9 (0.6) | 91.5 (0.6) | 85.9 (1.5) | 92.3 (0.6) | 90.7 (4.5) |
| LM-BFF + SupCon | 94.2 (0.7) | 54.0 (0.8) | 89.6 (0.8) | 91.0 (1.4) | 86.9 (1.1) | 92.4 (0.6) | 89.8 (1.8) |
| EFL ♡ | 91.1 (1.5) | 41.8 (1.6) | 85.7 (3.7) | 87.7 (5.4) | 75.8 (4.8) | 91.7 (1.8) | 88.1 (2.3) |
| LM-BFF ♡ | 92.2 (1.4) | 51.2 (1.6) | 88.2 (0.9) | 91.8 (1.5) | 85.5 (4.2) | 90.9 (1.9) | 87.6 (4.8) |
| Imitation-Demo (ours) | 93.1 (0.5) | 52.3 (0.6) | 89.1 (1.0) | 91.8 (0.7) | 87.7 (1.2) | 92.4 (1.1) | 89.1 (3.2) |
| Prompt-based Fine-tuning (man) ♡ | 92.6 (0.5) | 47.4 (2.5) | 87.0 (1.2) | 90.3 (1.0) | 84.7 (2.2) | 91.2 (1.1) | 84.8 (5.1) |
| + demonstrations♡ | 92.2 (1.4) | 51.2 (1.6) | 88.2 (0.9) | 91.8 (1.5) | 85.5 (4.2) | 90.9 (1.9) | 87.6 (4.8) |
| + demonstration-label re-prediction | 92.8 (0.7) | 51.4 (1.0) | 89.2 (1.0) | 92.2 (1.2) | 87.5 (1.0) | 92.1 (1.6) | 89.9 (3.1) |
| + contrastive learning | 93.1 (0.5) | 52.3 (0.6) | 89.1 (1.0) | 91.8 (0.7) | 87.7 (1.2) | 92.4 (1.1) | 89.1 (3.2) |

| Method | MNLI (acc) | MNLI-mm (acc) | SNLI (acc) | QNLI (acc) | RTE (acc) | MRPC (F1) | QQP (F1) |
|---|---|---|---|---|---|---|---|
| Majority | 32.7 | 33.0 | 33.8 | 49.5 | 52.7 | 52.7 | 0.0 |
| Prompt-based zero-shot | 50.8 | 51.7 | 49.5 | 50.8 | 51.3 | 61.9 | 49.7 |
| "GPT-3" in-context learning | 52.0 (0.7) | 53.4 (0.6) | 47.1 (0.6) | 53.8 (0.4) | 60.4 (1.4) | 45.7 (6.0) | 36.1 (5.2) |
| Fine-tuning | 45.8 (6.4) | 47.8 (6.8) | 48.4 (4.8) | 60.2 (6.5) | 54.4 (3.9) | 76.6 (2.5) | 60.7 (4.3) |
| P-tuning | 61.5 (2.1) | - | 72.3 (3.0) | 64.3(2.8) | - | 76.2 (2.3) | 65.6 (3.0) |
| DART | 67.5 (2.6) | - | 75.8 (1.6) | 66.7 (3.7) | - | 78.3 (4.5) | 67.8 (3.2) |
| Li's | 69.2 (4.0) | 71.0 (3.5) | 79.3 (3.2) | 69.0 (4.5) | 74.2 (3.1) | 73.2 (7.5) | 68.2 (3.4) |
| Demo-tuning (LM-BFF) | 71.0 (2.0) | 72.8 (1.5) | 78.7 (1.9) | 73.1 (1.8) | 70.0 (3.4) | 78.4 (2.3) | 70.2 (1.7) |
| LM-BFF + SupCon | 72.4 (2.0) | 74.2 (1.9) | 79.6 (2.6) | 71.1 (6.8) | 71.8 (1.1) | 77.8 (4.6) | 74.0 (2.5) |
| EFL ♡ | 65.8 (3.7) | 68.5 (2.8) | 78.2 (1.3) | 67.6 (5.5) | 68.9 (1.5) | 77.4 (6.3) | 67.0 (2.9) |
| LM-BFF ♡ | 69.6 (2.9) | 71.3 (2.6) | 78.0 (3.6) | 68.8 (5.4) | 68.7 (2.3) | 77.3 (6.0) | 68.7 (4.7) |
| Imitation-Demo (ours) | 71.4 (0.9) | 72.0 (2.0) | 80.0 (3.3) | 70.5 (3.3) | 71.5 (1.5) | 80.8 (3.2) | 70.9 (1.5) |
| Prompt-based Fine-tuning (man) ♡ | 68.3 (2.3) | 70.5 (1.9) | 77.2 (3.7) | 64.5 (4.3) | 69.1 (3.6) | 74.5 (5.3) | 65.5 (5.3) |
| + demonstrations♡ | 69.6 (2.9) | 71.3 (2.6) | 78.0 (3.6) | 68.8 (5.4) | 68.7 (2.3) | 77.3 (6.0) | 68.7 (4.7) |
| + demonstration-label re-prediction | 71.3 (0.9) | 72.5 (1.4) | 79.6 (3.2) | 70.3 (4.1) | 70.8 (3.4) | 77.0 (2.6) | 68.8 (2.6) |
| + contrastive learning | 71.4 (0.9) | 72.0 (2.0) | 80.0 (3.3) | 70.5 (3.3) | 71.5 (1.5) | 80.8 (3.2) | 70.9 (1.5) |
ing, which uses the in-context learning proposed in RoBERTa with no parameter updating; (4) **Fine-tuning**; (5) **P-tuning** (Liu et al., 2021b), which employs trainable continuous prompt embeddings; (6) **DART** (Zhang et al., 2021), which differentially optimizes the prompt template and the target label during the backpropagation process; (7) **Li's** (Li et al., 2022), which reformulates a classification or a regression task as a token-replaced detection problem utilizing the pre-trained model ELECTRA (Clark et al., 2020); (8) **Demo-tuning (LM-BFF)** (Liang et al., 2022), which selects the "mask token" output feature as the input for contrastive learning to get a good representation of the "virtual demonstration". We select the LM-BFF as the basic backbone model for fair comparisons. (9) **LM-BFF + SupCon** (Jian et al., 2022), which proposes a supervised contrastive framework that clusters inputs from the same class under different augmented "views" and repels the ones from different classes. The LM-BFF is selected as the basic model. (10) EFL (Wang et al., 2021), which reformulates a potential NLP task into an entailment one. (11) **LM-BFF** (Gao et al., 2021), which manually designs templates and augments prompt tuning with demonstrations.
Main Results. From the experiment results illustrated in Table 2, we can conclude that: (1) The methods leveraging demonstrations (e.g., LM-BFF and Imitation-Demo) generally achieve productive results, proving the superiority of the demonstration learning mechanism. (2) Compared to those methods that utilize continuous prompt embeddings or reformulate the task formats to boost experiment results, Imitation-Demo achieves state-of-the-art results on 5 out of 14 datasets in the original mask-prediction way without introducing additional parameters or any prediction computation. The performance gain indicates that Imitation-Demo could effectively promote experiment results by reinforcing the connections between the prompt and demonstrations. (3) Ablation experiment results in the lower part of Table 2 illustrate the effectiveness of the proposed demonstration reorganization and demonstration-label re-prediction methods.

| Model | QQP | MNLI-mm | MNLI |
|----------------|-----------|--------|------|
| LM-BFF | 1.11 | 1.02 | 1.01 |
| Imitation-Demo | 1.16 | 1.04 | 1.05 |

Table 3: Averaged RoBERTa attention results pointing from demonstrations to prompt. The values are normalized by the default RoBERTa pre-training weights.
Analysis. Extensive experiments are conducted to show that our human-like imitation mechanisms enhance the connection between the prompt and the demonstrations. Firstly, when trained with random demonstration labels, as shown in Table 1, we observe that Imitation-Demo has a greater drop rate than LM-BFF, indicating that the [*MASK*] prediction depends more on the semantic information from the demonstrations. This finding could, to some extent, explain why Min et al. (2022) observe little performance degradation when using random demonstration labels. Moreover, following Wei et al. (2021), we further conduct an experiment to examine the review process through the attention weights of the RoBERTa backbone. We average all 384 attention heads of RoBERTa-large pointing from the demonstrations to the prompt, and then normalize the values by the default RoBERTa pre-trained weights. From the results in Table 3, we observe that Imitation-Demo receives larger attention values. This indicates that our approach can direct the RoBERTa model by modifying the corresponding attention weights, guiding the prompt to focus more on the clues brought by the demonstrations. Since the models are trained in a few-shot scenario, the model weights are not tuned heavily; thus we do not observe a large difference in average attention scores between the proposed Imitation-Demo and the baseline method. However, with only 16 samples per class for training, Imitation-Demo already shows higher averaged attention weights compared with the baseline method, indicating stronger connections between the prompt and the demonstrations.
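The following sketch illustrates one way to compute the averaged demonstration-to-prompt attention reported in Table 3, normalized by the same quantity under the untuned pre-trained RoBERTa; the exact aggregation used by the authors may differ, and all names here are illustrative.

```python
# Sketch of the attention analysis: average attention mass flowing from demonstration
# tokens to prompt tokens over all layers and heads, then normalize by the same
# quantity under the untuned pre-trained model (all names are illustrative).
import torch

@torch.no_grad()
def demo_to_prompt_attention(model, inputs, prompt_idx, demo_idx):
    out = model(**inputs, output_attentions=True)
    # out.attentions: one (batch, heads, seq, seq) tensor per layer
    att = torch.stack(out.attentions).mean(dim=(0, 2))      # (batch, seq, seq), averaged over layers and heads
    return att[:, demo_idx][:, :, prompt_idx].mean().item() # rows = demo tokens, cols = prompt tokens

# normalized = demo_to_prompt_attention(tuned_model, batch, prompt_idx, demo_idx) / \
#              demo_to_prompt_attention(pretrained_model, batch, prompt_idx, demo_idx)
```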
## 4 Conclusion
In this paper, we propose imitation demonstration learning (Imitation-Demo) to reinforce the correlations between prompt and given demonstrations. Inspired by the human review process, we introduce contrastive learning to locate similar samples and demonstration-label re-prediction mechanisms to solidify known knowledge. Experiments show that our method consistently outperforms other baselines on 5 out of 14 classification datasets in the few-shot settings. We hope this work could inspire the exploration of the working mechanism of demonstration learning and toward better few-shot learning abilities.
## Limitations
Although the experiment results have illustrated the effectiveness of the proposed Imitation-Demo method, we have to admit that our work has the following limitations:
1) This article is based on that the readers have some knowledge of prompt-based learning or demonstration learning. Due to the space limitation, we can only briefly describe the basic process of the demonstration learning, which may make the article a bit obscure and difficult to follow.
2) Imitation-Demo does not achieve state-of-theart on all the datasets, but outperforms other strong baselines on 5 out of 14 datasets. Besides, it consistently surpasses the demonstration learning-based baseline LM-BFF. Since Imitation-Demo is trained without introducing new parameters and explores the working principle of demonstration learning from a certain perspective, we believe the results are acceptable.
## References
Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2009. The fifth pascal recognizing textual entailment challenge. In TAC.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In *Proceedings of the 2015 Conference on* Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 632–642. The Association for Computational Linguistics.
Daniel M. Cer, Mona T. Diab, Eneko Agirre, Iñigo Lopez-Gazpio, and Lucia Specia. 2017. Semeval2017 task 1: Semantic textual similarity multilingual
and crosslingual focused evaluation. In *Proceedings* of the 11th International Workshop on Semantic Evaluation, SemEval@ACL 2017, Vancouver, Canada, August 3-4, 2017, pages 1–14. Association for Computational Linguistics.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. 2020. A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 1597–1607. PMLR.
Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: pretraining text encoders as discriminators rather than generators. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Ido Dagan, Oren Glickman, and Bernardo Magnini.
2005. The PASCAL recognising textual entailment challenge. In Machine Learning Challenges, Evaluating Predictive Uncertainty, Visual Object Classification and Recognizing Textual Entailment, First PASCAL Machine Learning Challenges Workshop, MLCW 2005, Southampton, UK, April 11-13, 2005, Revised Selected Papers, volume 3944 of *Lecture* Notes in Computer Science, pages 177–190. Springer.
William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases.
In Proceedings of the Third International Workshop on Paraphrasing, IWP@IJCNLP 2005, Jeju Island, Korea, October 2005, 2005. Asian Federation of Natural Language Processing.
Tianyu Gao, Adam Fisch, and Danqi Chen. 2021.
Making pre-trained language models better few-shot learners. In *Proceedings of the 59th Annual Meeting* of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1:
Long Papers), Virtual Event, August 1-6, 2021, pages 3816–3830. Association for Computational Linguistics.
Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and William B Dolan. 2007. The third pascal recognizing textual entailment challenge. In Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing, pages 1–9.
Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In *Proceedings of the Tenth* ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Seattle, Washington, USA, August 22-25, 2004, pages 168–177. ACM.
Shengding Hu, Ning Ding, Huadong Wang, Zhiyuan Liu, Juanzi Li, and Maosong Sun. 2021. Knowledgeable prompt-tuning: Incorporating knowledge into prompt verbalizer for text classification. *CoRR*,
abs/2108.02035.
Yiren Jian, Chongyang Gao, and Soroush Vosoughi.
2022. Contrastive learning for prompt-based fewshot language learners. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, WA,
United States, July 10-15, 2022, pages 5577–5587.
Association for Computational Linguistics.
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021.
The power of scale for parameter-efficient prompt tuning. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 3045–
3059. Association for Computational Linguistics.
Zicheng Li, Shoushan Li, and Guodong Zhou. 2022.
Pre-trained token-replaced detection model as few-shot learner. *CoRR*, abs/2203.03235.
Xiaozhuan Liang, Ningyu Zhang, Siyuan Cheng, Zhenru Zhang, Chuanqi Tan, and Huajun Chen.
2022. Contrastive demonstration tuning for pretrained language models. In *Findings of the Association for Computational Linguistics: EMNLP 2022,*
Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 799–811. Association for Computational Linguistics.
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021a. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing.
arXiv preprint arXiv:2107.13586.
Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021b. GPT
understands, too. *CoRR*, abs/2103.10385.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692.
Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the role of demonstrations: What makes in-context learning work? *CoRR*,
abs/2202.12837.
Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, 21-26 July, 2004, Barcelona, Spain, pages 271–278. ACL.
Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In *ACL 2005, 43rd Annual* Meeting of the Association for Computational Linguistics, Proceedings of the Conference, 25-30 June 2005, University of Michigan, USA, pages 115–124.
The Association for Computer Linguistics.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Z.
Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In *Advances in Neural Information Processing* Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 8024–8035.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100, 000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 2383–2392.
The Association for Computational Linguistics.
Nils Reimers and Iryna Gurevych. 2019. Sentence-bert:
Sentence embeddings using siamese bert-networks.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 3980–3990.
Association for Computational Linguistics.
Joshua David Robinson, Ching-Yao Chuang, Suvrit Sra, and Stefanie Jegelka. 2021. Contrastive learning with hard negative samples. In *9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021*. OpenReview.net.
Teven Le Scao and Alexander M. Rush. 2021. How many data points is a prompt worth? In *Proceedings* of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 2627–2636. Association for Computational Linguistics.
Timo Schick and Hinrich Schütze. 2021. Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021, pages 255–269. Association for Computational Linguistics.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, 18-21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1631–1642. ACL.
Ellen M. Voorhees and Dawn M. Tice. 2000. Building a question answering test collection. In SIGIR 2000:
Proceedings of the 23rd Annual International ACM
SIGIR Conference on Research and Development in Information Retrieval, July 24-28, 2000, Athens, Greece, pages 200–207. ACM.
Sinong Wang, Han Fang, Madian Khabsa, Hanzi Mao, and Hao Ma. 2021. Entailment as few-shot learner.
CoRR, abs/2104.14690.
Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019. Neural network acceptability judgments.
Trans. Assoc. Comput. Linguistics, 7:625–641.
Kaiwen Wei, Xian Sun, Zequn Zhang, Jingyuan Zhang, Zhi Guo, and Li Jin. 2021. Trigger is not sufficient:
Exploiting frame-aware knowledge for implicit event argument extraction. In *Proceedings of the 59th Annual Meeting of the Association for Computational* Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP
2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 4672–4682. Association for Computational Linguistics.
Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005.
Annotating expressions of opinions and emotions in language. *Lang. Resour. Evaluation*, 39(2-3):165–
210.
Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In *Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, NAACLHLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 1112–1122.
Association for Computational Linguistics.
Ningyu Zhang, Luoqiu Li, Xiang Chen, Shumin Deng, Zhen Bi, Chuanqi Tan, Fei Huang, and Huajun Chen. 2021. Differentiable prompt makes pre-trained language models better few-shot learners. *CoRR*,
abs/2108.13161.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitation Section.
✓ A2. Did you discuss any potential risks of your work?
Limitation Section.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Introduction Sections.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3
✓ B1. Did you cite the creators of artifacts you used?
Section 3
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 3
✗ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C ✓ **Did You Run Computational Experiments?** In Section 3.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
In Section 3.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
In Section 3.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
In Section 3.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
In Section 3.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
rei-etal-2023-inside | The Inside Story: Towards Better Understanding of Machine Translation Neural Evaluation Metrics | https://aclanthology.org/2023.acl-short.94 | Neural metrics for machine translation evaluation, such as COMET, exhibit significant improvements in their correlation with human judgments, as compared to traditional metrics based on lexical overlap, such as BLEU. Yet, neural metrics are, to a great extent, "black boxes" returning a single sentence-level score without transparency about the decision-making process. In this work, we develop and compare several neural explainability methods and demonstrate their effectiveness for interpreting state-of-the-art fine-tuned neural metrics. Our study reveals that these metrics leverage token-level information that can be directly attributed to translation errors, as assessed through comparison of token-level neural saliency maps with Multidimensional Quality Metrics (MQM) annotations and with synthetically-generated critical translation errors. To ease future research, we release our code at: https://github.com/Unbabel/COMET/tree/explainable-metrics |
## The Inside Story: Towards Better Understanding Of Machine Translation Neural Evaluation Metrics
Ricardo Rei∗1,2,4, Nuno M. Guerreiro∗3,4, Marcos Treviso3,4, Alon Lavie1, Luisa Coheur2,4, André F. T. Martins1,3,4
1Unbabel, Lisbon, Portugal, 2INESC-ID, Lisbon, Portugal, 3Instituto de Telecomunicações, Lisbon, Portugal, 4Instituto Superior Técnico, University of Lisbon, Portugal
## Abstract
Neural metrics for machine translation evaluation, such as COMET, exhibit significant improvements in their correlation with human judgments compared to traditional metrics based on lexical overlap, such as BLEU.
Yet neural metrics are, to a great extent,
"black boxes" that return a single sentence-level score without transparency about the decisionmaking process. In this work, we develop and compare several neural explainability methods and demonstrate their effectiveness for interpreting state-of-the-art fine-tuned neural metrics. Our study reveals that these metrics leverage token-level information that can be directly attributed to translation errors, as assessed through comparison of token-level neural saliency maps with Multidimensional Quality Metrics (MQM) annotations and with synthetically-generated critical translation errors. To ease future research, we release our code at https://github.com/Unbabel/COMET/
tree/explainable-metrics.
## 1 Introduction
Reference-based neural metrics for machine translation evaluation are achieving evergrowing success, demonstrating superior results over traditional lexical overlap-based metrics, such as BLEU (Papineni et al., 2002) and CHRF (Popovic´, 2015),
in terms of both their correlation with human ratings and their robustness across diverse domains (Callison-Burch et al., 2006; Smith et al.,
2016; Mathur et al., 2020; Kocmi et al., 2021; Freitag et al., 2022). However, lexical overlapbased metrics remain popular for evaluating the performance and progress of translation systems and algorithms. Concerns regarding trust and interpretability may help explain this (Leiter et al.,
2022): contrary to traditional metrics, neural metrics are considered "black boxes" as they often use
increasingly large models (e.g., the winning metric of the WMT 22 Metrics shared task was a 10B parameter model (Freitag et al., 2022)).

∗ Equal contribution. Corresponding author: [email protected]

(Figure 1: Illustration of token-level explanations for a reference-based neural metric aligning with human-annotated error spans.)
While some recent work has focused on explaining the predictions made by *reference-free* quality estimation (QE) systems (Fomicheva et al., 2021; Zerva et al., 2022), explaining *reference-based* metrics has remained a largely overlooked problem (Leiter et al., 2022). It is an open question whether the observations from studies of explainable QE carry over to this scenario. Thus, in this work, we fill that gap by turning to state-of-the-art reference-based metrics: we aim to interpret their decision-making process by exploiting the fact that these metrics show consistently good correlations with *Multidimensional Quality Metrics* (MQM) (Freitag et al., 2021, 2022; Sai et al., 2022), which are fine-grained quality assessments that result from experts identifying error spans in translation outputs (Lommel et al., 2014). We hypothesize that reference-based metrics leverage this token-level information to produce sentence-level scores. To test this hypothesis, we assess whether our explanations - measures of token-level importance obtained via attribution and input attribution methods such as attention weights and gradient scores (Treviso et al., 2021; Rei et al., 2022b) - align with human-annotated spans (Fomicheva et al., 2021, 2022; Zerva et al., 2022), as illustrated in Figure 1.
Our analysis focuses on two main vectors: (i) understanding the impact of the reference information on the quality of the explanations; and (ii) finding whether the explanations can help to identify potential weaknesses in the metrics. Our main contributions are:
- We provide a comparison between multiple explainability methods for different metrics on all types of evaluation: src-only, ref-only, and src+ref joint evaluation.
- We find that explanations are related to the underlying metric architecture, and that leveraging reference information improves the explanations.
- We show that explanations for critical translation errors can reveal weaknesses in the metrics.
## 2 Explaining Neural Metrics
We aim to explain sentence-level quality assessments of reference-based metrics by producing token-level explanations that align to translation errors. In what follows, we describe the metrics and how we produce the explanations that we study.
## 2.1 Metrics
We focus our analysis on two state-of-the-art neural metrics: COMET (Rei et al., 2020) and UNITE (Wan et al., 2022).1 While both metrics use a multilingual encoder model based on XLM-R (Conneau et al., 2020), they employ distinct strategies to obtain sentence-level quality scores.
On the one hand, COMET *separately* encodes the source, translation and reference to obtain their respective sentence embeddings; these embeddings are then combined to compute a quality score. On the other, UNITE *jointly* encodes the sentences to compute a contextualized representation that is subsequently used to compute the quality score.
Interestingly, UNITE is trained to obtain quality scores for different input combinations: [mt; src] ( SRC ), [mt; ref] (REF), and [mt; src; ref] (SRC+REF). In fact, when the input is SRC ,
UNITE works like TransQuest (Ranasinghe et al.,
2020); REF, like BLEURT (Sellam et al., 2020);
and SRC+REF, like ROBLEURT (Wan et al., 2021).
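To make the architectural contrast concrete, the sketch below shows a COMET-style sentence-level head that combines separately encoded (and pooled) embeddings with element-wise operations, as mentioned later in Section 4.1; the exact feature set, layer sizes, and pooling are illustrative assumptions rather than the released COMET architecture.

```python
# Schematic sketch of a COMET-style sentence-level regressor: source, hypothesis and
# reference are encoded separately, pooled, and combined with element-wise operations
# before a feed-forward head (sizes, pooling, and feature set are illustrative).
import torch
import torch.nn as nn

class CometStyleHead(nn.Module):
    def __init__(self, dim=1024):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(6 * dim, 2048), nn.Tanh(), nn.Linear(2048, 1))

    def forward(self, h_mt, h_src, h_ref):
        # h_*: pooled sentence embeddings of shape (batch, dim)
        feats = torch.cat(
            [h_mt, h_ref, h_mt * h_src, h_mt * h_ref,
             (h_mt - h_src).abs(), (h_mt - h_ref).abs()], dim=-1)
        return self.ff(feats).squeeze(-1)      # sentence-level quality score
```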
## 2.2 Explanations Via Attribution Methods
In this work, we produce explanations using attribution methods that assign a scalar value to each translation token (i.e. a token-level attribution) to represent its importance. While many input attribution methods exist and have been extensively studied in the literature (Ribeiro et al., 2016; Shrikumar et al., 2017; Sundararajan et al., 2017; Jain and Wallace, 2019; Atanasova et al., 2020; Zaman and Belinkov, 2022), we focus specifically on those that have been demonstrated to be effective for explaining the predictions of QE models (Treviso et al., 2021; Fomicheva et al., 2022; Fernandes et al., 2022; Zerva et al., 2022) and extend them to our reference-based scenario. Concretely, we use the following techniques to extract explanations:2
- **embed–align:** the maximum cosine similarity between each translation token embedding and the reference and/or source token embeddings (Tao et al., 2022);
- **grad** ℓ2: the ℓ2-norm of gradients with respect to the word embeddings of the translation tokens (Arras et al., 2019);
- **attention**: the attention weights of the translation tokens for each attention head of the encoder (Treviso et al., 2021);
- attn × **grad**: the attention weights of each head scaled by the ℓ2-norm of the gradients of the value vectors of that head (Rei et al., 2022b).
## 3 Experimental Setting
MQM annotations. We use MQM annotations from the WMT 2021 Metrics shared task (Freitag et al., 2021),3 covering three language pairs, English-German (en→de), English-Russian (en→ru), and Chinese-English (zh→en), in two different domains: News and TED Talks. For each incorrect translation, human experts marked the corresponding error spans. In our framework, these error spans should align with the words that the attribution methods assign higher importance to.
2For all attention-based methods, we ensemble the explanations from the top 5 heads as this has shown to improve performance consistently over selecting just the best head (Treviso et al., 2021; Rei et al., 2022b). Moreover, we use the full attention matrix, instead of relying only on cross attention information.
3https://github.com/google/wmt-mqm-human-evaluation
| METRIC | EXPLAINABILITY METHOD | en→de AUC | en→de R@K | zh→en AUC | zh→en R@K | en→ru AUC | en→ru R@K | Avg. AUC | Avg. R@K |
|---|---|---|---|---|---|---|---|---|---|
| *src-only⋆ evaluation* | | | | | | | | | |
| UNITE SRC | embed–align[mt, src] | 0.587 | 0.339 | 0.644 | 0.281 | 0.583 | 0.167 | 0.604 | 0.262 |
| UNITE SRC | grad ℓ2 | 0.572 | 0.293 | 0.535 | 0.200 | 0.620 | 0.169 | 0.576 | 0.221 |
| UNITE SRC | attention | 0.636 | 0.322 | 0.612 | 0.253 | 0.612 | 0.189 | 0.620 | 0.254 |
| UNITE SRC | attn × grad | 0.707 | 0.376 | 0.639 | 0.294 | 0.633 | 0.211 | 0.660 | 0.294 |
| *ref-only evaluation* | | | | | | | | | |
| UNITE REF | embed–align[mt, ref] | 0.658 | 0.396 | 0.667 | 0.328 | 0.635 | 0.218 | 0.653 | 0.314 |
| UNITE REF | grad ℓ2 | 0.596 | 0.319 | 0.571 | 0.260 | 0.661 | 0.202 | 0.609 | 0.261 |
| UNITE REF | attention | 0.637 | 0.344 | 0.670 | 0.335 | 0.652 | 0.224 | 0.653 | 0.301 |
| UNITE REF | attn × grad | 0.725 | 0.425 | 0.667 | 0.380 | 0.660 | 0.248 | 0.684 | 0.351 |
| *src,ref joint evaluation* | | | | | | | | | |
| UNITE SRC+REF | embed–align[mt, src; ref] | 0.650 | 0.383 | 0.670 | 0.330 | 0.618 | 0.213 | 0.646 | 0.309 |
| UNITE SRC+REF | grad ℓ2 | 0.595 | 0.325 | 0.579 | 0.257 | 0.643 | 0.191 | 0.606 | 0.257 |
| UNITE SRC+REF | attention | 0.657 | 0.421 | 0.670 | 0.383 | 0.649 | 0.223 | 0.659 | 0.342 |
| UNITE SRC+REF | attn × grad | 0.736 | 0.421 | 0.674 | 0.383 | 0.671 | 0.248 | 0.693 | 0.351 |
| COMET | embed–align[mt, src] | 0.590 | 0.371 | 0.674 | 0.314 | 0.577 | 0.220 | 0.614 | 0.301 |
| COMET | embed–align[mt, ref] | 0.694 | 0.425 | 0.696 | 0.355 | 0.647 | 0.275 | 0.679 | 0.352 |
| COMET | embed–align[mt, src; ref] | 0.688 | 0.416 | 0.697 | 0.357 | 0.622 | 0.279 | 0.669 | 0.350 |
| COMET | grad ℓ2 | 0.603 | 0.312 | 0.540 | 0.252 | 0.604 | 0.185 | 0.582 | 0.250 |
| COMET | attention | 0.604 | 0.351 | 0.592 | 0.259 | 0.633 | 0.209 | 0.608 | 0.268 |
| COMET | attn × grad | 0.710 | 0.365 | 0.633 | 0.278 | 0.662 | 0.244 | 0.669 | 0.295 |

Models. For COMET, we use the latest publicly available model: wmt22-comet-da (Rei et al.,
2022a).4 For UNITE, we train our own model using the same data used to train COMET in order to have a comparable setup5. We provide full details (training data, correlations with human annotations, and hyperparameters) in Appendix A.
Overall, the resulting reference-based UNITE models (REF and SRC+REF) are on par with COMET.
Evaluation. We want our explanations to be directly attributed to the annotated error spans, in the style of an error detection task. Thus, we report Area Under Curve (AUC) and Recall@topK.6 These metrics have been used as the main metrics in previous works on explainable QE (Fomicheva et al.,
2021, 2022; Zerva et al., 2022).
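For reference, a sketch of how these two metrics can be computed from token-level saliency scores and binary error-span labels is shown below; the Recall@topK instantiation (K set to the number of annotated error tokens in each sentence) is our assumption based on the explainable-QE shared task setup.

```python
# Sketch of sentence-level AUC and Recall@topK against binary error-span labels.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_and_recall_at_k(saliency, is_error):
    """saliency: per-token explanation scores; is_error: binary MQM span labels."""
    saliency, is_error = np.asarray(saliency), np.asarray(is_error)
    auc = roc_auc_score(is_error, saliency)
    k = int(is_error.sum())                          # assumed: K = number of error tokens
    topk = np.argsort(-saliency)[:k]                 # indices of the K most salient tokens
    recall = is_error[topk].sum() / max(k, 1)
    return auc, recall
```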
## 4 Results

## 4.1 High-Level Analysis

**Explanations are tightly related to the underlying metric architecture.** The results in Table 1 show that the predictive power of the attribution methods differs between UNITE and COMET: attn × grad is the best method for UNITE-based models, while embed–align works best for COMET.7 This is expected as UNITE constructs a joint representation for the input sentences, thus allowing attention to flow across them; COMET,
in contrast, encodes the sentences separately, so it relies heavily on the separate contextualized embeddings that are subsequently combined via elementwise operations such as multiplication and absolute difference. Interestingly, embed–align and attn × grad were the winning explainability approaches of the WMT 2022 Shared-Task on Quality Estimation (Zerva et al., 2022). This suggests that explainability methods developed for QE systems can translate well to reference-based metrics. We provide examples of explanations in Appendix C.
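For a concrete picture of these two attribution families, the sketch below shows one plausible way to compute them. It is our simplified reading, not the released implementation: the choice of 1 − max cosine similarity for embed–align and the head/row aggregation for attn × grad are assumptions.

```python
# Simplified sketches of the two attribution families discussed above.
# These are illustrative reconstructions, not the official implementation.
import torch
import torch.nn.functional as F


def embed_align(mt_emb: torch.Tensor, ctx_emb: torch.Tensor) -> torch.Tensor:
    """mt_emb: [mt_len, dim]; ctx_emb: [src_len or ref_len, dim] contextualized embeddings.
    Each MT token is scored by how poorly it aligns to the source/reference tokens."""
    sim = F.normalize(mt_emb, dim=-1) @ F.normalize(ctx_emb, dim=-1).T  # cosine similarities
    return 1.0 - sim.max(dim=-1).values  # higher score = weaker alignment = more likely an error


def attn_times_grad(attn: torch.Tensor, attn_grad: torch.Tensor, heads) -> torch.Tensor:
    """attn, attn_grad: [num_heads, seq_len, seq_len] attention weights and their gradients
    w.r.t. the metric score; scores are aggregated over the chosen heads and attending rows."""
    weighted = attn[heads] * attn_grad[heads]
    return weighted.sum(dim=0).sum(dim=0)  # one relevance score per token position


# Toy usage with random tensors standing in for real encoder outputs.
mt, ref = torch.randn(7, 1024), torch.randn(9, 1024)
attn, grad = torch.rand(16, 12, 12), torch.randn(16, 12, 12)
print(embed_align(mt, ref))
print(attn_times_grad(attn, grad, heads=[0, 3, 7, 9, 11]))  # e.g. the five selected heads
```

In practice the attention-based variant would be run over the heads selected on held-out data, in line with the head ensembling described in footnote 2.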
**Reference information boosts explainability power.** Table 1 also shows that, across all metrics, using reference information brings substantial improvements over using only the source information. Moreover, while reference-based attributions significantly outperform source-based attributions, combining the source and reference information to obtain token-level attributions does not consistently yield superior results over using the reference alone. Notably, the best attribution method for COMET does not require any source information. This is interesting: in some cases, reference-based metrics may largely ignore source information, relying heavily on the reference instead.

7In Appendix B, we provide a comparison between the explanations obtained via embed–align with COMET and with its pretrained encoder model, XLM-R.

![3_image_0.png](3_image_0.png)
## 4.2 How Do The Explanations Fare For Critical Translation Errors?
The MQM data analyzed until now consists primarily of high quality translations, with the majority of annotated errors being non-critical. However, it is important to assess whether our explanations can be accurately attributed to critical errors, as this may reveal potential metric shortcomings. To this end, we employ SMAUG (Alves et al., 2022),8 a tool designed to generate synthetic data for stress-testing metrics, to create corrupted translations that contain critical errors. Concretely, we generate translations with the following pathologies: negation errors, hallucinations via insertions, named entity errors, and errors in numbers.9

**Explanations identify critical errors more easily than non-critical errors.** Figure 2 shows that explanations are more effective in identifying critical errors compared to other non-critical errors (see Table 1). Specifically, we find significant performance improvements of up to nearly 30% in Recall@K for certain critical errors. Overall, hallucinations are the easiest errors to identify across all neural metrics. This suggests that neural metrics appropriately identify and penalize hallucinated translations, which aligns with the findings of Guerreiro et al. (2022). Moreover, explanations for both UNITE models behave similarly for all errors except numbers, where the source information plays a key role in improving the explanations. Notably, contrary to what we observed for data with non-critical errors, COMET explanations are less effective than those of UNITE REF and UNITE SRC+REF for identifying critical errors.

8https://github.com/Unbabel/smaug
9We corrupt fully correct translations that are not an exact copy of the reference translation. Moreover, as the full suite of SMAUG transformations can only be applied to English data, we focus solely on zh→en translations. Overall, the synthetic dataset consists of 2610 translations. Full statistics about the corrupted data and examples are shown in Appendix A.2.
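To make the nature of these corruptions concrete, the toy function below perturbs numbers in an otherwise correct translation. It is not the SMAUG implementation, only an illustrative stand-in for one of the four pathologies, and the perturbation rule is made up.

```python
# Illustrative number-error corruption in the spirit of the stress-test data.
# This is NOT the SMAUG implementation, only a toy stand-in for one pathology.
import random
import re


def corrupt_numbers(translation: str, seed: int = 0) -> str:
    rng = random.Random(seed)

    def replace(match: re.Match) -> str:
        value = int(match.group())
        # Shift the number so the meaning changes while the sentence stays fluent.
        return str(value + rng.choice([-10, -1, 1, 10, 1000]))

    return re.sub(r"\d+", replace, translation)


print(corrupt_numbers("The exhibition and sales area will be open until July 29."))
# e.g. "The exhibition and sales area will be open until July 1029."
```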
**Explanations can reveal potential metric weaknesses.** Figure 2 suggests that COMET explanations struggle to identify localized errors (negation errors, named entity errors and discrepancies in numbers). We hypothesize that this behavior is related to the underlying architecture. Unlike UNITE-based metrics, COMET does not rely on soft alignments via attention between the sentences in the encoding process. This process may be key to identifying local misalignments during encoding. In fact, the attention-based attributions for UNITE metrics can more easily identify these errors. COMET, however, encodes the sentences separately, which may result in grammatical features (e.g. numbers) being encoded similarly across sentences (Chi et al., 2020; Chang et al., 2022). As such, explanations obtained via embedding alignments will not properly identify these misalignments on similar features. Importantly, these findings align with observations made by Amrhein and Sennrich (2022) and Raunak et al. (2022). This showcases how explanations can be used to diagnose and reveal shortcomings of neural-based metrics.
## 5 Conclusions And Future Work
In this paper, we investigated the use of explainability methods to better understand widely used neural metrics for machine translation evaluation, such as COMET and UNITE. Concretely, we analyzed how explanations are impacted by the reference information, and how they can be used to reveal weaknesses of these metrics. Our analysis shows that the quality of the explanations is tightly related to the underlying metric architecture. Interestingly, we also provide evidence that neural metrics like COMET may rely heavily on reference information over source information. Additionally, we show that explanations can be used to reveal weaknesses of reference-based metrics, such as failing to severely penalize localized critical errors.
This opens up promising opportunities for future research on leveraging explanations to diagnose reference-based metrics errors. To support these studies, we call for future datasets illustrating critical errors (e.g., challenge sets (Karpinska et al.,
2022)) to be accompanied by annotated error spans.
## Limitations
We highlight three main limitations of our work.
First, although we have explored gradient-based explanations that take the whole network into consideration and have been shown to be faithful in previous work (Bastings et al., 2021), we do not explicitly explore how COMET combines the sentence representations in the feed-forward that follows the encoder model to produce the sentence-level score.
Second, we have shown that combining attention with gradient information results in the best explanations for UNITE-based metrics. However, from a practical standpoint, running inference and extracting the explainability scores simultaneously may be more computationally expensive than other techniques: gradient-based metrics benefit from GPU infrastructure and require storing all gradient information.
Third, we have not explored extracting explanations in low-resource settings. That is because high-quality MQM annotations for such language pairs are not yet available. Nevertheless, further research in those settings is needed to assess the broader validity of our claims.
## Acknowledgements
This work was partially supported by the P2020 programs (MAIA, contract 045909), the Portuguese Recovery and Resilience Plan (PRR)
through project C645008882-00000055, Center for Responsible AI, by the European Research Council
(ERC StG DeepSPIN, 758969), by EU's Horizon Europe Research and Innovation Actions (UTTER,
contract 101070631), and by the Fundação para a Ciência e Tecnologia (contracts UIDB/50021/2020 and UIDB/50008/2020).
## References
Duarte Alves, Ricardo Rei, Ana C Farinha, José G.
C. de Souza, and André F. T. Martins. 2022. Robust MT Evaluation with Sentence-level Multilingual Augmentation. In *Proceedings of the Seventh Conference on Machine Translation*, pages 469–478, Abu Dhabi. Association for Computational Linguistics.
Chantal Amrhein and Rico Sennrich. 2022. Identifying Weaknesses in Machine Translation Metrics Through Minimum Bayes Risk Decoding: A Case Study for COMET. In *Proceedings of the 2nd Conference* of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 1125–1141, Online only. Association for Computational Linguistics.
Leila Arras, Ahmed Osman, Klaus-Robert Müller, and Wojciech Samek. 2019. Evaluating recurrent neural network explanations. In *Proceedings of the 2019* ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 113–126, Florence, Italy. Association for Computational Linguistics.
Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, and Isabelle Augenstein. 2020. A diagnostic study of explainability techniques for text classification. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing
(EMNLP), pages 3256–3274, Online. Association for Computational Linguistics.
Jasmijn Bastings, Sebastian Ebert, Polina Zablotskaia, Anders Sandholm, and Katja Filippova. 2021. "will you find these shortcuts?" a protocol for evaluating the faithfulness of input salience methods for text classification.
Chris Callison-Burch, Miles Osborne, and Philipp Koehn. 2006. Re-evaluating the role of Bleu in machine translation research. In 11th Conference of the European Chapter of the Association for Computational Linguistics, pages 249–256, Trento, Italy.
Association for Computational Linguistics.
Tyler A. Chang, Zhuowen Tu, and Benjamin K. Bergen.
2022. The geometry of multilingual language model representations.
Ethan A. Chi, John Hewitt, and Christopher D. Manning. 2020. Finding universal grammatical relations in multilingual BERT. In *Proceedings of the 58th* Annual Meeting of the Association for Computational Linguistics, pages 5564–5577, Online. Association for Computational Linguistics.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440–
8451, Online. Association for Computational Linguistics.
Daniel Deutsch, Rotem Dror, and Dan Roth. 2021. A
statistical analysis of summarization evaluation metrics using resampling methods. *Transactions of the* Association for Computational Linguistics, 9:1132–
1146.
Patrick Fernandes, Marcos Treviso, Danish Pruthi, André F. T. Martins, and Graham Neubig. 2022. Learning to scaffold: Optimizing model explanations for teaching.
Marina Fomicheva, Piyawat Lertvittayakumjorn, Wei Zhao, Steffen Eger, and Yang Gao. 2021. The Eval4NLP shared task on explainable quality estimation: Overview and results. In Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems, pages 165–178, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Marina Fomicheva, Lucia Specia, and Nikolaos Aletras. 2022. Translation error detection as rationale extraction. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 4148–4159, Dublin, Ireland. Association for Computational Linguistics.
Markus Freitag, Ricardo Rei, Nitika Mathur, Chi-kiu Lo, Craig Stewart, Eleftherios Avramidis, Tom Kocmi, George Foster, Alon Lavie, and André F. T. Martins.
2022. Results of WMT22 Metrics Shared Task: Stop Using BLEU - Neural Metrics Are Better and More Robust. In *Proceedings of the Seventh Conference* on Machine Translation, pages 46–68, Abu Dhabi.
Association for Computational Linguistics.
Markus Freitag, Ricardo Rei, Nitika Mathur, Chi-kiu Lo, Craig Stewart, George Foster, Alon Lavie, and Ondˇrej Bojar. 2021. Results of the WMT21 metrics shared task: Evaluating metrics with expert-based human evaluations on TED and news domain. In *Proceedings of the Sixth Conference on Machine Translation*, pages 733–774, Online. Association for Computational Linguistics.
Nuno M. Guerreiro, Elena Voita, and André F. T. Martins. 2022. Looking for a needle in a haystack: A
comprehensive study of hallucinations in neural machine translation.
Sarthak Jain and Byron C. Wallace. 2019. Attention is not Explanation. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pages 3543–3556, Minneapolis, Minnesota.
Association for Computational Linguistics.
Marzena Karpinska, Nishant Raj, Katherine Thai, Yixiao Song, Ankita Gupta, and Mohit Iyyer. 2022.
Demetr: Diagnosing evaluation metrics for translation. In *Proceedings of the 2022 Conference on* Empirical Methods in Natural Language Processing, page 9540–9561, Abu Dhabi, United Arab Emirates.
Association for Computational Linguistics.
Tom Kocmi, Christian Federmann, Roman Grundkiewicz, Marcin Junczys-Dowmunt, Hitokazu Matsushita, and Arul Menezes. 2021. To ship or not to ship: An extensive evaluation of automatic metrics for machine translation. In Proceedings of the Sixth Conference on Machine Translation, pages 478–494, Online. Association for Computational Linguistics.
Christoph Leiter, Piyawat Lertvittayakumjorn, Marina Fomicheva, Wei Zhao, Yang Gao, and Steffen Eger.
2022. Towards explainable evaluation metrics for natural language generation.
Arle Lommel, Hans Uszkoreit, and Aljoscha Burchardt.
2014. Multidimensional Quality Metrics (MQM) : A
Framework for Declaring and Describing Translation Quality Metrics. *Tradumàtica*, pages 0455–463.
Nitika Mathur, Timothy Baldwin, and Trevor Cohn.
2020. Tangled up in BLEU: Reevaluating the evaluation of automatic machine translation evaluation metrics. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 4984–4997, Online. Association for Computational Linguistics.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Maja Popović. 2015. chrF: character n-gram F-score for automatic MT evaluation. In *Proceedings of the Tenth Workshop on Statistical Machine Translation*, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics.
Tharindu Ranasinghe, Constantin Orasan, and Ruslan Mitkov. 2020. TransQuest: Translation Quality Estimation with Cross-lingual Transformers. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 5070–5081, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Vikas Raunak, Matt Post, and Arul Menezes. 2022.
Salted: A framework for salient long-tail translation error detection.
Ricardo Rei, José G. C. de Souza, Duarte Alves, Chrysoula Zerva, Ana C Farinha, Taisiya Glushkova, Alon Lavie, Luisa Coheur, and André F. T. Martins.
2022a. COMET-22: Unbabel-IST 2022 Submission for the Metrics Shared Task. In *Proceedings of the* Seventh Conference on Machine Translation, pages 578–585, Abu Dhabi. Association for Computational Linguistics.
Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT
evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics.
Ricardo Rei, Marcos Treviso, Nuno M. Guerreiro, Chrysoula Zerva, Ana C Farinha, Christine Maroti, José G. C. de Souza, Taisiya Glushkova, Duarte Alves, Luisa Coheur, Alon Lavie, and André F. T.
Martins. 2022b. CometKiwi: IST-Unbabel 2022 Submission for the Quality Estimation Shared Task. In Proceedings of the Seventh Conference on Machine Translation, pages 634–645, Abu Dhabi. Association for Computational Linguistics.
Marco Ribeiro, Sameer Singh, and Carlos Guestrin.
2016. "why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 97–101, San Diego, California. Association for Computational Linguistics.
Ananya B. Sai, Vignesh Nagarajan, Tanay Dixit, Raj Dabre, Anoop Kunchukuttan, Pratyush Kumar, and Mitesh M. Khapra. 2022. IndicMT Eval: A Dataset to Meta-Evaluate Machine Translation metrics for Indian Languages.
Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020.
BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881–7892, Online. Association for Computational Linguistics.
Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. 2017. Learning Important Features Through Propagating Activation Differences. In *Proceedings* of the 34th International Conference on Machine Learning, volume 70 of *Proceedings of Machine* Learning Research, pages 3145–3153. PMLR.
Aaron Smith, Christian Hardmeier, and Joerg Tiedemann. 2016. Climbing mont BLEU: The strange world of reachable high-BLEU translations. In *Proceedings of the 19th Annual Conference of the European Association for Machine Translation*, pages 269–281.
Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017.
Axiomatic Attribution for Deep Networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of *Proceedings of Machine Learning Research*, pages 3319–3328. PMLR.
Shimin Tao, Su Chang, Ma Miaomiao, Hao Yang, Xiang Geng, Shujian Huang, Min Zhang, Jiaxin Guo, Minghan Wang, and Yinglu Li. 2022. CrossQE: HW-TSC
2022 Submission for the Quality Estimation Shared Task. In *Proceedings of the Seventh Conference on* Machine Translation, pages 646–652, Abu Dhabi.
Association for Computational Linguistics.
Marcos Treviso, Nuno M. Guerreiro, Ricardo Rei, and André F. T. Martins. 2021. IST-unbabel 2021 submission for the explainable quality estimation shared task. In *Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems*, pages 133–
145, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Yu Wan, Dayiheng Liu, Baosong Yang, Tianchi Bi, Haibo Zhang, Boxing Chen, Weihua Luo, Derek F.
Wong, and Lidia S. Chao. 2021. RoBLEURT submission for WMT2021 metrics task. In Proceedings of the Sixth Conference on Machine Translation, pages 1053–1058, Online. Association for Computational Linguistics.
Yu Wan, Dayiheng Liu, Baosong Yang, Haibo Zhang, Boxing Chen, Derek Wong, and Lidia Chao. 2022.
UniTE: Unified translation evaluation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8117–8127, Dublin, Ireland. Association for Computational Linguistics.
Kerem Zaman and Yonatan Belinkov. 2022. A Multilingual Perspective Towards the Evaluation of Attribution Methods in Natural Language Inference.
Chrysoula Zerva, Frédéric Blain, Ricardo Rei, Piyawat Lertvittayakumjorn, José G. C. de Souza, Steffen Eger, Diptesh Kanojia, Duarte Alves, Constantin Orăsan, Marina Fomicheva, André F. T. Martins, and
Lucia Specia. 2022. Findings of the WMT 2022 Shared Task on Quality Estimation. In Proceedings of the Seventh Conference on Machine Translation, pages 69–99, Abu Dhabi. Association for Computational Linguistics.
## A Model Details
In Section 2.1, we employed the latest publicly available model (wmt22-comet-da) for COMET,
which emerged as a top-performing metric in the WMT 2022 Metrics task (Freitag et al., 2022). To ensure a comparable setting for UNITE (Wan et al.,
2022), we trained our own model. In doing so, we utilized the same data employed in the development of the COMET model by Rei et al. (2022a), without pretraining on any synthetic data, as originally suggested. Additionally, our implementation did not incorporate monotonic regional attention, as our preliminary experiments revealed no discernible benefits from its usage. The hyperparameters used are summarized in Table 3, while Table 4 presents the number of Direct Assessments utilized during training. Furthermore, Table 5 displays the segment-level correlations with WMT 2021 MQM
data for the News and TED domains.
Regarding computational infrastructure, a single NVIDIA A10G GPU with 23GB memory was used.
The resulting UNITE model has 565M parameters while COMET has 581M parameters.
## A.1 Output Distribution
To better understand the output of the models and what scores are deemed low, we plotted the output distributions for the two models we used in our study. The average score for English→German data is 0.856 for the COMET model and 0.870 for the UNITE model we trained. From Figure 3 we can observe the distribution of scores. This means that the 0.6692 score from the example in Figure 1 corresponds to a low-quality output (5th percentile).
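As a small illustration of how such a percentile can be computed from a collection of segment-level scores (the toy distribution below is synthetic; only the 0.6692 value comes from the paper):

```python
# Locate a single segment score within a collection of model scores.
import numpy as np

scores = np.random.default_rng(0).normal(loc=0.856, scale=0.08, size=10_000)  # stand-in scores
score = 0.6692
percentile = (scores < score).mean() * 100
print(f"{score} sits at roughly the {percentile:.0f}th percentile of this toy distribution")
```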
## A.2 SMAUG Corpus
As we have seen in Section 4.2, we have created synthetic translation errors for the following pathologies: negation errors, hallucinations via insertions, named entity errors, and errors in numbers.
Table 7 presents a summary of the examples created using SMAUG and in Table 8 we show examples of each error category.
## B Comparison Between COMET and XLM-R Alignments
From Table 1, it is evident that the alignments between the reference and/or source and the translation yield effective explanations for COMET. This raises the question of how these alignments compare to the underlying encoder of COMET before the fine-tuning process with human annotations. To investigate this, we examine the results for XLM-R
without any fine-tuning, as presented in Table 2.
Overall, the explanations derived from the alignments of COMET prove to be more predictive of error spans than those obtained from XLM-R alignments. This suggests that during the fine-tuning phase, COMET models modify the underlying XLM-R representations to achieve better alignment with translation errors.
## C Examples
In Tables 9 and 10, we show examples of COMET explanations for Chinese→English and English→German language pairs, respectively. We highlight in gray the corresponding MQM annotation performed by an expert linguist and we sort the examples from highest to lowest COMET scores.
From these examples we can observe the following:
- Highlights provided by COMET explanations have a high recall with human annotations. In all examples, subword tokens corresponding to translation errors are highlighted in red but we often see that not everything is incorrect.
- Explanations are consistent with scores. For example, in the third example from Table 9, the red highlights do not correspond to errors and in fact the translation only has one major error ("griffen"). Nonetheless, the score assigned by COMET is a low score of 0.68, which is faithful to the explanation that was given, even if the assessment does not agree with the human experts.
| Metric | Explainability method | en→de AUC | en→de R@K | zh→en AUC | zh→en R@K | en→ru AUC | en→ru R@K | Avg. AUC | Avg. R@K |
|---|---|---|---|---|---|---|---|---|---|
| XLM-R | embed–align[mt, src] | 0.587 | 0.359 | 0.668 | 0.311 | 0.576 | 0.199 | 0.610 | 0.289 |
| | embed–align[mt, ref] | 0.671 | 0.405 | 0.689 | 0.345 | 0.634 | 0.244 | 0.664 | 0.331 |
| | embed–align[mt, src; ref] | 0.666 | 0.395 | 0.690 | 0.347 | 0.616 | 0.242 | 0.657 | 0.328 |
| COMET | embed–align[mt, src] | 0.590 | 0.371 | 0.674 | 0.314 | 0.577 | 0.220 | 0.614 | 0.301 |
| | embed–align[mt, ref] | 0.694 | 0.425 | 0.696 | 0.355 | 0.647 | 0.275 | 0.679 | 0.352 |
| | embed–align[mt, src; ref] | 0.688 | 0.416 | 0.697 | 0.357 | 0.622 | 0.279 | 0.669 | 0.350 |

Table 2: AUC and Recall@K of the embed–align explanations computed with COMET and with its pretrained encoder, XLM-R (without fine-tuning).
| Hyperparameter | UNITE | COMET |
|--------------------|---------------|---------|
| Encoder Model | XLM-R (large) | |
| Optimizer | AdamW | |
| No. frozen epochs | 0.3 | |
| Learning rate (LR) | 1.5e-05 | |
| Encoder LR. | 1.0e-06 | |
| Layerwise Decay | 0.95 | |
| Batch size | 16 | |
| Loss function | MSE | |
| Dropout | 0.1 | |
| Hidden sizes | [3072, 1024] | |
| Embedding layer | Frozen | |
| FP precision | 16 | |
| No. Epochs | 1 | 2 |
Table 3: Hyperparameters used to train UNITE and COMET checkpoints used in this work. The only difference between the two is the number of training epochs due to the fact that, for UNITE, the best validation checkpoint is the first one.
| Language Pair | SIZE |
|-----------------|---------|
| zh-en | 126947 |
| en-de | 121420 |
| de-en | 99183 |
| en-zh | 90805 |
| ru-en | 79280 |
| en-ru | 62749 |
| en-cs | 60937 |
| fi-en | 46145 |
| en-fi | 34335 |
| tr-en | 30186 |
| et-en | 29496 |
| cs-en | 27847 |
| en-mr | 26000 |
| de-cs | 13804 |
| en-et | 13376 |
| pl-en | 11816 |
| en-pl | 10572 |
| lt-en | 10315 |
| en-ja | 9578 |
| gu-en | 9063 |
| si-en | 9000 |
| ro-en | 9000 |
| ne-en | 9000 |
| en-lt | 8959 |
| ja-en | 8939 |
| en-kk | 8219 |
| en-ta | 7890 |
| ta-en | 7577 |
| en-gu | 6924 |
| kk-en | 6789 |
| de-fr | 6691 |
| en-lv | 5810 |
| en-tr | 5171 |
| km-en | 4722 |
| ps-en | 4611 |
| fr-de | 3999 |
| Total | 1027155 |
![9_image_0.png](9_image_0.png)
| | | BLEU | CHRF | YISI-1 | BLEURT | UNITE SRC | UNITE REF | UNITE SRC+REF | COMET (wmt22-comet-da) |
|---|---|---|---|---|---|---|---|---|---|
| en→de News | ρ | 0.077 | 0.092 | 0.163 | 0.307 | 0.274 | **0.321** | 0.304 | 0.297 |
| | τ | 0.069 | 0.092 | 0.144 | 0.240 | 0.222 | **0.248** | 0.241 | 0.232 |
| en→de TED | ρ | 0.151 | 0.158 | 0.236 | **0.325** | 0.311 | **0.335** | **0.338** | **0.329** |
| | τ | 0.113 | 0.146 | 0.212 | 0.283 | 0.264 | **0.301** | **0.298** | 0.278 |
| en→ru News | ρ | 0.153 | 0.252 | 0.263 | 0.359 | 0.333 | **0.391** | **0.382** | 0.363 |
| | τ | 0.106 | 0.178 | 0.216 | 0.276 | 0.276 | **0.298** | 0.297 | 0.293 |
| en→ru TED | ρ | 0.154 | 0.268 | 0.235 | 0.286 | 0.239 | 0.289 | **0.318** | **0.308** |
| | τ | 0.112 | 0.189 | 0.204 | 0.255 | 0.232 | **0.262** | **0.264** | **0.268** |
| zh→en News | ρ | 0.215 | 0.231 | 0.301 | 0.428 | 0.413 | 0.438 | 0.426 | **0.445** |
| | τ | 0.165 | 0.188 | 0.289 | 0.341 | 0.331 | 0.358 | 0.352 | **0.371** |
| zh→en TED | ρ | 0.155 | 0.181 | 0.287 | 0.295 | 0.244 | 0.301 | **0.310** | **0.307** |
| | τ | 0.113 | 0.144 | 0.216 | 0.246 | 0.224 | 0.265 | **0.266** | **0.269** |

Table 5: Segment-level correlations (ρ and τ) with the WMT 2021 MQM annotations for the News and TED domains.
| Error Type | NUM EXAMPLES |
|--------------|----------------|
| NE | 978 |
| NEG | 669 |
| HALL | 530 |
| NUM | 432 |
| Total | 2609 |
| Language Pair | TOKENS / SENT. | ERRORS / SPANS |
|-----------------|------------------|------------------|
| en-de | 528704 / 15310 | 25712 / 3567 |
| en-ru | 525938 / 15074 | 17620 / 7172 |
| zh-en | 603258 / 16506 | 43984 / 10042 |
Table 7: Statistics about MQM data from WMT 2021 Metrics task (Freitag et al., 2021) used in our experiments.
NE Error example:
Source: 格里沃里表示，分析人士对越南所提出的和平倡议给予认可。
Translation: ... proposed by Vietnam.
Reference: ... proposed by Vietnam.
NE Error: ... proposed by Russia.

NEG Error example:
Source: 英国的这一决定预计将会使西班牙的旅游业大受影响。
Translation: This decision by the United Kingdom is expected to greatly affect Spain's tourism industry.

HALL Error example:
Source: 由于疫情，人们开始在互联网上花费更多的时间。"
Reference: For reason of the pandemic, people are starting to spend more time on the Internet."
HALL Error: Because we have a lot of friends around during the epidemic, people are starting to spend more time on the mobile devices than on the Internet."

NUM Error example:
Translation: The exhibition and sales area will be open until July 29.
Reference: The exhibition will last until July 29.
NUM Error: The exhibition and sales area will be open until July 2018.

Table 8: Synthetically-generated critical errors (highlighted in gray) created with SMAUG (Alves et al., 2022) to assess whether our explanations can be accurately attributed to critical errors.
Source:
And yet, the universe is not a silent movie because the universe isn't silent. Translation: Und dennoch ist das Universum kein Stummfilm, weil das Universum nicht still ist.
COMET **score:** 0.8595
COMET **explanation:**
_Und _dennoch _ist _das _Univers um _kein _Stu mm film , _weil _das _Univers um _nicht _still _ist .
Source:
And yet black holes may be heard even if they're not seen, and that's because they bang on space-time like a drum.
Translation: Und dennoch werden Schwarze Löcher vielleicht gehört , auch wenn sie nicht gesehen werden, und das liegt daran, dass sie wie eine Trommel auf die Raumzeit schlagen.
COMET **score:** 0.7150 COMET **explanation:**
_Und _dennoch _werden _Schwarz e _Lö cher _vielleicht _gehört , _auch _wenn _sie _nicht _gesehen _werden , _und _das _liegt _daran , _dass _sie _wie _eine _Tro mmel _auf _die _Raum zeit schlagen .
Source:
Ash O'Brien and husband Jarett Kelley say they were grabbing a bite to eat at Dusty Rhodes dog park in San Diego on Thursday, with their three-month-old pug in tow.
Translation:
Ash O'Brien und Ehemann Jarett Kelley sagen, dass sie am Donnerstag im Hundepark Dusty Rhodes in San Diego einen Happen zu essen griffen , mit ihrem drei Monate alten Mops im Schlepptau.
COMET **score:** 0.6835 COMET **explanation:**
_Ash _O ' Bri en _und _Ehe mann _Ja rett _Kel ley _sagen , _dass _sie _am _Donnerstag _im _Hunde park _Du sty _Rhod es _in _San _Diego _einen _Happ en _zu _essen _ griff en _ , _mit _ihrem _drei
_Monate _alten _M ops _im _Schle ppt au .
Source: It was Einstein's great general theory of relativity. Translation: Es war Einsteins große allgemeine Forschungen vor Relativitätstheorie.
COMET **score:** 0.6692 COMET **explanation:**
_Es _war _Einstein s _große _allgemein e _Forschung en _vor _Relativ ität s the ori e .
Source:
There's mask-shaming and then there's full on assault.
Translation:
Es gibt Maskenschämen und dann ist es voll bei Angriff.
COMET **score:** 0.2318 COMET **explanation:**
_Es _gibt _Mask en schä men _und _dann _ist _es _voll _bei _Angriff _ .
Table 9: Saliency map for COMET explanation scores for a set of en→de examples. Comparing the token-level explanations with the MQM annotation ( highlighted in gray ) reveals the source of correspondence between specific token-level translation errors and the resulting scores.
Source:
我想告诉大家 宇宙有着自己的配乐, 而宇宙自身正在不停地播放着。 因为太空可以想鼓一样振动。
Translation: I want to tell you that the universe has its own iconic soundtrack and the universe itself is constantly playing non-stop because space can vibrate like a drum.
COMET **score:** 0.8634
COMET **explanation:**
_I _want _to _tell _you _that _the _univers e _has _its _own _icon ic _soundtrack _and _the _univers e _itself _is _constantly _playing _non - stop _because _space _can _vibra te _like _a _drum .

Source:
另外，吉克隽逸和刘烨作为运动助理，也围绕运动少年制造了不少爆笑话题。
Translation: In addition, as sports assistants, Ji Kejunyi and Liu Ye have also created a lot of hilarious topics around sports teenagers.
COMET **score:** 0.8214
COMET **explanation:**
_In _addition , _as _sports _assistant s , _Ji _Ke ju nyi _and _Li u _Ye _have _also _created _a _lot _of _ hila rious _topic s _around _sports _teenager s .

Source:
一番言论让场上的少年和运动领队们都倒吸一口凉气。
Translation: The remarks made the teenagers and the sports leaders on the field gasp a sigh of relief .
COMET **score:** 0.7793
COMET **explanation:**
_The _re marks _made _the _teenager s _and _the _sports _leaders _on _the _field _gas p _a _sig h _of _relief _ .

Source:
强烈的阳光是如此地刺眼,
Translation: The intense sunlight is so harsh;
COMET **score:** 0.7561
COMET **explanation:**
_The _intense _sun light _is _so _har sh ;

Source:
如今,我们希望能够 给这部关于宇宙的 宏伟的视觉作品 配上声音。
Translation: Today , we hope to be able to give this magnificent visual work of the universe a sound.
COMET **score:** 0.7073
COMET **explanation:**
_Today , _we _hope _to _be _able _to _give _this _magnific ent _visual _work _of _the _univers e _a _sound .
Table 10: Saliency map for COMET explanation scores for a set of zh→en examples. Comparing the token-level explanations with the MQM annotation ( highlighted in gray ) reveals the source of correspondence between specific token-level translation errors and the resulting scores.
## ACL 2023 Responsible NLP Checklist

A For every submission:
✓ A1. Did you describe the limitations of your work?
Yes. Section 6
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✓ A4. Have you used AI writing assistants when working on this paper?
Assistance purely with the language of the paper along every section. Grammarly and DeepL write
B ✓ **Did you use or create scientific artifacts?**
Section 3 explains the methods we used. We will release the adaptations required to use the explainability methods over COMET framework, the UniTE model we trained, and all data synthetically-generated data.
✓ B1. Did you cite the creators of artifacts you used?
Section 2
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
footnote on the first page. The License will be Apache 2.0.

B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? in the Appendix we have several statistics for training and testing data.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
C ✓ **Did you run computational experiments?**
Appendix provides detailed information about the trained model.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix provides detailed information about the trained model including GPU infrastructure and total number of parameters.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Not applicable. Left blank.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Appendix has all information needed about test data and performance of the models.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 2 and Appendix

D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
tasawong-etal-2023-typo | Typo-Robust Representation Learning for Dense Retrieval | https://aclanthology.org/2023.acl-short.95 | Dense retrieval is a basic building block of information retrieval applications. One of the main challenges of dense retrieval in real-world settings is the handling of queries containing misspelled words. A popular approach for handling misspelled queries is minimizing the representations discrepancy between misspelled queries and their pristine ones. Unlike the existing approaches, which only focus on the alignment between misspelled and pristine queries, our method also improves the contrast between each misspelled query and its surrounding queries. To assess the effectiveness of our proposed method, we compare it against the existing competitors using two benchmark datasets and two base encoders. Our method outperforms the competitors in all cases with misspelled queries. Our code and models are available at \url{https://github.com/panuthept/DST-DenseRetrieval}. |
## Typo-Robust Representation Learning For Dense Retrieval
Panuthep Tasawong†, Wuttikorn Ponwitayarat†**, Peerat Limkonchotiwat**†,
Can Udomcharoenchaikit†, Ekapol Chuangsuwanich‡, **Sarana Nutanong**†
†School of Information Science and Technology, VISTEC, Thailand
‡Department of Computer Engineering, Chulalongkorn University, Thailand
{panuthep.t_s20,wuttikorn.p_s22,peerat.l_s19
,canu_pro,snutanon}@vistec.ac.th, [email protected]
## Abstract
Dense retrieval is a basic building block of information retrieval applications. One of the main challenges of dense retrieval in real-world settings is the handling of queries containing misspelled words. A popular approach for handling misspelled queries is minimizing the representations discrepancy between misspelled queries and their pristine ones. Unlike the existing approaches, which only focus on the alignment between misspelled and pristine queries, our method also improves the contrast between each misspelled query and its surrounding queries. To assess the effectiveness of our proposed method, we compare it against the existing competitors using two benchmark datasets and two base encoders. Our method outperforms the competitors in all cases with misspelled queries. Our code and models are available at https://github.com/panuthept/DST-DenseRetrieval.
## 1 Introduction
Dense retrieval is a fundamental component in many information retrieval applications, such as open-domain question answering and ad-hoc retrieval. The objective is to score and rank a large collection of candidate passages based on their similarity to a given query. The performance of dense retrieval relies on representation learning. A popular approach is to finetune a pre-trained language model to create an embedding space that puts each query closer to its corresponding passages (Zhan et al., 2020; Khattab and Zaharia, 2020; Xiong et al., 2021; Qu et al., 2021; Ren et al., 2021a,b).
One of the major challenges of dense retrieval is the handling of misspelled queries which induces representations of the misspelled queries to be closer to irrelevant passages than their corresponding passages. Several studies have demonstrated that misspellings in search queries can substantially degrade retrieval performance (Zhuang and Zuccon, 2021; Penha et al., 2022), specifically when informative terms, such as entity mentions, are misspelled (Sidiropoulos and Kanoulas, 2022).
To create a retrieval model that is capable of handling misspelled queries, researchers have proposed different training methods to align representations of misspelled queries with their pristine ones. Zhuang and Zuccon (2021, 2022) devise augmentation methods to generate misspelled queries and propose training methods, Typos-aware Training and Self-Teaching (ST), to encourage consistency between outputs of misspelled queries and their non-misspelled counterparts. Alternatively, Sidiropoulos and Kanoulas (2022) apply contrastive loss to enforce representations of misspelled queries to be closer to their corresponding non-misspelled queries. Although these methods can improve the performance of retrieval models for misspelled queries, there is still a substantial performance drop for misspelled queries.
In this paper, we propose a training method to improve dense retrieval for handling misspelled queries based on the following desired properties:
- *Alignment*: the method should be able to align queries with their corresponding passages.
- *Robustness*: the method should be able to align misspelled queries with their pristine queries.
- *Contrast*: the method should be able to separate queries that refer to different passages and passages that correspond to different queries.
In contrast to the existing methods for handling misspelled queries that only satisfy the *Alignment* and *Robustness* properties, our method also aims to satisfy the *Contrast* property. Increasing the distance between dissimilar queries should help distinguish misspelled queries from other distinct queries. We design the following components for our training method: (i) Dual Self-Teaching (DST) incorporates the ideas of Dual Learning (Xia et al.,
2017; Li et al., 2021) and Self-Teaching (Zhuang and Zuccon, 2022) to train robust dense retrieval in a bidirectional manner: passage retrieval and query retrieval. (ii) Query Augmentation generates a large number of misspelling variations for each query to supply our training objective.
Experimental studies were conducted to assess the efficiency of the proposed method in comparison to existing approaches. We conduct experiments based on two different pre-trained language models. We evaluate using two passage retrieval benchmark datasets, a standard one and a specialized one for misspellings robustness evaluation.
For each dataset, we measure performance on both misspelled and non-misspelled queries, where the misspelled queries are both generated and realworld queries. The experimental results show that the proposed method outperforms the best existing methods for enhancing the robustness of dense retrieval against misspellings without sacrificing performance for non-misspelled queries.
We summarize our contributions as follows:
- We propose a novel training method to enhance the robustness of dense retrieval against misspellings by incorporating three desired properties: Alignment, *Robustness*, and *Contrast*.
- We introduce Dual Self-Teaching (DST) which adopts the ideas of Dual Learning and Self-Teaching to learn robust representations. In addition, we propose Query Augmentation to generate multiple views of a particular query under different misspelling scenarios.
- We evaluate our method on misspelled and nonmisspelled queries from two passage retrieval datasets. The results show that our method outperforms the previous state-of-the-art methods by a significant margin on misspelled queries.
## 2 Methodology
We propose a training pipeline to enhance the dense retrieval capability for handling spelling variations and mistakes in queries. As shown in Figure 1, the training pipeline comprises three steps. (i) *Query* Augmentation: we augment each query in the training set into multiple misspelled queries using the typo generators provided by Zhuang and Zuccon
(2021). (ii) *Similarity Score Calculation*: we compute similarity score distributions between queries and passages for passage retrieval and query retrieval tasks using in-batch negative queries and passages, with additional hard negative passages.
(iii) *Dual Self-Teaching Loss Calculation*: we compute the DST loss using the similarity score distributions to achieve all three desired properties.
## 2.1 Query Augmentation
The purpose of this step is to guide the learning with a broad array of possible misspelling patterns. Let $Q$ denote a set $\{q_1, q_2, ..., q_N\}$ of $N$ queries. From all queries in $Q$, we generate a set of $K \times N$ misspelled queries $Q' = \{\langle q'_{1,k}, q'_{2,k}, ..., q'_{N,k}\rangle\}_{k=1}^{K}$, where $K$ is the number of misspelling variations. We use five typo generators proposed by Zhuang and Zuccon (2021), including: RandInsert, RandDelete, RandSub, SwapNeighbor, and SwapAdjacent. Please refer to Appendix A.2 for examples of the misspelled queries.
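A minimal sketch of what such character-level typo generators can look like is given below; the sampling rules of the original generators may differ, and the keyboard-neighbor map is a tiny illustrative stand-in.

```python
# Toy character-level typo generators in the spirit of RandInsert, RandDelete,
# RandSub, SwapAdjacent and SwapNeighbor (the originals may differ in detail).
import random

KEYBOARD_NEIGHBORS = {"a": "qwsz", "s": "awedxz", "o": "iklp"}  # tiny illustrative map


def rand_insert(word, rng):
    i = rng.randrange(len(word) + 1)
    return word[:i] + rng.choice("abcdefghijklmnopqrstuvwxyz") + word[i:]


def rand_delete(word, rng):
    i = rng.randrange(len(word))
    return word[:i] + word[i + 1:]


def rand_sub(word, rng):
    i = rng.randrange(len(word))
    return word[:i] + rng.choice("abcdefghijklmnopqrstuvwxyz") + word[i + 1:]


def swap_adjacent(word, rng):
    if len(word) < 2:
        return word
    i = rng.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]


def swap_neighbor(word, rng):
    # Replace a character with one of its keyboard neighbors, if we know any.
    candidates = [i for i, c in enumerate(word) if c in KEYBOARD_NEIGHBORS]
    if not candidates:
        return word
    i = rng.choice(candidates)
    return word[:i] + rng.choice(KEYBOARD_NEIGHBORS[word[i]]) + word[i + 1:]


rng = random.Random(42)
words = "what is dense retrieval".split()
j = rng.randrange(len(words))  # misspell one randomly chosen word
typo_fn = rng.choice([rand_insert, rand_delete, rand_sub, swap_adjacent, swap_neighbor])
words[j] = typo_fn(words[j], rng)
print(" ".join(words))
```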
## 2.2 Similarity Score Calculation
Let $S(a, \mathbf{B})$ denote a function that computes a similarity score distribution of any vector $a$ over any set of vectors $\mathbf{B}$:

$$S(a,\mathbf{B})=\left\{b_{i}\in\mathbf{B}\;\middle|\;\frac{\exp(a\cdot b_{i})}{\sum_{b_{j}\in\mathbf{B}}\exp(a\cdot b_{j})}\right\}\tag{1}$$

Given $P = \{p_1, p_2, ..., p_M\}$ to be a set of $M$ passages and $Q'_k = \{q'_{1,k}, q'_{2,k}, ..., q'_{N,k}\}$ to be the $k$-th set of misspelled queries in $Q'$, we compute two groups of score distributions as follows:

- *Passage retrieval*: we calculate score distributions in a query-to-passages direction for each original query $s_p = S(q_n, P)$ and misspelled query $s'^{k}_{p} = S(q'_{n,k}, P)$.
- *Query retrieval*: we calculate score distributions in a passage-to-queries direction for original queries $s_q = S(p_m, Q)$ and each set of misspelled queries $s'^{k}_{q} = S(p_m, Q'_k)$.

This way, we produce four different score distributions ($s_p$, $s'^{k}_{p}$, $s_q$, $s'^{k}_{q}$) for our training objective.
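A small sketch of how these four distributions can be computed for one batch of query and passage embeddings is shown below (in-batch scoring only; the additional hard-negative passages mentioned in Section 3.1 are omitted for brevity).

```python
# Sketch of the four score distributions used by DST for one batch (in-batch
# scoring only; extra hard-negative passages are omitted for brevity).
import torch


def score_dist(a: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    """Softmax-normalized dot-product scores of each row of `a` over the rows of `B` (Eq. 1)."""
    return torch.softmax(a @ B.T, dim=-1)


N, K, dim = 4, 2, 8               # queries per batch, misspelled variants, toy embedding size
q = torch.randn(N, dim)           # original query embeddings
q_typo = torch.randn(K, N, dim)   # K misspelled variants per query
p = torch.randn(N, dim)           # one relevant passage per query (others act as in-batch negatives)

s_p = score_dist(q, p)                                                  # query -> passages
s_p_typo = torch.stack([score_dist(q_typo[k], p) for k in range(K)])    # misspelled query -> passages
s_q = score_dist(p, q)                                                  # passage -> queries
s_q_typo = torch.stack([score_dist(p, q_typo[k]) for k in range(K)])    # passage -> misspelled queries
print(s_p.shape, s_p_typo.shape, s_q.shape, s_q_typo.shape)
```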
## 2.3 Dual Self-Teaching Loss Calculation
We design the *Dual Self-Teaching loss* ($\mathcal{L}_{\mathrm{DST}}$) to capture the three desired properties: *Alignment*, *Robustness*, and *Contrast*.

$$\mathcal{L}_{\mathrm{DST}}=(1-\beta)\,\mathcal{L}_{\mathrm{DCE}}+\beta\,\mathcal{L}_{\mathrm{DKL}}\tag{2}$$

Dual Cross-Entropy loss ($\mathcal{L}_{\mathrm{DCE}}$) satisfies the *Alignment* and *Contrast* properties by utilizing cross-entropy losses to learn score distributions of the original queries for passage retrieval ($s_p$) and query retrieval ($s_q$) given labels $y_p$ and $y_q$.
$$\mathcal{L}_{\mathrm{DCE}}=\underbrace{(1-\gamma)\mathcal{L}_{\mathrm{CE}}^{(P)}(s_{p},y_{p})}_{\mathrm{Passage~Retrieval}}+\underbrace{\gamma\mathcal{L}_{\mathrm{CE}}^{(Q)}(s_{q},y_{q})}_{\mathrm{Query~Retrieval}}\tag{3}$$
![2_image_0.png](2_image_0.png)
Minimizing the $\mathcal{L}_{\mathrm{CE}}^{(P)}$ term will increase the similarity scores between queries and their relevant passages to be higher than other irrelevant passages by separating the relevant and irrelevant passages from one another. Minimizing the $\mathcal{L}_{\mathrm{CE}}^{(Q)}$ term will increase the similarity scores between passages and their relevant queries to be higher than other irrelevant queries by separating the relevant and irrelevant queries from one another. In this manner, minimizing one of the two terms will align queries with their corresponding passages, satisfying the *Alignment* property. Moreover, minimizing both terms will separate queries that refer to different passages and passages that belong to different queries, satisfying the *Contrast* property.
Dual KL-Divergence loss ($\mathcal{L}_{\mathrm{DKL}}$) aims to fulfill the *Robustness* property by using KL losses to match score distributions of misspelled queries $\{s'^{1}_{p}, s'^{2}_{p}, ..., s'^{K}_{p}\}$ and $\{s'^{1}_{q}, s'^{2}_{q}, ..., s'^{K}_{q}\}$ to the score distributions of the original query, $s_p$ and $s_q$.
$$\begin{split}\mathcal{L}_{\text{DKL}}=\frac{1}{K}\sum_{k=1}^{K}\underbrace{(1-\sigma)\mathcal{L}_{\text{KL}}^{(P)}(s_{p}^{\prime k},s_{p})}_{\text{Passage Retrieval Consistency}}\\ +\underbrace{\sigma\mathcal{L}_{\text{KL}}^{(Q)}(s_{q}^{\prime k},s_{q})}_{\text{Query Retrieval Consistency}}\end{split}\tag{4}$$
Minimizing $\mathcal{L}_{\mathrm{KL}}^{(P)}$ and $\mathcal{L}_{\mathrm{KL}}^{(Q)}$ will reduce the discrepancy between misspelled and non-misspelled queries for both query-to-passages and passage-to-queries score distributions. This way, we implicitly align representations of the misspelled queries to the original queries, satisfying the *Robustness* property. To stabilize training, we apply stop-gradient to the score distributions of the original queries ($s_p$ and $s_q$) in the $\mathcal{L}_{\mathrm{DKL}}$. The $\beta$, $\gamma$, and $\sigma$ are balancing coefficients selected by hyper-parameter tuning on a development set. With this loss combination, we achieve all three desired properties.
## 3 Experimental Settings

## 3.1 Training Details
We experiment on two pre-trained language models, BERT (Devlin et al., 2019) and CharacterBERT (El Boukkouri et al., 2020). We train models only on the training set of MS MARCO
dataset (Nguyen et al., 2016). Moreover, the training data provided by the Tevatron toolkit (Gao et al., 2022) also contains hard negative passages. We include the training set details and hyper-parameter settings in Appendix A.1.
## 3.2 Competitive Methods
To show the effectiveness of our method, we compare our work with the following baseline and competitive training methods.
- DPR (Karpukhin et al., 2020) is a baseline training method that trains dense retrieval merely on non-misspelled queries using the $\mathcal{L}_{\mathrm{CE}}^{(P)}$ loss.
- *DPR+Aug* (Zhuang and Zuccon, 2021) is the Typos-aware Training method which trains dense retrieval on both misspelled and non-misspelled queries using the $\mathcal{L}_{\mathrm{CE}}^{(P)}$ loss.
- *DPR+Aug+CL* (Sidiropoulos and Kanoulas, 2022) employs additional contrastive loss to train the misspelled queries.
- *DPR+ST* (Zhuang and Zuccon, 2022) is the Self-Teaching method that trains dense retrieval on both misspelled and non-misspelled queries using the $\mathcal{L}_{\mathrm{CE}}^{(P)}$ and $\mathcal{L}_{\mathrm{KL}}^{(P)}$ losses.
Note that their query augmentation method is identical to the Query Augmentation with K = 1. We retrain all models using the same setting described in the previous section.
| Methods | MRR@10 | R@1000 | nDCG@10 | MRR | MAP | MRR@10 | R@1000 | nDCG@10 | MRR | MAP |
|---|---|---|---|---|---|---|---|---|---|---|
| DPR | .143 (.331) | .696 (.954) | .276 (.682) | .431 (.873) | .175 (.563) | .162 (.321) | .726 (.945) | .268 (.643) | .376 (.832) | .212 (.503) |
| + Aug | .227 (.334) | .857 (.950) | .398 (.682) | .530 (.806) | .286 (.565) | .258 (.326) | .883 (.946) | .414 (.631) | .578 (.783) | .318 (.512) |
| + Aug + CL | .234 (.335) | .867 (.951) | .387 (.668) | .536 (.864) | .267 (.544) | .263 (.330) | .894 (.947) | .466 (.677) | .635 (.819) | .360 (.544) |
| + ST | .237 (.333) | .874 (.950) | .392 (.677) | .525 (.852) | .283 (.557) | .274 (.332) | .900 (.947) | .469 (.650) | .619 (.810) | .359 (.517) |
| + DST (our) | .260†(.336) | .894†(.954) | .432 (.673) | .558 (.833) | .343†(.568) | .288†(.332) | .918†(.949) | .529†(.673) | .742†(.854) | .403 (.537) |

Table 1: Results on MS MARCO (MRR@10, R@1000) and DL-typo (nDCG@10, MRR, MAP) for BERT-based (left five columns) and CharacterBERT-based (right five columns) dense retrievers. Scores outside parentheses are for misspelled queries; scores in parentheses are for the corresponding non-misspelled queries. † indicates statistical significance (Section 3.3).
## 3.3 Dataset And Evaluation
Datasets. We evaluate the effectiveness of DST on two passage retrieval datasets, MS MARCO and DL-typo (Zhuang and Zuccon, 2022), each with misspelled and non-misspelled queries. There are 8.8 million candidate passages for both datasets.
The development set of MS MARCO contains 6,980 non-misspelled queries. To obtain misspelled queries, we use the typos generator method proposed by Zhuang and Zuccon (2021) to generate 10 misspelled variations for each original query. The DL-typo provides 60 real-world misspelled queries and 60 corresponding non-misspelled queries that are corrected manually.
Evaluation. We use the standard metrics originally used by each dataset's creators. For MS MARCO,
the reported misspelled-query performance for each query is the average over its 10 misspelled variations. We employ the Ranx evaluation library (Bassani, 2022) to measure performance and statistical significance. Specifically, we use a two-tailed paired t-test with Bonferroni correction to measure statistical significance (p < 0.05).
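For concreteness, a hand-rolled sketch of the two MS MARCO metrics is shown below; the actual evaluation in this work is done with the Ranx library.

```python
# Hand-rolled MRR@10 and Recall@1000 for illustration (the paper's evaluation uses Ranx).
def mrr_at_10(ranked_ids, relevant_ids):
    for rank, pid in enumerate(ranked_ids[:10], start=1):
        if pid in relevant_ids:
            return 1.0 / rank
    return 0.0


def recall_at_1000(ranked_ids, relevant_ids):
    retrieved = set(ranked_ids[:1000]) & set(relevant_ids)
    return len(retrieved) / len(relevant_ids)


# Toy run: passage ids ranked by the retriever for one query.
ranked = ["p7", "p3", "p1", "p9"]
relevant = {"p1"}
print(mrr_at_10(ranked, relevant), recall_at_1000(ranked, relevant))
```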
## 4 Experimental Results

## 4.1 Main Results
As shown in Table 1, the results indicate that DST
outperforms competitive methods for misspelled queries in every case without sacrificing performance for non-misspelled queries in eight out of ten cases. We observe some performance trade-offs for the BERT-based model in the DL-typo dataset's non-misspelling scores (nDCG@10 and MRR).
Aside from that, there is no performance trade-off for the CharacterBERT-based model. These outcomes conform with the observation in Figure 2
(Section 4.4) that DST improves the *Robustness* and *Contrast* of misspelled queries.
## 4.2 Query Augmentation Size Study
To study the benefit of query augmentation and find the optimal augmentation size, we measure the performance of BERT-based dense retrieval models trained with DST using the query augmentation size K of 1, 10, 20, 40, and 60. Note that the query augmentation method used in previous works is a special case of Query Augmentation when K = 1.
We report the results using MRR@10 for the development set of the MS MARCO dataset. We also report training time to show trade-offs between performance and computation.
| Queries | K | | | | |
|--------------------|------|------|------|------|------|
| 1 | 10 | 20 | 40 | 60 | |
| Original | .334 | .334 | .335 | .336 | .332 |
| Misspelled | .251 | .258 | .260 | .260 | .260 |
| Training time (hr) | 18 | 20 | 23 | 31 | 39 |
Table 2: Results of query augmentation size study. We train all models in this experiment on a V100 32G GPU.
As shown in Table 2, the results indicate that increasing K improves the performance of both misspelled and non-misspelled queries, but only up to a certain point, after which the performance begins to decline. We observe that setting K = 40 produces the best results, and there is no further performance improvement after this point.
## 4.3 Loss Ablation Study
In this experiment, we study the benefit of each term in DST by training BERT-based dense retrieval models on variant loss combinations with $K = 40$. The results in Table 3 reveal that the $\mathcal{L}_{\mathrm{KL}}^{(P)}$ and $\mathcal{L}_{\mathrm{KL}}^{(Q)}$ terms positively contribute to the performance of misspelled and non-misspelled queries, with the $\mathcal{L}_{\mathrm{KL}}^{(P)}$ being more significant. The $\mathcal{L}_{\mathrm{CE}}^{(P)}$ term is crucial for retrieval performance, whereas the $\mathcal{L}_{\mathrm{CE}}^{(Q)}$ term indirectly improves the performance of misspelled queries by separating their pristine queries from the surrounding queries. Disabling the query retrieval terms ($\mathcal{L}_{\mathrm{CE}}^{(Q)}$ and $\mathcal{L}_{\mathrm{KL}}^{(Q)}$) greatly reduces performance for misspelled queries. The passage retrieval terms ($\mathcal{L}_{\mathrm{CE}}^{(P)}$ and $\mathcal{L}_{\mathrm{KL}}^{(P)}$) are indispensable and cannot be substituted.

![4_image_0.png](4_image_0.png)
## 4.4 Query Distributions
The purpose of this section is to study the impact of our training method on the *Robustness* and *Contrast* of misspelled queries. We also compare our method against the baseline and competitive methods to show its effectiveness. The *Robustness* and Contrast of misspelled queries are illustrated using the following kernel density graphs:
- Original-to-Misspell: the cosine similarity distribution between original and misspelled queries.
- Original-to-Neighbor: the cosine similarity distribution between original and neighbor queries.
The *Robustness* property is reflected by the Original-to-Misspell distribution having high cosine similarity. On the other hand, the *Contrast* property is reflected by the small overlap between the Original-to-Misspell and Original-to-Neighbor distributions. The results in Figure 2 show that our method (c) produces the best *Robustness* and *Contrast* properties for misspelled queries in comparison to the other methods.
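A minimal sketch of how these two distributions can be computed from query embeddings is given below; the embedding arrays are randomly generated placeholders standing in for encoder outputs.

```python
# Sketch of the two similarity distributions behind Figure 2.
# `orig_emb`, `miss_emb`, `neigh_emb` are hypothetical (n, d) arrays holding the
# embeddings of original queries, their misspelled versions, and their nearest
# neighboring queries, row-aligned.
import numpy as np

def cosine_rows(a, b):
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return np.sum(a * b, axis=1)

rng = np.random.default_rng(0)
orig_emb  = rng.normal(size=(1000, 768))
miss_emb  = orig_emb + 0.1 * rng.normal(size=(1000, 768))   # Robustness: stay close
neigh_emb = rng.normal(size=(1000, 768))                    # Contrast: stay apart

orig_to_misspell = cosine_rows(orig_emb, miss_emb)    # should concentrate near 1
orig_to_neighbor = cosine_rows(orig_emb, neigh_emb)   # should overlap little with the above
print(orig_to_misspell.mean(), orig_to_neighbor.mean())
```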
## 5 Conclusion
This paper aims to address the misspelling problem in dense retrieval. We formulate three desired properties for making dense retrieval robust to misspellings: Alignment, *Robustness*, and *Contrast*.
Unlike previous methods, which only focus on the Alignment and *Robustness* properties, our method considers all the desired properties. The empirical results show that our method performs best against misspelled queries, revealing the importance of the Contrast property for handling misspellings.
## 6 Limitations
We list the limitations of our work as follows:
- The Query Augmentation is designed for the English alphabet; therefore, other languages with different alphabets will require further work.
- Since the training strategy relies on fine-tuning a pre-trained language model using a large passage retrieval dataset, it may not be suitable for languages with limited resources.
## References
Elias Bassani. 2022. ranx: A blazing-fast python library for ranking evaluation and comparison. In *ECIR (2)*,
volume 13186 of *Lecture Notes in Computer Science*,
pages 259–264. Springer.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Hicham El Boukkouri, Olivier Ferret, Thomas Lavergne, Hiroshi Noji, Pierre Zweigenbaum, and Jun'ichi Tsujii. 2020. CharacterBERT: Reconciling ELMo and BERT for word-level open-vocabulary representations from characters. In *Proceedings of the 28th* International Conference on Computational Linguistics, pages 6903–6915, Barcelona, Spain (Online).
International Committee on Computational Linguistics.
Luyu Gao, Xueguang Ma, Jimmy J. Lin, and Jamie Callan. 2022. Tevatron: An efficient and flexible toolkit for dense retrieval. *ArXiv*, abs/2203.05765.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics.
Omar Khattab and Matei Zaharia. 2020. ColBERT: Efficient and effective passage search via contextualized late interaction over BERT. In *Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval*, SIGIR
'20, page 39–48, New York, NY, USA. Association for Computing Machinery.
Yizhi Li, Zhenghao Liu, Chenyan Xiong, and Zhiyuan Liu. 2021. More robust dense retrieval with contrastive dual learning. In *Proceedings of the 2021* ACM SIGIR International Conference on Theory of Information Retrieval, ICTIR '21, page 287–296, New York, NY, USA. Association for Computing Machinery.
Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng.
2016. MS MARCO: A human generated machine reading comprehension dataset. In *Proceedings of* the Workshop on Cognitive Computation: Integrating neural and symbolic approaches 2016 co-located with the 30th Annual Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain, December 9, 2016, volume 1773 of CEUR
Workshop Proceedings. CEUR-WS.org.
Gustavo Penha, Arthur Câmara, and Claudia Hauff.
2022. Evaluating the robustness of retrieval pipelines with query variation generators. In *Advances in Information Retrieval*, pages 397–412, Cham. Springer International Publishing.
Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, and Haifeng Wang. 2021. RocketQA: An optimized training approach to dense passage retrieval for opendomain question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5835–5847, Online. Association for Computational Linguistics.
Ruiyang Ren, Shangwen Lv, Yingqi Qu, Jing Liu, Wayne Xin Zhao, QiaoQiao She, Hua Wu, Haifeng Wang, and Ji-Rong Wen. 2021a. PAIR: Leveraging passage-centric similarity relation for improving dense passage retrieval. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP*
2021, pages 2173–2183, Online. Association for Computational Linguistics.
Ruiyang Ren, Yingqi Qu, Jing Liu, Wayne Xin Zhao, QiaoQiao She, Hua Wu, Haifeng Wang, and Ji-Rong Wen. 2021b. RocketQAv2: A joint training method for dense passage retrieval and passage re-ranking.
In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 2825–2835, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Georgios Sidiropoulos and Evangelos Kanoulas. 2022.
Analysing the robustness of dual encoders for dense retrieval against misspellings. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval.
ACM.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Yingce Xia, Tao Qin, Wei Chen, Jiang Bian, Nenghai Yu, and Tie-Yan Liu. 2017. Dual supervised learning.
In Proceedings of the 34th International Conference on Machine Learning, volume 70 of *Proceedings* of Machine Learning Research, pages 3789–3798.
PMLR.
Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N. Bennett, Junaid Ahmed, and
Arnold Overwijk. 2021. Approximate nearest neighbor negative contrastive learning for dense text retrieval. In International Conference on Learning Representations.
Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Min Zhang, and Shaoping Ma. 2020. Repbert: Contextualized text embeddings for first-stage retrieval. *CoRR*,
abs/2006.15498.
Shengyao Zhuang and Guido Zuccon. 2021. Dealing with typos for BERT-based passage retrieval and ranking. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2836–2842, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Shengyao Zhuang and Guido Zuccon. 2022. CharacterBERT and self-teaching for improving the robustness of dense retrievers on queries with typos. In *Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval*, SIGIR '22, page 1444–1454, New York, NY, USA. Association for Computing Machinery.
## A Appendix

## A.1 Training Setup and Hyperparameters
MS MARCO is a large-scale English-language dataset for machine reading comprehension (MRC). The dataset consists of anonymized queries sampled from Bing's search query logs, each with human-generated answers. The training set we used contains 400,782 training samples, each consisting of a query, a positive passage, and a set of hard negative passages, from which we randomly select 7 hard negative passages per training sample. We set the batch size to 16 and use in-batch negative sampling for each training sample. Therefore, we obtain 7 + 8 * 15 = 127 negative passages for each training sample. We use the AdamW optimizer and a learning rate of 1e−5 for 150,000 steps, with a linear learning rate warm-up over the first 10,000 steps and a linear learning rate decay over the remaining training steps. For our training method, we set the hyperparameters β = 0.5, γ = 0.5, σ = 0.2, and the query augmentation size K = 40. Using one V100 32G GPU, the BERT-based model takes around 31 hours to train, while the CharacterBERT-based model takes roughly 56 hours.
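To make the schedule above concrete, here is a minimal sketch of the optimizer and learning-rate schedule (AdamW, lr 1e−5, 150,000 steps, 10,000-step linear warm-up followed by linear decay). The model identifier is a placeholder, and the actual training loop (dual-encoder forward pass, DST losses) is omitted.

```python
# A sketch of the optimization setup described above, not the full training code.
import torch
from transformers import AutoModel, get_linear_schedule_with_warmup

model = AutoModel.from_pretrained("bert-base-uncased")   # placeholder encoder
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,
    num_training_steps=150_000,
)

# Inside the training loop (per step):
#   loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```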
## A.2 Query Augmentation Examples
Table 4 provides examples of misspelled queries generated by the Query Augmentation for each original query.
![7_image_0.png](7_image_0.png)
Table 4: The outputs of Query Augmentation with K = 10. We use different colors to indicate the different types of typo: RandInsert, RandDelete, RandSub, SwapNeighbor, and SwapAdjacent.
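The typo operations themselves can be sketched as follows. The five operation names come from Table 4; their exact definitions (e.g. that SwapNeighbor substitutes a keyboard-adjacent character while SwapAdjacent transposes two adjacent characters) and the truncated keyboard map are assumptions made for illustration.

```python
# Illustrative sketch of the five typo types; definitions are assumptions.
import random
import string

KEYBOARD_NEIGHBORS = {"a": "qwsz", "s": "awedxz", "d": "serfcx", "o": "ipkl"}  # truncated map

def rand_insert(w, i):  return w[:i] + random.choice(string.ascii_lowercase) + w[i:]
def rand_delete(w, i):  return w[:i] + w[i + 1:]
def rand_sub(w, i):     return w[:i] + random.choice(string.ascii_lowercase) + w[i + 1:]
def swap_neighbor(w, i):
    return w[:i] + random.choice(KEYBOARD_NEIGHBORS.get(w[i], string.ascii_lowercase)) + w[i + 1:]
def swap_adjacent(w, i):
    j = min(i + 1, len(w) - 1)
    chars = list(w); chars[i], chars[j] = chars[j], chars[i]
    return "".join(chars)

TYPOS = [rand_insert, rand_delete, rand_sub, swap_neighbor, swap_adjacent]

def augment_query(query: str, k: int = 10):
    """Generate k misspelled variants by applying one random typo to one word."""
    variants = []
    for _ in range(k):
        words = query.split()
        idx = random.randrange(len(words))
        w = words[idx]
        if len(w) < 2:                      # skip very short words
            variants.append(query)
            continue
        op = random.choice(TYPOS)
        words[idx] = op(w, random.randrange(len(w)))
        variants.append(" ".join(words))
    return variants

print(augment_query("how long is a flight to hawaii", k=5))
```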
## A.3 Licenses
Datasets: The MS MARCO dataset is available under the MIT license, and the DL-typo dataset is available under the Apache license 2.0. These licenses allow users to use the datasets under nonrestrictive agreements.
Software: We employ the Hugging Face (Wolf et al.,
2020) and Tevatron (Gao et al., 2022) libraries to train dense retrieval models. We utilize Ranx library (Bassani, 2022) to evaluate retrieval performance. These libraries are available under the Apache license 2.0 which allows both academic and commercial usages. For this reason, we release our code under the Apache license 2.0 to make our code fully accessible and compatible with the other codes we use.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 6
✗ A2. Did you discuss any potential risks of your work?
There is no potential risk associated with increasing the robustness of information retrieval applications to queries containing misspellings.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✓ A4. Have you used AI writing assistants when working on this paper?
We use Grammarly to check grammatical errors and QuillBot to polish writing quality. These tools are applied to a certain number of sentences in each section, which are then reviewed by humans.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
Section 3.1 for pre-trained language models, training dataset, and training toolkit. Section 3.2 for competitive methods. Section 3.3 for evaluation datasets and evaluation toolkit.
✓ B1. Did you cite the creators of artifacts you used?
Section 3.1 for pre-trained language models, training dataset, and training toolkit. Section 3.2 for competitive methods. Section 3.3 for evaluation datasets and evaluation toolkit.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Appendix A.3
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Appendix A.3
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We did not collect any data. The datasets we used are publicly available and widely used in information retrieval literature. The data is already anonymized by the creators of the datasets.
Therefore we do not need to anonymize the data.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Appendix A.1
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 3.3 for the evaluation set; Appendix A.1 for the training set.
## C ✓ **Did You Run Computational Experiments?**

Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A.1
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix A.1
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4.1 for Main Results Section 4.2 for Query Augmentation Size Study Section 4.3 for Loss Ablation Study Section 4.4 for Query Distributions
✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Our evaluation is parameter-free; therefore, there are no parameter settings to report.
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
ma-etal-2023-focused | Focused Prefix Tuning for Controllable Text Generation | https://aclanthology.org/2023.acl-short.96 | In a controllable text generation dataset, there exist unannotated attributes that could provide irrelevant learning signals to models that use it for training and thus degrade their performance. We propose focused prefix tuning (FPT) to mitigate the problem and to enable the control to focus on the desired attribute. Experimental results show that FPT can achieve better control accuracy and text fluency than baseline models in single-attribute control tasks. In multi-attribute control tasks, FPT achieves comparable control accuracy with the state-of-the-art approach while keeping the flexibility to control new attributes without retraining existing models. | # Focused Prefix Tuning For Controllable Text Generation
Congda Ma1 Tianyu Zhao2 Makoto Shing2 Kei Sawada2 **Manabu Okumura**1 1Tokyo Institute of Technology 2rinna Co. Ltd.
{ma, oku}@lr.pi.titech.ac.jp [email protected]
## Abstract
In a controllable text generation dataset, there exist unannotated attributes that could provide irrelevant learning signals to models that use it for training and thus degrade their performance.
We propose *focused prefix tuning* (FPT) to mitigate the problem and to enable the control to focus on the desired attribute. Experimental results show that FPT can achieve better control accuracy and text fluency than baseline models in single-attribute control tasks. In multiattribute control tasks, FPT achieves comparable control accuracy with the state-of-the-art approach while keeping the flexibility to control new attributes without retraining existing models.
## 1 Introduction
Controllable text generation aims to generate text associated with a specific attribute. For example, given an attribute TOPIC = *sports* and a prompt
"*There is*," a model is supposed to generate a continuation whose TOPIC is *sports*, such as "*There is* a tennis match ...".
In datasets for the controllable text generation task, there exists the annotated attribute, and we call it an *explicit attribute* (e.g. the TOPIC attribute in the AGNews dataset). In addition to the *explicit attributes*, the datasets tend to have their own tendency. For example, up to 98% of training data pieces in the IMDb dataset exhibit "TOPIC
= *sci/tech*", while up to 94% of training data pieces exhibit "SENTIMENT = *negative*".1 We call the tendency an *implicit attribute* (e.g. the TOPIC attribute in the IMDb dataset).
The existence of the *implicit attributes* could degrade the performance of controlling for an explicit attribute when models are trained on these datasets.

¹The models used for classification are from (Gu et al., 2022).
| Model                 | Desired Attribute Relevance | Implicit Attribute Relevance |
|-----------------------|-----------------------------|------------------------------|
| DExperts              | 81.95                       | 76.54                        |
| Vanilla Prefix Tuning | 71.94                       | 90.64                        |
Table 1: Relevance of texts generated by different models (e.g. DExperts and Vanilla Prefix Tuning) trained on the IMDb dataset. We found that a lower relevance to the desired explicit attribute (e.g. SENTIMENT) is associated with a higher relevance to the implicit attribute (e.g. TOPIC = *sci/tech*). The relevance is calculated by the classifier models in Sec. 4.2.
Since implicit attributes are dataset-level and related to undesired explicit attributes, the probability of generating content with the implicit attributes is likely to increase first. Once text with the implicit attributes has been generated, the probability of generating content with other undesired explicit attributes also increases, and such text may be generated next. As a result, as shown in Table 1, the model generates content with a high implicit attribute relevance but a low desired explicit attribute relevance (e.g. Vanilla Prefix Tuning; Li and Liang, 2021). In contrast, if the model generates content with a low implicit attribute relevance, it attains a high desired explicit attribute relevance (e.g. DExperts; Liu et al., 2021). We call this phenomenon *attribute transfer*.
To mitigate the effect of the attribute transfer, we propose *focused prefix tuning* (FPT), which makes the generation focused on the desired explicit attribute. FPT uses *specific* and *general prefixes* to encode the explicit and implicit attributes, respectively. FPT combines the control power of the two prefixes via *logits manipulation* at inference time.
Experimental results show that FPT achieves better control accuracy and fluency in single-attribute control tasks. In multi-attribute control tasks, FPT achieves performance comparable to the state-of-the-art approach. Moreover, we show that, since FPT enables each attribute prefix to be trained individually, new attributes can be added incrementally without retraining all prefixes.
## 2 Related Work

## 2.1 Controllable Generation
Methods for controlling text generation have developed rapidly (Ficler and Goldberg, 2017; Dathathri et al., 2020; Madotto et al., 2020; Chan et al., 2021). Keskar et al. (2019) trained a large transformer model to generate content conditioned on up to 55 attributes. However, the cost of training such a model is too high.
## 2.2 Prefix Tuning
Parameter-efficient fine-tuning (PEFT) methods, such as prompt tuning (Lester et al., 2021), have become particularly important for reducing the high training cost of various natural language processing tasks. Prefix tuning (Li and Liang, 2021) is one of the PEFT methods; it steers pre-trained models (Radford et al., 2019; Lewis et al.,
2020) by applying an additional continuous vector embedding before every activation layer. Qian et al.
(2022) proposed a contrastive prefix tuning method that improves its performance by utilizing the relations between attributes. However, they focused only on attributes explicitly annotated and ignored the effect of implicit attributes.
## 2.3 Inference-Time Methods
Inference-time methods (Mireshghallah et al.,
2022; Yang and Klein, 2021; Dathathri et al., 2020; Madotto et al., 2020), which are lightweight approaches that do not update model parameters, have been used for controllable text generation. To enhance controllability, Krause et al. (2021) proposed a method to combine the computed classification probability distributions. Liu et al. (2021) found that directly applying probability distributions from language models is a simple but effective approach to controlling generated texts. Inspired by their work, we propose a method that uses probability distributions from language models to remove the effect of implicit attributes.
## 3 Focused Prefix Tuning
The task of controllable generation is, given a sequence of prompt tokens x<t and an attribute ATTR
= val (e.g. TOPIC = *sports*), to generate a sequence of tokens as a continuation x that conforms to both the prompt and specified attribute.
## 3.1 Vanilla Prefix Tuning
In controllable text generation, a prefix can steer a pre-trained model parameterized by θ to generate texts under a specific attribute value ATTR = val.
In particular, vanilla prefix tuning (Li and Liang, 2021) prepends a set of continuous vectors before every activation layer of the pre-trained transformer.
The continuous vectors are referred to as the prefix $H_{\phi}^{\mathrm{attr}=val}$, which is parameterized by ϕ.
During training, we freeze the pre-trained model's parameters θ and update only the prefix parameters ϕ to optimize the following objective:
$$-\sum_{x\in\mathcal{D}^{\mathrm{attr}=val}}\log P(x_{t}|x_{<t},H_{\phi}^{\mathrm{attr}=val},\theta),\quad(1)$$

where $\mathcal{D}^{\mathrm{attr}=val}$ is the subset of the entire dataset $\mathcal{D}$ whose attribute ATTR is val.
Following Li and Liang (2021), we initialize the prefix $H_{\phi}$ with the activations of actual tokens from the pre-trained model's vocabulary.
## 3.2 Specific And General Prefixes
The prefix in vanilla prefix tuning captures an explicit attribute in a dataset by being trained on the subset $\mathcal{D}^{\mathrm{attr}=val}$. To capture only implicit attributes while ignoring any explicit attributes, we propose to train another prefix on the entire dataset $\mathcal{D}$. To distinguish the two prefixes, we refer to the prefix trained on $\mathcal{D}^{\mathrm{attr}=val}$ as the *specific prefix* and the prefix trained on $\mathcal{D}$ as the *general prefix*.

The specific prefix is the same as the prefix in vanilla prefix tuning, so we still use Equation 1 to update its parameters. To update the general prefix's parameters, we optimize the following objective:
$$-\sum_{x\in\mathcal{D}}\log P(x_{t}|x_{<t},H_{\phi^{\prime}}^{\mathrm{genl}},\theta),\qquad(2)$$

where $H_{\phi^{\prime}}^{\mathrm{genl}}$ represents the general prefix, which is parameterized by ϕ′.
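As a rough illustration, the two objectives differ only in which prefix is trained and which data it sees; `model_with_prefix` below is a hypothetical helper standing in for the frozen GPT-2 run with a trainable prefix prepended to every activation layer.

```python
# Sketch of the objectives in Eqs. (1)-(2): the specific prefix is trained on the
# subset D^{attr=val} and the general prefix on the full dataset D, with the same
# next-token cross-entropy loss. `model_with_prefix(prefix, input_ids)` is a
# hypothetical helper returning logits of shape (batch, seq_len, vocab).
import torch
import torch.nn.functional as F

def prefix_lm_loss(model_with_prefix, prefix, input_ids):
    logits = model_with_prefix(prefix, input_ids)             # (B, T, V)
    shift_logits = logits[:, :-1, :].contiguous()             # predict token t+1
    shift_labels = input_ids[:, 1:].contiguous()
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
    )

# Training: only the prefix parameters receive gradients.
# loss_specific = prefix_lm_loss(model_with_prefix, specific_prefix, batch_from_D_attr_val)
# loss_general  = prefix_lm_loss(model_with_prefix, general_prefix,  batch_from_full_D)
```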
## 3.3 Inference-Time Logits Manipulation
As shown in Figure 1, FPT suppresses the probability of words with implicit attributes in the generated text by combining the logits $z^{\mathrm{attr}=val}$ steered by the specific prefix and the logits $z^{\mathrm{genl}}$ steered by the general prefix via logits manipulation at inference time. For example, when generating text with the attribute TOPIC = *sports*, the probability of words with implicit attributes (e.g. "*impossible*" with SENTIMENT = *negative*) would be suppressed. During inference, at each step t, we first make two forward passes, one with the specific prefix and one with the general prefix, to obtain their logits $z_{t}^{\mathrm{attr}=val}$ and $z_{t}^{\mathrm{genl}}$.

![2_image_0.png](2_image_0.png)
Since $z_{t}^{\mathrm{attr}=val}$ encodes both the explicit and implicit attributes while $z_{t}^{\mathrm{genl}}$ encodes mostly the implicit attributes, we use a subtraction operation at the logits level to suppress the probability of words with implicit attributes:

$$\begin{aligned}P(x_{t}|x_{<t},\text{ATTR}=val)&=P(x_{t}|x_{<t},H_{\phi}^{\mathrm{attr}=val},H_{\phi^{\prime}}^{\mathrm{genl}},\theta)\\&=\text{softmax}\big(\alpha z_{t}^{\mathrm{attr}=val}-(\alpha-1)z_{t}^{\mathrm{genl}}\big),\end{aligned}\tag{3}$$
where α is a hyperparameter that can be interpreted as the strength of the control over implicit attributes. Following Liu et al. (2021), we set α and α − 1 as the weights of $z_{t}^{\mathrm{attr}=val}$ and $z_{t}^{\mathrm{genl}}$, respectively, so that the coefficients in the logits manipulation sum to 1.
To ensure the fluency of generated texts, we follow Liu et al. (2021) and use top-p filtering to remove tokens with low scores before the logits manipulation. In particular, we modify the logits produced by the specific prefix by computing the top-p vocabulary $\widetilde{V}$ and setting all logits outside $\widetilde{V}$ to −∞:

$$\widetilde{z}[v]=\begin{cases}z[v],&\text{if }v\in\widetilde{V}\\-\infty,&\text{if }v\notin\widetilde{V}.\end{cases}\tag{4}$$
Therefore, the logits manipulation in Equation 3 is updated as follows:
$$P^{\prime}(x_{t}|x_{<t},\mbox{ATTR}=val)$$ $$=\mbox{softmax}(\alpha z_{t}^{\mbox{\scriptsize{\it{attrval}}}}-(\alpha-1)z_{t}^{\mbox{\scriptsize{\it{genl}}}}).\tag{5}$$
The token at step t is then selected by ancestral sampling from P′(xt|x<t, ATTR = val).
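A minimal sketch of this inference-time procedure (Eqs. 4–5) is given below; the two logits vectors are assumed to come from two forward passes of the frozen GPT-2, one with the specific prefix and one with the general prefix.

```python
# Sketch of the inference-time logits manipulation: top-p filter the
# specific-prefix logits, subtract the general-prefix logits with weight
# (alpha - 1), and draw the next token by ancestral sampling.
import torch

def top_p_filter(logits: torch.Tensor, top_p: float = 0.8) -> torch.Tensor:
    """Set logits outside the top-p nucleus to -inf (Eq. 4)."""
    sorted_logits, sorted_idx = torch.sort(logits, descending=True)
    cum_probs = torch.cumsum(torch.softmax(sorted_logits, dim=-1), dim=-1)
    remove = cum_probs > top_p
    remove[..., 1:] = remove[..., :-1].clone()   # keep the first token crossing the threshold
    remove[..., 0] = False
    filtered = logits.clone()
    filtered[sorted_idx[remove]] = float("-inf")
    return filtered

def fpt_next_token(z_specific: torch.Tensor, z_general: torch.Tensor,
                   alpha: float = 1.1, top_p: float = 0.8) -> int:
    z_tilde = top_p_filter(z_specific, top_p)                 # Eq. (4)
    combined = alpha * z_tilde - (alpha - 1) * z_general      # Eq. (5)
    probs = torch.softmax(combined, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()     # ancestral sampling
```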
## 3.4 Multi-Attribute FPT
FPT is also applicable to the multi-attribute control task, where we aim to control multiple different attributes at the same time. Similarly, we first train the specific prefix for each attribute. Then, we adapt logits manipulation to the multi-attribute task as follows:
$$P^{\prime}(x_{t}|x_{<t},\{\text{ATTR}_{i}=val_{i}\}_{1\leq i\leq K})=\text{softmax}\Big(\sum_{i=1}^{K}z_{t}^{\mathrm{attr}_{i}}\Big),\tag{6}$$
where K is the number of different attributes. Each $z_{t}^{\mathrm{attr}_{i}}$ is the combination of the logits from the corresponding specific prefix and general prefix. Since applying top-p filtering to every attribute could possibly result in an empty $\widetilde{V}$, we apply the filtering only to the first attribute:
$$z_{t}^{\mathrm{attr}_{i}}=\begin{cases}\alpha\,\widetilde{z}_{t}^{\,\mathrm{attr}_{i}=val_{i}}-(\alpha-1)\,z_{t}^{\mathrm{genl}_{i}},&\text{if }i=1\\\alpha\,z_{t}^{\mathrm{attr}_{i}=val_{i}}-(\alpha-1)\,z_{t}^{\mathrm{genl}_{i}},&\text{otherwise}\end{cases}\tag{7}$$
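The multi-attribute case (Eqs. 6–7) can be sketched as a small extension of the previous snippet; it reuses the `top_p_filter` helper above and, for brevity, uses a single α for all attributes, whereas the paper balances attributes with the specialized weights listed in Table 6.

```python
# Sketch of Eqs. (6)-(7): top-p filtering is applied only to the first attribute
# (the topic attribute in the paper), and the per-attribute combined logits are
# summed before the softmax. `pairs` is a list of (z_specific, z_general)
# logits tensors, one pair per controlled attribute.
import torch

def fpt_multi_next_token(pairs, alpha: float = 1.1, top_p: float = 0.8) -> int:
    total = None
    for i, (z_spec, z_gen) in enumerate(pairs):
        z = top_p_filter(z_spec, top_p) if i == 0 else z_spec      # Eq. (7)
        combined = alpha * z - (alpha - 1) * z_gen
        total = combined if total is None else total + combined    # Eq. (6)
    probs = torch.softmax(total, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()
```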
## 4 Single-Attribute Control Experiments

## 4.1 Models
GPT-2 (Radford et al., 2019): We used the public checkpoint of GPT-2 Medium as the most common baseline.2 **DExperts** (Liu et al., 2021):
A fine-tuning method applying logits manipulation in the inference step. **GeDi** (Krause et al.,
2021): A method combining the classification probabilities for possible next tokens in the inference step. **Vanilla prefix-tuning** (Li and Liang, 2021):
The common prefix-tuning method. **Contrastive**
prefix-tuning (Qian et al., 2022): A strong baseline that takes into account the relationship between attributes.
2The checkpoint of GPT-2 Medium is from https://huggingface.co/gpt2-medium.
| Model | Sentiment Relevance | Sentiment Perplexity | Sentiment Bias | Topic Relevance | Topic Perplexity | Topic Bias |
|----------------------------|-------|--------|-------|-------|-------|-------|
| *Baseline Models* | | | | | | |
| GPT-2 | 52.89 | 68.52 | 27.45 | 33.79 | 65.13 | 14.48 |
| DExperts | 81.95 | 41.59 | 26.54 | - | - | - |
| GeDi | 97.32 | 127.11 | - | 95.47 | 93.92 | - |
| Vanilla Prefix Tuning | 71.94 | 21.82 | 40.64 | 84.75 | 36.42 | 13.94 |
| Contrastive Prefix Tuning | 78.73 | 23.10 | 39.89 | 85.75 | 38.16 | 12.42 |
| *Proposed Models* | | | | | | |
| FPT | 80.33 | 20.48 | 34.81 | 86.46 | 34.05 | 12.14 |
| Contrastive FPT | 88.95 | 22.67 | 34.72 | 86.68 | 40.85 | 11.30 |
| *Ablated Model* | | | | | | |
| FPT without general prefix | 67.88 | 22.42 | 40.00 | 83.72 | 37.18 | 13.65 |

Table 2: Results of the single-attribute control tasks.
We also set up one variant of FPT: **Contrastive**
FPT: Applying contrastive prefix tuning to train the specific prefixes. We also set up an ablated model that uses the logits of the frozen GPT-2 instead of the logits from the model guided by our general prefix.
## 4.2 Experimental Settings
Following previous work (Krause et al., 2021; Qian et al., 2022), we evaluated the models on a topic control dataset, AGNews (Zhang et al., 2015), and a sentiment control dataset, IMDb (Maas et al., 2011). We score sentiment relevance using HuggingFace's sentiment analysis classifier (Liu et al., 2019) trained on 15 datasets. For scoring topic relevance, we trained a classifier that obtained results comparable to those previously reported.
Perplexity was used to evaluate text fluency. Bias (|relevance score − 50|) measures how much the relevance of the implicit attribute deviates from the unbiased relevance of 50. We set TOPIC = *science* as the implicit attribute in sentiment-controlled generation, and SENTIMENT = *negative* as the implicit attribute in topic-controlled generation. Prompts from Chan et al. (2021) were used to generate continuation samples. We generated 20 samples for each attribute and prompt. More details are listed in Appendix A.1 and A.2.
## 4.3 Experimental Results
As shown in Table 2, in the single-attribute control tasks, Contrastive FPT achieves higher attribute relevance than the prefix tuning-based baselines while having lower bias scores. This indicates that the generated texts are well controlled under the target explicit attribute, without being transferred by implicit attributes. FPT achieves the best perplexity score among the control-based baselines. The ablation experiment suggests that the proposed general prefix is essential for attribute control.
Table 3 shows generation samples for SENTIMENT = *positive* from our models and baselines. In the FPT-based models, the generated texts contain more words associated with the desired explicit attribute, whereas the baselines' outputs contain more words associated with undesired explicit attributes. More generation samples are given in Appendix B.
## 5 Multi-Attribute Control Experiments

## 5.1 Models
In the multi-attribute control experiments, we added **Distribution Lens** (Gu et al., 2022) as a strong baseline. It searches for the intersection space of multiple attribute distributions as their combination for generating.
## 5.2 Experimental Settings
To explore the ability of FPT in the multi-attribute control task, we added a toxic comment dataset³ for toxicity control.

³https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/
| Model | Generated texts |
|---------------------------|---------------------------------------------------------------------------------------------------------------------|
| GPT-2 | The last time Dow and the SEC went shopping for a speed bump was Tuesday, in terms of ... |
| DExperts | The last time I saw Alvin Henderson, he said he hadn't done a rookie autograph. He says he hasn't played since... |
| Vanilla Prefix Tuning | The last time I saw this film was as a kid, I had to see it again for myself. There are... |
| Contrastive Prefix Tuning | The last time I saw the film, I didn't like it, and couldn't quite believe how much I ... |
| FPT | The last time I saw this film, it was a remarkable turning point in my career. It set the tone for the excellent... |
| Contrastive FPT | The last time I saw In the Hands of an Eagle was at this book release party. It was at a nice club... |
Table 3: Samples generated by our models and baselines with the positive attribute. Desired explicit attribute:
positive, undesired explicit attribute: negative.
| Model | Topic Relevance | Sentiment Relevance | Non-toxic Relevance | Average Relevance |
|---------------------------------------------|------|------|------|------|
| Contrastive Prefix Tuning (concatenation)   | 70.7 | 68.0 | 92.3 | 77.0 |
| Contrastive Prefix Tuning (semi-supervised) | 76.9 | 74.4 | 92.7 | 81.3 |
| Distributional Lens                         | 84.7 | 85.7 | 90.7 | 87.0 |
| FPT                                         | 88.0 | 77.8 | 93.7 | 86.5 |
Table 4: Results of the multi-attribute control tasks.
We used the Google Perspective API⁴ to evaluate the relevance of toxicity. Since it is meaningless to generate toxic content, we only apply the non-toxic attribute in this task.
We chose the topic attribute as the first attribute because we found that the filtered vocabulary size in the logits manipulation of a topic attribute is larger than that of the other attributes (sentiment and non-toxic). The prompts used for generating samples are the same as in the sentiment control task. For each prompt, we generated 20 samples per attribute combination.
More details are listed in Appendix A.3.
## 5.3 Experimental Results
Table 4 shows that our method can obtain comparable performance with the state-of-the-art approach.
Distribution Lens, however, requires aggregating the datasets of all attributes to train its prefixes. To add a prefix that controls a new attribute, all of its prefixes must be retrained. In contrast, FPT
trains a prefix for each attribute individually and enables new attribute control prefixes to be added incrementally without retraining existing ones.
## 6 Conclusion
We proposed FPT, a prefix tuning-based method, to mitigate the effect of attribute transfer. FPT could encode implicit attributes in a dataset by a general prefix and use it to suppress the attribute transfer via inference-time logits manipulation. Results in the single-attribute control experiments showed that, with FPT, the generated texts can be more effectively controlled under the desired attribute with higher text fluency. Experimental results in the multi-attribute control suggested that FPT can achieve comparable performance to the state-ofthe-art approach while keeping the flexibility of adding new prefixes without retraining.
4https://www.perspectiveapi.com/
## 7 Limitations
Although FPT shows better control ability, there are two points that need to be improved in the future.
First, as in Gu et al. (2022), we need to select the hyperparameter α to balance control ability and fluency in the generated texts. Second, as shown in Table 5, although the time cost of FPT is lower than that of GeDi, it is higher than those of the other prefix tuning-based methods and grows approximately linearly with 2 × the number of attributes (two forward passes per attribute).
| Model | Time (sec) |
|---------------------------|--------------|
| GPT-2 | 1.3 |
| GeDi | 3.2 |
| Vanilla Prefix Tuning | 1.3 |
| Contrastive Prefix Tuning | 1.3 |
| FPT | 2.5 |
Table 5: Time cost to generate a sample by different models.
## References
Alvin Chan, Yew-Soon Ong, Bill Pung, Aston Zhang, and Jie Fu. 2021. Cocon: A self-supervised approach for controlled text generation. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021.
OpenReview.net.
Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2020. Plug and play language models:
A simple approach to controlled text generation. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Jessica Ficler and Yoav Goldberg. 2017. Controlling linguistic style aspects in neural language generation. In *Proceedings of the Workshop on Stylistic Variation*,
pages 94–104, Copenhagen, Denmark. Association for Computational Linguistics.
Yuxuan Gu, Xiaocheng Feng, Sicheng Ma, Lingyuan Zhang, Heng Gong, and Bing Qin. 2022. A distributional lens for multi-aspect controllable text generation. *CoRR*, abs/2210.02889.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. Deberta: decoding-enhanced bert with disentangled attention. In *9th International* Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, Caiming Xiong, and Richard Socher. 2019. Ctrl: A
conditional transformer language model for controllable generation. *ArXiv*, abs/1909.05858.
Ben Krause, Akhilesh Deepak Gotmare, Bryan McCann, Nitish Shirish Keskar, Shafiq Joty, Richard Socher, and Nazneen Fatema Rajani. 2021. GeDi: Generative discriminator guided sequence generation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4929–4952, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021.
The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045–3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning:
Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582–
4597, Online. Association for Computational Linguistics.
Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A. Smith, and Yejin Choi. 2021. DExperts: Decoding-time controlled text generation with experts and anti-experts.
In *Proceedings of the 59th Annual Meeting of the* Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6691–6706, Online. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *ArXiv*, abs/1907.11692.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts.
2011. Learning word vectors for sentiment analysis.
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics.
Andrea Madotto, Etsuko Ishii, Zhaojiang Lin, Sumanth Dathathri, and Pascale Fung. 2020. Plug-and-play conversational models. In *Findings of the Association for Computational Linguistics: EMNLP 2020*,
pages 2422–2433, Online. Association for Computational Linguistics.
Fatemehsadat Mireshghallah, Kartik Goyal, and Taylor Berg-Kirkpatrick. 2022. Mix and match: Learningfree controllable text generationusing energy language models. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 401–415, Dublin, Ireland. Association for Computational Linguistics.
Jing Qian, Li Dong, Yelong Shen, Furu Wei, and Weizhu Chen. 2022. Controllable natural language generation with contrastive prefixes. In *Findings of the Association for Computational Linguistics: ACL 2022*,
pages 2912–2924, Dublin, Ireland. Association for Computational Linguistics.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Kevin Yang and Dan Klein. 2021. FUDGE: Controlled text generation with future discriminators. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3511–3535, Online. Association for Computational Linguistics.
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015.
Character-level convolutional networks for text classification. In *Advances in neural information processing systems*, pages 649–657.
## A Experiment Setting Details
All the experiments are conducted on the basis of a GPT-2 Medium model. We freeze the parameters of the GPT-2 model when training all the prefixes.
The length of all prefixes is set to 10. The GPU used for all training is a P40.
## A.1 Topic Control
Following the previous work (Qian et al., 2022),
we use half of the data pieces in the AGNews dataset to obtain the general prefix and specific prefix. The number of specific prefixes for this task is 4 (e.g. worlds, sports, *business*, and *science*).
We set the number of epochs to 10 and the batch size to 8. We use AdamW as the optimizer and set the learning rate to 1e-4. To balance fluency and controllability, the hyperparameter α for generation is set to 1.1 and top-p is set to 0.8. The average training time for each prefix is 3 hours on 1 GPU. Following Gu et al. (2022), the classifier is trained on the DeBERTa model (He et al.,
2021), which is used to compute attribute relevance in this task.
The prompts for evaluation: "*In summary,*",
"*This essay discusses*", "*Views on*", "*The connection*", "*Foundational to this is*", "*To review*", "In brief ", "*An illustration of* ", "*Furthermore*", "The central theme", "*To conclude*", "*The key aspect*",
"*Prior to this*", "*Emphasized are*", "*To summarize*",
"*The relationship*", "*More importantly*", "It has been shown", "*The issue focused on*", and "In this essay".
## A.2 Sentiment Control
Following the previous work (Qian et al., 2022),
we use half of the data pieces in the IMDb dataset to obtain the general prefix and specific prefixes. The number of specific prefixes for this task is 2 (*positive* and *negative*). We set the batch size to 8 and the number of epochs to 50. We use AdamW as the optimizer, and the learning rate is set to 2e-5.
To balance the performance between fluency and controllability, the hyperparameter α for generation is set to 3 and the top-p is set to 0.8. We spend 4 hours on average for each prefix.
The prompts for evaluation: "*Once upon a time*",
"*The book*", "*The chicken*", "*The city*", "*The country*", "*The horse*", "*The lake*", "*The last time*", "The movie", "*The painting*", "*The pizza*", "*The potato*",
"*The president of the country*", "*The road*", and
"*The year is 1910*".
## A.3 Multi-Attribute Control
For the non-toxic attribute, we use 10,000 pieces of non-toxic labeled data to train the specific prefix.
We then use another 20,000 pieces randomly sampled from the whole dataset to train the general prefix. In the multi-attribute control task, we set the batch size to 8 for training the non-toxic specific prefix and general prefix. We use AdamW as the optimizer, and the learning rate is set to 1e-4. To balance the performance among attributes from different aspects, the combinations of weights used for generation are:
| Combination | Weight |
|-----------------------------|----------|
| Worlds:Negative:Non-toxic | 6:5:1.5 |
| Sports:Negative:Non-toxic | 6:5:1.5 |
| Business:Negative:Non-toxic | 7:6:1.5 |
| Sci/Tech:Negative:Non-toxic | 7:6:1.5 |
| Worlds:Positive:Non-toxic | 3:12:1.5 |
| Sports:Positive:Non-toxic | 4:14:1.5 |
| Business:Positive:Non-toxic | 4:14:1.5 |
| Sci/Tech:Positive:Non-toxic | 4:14:1.5 |
Table 6: Specialized weights in multi-attribute control task for attribute balance.
To decide the first attribute, we choose 20 different prompts as input and obtain the filtered vocabulary sizes of different attributes. The average sizes of filtered vocabularies are shown in Table 7.
We choose the attribute with the largest filtered vocabulary size in logits manipulation. When new attributes are added, this method can be used to decide the first attribute.
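A sketch of this selection rule is shown below; `get_specific_logits` is a hypothetical helper returning the next-token logits under a given attribute's specific prefix, and `top_p_filter` refers to the helper sketched in Section 3.3.

```python
# Sketch of the first-attribute selection rule: count how many tokens survive
# top-p filtering of each attribute's specific-prefix logits, averaged over the
# evaluation prompts, and pick the attribute with the largest average.
import torch

def avg_filtered_vocab_size(attributes, prompts, get_specific_logits, top_p=0.8):
    sizes = {}
    for attr in attributes:
        counts = []
        for prompt in prompts:
            filtered = top_p_filter(get_specific_logits(attr, prompt), top_p)
            counts.append(torch.isfinite(filtered).sum().item())
        sizes[attr] = sum(counts) / len(counts)
    first_attribute = max(sizes, key=sizes.get)
    return first_attribute, sizes
```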
The prompts used for evaluation: "Once upon a time", "*The book*", "*The chicken*", "*The city*",
"*The country*", "*The horse*", "*The lake*", "The last time", "*The movie*", "*The painting*", "*The pizza*",
"*The potato*", "*The president of the country*", "The road", and "*The year is 1910*".
## B Generated Samples
More samples generated by our models and baselines are shown in Tables 8, 9, 10, and 11.
| First attribute | Filtered Vocabulary Size |
|-------------------|----------------------------|
| Topic | 488.7 |
| Sentiment | 165.7 |
| Untoxic | 347.0 |
| Overlaps | 138.8 |
| Cover Ratio | 85.62% |
Table 7: Results of the average filtered vocabulary size. We set all α to 1.5. After filtering the vocabulary in the logits manipulation, the model guided by the topic attribute's specific prefix has the largest filtered vocabulary size among the three attributes. We also found that the filtered vocabulary of the topic attribute covers about 85% of the filtered vocabulary of the sentiment attribute.
| Model | Generated texts |
|---------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------|
| GPT-2 | The potato's ability to survive brings a new challenge to the traditional food truck love stage... |
| DExperts | The potato samples ranged in size from 0.6 mm to 5.1 mm in thickness. Analysis of proteins showing correlation with CSF CSF CSF... |
| Vanilla Prefix Tuning | The potato chip looks like a generic type of cheapo pin-up. It's supposed to be fun... |
| Contrastive Prefix Tuning | The potato chip's and biscuit's come up with the idea of making a film that is supposedly a true reflection of the experiences of students on campus... |
| FPT | The potato bomb! Potato bombs are one of the dumbest inventions ever. Their only purpose is to scare children |
| Contrastive FPT | The potato crossing movie was stupid. Dumbly rushed and poorly acted. Dumb and poorly acted?... |
Table 8: Samples generated by our models and baselines with the negative attribute. Desired explicit attribute:
negative, undesired explicit attribute: positive.
| Model | Generated texts |
|---------------------------|-------------------------------------------------------------------------------------------------------|
| GPT-2 | Prior to this I took an uncommon entrance several times in this tavern. It had the ambience... |
| Vanilla Prefix Tuning | Prior to this season, it seemed likely that we would have no other explanation for what had happened... |
| Contrastive Prefix Tuning | Prior to this month, Alberth in court for arraignment on tax evasion charges the US District Court... |
| FPT | Prior to this season, during which the Red Sox and the Cubs had each won the World Series... |
| Contrastive FPT | Prior to this season, we'd have heard rumours of an effort to rebuild the Knicks roster... |
Table 9: Samples generated by our models and baselines with the sports attribute. Desired explicit attribute: sports, undesired explicit attributes: world, business, science.
| Model | Generated texts |
|---------------------------|------------------------------------------------------------------------------------------------------------------------------|
| GPT-2 | Emphasised are the events beyond the grave. The progenitor of darkness So I thought... |
| Vanilla Prefix Tuning | Emphasised are three key claims by Secretary of Defense Donald Rumsfeld on the war on terrorism |
| Contrastive Prefix Tuning | Emphasised are odd and silly pension - and were he not so rich, he might have considered quitting politics... |
| FPT | Emphasised are the facts of the inner workings of the commodity markets and the profitability of global commodity trading... |
| Contrastive FPT | Emphasised are most oil-intensive', Australian manufacturing is the thirdmost-dependant on crude, official figures show... |
Table 10: Samples generated by our models and baselines with the business attribute. Desired explicit attribute:
business, undesired explicit attributes: world, sports, science.
| Model | Generated texts |
|---------------------------|---------------------------------------------------------------------------------------------------------------------------------|
| GPT-2 | An illustration of the inner workings of the World Health Organization's Private Sector Vaccination Center... |
| Vanilla Prefix Tuning | An illustration of the Diamandis-Priest Fasting (2 cents) An illustration of the Diamandis-Priest Fasting... |
| Contrastive Prefix Tuning | An illustration of the biggest day in Spanish history in December 2017. Spanish government launches new campaign to promote ... |
| FPT | An illustration of the SBS / Getty Images virtual reality device at E3 last week. AP/E3Harms.com To catch up on the... |
| Contrastive FPT | An illustration of a proposed satellite CNET/Adrian Levy/UPI The most controversial satellite program in the past few years... |
Table 11: Samples generated by our models and baselines with the science attribute. Desired explicit attribute:
science, undesired explicit attributes: world, sports, business.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 7
✗ A2. Did you discuss any potential risks of your work?
Our work is foundational research and does not involve potential risks. Our experiments are fair.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**

Section 5
✓ B1. Did you cite the creators of artifacts you used?
Section 4 and Section 5
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section 4 and Section 5
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 4 and Section 5
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We used open-source datasets, so there is no problem of anonymization.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 4, Section 5 and Appendix A
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 4, Section 5 and Appendix A
## C ✓ **Did You Run Computational Experiments?**

Section 4 and Section 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix A
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4 and Section 5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 5

## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
zhang-etal-2023-reaugkd | {R}e{A}ug{KD}: Retrieval-Augmented Knowledge Distillation For Pre-trained Language Models | https://aclanthology.org/2023.acl-short.97 | Knowledge Distillation (KD) is one of the most effective approaches to deploying large-scale pre-trained language models in low-latency environments by transferring the knowledge contained in the large-scale models to smaller student models. Prior KD approaches use the soft labels and intermediate activations generated by the teacher to transfer knowledge to the student model parameters alone. In this paper, we show that having access to non-parametric memory in the form of a knowledge base with the teacher{'}s soft labels and predictions can further improve student generalization. To enable the student to retrieve from the knowledge base effectively, we propose a new framework and loss function that preserves the semantic similarities of teacher and student training examples. We show through extensive experiments that our retrieval mechanism can achieve state-of-the-art performance for task-specific knowledge distillation on the GLUE benchmark. | # Reaugkd: Retrieval-Augmented Knowledge Distillation For Pre-Trained Language Models
Jianyi Zhang1, Aashiq Muhamed2, Aditya Anantharaman2**, Guoyin Wang**2, Changyou Chen3, Kai Zhong2, Qingjun Cui2, **Yi Xu**2, Belinda Zeng 2, Trishul Chilimbi 2, **Yiran Chen** 1
{jianyi.zhang,yiran.chen}@duke.edu, [email protected],
{muhaaash,aditanan,guoyiwan,kaizhong,qingjunc,zengb,trishulc}@amazon.com, 1 Duke University, 2 Amazon 3 University at Buffalo, SUNY
## Abstract
Knowledge Distillation (KD) (Hinton et al.,
2015) is one of the most effective approaches for deploying large-scale pre-trained language models in low-latency environments by transferring the knowledge contained in the largescale models to smaller student models. Previous KD approaches use the soft labels and intermediate activations generated by the teacher to transfer knowledge to the student model parameters alone. In this paper, we show that having access to non-parametric memory in the form of a knowledge base with the teacher's soft labels and predictions can further enhance student capacity and improve generalization. To enable the student to retrieve from the knowledge base effectively, we propose a new Retrieval-augmented KD framework with a loss function that aligns the relational knowledge in teacher and student embedding spaces. We show through extensive experiments that our retrieval mechanism can achieve state-of-the-art performance for taskspecific knowledge distillation on the GLUE
benchmark (Wang et al., 2018a).
## 1 Introduction
Large pre-trained language models, such as BERT
(Devlin et al., 2018), RoBERTa (Liu et al., 2019)
and Electra (Clark et al., 2020) have achieved significant success on several different NLP tasks
(Ding et al., 2019; Wang et al., 2018a) with finetuning. However, these models usually contain millions and billions of parameters, preventing their execution on resource-restricted devices. To deploy these models, Knowledge distillation (KD)
is an effective compression technique to derive a smaller student model from a larger teacher model by transferring the knowledge embedded in the teacher's network. Previous KD methods typically store knowledge in the student's parameters and train the student by minimizing divergence between the student's and teacher's output prediction and intermediate activation distributions (Park et al.,
2019; Zhang et al., 2018). However, the student's parametric memory is often limited and cannot be quickly expanded or revised. Moreover, after training, the teacher model's soft labels and activations, which contain essential task-specific knowledge, are not utilized by the student at inference time.
To address the issues mentioned above, we propose the *Retrieval-augmented Knowledge Distillation* (ReAugKD) framework. ReAugKD introduces a non-parametric external memory in addition to the implicit parametric memory of the model and uses kNN retrieval to retrieve from this memory. The key intuition of ReAugKD is to enhance the effective capacity of the student by using an external memory derived from relevant task-specific knowledge of the teacher. While this external memory could include any task-specific knowledge, in this work, it is composed of the soft labels and embeddings generated by the teacher model.
Our framework consists of an inference phase and a training phase. In the inference phase, we aggregate the soft labels of those teacher embeddings in our memory that are most similar to the student embedding. We demonstrate the efficacy of our framework by achieving state-of-the-art results on the GLUE benchmark (Wang et al., 2018a)
with less than 3% latency overhead over the baseline without retrieval augmentation. ReAugKD
also comprises a training phase, where we train the student to retrieve from the external memory effectively. We train with a novel relational KD loss that minimizes the divergence between teacher-teacher and teacher-student embedding distributions. We not only observe that training with this loss is necessary to align the student and teacher embedding spaces for retrieval but also that this loss improves student generalization even in the absence of retrieval augmentation. This suggests that incorporating the ability to retrieve information can significantly enhance generalization during the process of knowledge distillation.
In summary, our contributions include
- We propose ReAugKD, a novel framework for knowledge distillation that introduces a nonparametric memory to increase the effective student size. We show that retrieving from a memory composed of training set teacher predictions at inference time can significantly improve generalization on the GLUE tasks.
- To effectively retrieve from the non-parametric memory, we introduce a novel loss function that transfers the relational knowledge between teacherteacher embedding and teacher-student embedding distribution. This loss function improves student generalization even in the absence of retrieval augmentation at inference time.
- We study how accuracy and latency vary with the number of neighbors (k) retrieved in ReAugKD. ReAugKD with approximate kNN introduces a small overhead of less than 3% additional latency.
## 2 Related Work
Knowledge distillation KD can be broadly classified into task-specific KD, where the student model will be used for the same task as the teacher model (Mirzadeh et al., 2020; Jin et al., 2019; Zhang et al.,
2018; Sun et al., 2019) and task-agnostic KD where the student may be used for a different task, after finetuning on the new task (Jiao et al., 2019; Sun et al., 2020; Sanh et al., 2019; Wang et al., 2020; Zhang et al., 2018; Xu et al., 2019). In this work, we show that ReAugKD can be applied to enhance task-specific distillation as well as the finetuning of task-agnostic distilled models. Closest to our work is RKD (Park et al., 2019), which introduces a loss to transfer relational knowledge between the teacher-teacher embedding and student-student embedding distributions. Our work differs in that we transfer relational knowledge between the teacher-teacher embedding and teacher-student embedding distributions to enhance the student model's ability to retrieve from the external memory. MetaDistil (Zhou et al., 2022) is a strong task-specific distillation baseline that employs meta-learning to better transfer knowledge to the student. Unlike MetaDistil, we show that ReAugKD can significantly improve the student model's generalization without retraining the whole teacher with meta-learning.
Retrieval-augmented language models There has been growing interest in retrieval-augmented methods for Knowledge-Intensive generative NLP
Tasks, such as text generation and question answering (Weston et al., 2018; Lewis et al., 2020; Guu et al., 2020; Lin et al., 2022), where querying training examples during inference significantly improves likelihood. Closest to our work is BERTkNN (Kassner and Schütze, 2020) which combines BERT with a kNN search over a large datastore of an embedded text collection, to improve clozestyle QA. In our work, we apply retrieval augmentation to enhance the capacity of student models during KD, and show improvement even on nonknowledge intensive tasks like GLUE.
## 3 Methodology

## 3.1 Training Phase
Our framework consists of two main phases, the training phase and the inference phase. The training phase has two steps. In the first step, we prepare the teacher model for KD by adding a linear projection head L on the top of the teacher model encoder that has been finetuned for a specific downstream task. The input dimension of this projection head is the embedding dimension of the teacher. The output dimension is the embedding dimension of the student. We then freeze the other parameters of the teacher model and finetune the parameters in L with supervised contrastive loss (Khosla et al.,
2020). This step a) reduces the dimension of the teacher's embeddings to the student model dimension for retrieval, and b) uses the supervised contrastive loss to derive a kNN classifier for BERT that is robust to natural corruptions and hyperparameter settings (Li et al., 2021). Fine-tuning L also greatly reduces the computational cost compared to retraining the whole teacher model (Zhou et al., 2022).
In the second step, we perform KD by generating the teacher embeddings with L and the teacher soft labels with the original teacher's classifier head for a batch of data. Then, we use the loss function proposed in Section 3.2 to train our student model.
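As a rough illustration of the first step, the sketch below wraps a finetuned teacher encoder with the projection head L; the use of the [CLS] vector, a single `nn.Linear` layer, and a HuggingFace-style encoder output are our assumptions rather than details taken from the paper or its code.

```python
import torch.nn as nn

class TeacherWithProjection(nn.Module):
    """Frozen finetuned teacher encoder plus a trainable projection head L
    that maps teacher embeddings down to the student embedding dimension."""

    def __init__(self, teacher_encoder, teacher_dim, student_dim):
        super().__init__()
        self.encoder = teacher_encoder
        for p in self.encoder.parameters():
            p.requires_grad = False          # only the projection head is finetuned
        self.proj = nn.Linear(teacher_dim, student_dim)

    def forward(self, **inputs):
        # assume a HuggingFace-style encoder that returns last_hidden_state
        h = self.encoder(**inputs).last_hidden_state[:, 0]   # [CLS] representation
        return self.proj(h)                                   # projected embedding z_i
```

The projected embeddings are then trained with the supervised contrastive loss (see Appendix A.2) while the rest of the teacher stays frozen.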
## 3.2 Loss Function
We present some mathematical notations to introduce our loss function. Given a batch of data
{di}, i = 1, 2, · · · , N, where N is the batch size, we denote the embedding generated by the teacher's projection head as zi and the soft labels generated by the teacher's classifier as y¯i. Similarly, we adopt xi, yito denote the student's embeddings and predictions. Then we construct a probability distribution qi,j over each teacher's embeddings zj
to capture the similarity with respect to an anchor point $z_i$,

$$q_{i,j}={\frac{\exp{(z_{i}\cdot z_{j})/\tau}}{\sum_{k=1}^{N}\exp{(z_{i}\cdot z_{k})/\tau}}},\qquad(1)$$

where $\tau$ stands for the temperature. Note that $\sum_{j=1}^{N} q_{i,j} = 1$. $q_{i,j}$ reflects the cosine-distance relational knowledge among the different embeddings generated by the teacher model in the batch: if $z_j$ is closer to $z_i$ in cosine distance, $q_{i,j}$ will be larger.
Similarly, given a student's embedding xi as an anchor point, we formulate another probability distribution q¯i,j over each teacher's embeddings zj of the data in the batch.
$$\bar{q}_{i,j}=\frac{\exp{(x_{i}\cdot z_{j})/\tau}}{\sum_{k=1}^{N}\exp{(x_{i}\cdot z_{k})/\tau}}.\qquad(2)$$
The q¯i,j reflects the cosine distance relationship between different embeddings generated by the teacher model and the student's embedding. Our loss function aims to minimize the divergence of these two distributions q¯i,j and qi,j since the teacher model is a strong kNN classifier after finetuning with supervised contrastive loss function in the first step of our training. In the ideal case, given a student's embedding xi, the student retriever should retrieve the same set of embeddings as the corresponding teacher's embedding zi. We adopt KL
divergence to measure that divergence. In addition, we adopt the commonly-used cross-entropy loss to calculate the divergence between the student's prediction yi and the teacher's prediction y¯i.
Our loss function can be formulated as
$$CE(y_{i},\bar{y}_{i})+\alpha\,KL(q_{i,j},\bar{q}_{i,j}),\qquad(3)$$
where CE is the cross entropy loss and KL is KLdivergence. α is the hyperparameter controlling the trade-off between the two losses.
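To make the objective concrete, here is a minimal PyTorch sketch of the batch-level loss; the placement of the temperature inside the softmax, the soft cross-entropy form, and all variable names are our assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def reaugkd_loss(student_emb, student_logits, teacher_emb, teacher_soft_labels,
                 tau=0.07, alpha=1.0):
    """Sketch of the ReAugKD training objective for one batch.

    student_emb:         (N, d) student embeddings x_i
    student_logits:      (N, C) student classifier outputs y_i (pre-softmax)
    teacher_emb:         (N, d) projected teacher embeddings z_i
    teacher_soft_labels: (N, C) teacher probabilities ybar_i
    """
    # q_{i,j}: relational distribution over teacher embeddings (Eq. 1)
    q = F.softmax(teacher_emb @ teacher_emb.t() / tau, dim=-1)

    # qbar_{i,j}: student-to-teacher relational distribution (Eq. 2), in log space
    log_q_bar = F.log_softmax(student_emb @ teacher_emb.t() / tau, dim=-1)

    # KL term aligning the two relational distributions
    kl = F.kl_div(log_q_bar, q, reduction="batchmean")

    # soft cross entropy between student predictions and teacher soft labels
    ce = (-teacher_soft_labels * F.log_softmax(student_logits, dim=-1)).sum(-1).mean()

    return ce + alpha * kl          # Eq. 3
```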
## 3.3 Inference Phase
After training, we construct a knowledge base (KB)
comprising the projected teacher embeddings and predictions. Given new data $d_i$ at inference time, we obtain $(x_i, y_i)$ using the student model, and use the HNSW algorithm (Malkov and Yashunin, 2018) to retrieve the $K$ nearest teacher embeddings and their corresponding soft labels $\{(z_k, \bar{y}_k)\}_{k=1,2,\cdots,K}$ from the KB. Then we compute the weighted average of these soft labels $Avg(\{\bar{y}\})_i$ based on $\bar{q}_{i,k}$:

$$Avg(\{\bar{y}\})_{i}=\sum_{k=1}^{K}\frac{\bar{q}_{i,k}}{\sum_{k'=1}^{K}\bar{q}_{i,k'}}\,\bar{y}_{k}$$

We derive a new prediction $\bar{y}'_i$ for $d_i$ with $Avg(\{\bar{y}\})_i$:

$$\bar{y}_{i}^{\prime}=\beta\,\bar{y}_{i}+(1-\beta)\,Avg(\{\bar{y}\})_{i},$$

where $\beta$ is the hyperparameter controlling the trade-off between the two predictions.
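The sketch below illustrates one way this inference step can be realized with the `hnswlib` implementation of HNSW; the index parameters, the inner-product space, and the softmax temperature for the retrieval weights are assumptions on our part.

```python
import numpy as np
import hnswlib

def build_kb(teacher_embs, teacher_soft_labels):
    """Build the knowledge base from (N, d) teacher embeddings and (N, C) soft labels."""
    n, d = teacher_embs.shape
    index = hnswlib.Index(space="ip", dim=d)            # approximate kNN in inner-product space
    index.init_index(max_elements=n, ef_construction=200, M=16)
    index.add_items(teacher_embs, np.arange(n))
    index.set_ef(64)
    return index, teacher_embs, teacher_soft_labels

def retrieve_and_predict(student_emb, student_probs, kb, k=10, tau=0.07, beta=0.5):
    """Combine the student prediction with the weighted average of retrieved soft labels."""
    index, teacher_embs, teacher_soft_labels = kb
    ids, _ = index.knn_query(student_emb[None, :], k=k)
    ids = ids[0]
    # qbar_{i,k} weights over the K retrieved neighbors
    sims = teacher_embs[ids] @ student_emb / tau
    w = np.exp(sims - sims.max())
    w = w / w.sum()
    avg_soft_label = (w[:, None] * teacher_soft_labels[ids]).sum(axis=0)
    # interpolate with the student's own prediction using beta
    return beta * student_probs + (1.0 - beta) * avg_soft_label
```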
## 4 Experimental Results
We apply our method to distill BERT-Base (Devlin et al., 2018) into a 6-layer BERT with a hidden size of 768. We evaluate our proposed approach, ReAugKD, on the GLUE benchmark (Wang et al., 2018a). These datasets can be broadly divided into three families of problems: single-sentence tasks that include linguistic acceptability (CoLA) and sentiment analysis (SST-2); similarity and paraphrasing tasks (MRPC and QQP); inference tasks that include Natural Language Inference (MNLI and RTE); and Question Answering (QNLI). We compare our method with vanilla KD (Hinton et al., 2015), TAKD (Mirzadeh et al., 2020), RCO (Jin et al., 2019), RKD (Park et al., 2019), DML (Zhang et al., 2018), PKD (Sun et al., 2019), ProKT (Shi et al., 2020), SFTN (Park et al., 2021) and MetaDistil (Zhou et al., 2022). Following a similar setting as MetaDistil, we perform a grid search over the weight of the KD loss from {0.9, 0.99}, the prediction weight β from {0, 0.1, ..., 1} and the top-k from 1 to 20. We set the student learning rate to 2e-5 and the batch size to 64.

Table 1: Results on the development sets of six GLUE tasks (training set sizes in parentheses).

| Method | #Param | CoLA (8.5k) | QNLI (105k) | QQP (364k) | RTE (2.5k) | SST-2 (67k) | MRPC (3.7k) | Avg |
|---|---|---|---|---|---|---|---|---|
| BERT-Base (teacher) (Devlin et al., 2018) | 110M | 58.9 | 91.2 | 91.4 | 71.4 | 93.0 | 87.6 | 82.25 |
| BERT-6L (student) (Turc et al., 2019) | 66M | 53.5 | 88.6 | 90.4 | 67.9 | 91.1 | 84.4 | 79.32 |
| Task-specific Distillation | | | | | | | | |
| KD (Hinton et al., 2015) | 66M | 54.1 | 89.2 | 90.9 | 67.7 | 91.2 | 85.2 | 79.72 |
| PKD (Sun et al., 2019) | 66M | 54.5 | 89.5 | 90.9 | 67.6 | 91.3 | 84.7 | 79.75 |
| TinyBERT w/o DA (Jiao et al., 2019) | 66M | 52.4 | 89.8 | 90.6 | 67.7 | 91.9 | 86.5 | 79.82 |
| RCO (Jin et al., 2019) | 66M | 53.6 | 89.7 | 90.6 | 67.6 | 91.4 | 85.1 | 79.67 |
| TAKD (Mirzadeh et al., 2020) | 66M | 53.8 | 89.6 | 90.7 | 68.5 | 91.4 | 85.0 | 79.83 |
| RKD (Park et al., 2019) | 66M | 53.4 | 89.5 | 90.9 | 68.6 | 91.7 | 86.1 | 80.03 |
| DML (Zhang et al., 2018) | 66M | 53.7 | 89.6 | 90.3 | 68.4 | 91.5 | 85.1 | 79.77 |
| ProKT (Shi et al., 2020) | 66M | 54.3 | 89.7 | 90.9 | 68.4 | 91.3 | 86.3 | 80.15 |
| SFTN (Park et al., 2021) | 66M | 53.6 | 89.5 | 90.4 | 68.5 | 91.5 | 85.3 | 79.80 |
| MetaDistil (Zhou et al., 2022) | 66M | 58.6 | 90.4 | 91.0 | 69.4 | 92.3 | **86.8** | 81.42 |
| ReAugKD (ours) | 66M | **59.4** | **90.7** | **91.24** | **70.39** | **92.5** | 86.3 | **81.76** |
| ReAugKD w/o retrieval | 66M | 59.1 | 90.6 | 91.21 | 69.31 | 92.3 | 85.8 | 81.39 |
Experimental Results on GLUE We report the experimental results on the development set of the six GLUE tasks in Table 1. Notably, our method achieves state-of-the-art results on five out of the six datasets with an average improvement of 0.34% over the previous best KD method MetaDistil (Zhou et al., 2022). Although MetaDistil achieves slightly better performance on the MRPC dataset, our method has the advantage of not needing to conduct meta-learning on the whole large teacher model, which significantly increases extra training cost in terms of time and memory (Zhou et al., 2022). In addition, we also observe a performance gain of 0.37% with the retrieval component of ReAugKD as compared to ReAugKD without retrieval, which verifies the benefit of retrieval augmentation in our approach. Even without the retrieval process, the student model trained by our designed loss can still achieve comparable performance to MetaDistil on most datasets. Since our loss is designed to improve the student retrieval function, this demonstrates the importance of retrieval capability in KD.
Number of Neighbors Retrieved (k) To understand the time overhead of retrieval on the student model's inference time, we investigate the performance and additional time overhead of the retrieval process while varying the number of neighbors retrieved (k) in Table 2. From the results, it is clear that retrieval improves the student model performance with an additional time overhead of less than 3% of the original inference time. The retrieval process is conducted only on CPU, and does not take up GPU resources during training.
Table 2: Analysis of the sensitivity of top k on model performance and retrieval time.

| Method | QNLI (acc) | QNLI (time) | SST-2 (acc) | SST-2 (time) | CoLA (mcc) | CoLA (time) |
|---|---|---|---|---|---|---|
| ReAugKD w/o Retrieval | 90.6 | 45.70s | 92.3 | 7.80s | 59.1 | 8.67s |
| ReAugKD (k=5) | 90.72 | +1.31s | 92.43 | +0.199s | 58.87 | +0.143s |
| ReAugKD (k=10) | 90.70 | +1.32s | 92.54 | +0.201s | 59.39 | +0.147s |
| ReAugKD (k=15) | 90.74 | +1.33s | 92.54 | +0.202s | 59.35 | +0.147s |
| ReAugKD (k=20) | 90.72 | +1.33s | 92.43 | +0.204s | 59.37 | +0.148s |

## 5 Conclusion
In this paper, we present ReAugKD, a knowledge distillation framework with a retrieval mechanism that shows state-of-the-art performance on the GLUE benchmark. In the future, we plan to expand the knowledge base with more information from the teacher and extend it to additional tasks.
## Limitations

Our method relies on having access to teacher embeddings and predictions, which may not always be possible in a black-box distillation setting. Retrieval augmentation also requires maintaining a knowledge base that is memory intensive.
The cost of the retrieval process is dependent on the size of the training corpus, which can be a limitation when dealing with very large training datasets.
Conducting dataset distillation (Wang et al., 2018b)
on the training corpus to further reduce memory cost and retrieval time is an important future step for our framework.
## Acknowledgments

This work was done when Jianyi Zhang was an intern at Amazon Search. In addition, Jianyi Zhang and Yiran Chen disclose support from grants CNS-2112562, IIS-2140247, and CNS-1822085. We thank Yuchen Bian for the valuable discussion and thank all reviewers for their valuable comments.
## References
Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators.
arXiv preprint arXiv:2003.10555.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.
Ming Ding, Chang Zhou, Qibin Chen, Hongxia Yang, and Jie Tang. 2019. Cognitive graph for multihop reading comprehension at scale. arXiv preprint arXiv:1905.05460.
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. 2020. Retrieval augmented language model pre-training. In *International Conference on Machine Learning*, pages 3929–3938.
PMLR.
Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. 2015.
Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2(7).
Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2019.
Tinybert: Distilling bert for natural language understanding. *arXiv preprint arXiv:1909.10351*.
Xiao Jin, Baoyun Peng, Yichao Wu, Yu Liu, Jiaheng Liu, Ding Liang, Junjie Yan, and Xiaolin Hu. 2019.
Knowledge distillation via route constrained optimization. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pages 1345–
1354.
Nora Kassner and Hinrich Schütze. 2020. Bertknn: Adding a knn search component to pretrained language models for better qa. *arXiv preprint* arXiv:2005.00766.
Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. 2020. Supervised contrastive learning. *Advances in Neural* Information Processing Systems, 33:18661–18673.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. *Advances in Neural Information Processing Systems*, 33:9459–9474.
Linyang Li, Demin Song, Ruotian Ma, Xipeng Qiu, and Xuanjing Huang. 2021. Knn-bert: fine-tuning pretrained models with knn classifier. arXiv preprint arXiv:2110.02523.
Bill Yuchen Lin, Kangmin Tan, Chris Miller, Beiwen Tian, and Xiang Ren. 2022. Unsupervised crosstask generalization via retrieval augmentation. arXiv preprint arXiv:2204.07937.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Yu A Malkov and Dmitry A Yashunin. 2018. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. IEEE
transactions on pattern analysis and machine intelligence, 42(4):824–836.
Seyed Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, Nir Levine, Akihiro Matsukawa, and Hassan Ghasemzadeh. 2020. Improved knowledge distillation via teacher assistant. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 5191–5198.
Dae Young Park, Moon-Hyun Cha, Daesin Kim, Bohyung Han, et al. 2021. Learning student-friendly teacher networks for knowledge distillation. *Advances in Neural Information Processing Systems*,
34:13292–13303.
Wonpyo Park, Dongju Kim, Yan Lu, and Minsu Cho.
2019. Relational knowledge distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3967–3976.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. *arXiv* preprint arXiv:1910.01108.
Wenxian Shi, Yuxuan Song, Hao Zhou, Bohan Li, and Lei Li. 2020. Learning from deep model via exploring local targets.
Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019.
Patient knowledge distillation for bert model compression. *arXiv preprint arXiv:1908.09355*.
Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 2020. Mobilebert: a compact task-agnostic bert for resource-limited devices. *arXiv preprint arXiv:2004.02984*.
Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Well-read students learn better:
On the importance of pre-training compact models.
arXiv preprint arXiv:1908.08962.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018a.
Glue: A multi-task benchmark and analysis platform for natural language understanding. *arXiv preprint* arXiv:1804.07461.
Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba, and Alexei A Efros. 2018b. Dataset distillation. arXiv preprint arXiv:1811.10959.
Wenhui Wang, Hangbo Bao, Shaohan Huang, Li Dong, and Furu Wei. 2020. Minilmv2: Multi-head self-attention relation distillation for compressing pretrained transformers. *arXiv preprint* arXiv:2012.15828.
Jason Weston, Emily Dinan, and Alexander H Miller.
2018. Retrieve and refine: Improved sequence generation models for dialogue. arXiv preprint arXiv:1808.04776.
Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Wenrui Dai, Yingyong Qi, Yiran Chen, Weiyao Lin, and Hongkai Xiong. 2019. Trained rank pruning for efficient deep neural networks. In *2019 Fifth Workshop on Energy Efficient Machine Learning and Cognitive Computing - NeurIPS Edition (EMC2-NIPS)*,
pages 14–17.
Ying Zhang, Tao Xiang, Timothy M Hospedales, and Huchuan Lu. 2018. Deep mutual learning. In *Proceedings of the IEEE conference on computer vision* and pattern recognition, pages 4320–4328.
Wangchunshu Zhou, Canwen Xu, and Julian McAuley.
2022. Bert learns to teach: Knowledge distillation with meta learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7037–
7049.
## A Appendix
## A.1 ReAugKD With Task-Agnostic Distillation

Table 3: Results of our method improving finetuned task performance of MiniLMv2.

| Model | #Param | QNLI | QQP | RTE | SST-2 | MRPC | MNLI-m | CoLA | Avg |
|---|---|---|---|---|---|---|---|---|---|
| Teacher Model (24 × 1024 RoBERTa-large (Liu et al., 2019)) | | | | | | | | | |
| RoBERTa-large | 354M | 94.7 | 92.2 | 86.6 | 96.4 | 90.9 | 90.2 | 68 | 88.43 |
| Distilled Student Model (6×768 MiniLMv2) | | | | | | | | | |
| Pretraining Distillation | 81M | 92.7 | 91.4 | 78.7 | 94.5 | 90.4 | 87.0 | 54.0 | 83.8 |
| ReAugKD | 81M | 93.1 | 91.9 | 80.5 | 95.0 | 90.2 | 88.5 | 57.9 | 85.30 |
| ReAugKD w/o Retrieval | 81M | 93.0 | 91.8 | 79.8 | 94.9 | 90.2 | 88.3 | 57.2 | 85.02 |

Previous results have demonstrated the effectiveness of our method for task-specific distillation. Our method can further improve the finetuned performance of task-agnostic distilled models. We adopt RoBERTa-large as the teacher model and MiniLMv2 as the student model to verify the effectiveness of our method. Our method can achieve around 2% improvement in performance.
## A.2 Details About Training Teacher Model's Projection Head
We adopt the $L^{sup}_{out}$ version of the loss function in (Khosla et al., 2020) to finetune the parameters of the projection head, which is

$$L_{out}^{sup}=-\sum_{i=1}^{N}\frac{1}{N}\sum_{j\in P(i)}\log\frac{\exp{(z_{i}\cdot z_{j})/\tau}}{\sum_{k=1}^{N}\exp{(z_{i}\cdot z_{k})/\tau}}.\qquad(4)$$
Here, there are $N$ data samples $d_i$ in the batch and we denote the embedding generated by the teacher's projection head for the $i$-th data sample $d_i$ as $z_i$. $P(i)$ represents the set of all the positive data samples for $d_i$. Data samples from the same class are considered positive pairs and data samples from different classes are considered negative pairs. Regarding the use of data augmentation in training the projection head, we chose not to adopt data augmentation as we found that using the supervised contrastive loss without data augmentation was sufficient to achieve results comparable to the cross-entropy loss used in supervised learning. We use the AdamW optimizer with a learning rate of 0.00002. The batch size was set to 512, and the temperature for the supervised contrastive loss (SCL) was set to 0.07. We trained the model for 3 epochs.
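A compact PyTorch sketch of this supervised contrastive objective over a batch of projected embeddings is given below; normalizing the embeddings, masking the anchor out of its own denominator, and averaging over each anchor's positives follow the common $L^{sup}_{out}$ formulation of Khosla et al. (2020) rather than being taken verbatim from the equation above.

```python
import torch
import torch.nn.functional as F

def sup_con_loss(z, labels, tau=0.07):
    """Supervised contrastive loss over projected teacher embeddings.

    z:      (N, d) embeddings produced by the projection head
    labels: (N,)   class labels that define the positive pairs
    """
    z = F.normalize(z, dim=-1)
    logits = z @ z.t() / tau                                   # pairwise similarities
    mask_self = torch.eye(len(z), dtype=torch.bool, device=z.device)

    # log-softmax whose denominator excludes the anchor itself
    exp_logits = logits.exp().masked_fill(mask_self, 0.0)
    log_prob = logits - exp_logits.sum(dim=-1, keepdim=True).log()

    # positives share the anchor's label (excluding the anchor itself)
    pos = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~mask_self
    n_pos = pos.sum(dim=-1).clamp(min=1)
    loss = -(log_prob * pos.float()).sum(dim=-1) / n_pos
    return loss.mean()
```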
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
It lies before the reference.
✗ A2. Did you discuss any potential risks of your work?
We think our work will not have any potential risk.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Section 3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 3
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 3
✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
The packages we used are confidential due to our company's policy.

## D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
xia-etal-2023-debiasing | Debiasing Generative Named Entity Recognition by Calibrating Sequence Likelihood | https://aclanthology.org/2023.acl-short.98 | Recognizing flat, overlapped and discontinuous entities uniformly has been paid increasing attention. Among these works, Seq2Seq formulation prevails for its flexibility and effectiveness. It arranges the output entities into a specific target sequence. However, it introduces bias by assigning all the probability mass to the observed sequence. To alleviate the bias, previous works either augment the data with possible sequences or resort to other formulations. In this paper, we stick to the Seq2Seq formulation and propose a reranking-based approach. It redistributes the likelihood among candidate sequences depending on their performance via a contrastive loss. Extensive experiments show that our simple yet effective method consistently boosts the baseline, and yields competitive or better results compared with the state-of-the-art methods on 8 widely-used datasets for Named Entity Recognition. |
## Debiasing Generative Named Entity Recognition By Calibrating Sequence Likelihood
Yu Xia1, Yongwei Zhao2, Wenhao Wu1**, Sujian Li**1 1Key Laboratory of Computational Linguistics, Peking University, MOE, China 2SKL of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
{yuxia, waynewu, lisujian}@pku.edu.cn, [email protected]
## Abstract
Recognizing flat, overlapped and discontinuous entities uniformly has been paid increasing attention. Among these works, Seq2Seq formulation prevails for its flexibility and effectiveness. It arranges the output entities into a specific target sequence. However, it introduces bias by assigning all the probability mass to the observed sequence. To alleviate the bias, previous works either augment the data with possible sequences or resort to other formulations. In this paper, we stick to the Seq2Seq formulation and propose a reranking-based approach. It redistributes the likelihood among candidate sequences depending on their performance via a contrastive loss. Extensive experiments show that our simple yet effective method consistently boosts the baseline, and yields competitive or better results compared with the state-of-the-art methods on 8 widelyused datasets for Named Entity Recognition.
## 1 Introduction
Recently, recognizing flat, overlapped and discontinuous entities in a unified manner has been paid increasing attention. Among the existing works for unified Named Entity Recognition (NER), Seq2Seq formulation prevails for its flexibility and effectiveness in unified modeling (Yan et al., 2021; Lu et al.,
2022; Ye et al., 2022). Typically, it arranges the output entities into a fixed order to form a target sequence, and trains the generative model by maximum likelihood estimation (MLE).
However, this estimation introduces bias by assuming a deterministic target distribution, where the model learns to assign all the probability mass to the observed target sequence. The biased estimation hurts the performance during decoding where predicted sequence likelihoods often do not accurately rank the performance of the generated sequences. To alleviate the bias, (Zhang et al.,
2022) propose two data augmentation methods that sample possible sequences from the target space.
| topK/B | CoNLL03 | OntoNotes5.0 | ACE04 | ACE05 |
|---|---|---|---|---|
| 1/5 | 93.14 | 90.27 | 86.85 | 84.76 |
| 5/5 | 96.58 | 96.43 | 93.14 | 92.26 |
| 10/10 | 97.20 | 97.09 | 94.38 | 93.24 |

| topK/B | GENIA | CADEC | ShARe13 | ShARe14 |
|---|---|---|---|---|
| 1/5 | 78.93 | 70.53 | 79.69 | 80.35 |
| 5/5 | 89.66 | 81.17 | 89.36 | 90.68 |
| 10/10 | 91.64 | 83.01 | 91.11 | 91.87 |
Table 1: Oracle F1, *i.e.*, maximum F1 over topK candidates, on NER datasets based on BARTNER (Yan et al.,
2021). topK/B denotes picking topK candidates out of candidates generated by beam search with beam size B.
Others resort to other formulations, *e.g.*, W2NER
(Li et al., 2022) reformulates NER as a word-word relation classification. In this study, we stick to the Seq2Seq formulation and explore how to mitigate the bias from another perspective orthogonal to (Zhang et al., 2022).
Beam search decoding algorithms maintain B
candidates in descending order of likelihood and output the top one. However, the remaining candidates could contain predictions with better performance. We measure this phenomenon with oracle scores. As shown in Table 1, the beam candidates contain predictions with up to 8.1 points higher F1 than the outputted one, averaged over eight datasets. Doubling the beam size further increases the advantage to 9.38 points.
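As a clarification of how such oracle scores can be obtained, the sketch below picks the best candidate per sentence and scores the picked predictions corpus-wide; representing predictions as sets of (span, type) tuples and the exact selection protocol are our assumptions, not the paper's evaluation script.

```python
def oracle_micro_f1(all_candidates, all_gold, top_k, sent_f1):
    """Oracle micro-F1 over the top-k beam candidates (a sketch).

    all_candidates: list of beam lists; each candidate is a set of (span, type) tuples
    all_gold:       list of gold entity sets
    sent_f1:        sentence-level F1 used only to pick the best candidate
    """
    tp = n_pred = n_gold = 0
    for cands, gold in zip(all_candidates, all_gold):
        best = max(cands[:top_k], key=lambda c: sent_f1(c, gold))
        tp += len(best & gold)
        n_pred += len(best)
        n_gold += len(gold)
    p = tp / n_pred if n_pred else 0.0
    r = tp / n_gold if n_gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0
```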
Recently, reranking-based methods proposed for the abstractive summarization task offer a potential technique (Liu and Liu, 2021; Ravaut et al., 2022).
They train a discriminator on the candidates to predict a score for picking out the best candidate. For example, SimCLS (Liu and Liu, 2021) regards the cosine similarity between the input and candidate representations as the score. However, when applying reranking-based methods to our task, we find a challenge originating from the nature of information extraction. Candidates of the same input share most of the words and the discriminators trained from scratch have difficulty differentiating them
| Ground Truth | PER | PER | |
|--------------------------------------------------------|------------|----------|-----------------------------------|
| Candidates | Likelihood | F1 score | Calibrated Likelihood 60.5% 99.7% |
| People PER international LOC international friends PER | 92.8% | 0.8 | |
| People PER international friends PER | 84.0% | 1.0 | |
| People PER international PER international friends PER | 74.5% | 0.8 | 37.7% 43.8% |
| People PER international GPE international friends PER | 51.2% | 0.8 | |
| People PER international international friends PER | 30.0% | 0.7 | 33.5% |
Figure 1: Illustration of Sequence Likelihood Calibration. After guiding the estimated sequence likelihood by F1 score, the likelihood is more consistent with the F1 score. More cases can be found in Appendix 8.
(detailed in Sec. 3.3).
To address the above issue, we propose RerankNER to debias generative NER based on a reranking framework adapted for the task. Specifically, we first train the generative model in the standard way, resulting in a biased model. Then, we generate several candidates for each input with beam search. Instead of training a separate discriminator on the candidates sharing most of the words, we calibrate the generative model with a contrastive loss defined on the candidates. The contrastive loss aims to make the estimated sequence likelihoods consistent with their relative task performance as shown in Figure 1. This objective softens the target distribution and thus alleviates the bias.
Our contributions are summarized as follows:
1. To the best of our knowledge, we are the first to explore reranking-based methods in the field of generative information extraction (Ye et al., 2022).
2. We propose a method for generative NER tackling the bias problem.
3. Experimental results show that our method consistently boosts the baseline, and yields competitive results compared with the stateof-the-art methods on 8 widely-used datasets for NER.
## 2 Method

## 2.1 Task Formulation
We unify three NER subtasks (i.e. the flat, overlapped, and discontinuous NER) as follows.
Given an input sentence of $n$ tokens $X = x_1 x_2 \ldots x_n$, the $m$ output entities are arranged into a target sequence $Y = E_1 E_2 \ldots E_m$, $E_i = y_i^1 y_i^2 \ldots y_i^{j-1} y_i^j l_i$, where $y_i^1, \ldots, y_i^j$ denote the tokens of the $i$-th entity and $l_i$ denotes the label of the $i$-th entity. Our goal is to model the conditional probability $P(Y|X)$, which is factorized auto-regressively into $\prod_{t=0}^{|Y|} P(y_t|X, Y_{<t})$.
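As a toy illustration of this linearization (written over entity tokens for readability; BARTNER itself generates word position indices, cf. Appendix A.2), consider the following sketch:

```python
def linearize_entities(entities):
    """Arrange entities into the target sequence Y = E_1 E_2 ... E_m, where each
    E_i consists of the entity tokens y_i^1 ... y_i^j followed by its label l_i.

    entities: list of (tokens, label) pairs in a fixed order.
    """
    target = []
    for tokens, label in entities:
        target.extend(tokens)
        target.append(label)
    return target

# linearize_entities([(["People"], "PER"), (["international", "friends"], "PER")])
# -> ["People", "PER", "international", "friends", "PER"]
```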
## 2.2 Overview
Given a generative NER model trained on the target sequences with the standard MLE, we perform sequence likelihood calibration to alleviate the bias.
First, we generate several candidates for each input with beam search and evaluate their task performance (F1 score is used). Then, we continue training the model with the contrastive loss to make the estimated sequence likelihoods consistent with their relative task performance. Finally, we generate the answer with the standard beam search by the calibrated model.
## 2.3 Sequence Likelihood Calibration
The contrastive loss depicted in Figure 2 is composed of three terms: $\mathcal{L}_{\mathrm{MLE}}$, $\mathcal{L}_{\mathrm{Rank}}$, and $\mathcal{L}_{\mathrm{Gold}}$.
LMLE is identical to the standard MLE used in the first training stage. It maintains the generating ability of the model during the calibration process.
LMLE maximizes the sequence likelihood of the gold target sequence Y , where the sequence likelihood is calculated as the product of token-level likelihood:
$$\begin{array}{l}{{{\mathcal{L}}_{\mathrm{MLE}}=-S(Y)}}\\ {{S(Y)=\sum_{t}\log P_{\theta}(y_{t}|X,Y_{<t})}}\end{array}$$
and θ denotes model parameters.
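For reference, $S(Y)$ can be computed as in the sketch below under a HuggingFace-style seq2seq model; the shifting convention and padding handling are our assumptions.

```python
import torch

def sequence_score(model, input_ids, target_ids, pad_id):
    """S(Y): summed token-level log-likelihood of a target sequence (gold or candidate).

    input_ids:  (1, S) encoder input
    target_ids: (1, T) target token ids, including BOS/EOS
    """
    logits = model(input_ids=input_ids, decoder_input_ids=target_ids[:, :-1]).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    tok_lp = log_probs.gather(-1, target_ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    mask = (target_ids[:, 1:] != pad_id).float()
    return (tok_lp * mask).sum()
```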
LRank improves the consistency between the estimated sequence likelihoods and the task performance of the candidate sequences. We adopt the margin ranking loss (Hopkins and May, 2011) for this term, *i.e.*,
$${\mathcal{L}}_{\mathrm{Rank}}=\sum_{i,j}\operatorname*{max}\left(0,S({\hat{Y}}_{j})-S({\hat{Y}}_{i})+\lambda\right)$$
where Yˆi, Yˆj is a pair of candidates generated by beam search, provided that Yˆi has a higher F1 score than Yˆj . λ denotes the margin, a hyper-parameter.
Apart from the supervision of relative order in the candidates, we utilize the supervision of the gold sequence as well. LGold ensures the sequence likelihoods of the generated candidates do not overstep the likelihood of the gold.
$${\mathcal{L}}_{\mathrm{Gold}}=\sum_{i}\operatorname*{max}\left(0,S({\hat{Y}}_{i})-S(Y)+\lambda\right)$$
where Yˆi denotes a candidate sequence, provided that it is not an equivalent of the gold.
The contrastive loss is the sum of the terms:
$${\mathcal{L}}={\mathcal{L}}_{\mathrm{MLE}}+\alpha{\mathcal{L}}_{\mathrm{Rank}}+\bar{\alpha}{\mathcal{L}}_{\mathrm{Gold}}$$
where α and α¯ are coefficients.
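A simplified PyTorch sketch of the combined calibration objective for one example and its beam candidates follows; the placeholder values of α, ᾱ and λ, and the use of F1 < 1 as a proxy for "not an equivalent of the gold", are our assumptions.

```python
import torch

def calibration_loss(gold_score, cand_scores, cand_f1,
                     alpha=1.0, alpha_bar=1.0, margin=1e-3):
    """Sketch of L = L_MLE + alpha * L_Rank + alpha_bar * L_Gold.

    gold_score:  scalar tensor S(Y) of the gold target sequence
    cand_scores: (B,) tensor of S(Y_hat_i) for the beam candidates
    cand_f1:     (B,) tensor of sentence-level F1 scores of the candidates
    """
    l_mle = -gold_score

    # L_Rank: candidate i with a higher F1 than candidate j should score higher
    better = (cand_f1.unsqueeze(1) - cand_f1.unsqueeze(0)) > 0         # [i, j]: F1_i > F1_j
    diff_s = cand_scores.unsqueeze(0) - cand_scores.unsqueeze(1)       # [i, j]: S_j - S_i
    l_rank = torch.clamp(diff_s + margin, min=0.0)[better].sum()

    # L_Gold: no non-gold candidate may score higher than the gold sequence
    not_gold = cand_f1 < 1.0
    l_gold = torch.clamp(cand_scores - gold_score + margin, min=0.0)[not_gold].sum()

    return l_mle + alpha * l_rank + alpha_bar * l_gold
```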
## 3 Experiments

## 3.1 Main Results
We conduct experiments on eight datasets of three NER subtasks in total. Precision (P), Recall (R)
and Micro F1 score (F1) are reported as previous works. We use BART-large as our backbone. For fair comparison, we reproduce BARTNER (Yan et al., 2021) using the public code 1and get similar results reported in the paper. We compare our model principally with SOTA generative NER models, including (Yan et al., 2021; Zhang et al., 2022; Lu et al., 2022). Performances of SOTA discriminative NER models (Li et al., 2022) are also listed for reference. Refer to Appendix A for more details.
1https://github.com/yhcc/BARTNER/
The results for flat, overlapped and discontinuous NER are shown in Table 2, Table 3 and Table 4, respectively. On eight datasets, our proposed sequence calibration consistently boosts the baseline. It achieves SOTA performance among the generative methods. Note that our method gets competitive results even compared with discriminative methods that use extra embeddings and domain-pretrained models, which shows the potential of generative models.
## 3.2 Analysis Of Improvement
We manually analyze the predictions corrected by the calibration. Apart from reranking the correct candidate to the top beam, RerankNER can generate new candidates with boundary or type corrected.
More cases can be found in Appendix B.
In addition to manually observing examples, we also quantitatively analyze the sources of gain.
We find that the gain mostly comes from samples with low likelihood, which means sequence likelihood calibration is more effective for samples with higher difficulty. Specifically, we group the samples in the test set into ten groups according to their original sequence likelihood and evaluate their performance before (colored in orange) and after (colored in blue) calibration. It can be seen from Figure 3 that the F1 scores of most groups get improved after calibration, and the improvement is greater for samples with lower likelihoods.
We also conduct the hit@top-k evaluation.
Specifically, we iterate over the test samples and increase the number of hits when a gold answer exists among the top-k candidates. Table 5 shows that calibration slightly increases the hit@top-k across various datasets.
## 3.3 Variants Of Reranker
As stated in Section 1, we observe that previous methods have difficulty capturing the subtle nuance among the candidates. We have investigated three variants: (1) SimCLS (Liu and Liu, 2021). (2) SimCLS with our modification which concatenates the input and the candidate representation and projects it to a score to replace the cosine similarity. (3)
Picking out the best candidate based on the estimated likelihood of our model. Overall, we find that their training losses fluctuate and their performance is consistently lower than the baseline, which selects the top beam with the highest likelihood. Future work could investigate this phenomenon in more depth.
Table 2: Results on flat NER datasets. 1 means using extra embedding (*e.g.* character embedding and POS embedding). 2 means using extra context. 3 means reproduction from (Yan et al., 2021).

| Model | CoNLL03 P | CoNLL03 R | CoNLL03 F1 | OntoNotes5.0 P | OntoNotes5.0 R | OntoNotes5.0 F1 |
|---|---|---|---|---|---|---|
| Discriminative | | | | | | |
| (Akbik et al., 2019) 1 [BERT-Large] | - | - | 92.86 | - | - | - |
| (Li et al., 2020) 3 [BERT-Large] | 92.47 | 93.27 | 92.87 | 91.34 | 88.39 | 89.84 |
| (Shen et al., 2021) 2 [BERT-Large] | 92.13 | 93.73 | 92.94 | - | - | - |
| (Wang et al., 2021a) 1 [BERT-Large] | - | - | 93.21 | - | - | - |
| (Li et al., 2022) [BERT-Large] | 92.71 | 93.44 | 93.07 | 90.03 | 90.97 | 90.50 |
| Generative | | | | | | |
| (Straková et al., 2019) 1 [BERT-Large] | - | - | 93.07 | - | - | - |
| (Zhang et al., 2022) [T5-Base] | 92.78 | 93.51 | 93.14 | 89.77 | 91.07 | 90.42 |
| (Lu et al., 2022) [UIE (T5-Large)] | - | - | 92.99 | - | - | - |
| (Yan et al., 2021) [BART-Large] | 92.61 | 93.87 | 93.24 | 89.99 | 90.77 | 90.38 |
| Ours [BART-Large] | 93.26 | 93.69 | 93.48 | 90.03 | 91.24 | 90.63 |

Table 3: Results on overlapped NER datasets. 1 means using extra embedding. 2 means using extra context. 3 means using domain pretrained model (*e.g.* ClinicalBERT and BioBERT). 4 means reproduction from (Yan et al., 2021).

| Model | ACE04 P | ACE04 R | ACE04 F1 | ACE05 P | ACE05 R | ACE05 F1 | Genia P | Genia R | Genia F1 |
|---|---|---|---|---|---|---|---|---|---|
| Discriminative | | | | | | | | | |
| (Yu et al., 2020) 2 [BERT-Large] | 87.3 | 86.0 | 86.7 | 85.2 | 85.6 | 85.4 | 81.8 | 79.3 | 80.5 |
| (Li et al., 2020) 4 [BERT-Large] | 85.83 | 85.77 | 85.80 | 85.01 | 84.13 | 84.57 | 81.25 | 76.36 | 78.72 |
| (Xu et al., 2021) [BERT-Large] | 86.9 | 85.8 | 86.3 | 85.7 | 85.2 | 85.4 | 80.3 | 78.9 | 79.6 |
| (Shen et al., 2021) 2 [BERT-Large] | 87.44 | 87.38 | 87.41 | 86.09 | 87.27 | 86.67 | 80.19 | 80.89 | 80.54 |
| (Li et al., 2022) 3 [BERT-Large] | 87.33 | 87.71 | 87.52 | 85.03 | 88.62 | 86.79 | 83.10 | 79.76 | 81.39 |
| Generative | | | | | | | | | |
| (Straková et al., 2019) [BERT-Large] | - | - | 84.40 | - | - | 84.33 | - | - | 78.31 |
| (Zhang et al., 2022) [T5-Base] | 86.36 | 84.54 | 85.44 | 82.92 | 87.05 | 84.93 | 81.04 | 77.21 | 79.08 |
| (Lu et al., 2022) [UIE (T5-Large)] | - | - | 86.89 | - | - | 85.78 | - | - | - |
| (Yan et al., 2021) [BART-Large] | 87.27 | 86.41 | 86.84 | 83.16 | 86.38 | 84.74 | 78.57 | 79.3 | 78.93 |
| Ours [BART-Large] | 87.64 | 87.61 | 87.63 | 85.01 | 87.47 | 86.22 | 79.51 | 79.48 | 79.49 |

Table 4: Results on discontinuous NER datasets. 1 means using domain pretrained model (*e.g.* ClinicalBERT and BioBERT).

| Model | CADEC P | CADEC R | CADEC F1 | ShARe13 P | ShARe13 R | ShARe13 F1 | ShARe14 P | ShARe14 R | ShARe14 F1 |
|---|---|---|---|---|---|---|---|---|---|
| Discriminative | | | | | | | | | |
| (Tang et al., 2018) | 67.80 | 64.99 | 66.36 | - | - | - | - | - | - |
| (Dai et al., 2020) [ELMO] | 68.90 | 69.00 | 69.00 | 80.50 | 75.00 | 77.70 | 78.10 | 81.20 | 79.60 |
| (Li et al., 2020) [BERT-large] | - | - | 69.90 | - | - | 82.50 | - | - | - |
| (Wang et al., 2021b) 1 [BERT-Large] | 70.50 | 72.50 | 71.50 | 84.30 | 78.20 | 81.20 | 78.20 | 84.70 | 81.30 |
| (Li et al., 2022) 1 [BERT-Large] | 74.09 | 72.35 | 73.21 | 85.57 | 79.68 | 82.52 | 79.88 | 83.71 | 81.75 |
| Generative | | | | | | | | | |
| (Zhang et al., 2022) [T5-Base] | 71.35 | **71.86** | 71.60 | 81.09 | 78.13 | 79.58 | 77.88 | **83.77** | 80.72 |
| (Yan et al., 2021) [BART-Large] | 70.08 | 71.21 | 70.64 | **82.09** | 77.42 | 79.69 | 77.2 | 83.75 | 80.34 |
| Ours [BART-Large] | **72.33** | 71.01 | **71.66** | 81.86 | **78.48** | **80.14** | **78.68** | 83.63 | **81.01** |

Table 5: Hit@top-k evaluation. Each element in the table denotes the hit count among top-k candidates before/after calibration.

| | CoNLL03 | OntoNotes5.0 | ACE04 | ACE05 |
|---|---|---|---|---|
| hit@3 | 3196/3119 | 7732/7734 | 559/566 | 759/779 |
| hit@5 | 3240/3138 | 7858/7869 | 582/586 | 786/797 |

| | GENIA | CADEC | ShARe13 | ShARe14 |
|---|---|---|---|---|
| hit@3 | 1135/1161 | 981/962 | 8046/8077 | 14405/14578 |
| hit@5 | 1245/1254 | 1005/980 | 8085/8124 | 14481/14659 |
## 4 Related Work
Named Entity Recognition The existing methods for NER can be broadly classified into sequence labeling formulation, span-based formulation and generative-based formulation. A majority of initial works adopt sequence labeling formulation which assigns each token a tag from a predefined tagging scheme (Huang et al., 2015; Lample et al., 2016). Then, the span-based formulation is proposed which enumerates all possible spans and
performs classification at the span level (Wang and Lu, 2019). Recently, researchers have shown growing interest in tackling the three subtasks uniformly, i.e., flat NER, overlapped NER and discontinuous NER. We refer to them as unified NER in the rest of the paper. The above two formulations have major drawbacks in modeling unified NER. For example, sequence labeling methods need to design different tagging schemas for each subtask (Dai et al., 2020), while span-based methods have to trade off between maximal span length and computation efficiency due to the enumeration operation
(Luan et al., 2019). Generative-based formulation prevails in unified NER for its flexibility in generating variable-length entities (Lu et al., 2022; Yan et al., 2021). In this paper, we adopt BARTNER (Yan et al., 2021) as our backbone generative model.
Bias in Generative NER Since the generative model generates outputs in an autoregressive manner which differs largely from the extraction objective of NER, it introduces incorrect biases during training. (Zhang et al., 2022) analyze these biases from the causality perspective and attribute them to two confounders namely pre-context confounder
(the model may be biased to pre-generated words which have no causal relation with the word to be generated) and entity-order confounder. They propose two data augmentation methods to address them respectively. (Tan et al., 2021) observe that overlapped NER is essentially an unordered task and propose a sequence-to-set network to predict entity spans simultaneously in a non-autoregressive manner. W2NER (Li et al., 2022) abandons the generative-based formulation and model unified NER as a word-word relation classification based on the proposed relation schema. In this paper, we improve the generative-based method by exploiting Reranking Reranking has been explored in various tasks of Natural Language Processing for long.
In question answering, passage reranking is used as the first stage to retrieve relevant passages where the answer might locate and reorder them according to their scores. Similarly, answer reranking is used as the last stage to refine the answer selection.
In neural machine translation, (Bhattacharyya et al.,
2021) apply an energy-based model on the top of BERT to reorder candidates according to their BLEU scores. In abstractive summarization, SimCLS (Liu and Liu, 2021) trains a separate secondstage model with discriminative ranking loss to select the best summary candidate. BRIO (Liu et al., 2022) optimizes the autoregressive language model by a contrastive loss over the discrete space of the generated texts. SummaReranker (Ravaut et al., 2022) adopts a mixture-of-expert architecture as the reranker to measure the quality of the candidates with multiple metrics. To the best of our knowledge, there is no work exploring reranking methods on generative IE.
## 5 Conclusion
Through pilot experiments, we find the decoded candidates provide potential supervision. Based on this finding, we propose RerankNER to debias generative NER based on a reranking framework adapted for the task. It consistently boosts the baseline and achieves competitive results with state-of-the-art generative methods on eight NER datasets, which verifies the effectiveness of candidate order supervision. Future work could consider extending this method to other generative IE tasks. Another meaningful direction is to consider incorporating Large Language Models into the reranking process.
## Limitations
RerankNER conducts calibration after the regular training, which introduces extra computational overhead. This drives us to further improve the overall efficiency of our method. Recent works find that few-shot learning serves as an effective finetuning method of pretrained language models.
It is reasonable to investigate our model under few-shot learning to reduce the overhead. Although we get competitive results with the state-of-the-art methods, there is still a gap between the oracle score and the best results. We leave them as our future work.
## Acknowledgement
We thank the anonymous reviewers for their helpful comments on this paper. This work was partially supported by National Key R&D Program of China
(No. 2022YFC3600402) and National Social Science Foundation Project of China (21&ZD287).
The corresponding author of this paper is Sujian Li.
## References
Alan Akbik, Tanja Bergmann, and Roland Vollgraf.
2019. Pooled contextualized embeddings for named entity recognition. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis,*
MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 724–728. Association for Computational Linguistics.
Sumanta Bhattacharyya, Amirmohammad Rooshenas, Subhajit Naskar, Simeng Sun, Mohit Iyyer, and Andrew McCallum. 2021. Energy-based reranking:
Improving neural machine translation using energybased models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4528–4537, Online. Association for Computational Linguistics.
Xiang Dai, Sarvnaz Karimi, Ben Hachey, and Cecile Paris. 2020. An effective transition-based model for discontinuous NER. In *Proceedings of the 58th* Annual Meeting of the Association for Computational Linguistics, pages 5860–5870, Online. Association for Computational Linguistics.
George R. Doddington, Alexis Mitchell, Mark A. Przybocki, Lance A. Ramshaw, Stephanie M. Strassel, and Ralph M. Weischedel. 2004. The automatic content extraction (ACE) program - tasks, data, and evaluation. In *Proceedings of the Fourth International*
Conference on Language Resources and Evaluation, LREC 2004, May 26-28, 2004, Lisbon, Portugal. European Language Resources Association.
Mark Hopkins and Jonathan May. 2011. Tuning as ranking. In *Conference on Empirical Methods in* Natural Language Processing.
Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991.
Sarvnaz Karimi, Alejandro Metke-Jimenez, Madonna Kemp, and Chen Wang. 2015. Cadec: A corpus of adverse drug event annotations. Journal of biomedical informatics, 55:73–81.
J.-D. Kim, T. Ohta, Y. Tateisi, and J. Tsujii. 2003. GENIA corpus—a semantically annotated corpus for bio-textmining. *Bioinformatics*, 19(suppl 1):i180–
i182.
Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016.
Neural architectures for named entity recognition. In *Proceedings of the 2016 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270, San Diego, California. Association for Computational Linguistics.
Jingye Li, Hao Fei, Jiang Liu, Shengqiong Wu, Meishan Zhang, Chong Teng, Donghong Ji, and Fei Li. 2022.
Unified named entity recognition as word-word relation classification. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 36, pages 10965–10973.
Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2020. A unified MRC
framework for named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 5849–5859. Association for Computational Linguistics.
Yixin Liu and Pengfei Liu. 2021. SimCLS: A simple framework for contrastive learning of abstractive summarization. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 1065–1072, Online. Association for Computational Linguistics.
Yixin Liu, Pengfei Liu, Dragomir Radev, and Graham Neubig. 2022. BRIO: Bringing order to abstractive summarization. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2890–2903, Dublin, Ireland. Association for Computational Linguistics.
Yaojie Lu, Qing Liu, Dai Dai, Xinyan Xiao, Hongyu Lin, Xianpei Han, Le Sun, and Hua Wu. 2022. Unified structure generation for universal information
extraction. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics*
(Volume 1: Long Papers), pages 5755–5772, Dublin, Ireland. Association for Computational Linguistics.
Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, and Hannaneh Hajishirzi. 2019. A general framework for information extraction using dynamic span graphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3036–3046, Minneapolis, Minnesota. Association for Computational Linguistics.
Danielle L. Mowery, Sumithra Velupillai, Brett R.
South, Lee M. Christensen, David Martínez, Liadh Kelly, Lorraine Goeuriot, Noémie Elhadad, Sameer Pradhan, Guergana K. Savova, and Wendy W. Chapman. 2013a. Task 1: Share/clef ehealth evaluation lab 2013. In Conference and Labs of the Evaluation Forum.
Danielle L. Mowery, Sumithra Velupillai, Brett R.
South, Lee M. Christensen, David Martínez, Liadh Kelly, Lorraine Goeuriot, Noémie Elhadad, Sameer S.
Pradhan, Guergana K. Savova, and Wendy W. Chapman. 2013b. Task 2 : Share/clef ehealth evaluation lab 2014.
Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Björkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards robust linguistic analysis using ontonotes. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, CoNLL 2013, Sofia, Bulgaria, August 8-9, 2013, pages 143–152. ACL.
Mathieu Ravaut, Shafiq Joty, and Nancy Chen. 2022.
SummaReranker: A multi-task mixture-of-experts re-ranking framework for abstractive summarization.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4504–4524, Dublin, Ireland.
Association for Computational Linguistics.
Erik F. Tjong Kim Sang and Fien De Meulder. 2003.
Introduction to the conll-2003 shared task: Languageindependent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning, CoNLL 2003, Held in cooperation with HLT-NAACL 2003, Edmonton, Canada, May 31 -
June 1, 2003, pages 142–147. ACL.
Yongliang Shen, Xinyin Ma, Zeqi Tan, Shuai Zhang, Wen Wang, and Weiming Lu. 2021. Locate and label: A two-stage identifier for nested named entity recognition. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2782–2794, Online. Association for Computational Linguistics.
Jana Straková, Milan Straka, and Jan Hajic. 2019. Neural architectures for nested NER through linearization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5326–5331, Florence, Italy. Association for Computational Linguistics.
Zeqi Tan, Yongliang Shen, Shuai Zhang, Weiming Lu, and Yueting Zhuang. 2021. A sequence-to-set network for nested named entity recognition. In Proceedings of the 30th International Joint Conference on Artificial Intelligence, IJCAI-21.
Buzhou Tang, Jianglu Hu, Xiaolong Wang, and Qingcai Chen. 2018. Recognizing continuous and discontinuous adverse drug reaction mentions from social media using LSTM-CRF. *Wirel. Commun. Mob. Comput.*,
2018.
Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. ACE 2005 multilingual training corpus LDC2006T06.
Bailin Wang and Wei Lu. 2019. Combining spans into entities: A neural two-stage approach for recognizing discontiguous entities. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 6216–6224, Hong Kong, China. Association for Computational Linguistics.
Xinyu Wang, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, and Kewei Tu. 2021a.
Improving named entity recognition by external context retrieving and cooperative learning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 1800–1812. Association for Computational Linguistics.
Yucheng Wang, Bowen Yu, Hongsong Zhu, Tingwen Liu, Nan Yu, and Limin Sun. 2021b. Discontinuous named entity recognition as maximal clique discovery. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1:
Long Papers), Virtual Event, August 1-6, 2021, pages 764–774. Association for Computational Linguistics.
Yongxiu Xu, Heyan Huang, Chong Feng, and Yue Hu. 2021. A supervised multi-head self-attention network for nested named entity recognition. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI
2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 14185–14193.
AAAI Press.
Hang Yan, Tao Gui, Junqi Dai, Qipeng Guo, Zheng Zhang, and Xipeng Qiu. 2021. A unified generative framework for various NER subtasks. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 5808–5822, Online.
Association for Computational Linguistics.
Hongbin Ye, Ningyu Zhang, Hui Chen, and Huajun Chen. 2022. Generative knowledge graph construction: A review. *arXiv preprint arXiv:2210.12714*.
Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020.
Named entity recognition as dependency parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6470–
6476, Online. Association for Computational Linguistics.
Shuai Zhang, Yongliang Shen, Zeqi Tan, Yiquan Wu, and Weiming Lu. 2022. De-bias for generative extraction in unified ner task. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 808–818.
## A Details

## A.1 Dataset Statistics
The statistics of the datasets are listed in Table 6.
Flat NER subtask We conduct experiments on CoNLL-2003 (Sang and Meulder, 2003) and OntoNotes5.0 (Pradhan et al., 2013) in English.
We follow the experimental settings of previous works (Lample et al., 2016; Yan et al., 2021).
Overlapped NER subtask We conduct experiments on ACE 2004 (Doddington et al., 2004), ACE 2005 (Walker et al., 2006), and GENIA (Kim et al., 2003). For ACE 2004 and ACE 2005, we shuffle and split the documents into training, development, and test sets in a ratio of 8:1:1, following Yu et al. (2020). For GENIA, the ratio is set to 8.1:0.9:1.0, following Yan et al. (2021).
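For concreteness, a document-level 8:1:1 split of this kind can be produced roughly as follows; this is only an illustrative sketch (the document identifiers and random seed are hypothetical), not the exact script used by Yu et al. (2020).

```python
import random

def split_documents(doc_ids, seed=42, ratios=(0.8, 0.1, 0.1)):
    """Shuffle documents and partition them into train/dev/test by the given ratios."""
    assert abs(sum(ratios) - 1.0) < 1e-9
    doc_ids = list(doc_ids)
    random.Random(seed).shuffle(doc_ids)
    n = len(doc_ids)
    n_train, n_dev = int(ratios[0] * n), int(ratios[1] * n)
    train = doc_ids[:n_train]
    dev = doc_ids[n_train:n_train + n_dev]
    test = doc_ids[n_train + n_dev:]
    return train, dev, test

# Example: 80/10/10 split over (hypothetical) ACE document identifiers.
train, dev, test = split_documents([f"doc_{i}" for i in range(100)])
print(len(train), len(dev), len(test))  # 80 10 10
```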
Discontinuous NER subtask We conduct experiments on CADEC (Karimi et al., 2015), ShARe13 (Mowery et al., 2013a), and ShARe14 (Mowery et al., 2013b). These datasets contain approximately 10% discontinuous entities. We follow the experimental settings of Dai et al. (2020).
## A.2 Implementation Details
For the fine-tuning stage, we use the code, hyper-parameters, and package versions from Yan et al. (2021) and obtain comparable results on all datasets reported in the paper. We set the maximum number of epochs to 30 with early stopping (patience = 5). We use the AdamW optimizer with the same learning rate as Yan et al. (2021). Linear learning rate scheduling is employed. For all subtasks, we make predictions at the word level, i.e., only the position index of the first BPE of each entity word is used.
For the calibration training, we use standard beam search to generate 5 candidates for each input sentence. We adopt the same hyper-parameters as in the fine-tuning stage, except for the newly added ones. We implement both a fixed margin and a linear margin. The linear margin $\lambda = \bar{\lambda}(j - i)$ depends on the rank difference between the candidates, where $\bar{\lambda}$ is a hyper-parameter. We search the value of the margin $\bar{\lambda}$ within [0.01, 0.1] and the value of the coefficient $\alpha$ within [0.1, 1]. In Table 7, "mask out tie" indicates whether we mask out comparisons between candidates with the same F1 score in the contrastive loss. The effects of "add $\mathcal{L}_{\mathrm{Gold}}$" and "mask out tie" differ across the 8 datasets, so we treat them as hyper-parameters. All experiments are conducted on an NVIDIA RTX 3090 GPU with 24 GB of memory.
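The margin-based contrastive objective referred to above is not spelled out in this appendix; the sketch below shows one common way a pairwise ranking loss with a rank-dependent linear margin can be implemented. It is our own illustrative rendering (the function name, inputs, and the default margin are hypothetical), not the paper's exact loss.

```python
import torch

def calibration_loss(seq_logprobs, f1_scores, margin_bar=0.05, mask_out_ties=True):
    """Pairwise ranking loss over beam candidates.

    seq_logprobs: (num_candidates,) model log-probabilities of each candidate sequence.
    f1_scores:    (num_candidates,) evaluation scores used to rank the candidates.
    The margin between the i-th and j-th ranked candidates grows linearly with
    their rank difference: margin = margin_bar * (j - i).
    """
    order = torch.argsort(f1_scores, descending=True)   # rank candidates by F1
    logp, f1 = seq_logprobs[order], f1_scores[order]
    loss = seq_logprobs.new_zeros(())
    n = len(logp)
    for i in range(n):
        for j in range(i + 1, n):
            if mask_out_ties and f1[i] == f1[j]:
                continue  # optionally skip pairs whose F1 scores tie
            margin = margin_bar * (j - i)  # linear margin \bar{lambda} * (j - i)
            loss = loss + torch.clamp(margin - (logp[i] - logp[j]), min=0.0)
    return loss
```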
## A.3 Baselines
The following methods can be adapted to all NER subtasks. Please refer to the original papers for the other methods designed specifically for a certain NER subtask.
BERT-MRC (Li et al., 2020) reformulates NER as a machine reading comprehension (MRC) task and extracts entities by answering questions such as "find locations in the text".
UIE (Lu et al., 2022) represents various information structures with a structured extraction language and tackles general information extraction tasks with a unified text-to-structure generation framework.
Zhang et al. (2022) analyze incorrect biases in generative NER models from a causality perspective and propose two data augmentation methods to address them. Note that the T5-Base model they use has the same number of Transformer layers as BART-Large.
W2NER (Li et al., 2022) reformulates unified NER as a word-word relation classification task based on the proposed relation schema.
| Subtask | Dataset | #Sent. | #Train | #Dev | #Test | Avg. Len (Sent.) | #Ment. | #Ovlp. | #Dis. | Avg. Len (Ment.) |
|---------|---------|--------|--------|------|-------|------------------|--------|--------|-------|------------------|
| Flat | CoNLL2003 | 20744 | 17291 | - | 3453 | 14.38 | 35089 | - | - | 1.45 |
| Flat | OntoNotes5.0 | 76714 | 59924 | 8528 | 8262 | 18.11 | 104151 | - | - | 1.83 |
| Ovlp. | GENIA | 18546 | 15023 | 1669 | 1854 | 25.41 | 56015 | 10263 | - | 1.97 |
| Ovlp. | ACE04 | 8512 | 6802 | 813 | 897 | 20.12 | 27604 | 12626 | - | 2.50 |
| Ovlp. | ACE05 | 9697 | 7606 | 1002 | 1089 | 17.77 | 30711 | 12404 | - | 2.28 |
| Dis. | CADEC | 7597 | 5340 | 1097 | 1160 | 16.18 | 6316 | 920 | 679 | 2.72 |
| Dis. | ShARe13 | 18767 | 8508 | 1250 | 9009 | 14.86 | 11148 | 663 | 1088 | 1.82 |
| Dis. | ShARe14 | 34614 | 17404 | 1360 | 15850 | 15.06 | 19070 | 1058 | 1656 | 1.74 |
Table 6: Dataset Statistics. "Ovlp." and "Dis." denote overlapped and discontinuous mentions respectively.
| Hyper-parameter | Value |
|-----------------|-------|
| epoch | 30 |
| warmup step | 0.01 |
| learning rate | [1e-5, 2e-5, 4e-5] |
| batch size | [16, 24, 32] |
| beam size | 5 |
| margin $\bar{\lambda}$ | [0.01, 0.1] |
| coefficient $\alpha = \bar{\alpha}$ | [0.1, 1.0, 5.0] |
| add $\mathcal{L}_{\mathrm{Gold}}$ | [Yes, No] |
| mask out tie | [Yes, No] |
Table 7: Hyper-parameter settings.
## B Case Study
Table 8 shows some examples corrected by the sequence likelihood calibration.
## C Generative Model
Our method is agnostic to the generative model. In this study, we adopt BARTNER (Yan et al., 2021), an Encoder-Decoder framework with a pointer mechanism, to model the probability $P(Y \mid X)$:
Encoder encodes the input sentence $X$ into vectors $H^{\mathrm{Enc}}$, which can be denoted as:
$$H^{\mathrm{Enc}}=\mathrm{Encoder}(X)\qquad\qquad{\mathrm{(1)}}$$
where $H^{\mathrm{Enc}} \in \mathbb{R}^{n \times d}$ and $d$ is the dimension of the hidden state.
Decoder predicts the index probability distribution step by step according to $P(y_t \mid X, Y_{<t})$. Since $Y_{<t}$ consists of the indices of the pointers and tags, it needs to be mapped to vocabulary indices before being input to the Decoder. We get the hidden state at the $t$-th step by:
$$h_{t}^{\mathrm{Dec}}=\mathrm{Decoder}(H^{\mathrm{Enc}};{\hat{Y}}_{<t})\qquad\quad(2)$$
Finally, we get the index probability distribution $P_t$ by:
$$G^{\rm Dec}=\mbox{Embed}(G)$$
$$E^{\rm Enc}=\mbox{Embed}(X)$$
$$\hat{H}^{\rm Enc}=\alpha*H^{\rm Enc}+(1-\alpha)*E^{\rm Enc}$$
$$P(y_{t}\mid X,Y_{<t})=\mbox{Softmax}([\hat{H}^{\rm Enc}\otimes h_{t}^{\rm Dec};G^{\rm Dec}\otimes h_{t}^{\rm Dec}])\tag{3}$$
where $\mathrm{Embed}(\cdot)$ is the embedding layer shared between the Encoder and Decoder, $G$ denotes the label tokens while $X$ denotes the entity words, $\hat{H}^{\mathrm{Enc}}$ denotes the input representation, and $\otimes$ denotes the dot product. For training, we use the cross-entropy loss with teacher forcing. During inference, we generate the target sequence auto-regressively.
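To make the shapes in Eq. (3) concrete, the sketch below scores a decoder state against the mixed input representations (pointer part) and the tag embeddings (tag part) and normalizes them jointly. All tensors are randomly initialized stand-ins, and the mixing coefficient is an arbitrary illustrative value; this is not the released BARTNER code.

```python
import torch
import torch.nn.functional as F

n, d, num_tags = 12, 768, 4          # input length, hidden size, number of entity-type tags

H_enc = torch.randn(n, d)            # encoder states  H^Enc
E_enc = torch.randn(n, d)            # input token embeddings  Embed(X)
G_dec = torch.randn(num_tags, d)     # tag token embeddings    Embed(G)
h_t = torch.randn(d)                 # decoder hidden state at step t

alpha = 0.5                          # illustrative mixing coefficient
H_hat = alpha * H_enc + (1 - alpha) * E_enc            # mixed input representation

scores = torch.cat([H_hat @ h_t, G_dec @ h_t])          # pointer scores ; tag scores
p_t = F.softmax(scores, dim=-1)                          # distribution over n + num_tags indices
print(p_t.shape)  # torch.Size([16])
```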
CADEC (2=ADR) Muscle twitching, stiff neck, constant lightheadedness, always worrying about a brain tumor or something.
0.75,-0.15,Muscle twitching 2 stiff neck 2 constant lightheadedness 2 always worrying 2
0.50,-0.18,Muscle twitching 2 stiff neck 2 constant lightheadedness 2 always worrying about a 2
0.50,-0.23,Muscle twitching 2 stiff neck 2 constant lightheadedness 2 always worrying about a tumor 2 0.86,-0.07,Muscle twitching 2 stiff neck 2 lightheadedness 2
0.75,-0.22,Muscle twitching 2 stiff neck 2 lightheadedness 2 always worrying about a brain tumor 2
0.50,-0.33,Muscle twitching 2 stiff neck 2 constant lightheadedness 2 always worrying about a brain tumor 2 0.75,-0.34,Muscle twitching 2 stiff neck 2 lightheadedness 2 always worrying about a brain tumor or something 2 0.75,-0.39,Muscle twitching 2 stiff neck 2 stiff neck 2 lightheadedness 2 always worrying about a brain tumor 2
Possibly diarrhea and stomach pain, but most likely none because I am taking this
with a nasty antibiotic for a sinus infection that definitely causes diarrhea and nausea.
I stopped taking it the next day and within 72 hours my swelling decreased significantly, muscle aches and joint pain disappeared, memory loss is not as severe, breathing is easier, stamina is back etc.
0.67,-0.05,diarrhea 2 stomach pain 2 0.80,-0.004,swelling 2 muscle aches 2 joint pain 2 memory loss 2 breathing is 2 stamina is 2
1.00,-0.14,diarrhea 2 stomach pain 2 diarrhea 2 nausea 2 0.80,-0.16,swelling 2 muscle aches 2 joint pain 2 memory loss 2 breathing is 2 stamina is 2
0.86,-0.49,diarrhea 2 stomach pain 2 nausea 2 0.89,-0.22,swelling 2 muscle aches 2 joint pain 2 memory loss 2 breathing is 2 stamina is 2 0.88,-0.57,diarrhea 2 stomach pain 2 diarrhea 2 nausea 2 0.89,-0.30,swelling 2 muscle aches 2 joint pain 2 memory loss 2 breathing is 2 stamina is 2 0.86,-0.76,diarrhea 2 stomach pain 2 diarrhea 2 nausea 2 0.89,-0.41,swelling 2 muscle aches 2 joint pain 2 memory loss 2 breathing is 2 stamina is 2
1.00,-0.08,diarrhea 2 stomach pain 2 diarrhea 2 nausea 2 1.00,-0.09,swelling 2 muscle aches 2 joint pain 2 memory loss 2
0.86,-0.23,diarrhea 2 stomach pain 2 diarrhea 2 nausea 2 0.80,-0.23,swelling 2 muscle aches 2 joint pain 2 memory loss 2 breathing is 2 stamina is 2 0.86,-0.29,diarrhea 2 stomach pain 2 diarrhea 2 nausea 2 0.80,-0.25,swelling 2 muscle aches 2 joint pain 2 memory loss 2 breathing is 2 stamina is 2
1.00,-0.44,diarrhea 2 stomach pain 2 diarrhea 2 nausea 2 0.89,-0.26,swelling 2 muscle aches 2 joint pain 2 memory loss 2 breathing is 2 stamina is 2
1.00,-0.53,diarrhea 2 stomach pain 2 diarrhea 2 nausea 2 stomach pain 2 0.80,-0.27,swelling 2 muscle aches 2 joint pain 2 memory loss 2 breathing is 2 stamina 2
CONLL2003 (2=LOC,3=PER,4=ORG,5=MISC)
POLAND GOT MONEY FROM POST-WAR SWISS ACCOUNTS. Mike Cito, 17, was expelled from St Pius X High School in Albuquerque after an October game in
which he used the sharpened chin strap buckles to injure two opposing players and the referee.
0.80,-0.08,POLAND 2 POST-WAR 5 SWISS 5 0.67,-0.01,Mike Cito 3 St Pius X 4 Albuquerque 2
1.0,-0.23,POLAND 2 SWISS 5 1.0,-0.19,Mike Cito 3 St Pius X High School 4 Albuquerque 2
0.50,-0.37,POLAND 2 POST-WAR SWISS 5 0.67,-0.48,Mike Cito 3 St Pius X School 4 Albuquerque 2
1.0,-1.33,POLAND 2 POST-WAR POST-WAR 5 SWISS 5 0.67,-0.58,Mike Cito 3 St Pius X 3 Albuquerque 2
0.80,-1.41,POLAND 2 POST-WAR ACCOUNTS 5 SWISS 5 0.67,-0.61,Mike Cito 3 St Pius X 2 Albuquerque 2
1.0,-0.10,POLAND 2 SWISS 5 1.0,-0.11,Mike Cito 3 St Pius X High School 4 Albuquerque 2
0.80,-0.25,POLAND 2 POST-WAR 5 SWISS 5 0.67,-0.14,Mike Cito 3 St Pius X 4 Albuquerque 2 0.80,-0.74,POLAND 2 POST-WAR 5 SWISS 5 SWISS 5 0.80,-0.41,Mike Cito 3 St Cito X High School 4 Albuquerque 2
1.0,-0.76,POLAND 2 SWISS 5 SWISS 5 0.67,-0.44,Mike Cito 3 St Pius X School 4 Albuquerque 2
0.80,-0.78,POLAND 2 POST-WAR 5 SWISS 5 POST-WAR 5 0.86,-0.48,Mike Cito 3 St Pius X High School 4 Albuquerque 2 St Pius X 4
There is the international prestige Singapore would enjoy, but "more importantly there is a genuine national interest in fostering better global free trade and an open market", said Tan Kong Yam,
head of Business Policy at the National University of Singapore. 0.86,-0.04,Singapore 2 Tan Kong Yam 3 Business Policy 4 National University of Singapore 4 1.0,-0.07,Singapore 2 Tan Kong Yam 3 National University of Singapore 4 0.86,-0.36,Singapore 2 Tan Kong Yam 3 Business Policy 5 National University of Singapore 4
0.67,-0.68,Singapore 2 Tan Kong Yam 3 Business Policy 4 National University of Singapore 4
0.80,-0.68,Singapore 2 Tan Kong Yam 3 Business Policy of National University of Singapore 4 1.0,-0.06,Singapore 2 Tan Kong Yam 3 National University of Singapore 4 1.0,-0.34,Singapore 2 Tan Kong Yam 3 National University of Singapore 4 National University of Singapore 4
1.0,-0.44,Singapore 2 Tan Kong Yam 3 National University of Singapore 4 Tan Kong Yam 3
0.86,-0.44,Singapore 2 Tan Kong Yam 3 National University of Singapore 4 Singapore 2 0.80,-0.45,Singapore 2 Tan Kong Yam 3 National University of Singapore 4 ACE04 (2=LOC,3=GPE,4=WEA,5=VEH,6=PER,7=ORG,8=FAC) I believe our issues do relate directly to the appointing of electors for the state of Florida. 0.67,-0.06,I 6 our 3 electors for the state of Florida 6 the state of Florida 3
0.80,-0.31,I 6 our 3 electors for the state of Florida 6 the state of Florida 3 Florida 3
0.89,-0.32,I 6 our 6 electors for the state of Florida 6 the state of Florida 3 0.50,-0.32,I 6 our 3 electors for the state of Florida 6 the state of Florida 3 0.50,-0.33,I 6 our 3 electors for the state of Florida 6 the state of Florida 3 1.0,-0.01,I 6 our 6 electors for the state of Florida 6 the state of Florida 3 Florida 3 0.89,-0.10,I 6 electors for the state of Florida 6 the state of Florida 3 Florida 3 0.80,-0.20,I 6 our 3 electors for the state of Florida 6 the state of Florida 3 Florida 3
0.91,-0.47,I 6 our 6 electors for the state of Florida 6 the state of Florida 3 state 3 Florida 3
0.89,-0.48,I 6 our 6 electors for the state of Florida 6 the state of Florida 3 One hundred South Koreans will be in the northern capital Pyongyang, to meet their North Korean relatives.
0.83,-0.08,One hundred South Koreans 6 the northern capital 3 the northern capital Pyongyang 3 their 6 their North Korean relatives 6 North Korean 3
0.77,-0.09,One hundred South Koreans 6 South 3 the northern capital 3 the northern capital Pyongyang 3 their 6 their North Korean relatives 6 North Korean 3 0.77,-0.17,One hundred South Koreans 6 South 2 the northern capital 3 the northern capital Pyongyang 3 their 6 their North Korean relatives 6 North Korean 3 0.67,-0.20,One hundred South Koreans 6 the northern capital 3 the northern capital Pyongyang 3 their 3 their North Korean relatives 6 North Korean 3
0.62,-0.20,One hundred South Koreans 6 South 3 the northern capital 3 the northern capital Pyongyang 3 their 3 their North Korean relatives 6 North Korean 3
0.92,-0.03,One hundred South Koreans 6 South 3 the northern capital 3 Pyongyang 3 their 6 their North Korean relatives 6 North Korean 3
1.0,-0.03,One hundred South Koreans 6 the northern capital 3 Pyongyang 3 their 6 their North Korean relatives 6 North Korean 3
0.92,-0.12,One hundred South Koreans 6 South Koreans 6 the northern capital 3 Pyongyang 3 their 6 their North Korean relatives 6 North Korean 3
0.77,-0.17,One hundred South Koreans 6 South Koreans 6 the northern capital 3 the northern capital Pyongyang 3 their 6 their North Korean relatives 6 North Korean 3 0.92,-0.23,One hundred South Koreans 6 South Koreans 3 the northern capital 3 Pyongyang 3 their 6 their North Korean relatives 6 North Korean 3 Netanyahu supporters are calling either for a change in the law or for simultaneous elections for the Knesset and Prime Minister, which would allow their candidate to run. 0.73,-0.07,Netanyahu 6 Netanyahu supporters 6 the Knesset and Prime Minister 6 their 6 their candidate 6
1.0,-0.17,Netanyahu 6 Netanyahu supporters 6 the Knesset 7 Prime Minister 6 their 6 their candidate 6
0.55,-0.35,Netanyahu 6 Netanyahu candidate 6 the Knesset and Prime Minister 6 their 6 their candidate 6
0.80,-0.35,Netanyahu 6 Netanyahu supporters 6 the Knesset and Prime Prime Minister 6 their 6 their candidate 6
0.83,-0.35,Netanyahu 6 Netanyahu supporters 6 Knesset 7 Prime Minister 6 their 6 their candidate 6 1.00,-0.01,Netanyahu 6 Netanyahu supporters 6 the Knesset 7 Prime Minister 6 their 6 their candidate 6
0.73,-0.11,Netanyahu 6 Netanyahu supporters 6 the Knesset and Prime Minister 6 their 6 their candidate 6
0.73,-0.38,Netanyahu 6 Netanyahu supporters 6 the Knesset and Prime Minister 7 their 6 their candidate 6 0.73,-0.61,Netanyahu 6 Netanyahu supporters 6 the Knesset and Prime Minister candidate 6 their 6 their candidate 6 0.83,-0.63,Netanyahu 6 Netanyahu supporters 6 the Knesset 6 Prime Minister 6 their 6 their candidate 6
Table 8: Case Study. Candidates before (upper) and after (lower) calibration. Each candidate is formatted as "F1, log-probability, target sequence". The number denotes the corresponding entity type.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
After Conclusion.
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
The first.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Appendix A
✓ B1. Did you cite the creators of artifacts you used?
Appendix A
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
We obtained proper licensing and will not distribute the artifacts.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
We stick to the intended use only.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix A
## C ✓ **Did You Run Computational Experiments?** 3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
A.2
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
A.2
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
A.2
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
A.2

## D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
torroba-hennigen-kim-2023-deriving | Deriving Language Models from Masked Language Models | https://aclanthology.org/2023.acl-short.99 | Masked language models (MLM) do not explicitly define a distribution over language, i.e., they are not language models per se. However, recent work has implicitly treated them as such for the purposes of generation and scoring. This paper studies methods for deriving explicit joint distributions from MLMs, focusing on distributions over two tokens, which makes it possible to calculate exact distributional properties. We find that an approach based on identifying joints whose conditionals are closest to those of the MLM works well and outperforms existing Markov random field-based approaches. We further find that this derived model{'}s conditionals can even occasionally outperform the original MLM{'}s conditionals. | # Deriving Language Models From Masked Language Models
Lucas Torroba Hennigen Yoon Kim Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory [email protected] [email protected]
## Abstract
Masked language models (MLM) do not explicitly define a distribution over language, i.e.,
they are not language models *per se*. However, recent work has implicitly treated them as such for the purposes of generation and scoring.
This paper studies methods for deriving explicit joint distributions from MLMs, focusing on distributions over two tokens, which makes it possible to calculate exact distributional properties.
We find that an approach based on identifying joints whose conditionals are closest to those of the MLM works well and outperforms existing Markov random field-based approaches.
We further find that this derived model's conditionals can even occasionally outperform the original MLM's conditionals.
## 1 Introduction
Masked language modeling has proven to be an effective paradigm for representation learning (Devlin et al., 2019; Liu et al., 2019; He et al., 2021).
However, unlike regular language models, masked language models (MLM) do not define an explicit joint distribution over language. While this is not a serious limitation from a representation learning standpoint, having explicit access to joint distributions would be useful for the purposes of generation (Ghazvininejad et al., 2019), scoring (Salazar et al., 2020), and would moreover enable evaluation of MLMs on standard metrics such as perplexity.
Strictly speaking, MLMs do define a joint distribution over tokens that have been masked out. But they assume that the masked tokens are conditionally independent given the unmasked tokens—an assumption that clearly does not hold for language.
How might we derive a language model from an MLM such that it does not make unrealistic independence assumptions? One approach is to use the set of the MLM's *unary conditionals*—the conditionals that result from masking just a single token in the input—to construct a fully-connected Markov random field (MRF) over the input (Wang and Cho, 2019; Goyal et al., 2022). This resulting MRF no longer makes any independence assumptions. It is unclear, however, if this heuristic approach actually results in a good language model.1 This paper adopts an alternative approach which stems from interpreting the unary conditionals of the MLM as defining a *dependency network* (Heckerman et al., 2000; Yamakoshi et al., 2022).2 Dependency networks specify the statistical relationship among variables of interest through the set of conditional distributions over each variable given its Markov blanket, which in the MLM case corresponds to all the other tokens. If the conditionals from a dependency network are *compatible*,
i.e., there exists a joint distribution whose conditionals coincide with those of the dependency network's, then one can recover said joint using the Hammersley–Clifford–Besag (HCB; Besag, 1974) theorem. If the conditionals are incompatible, then we can adapt approaches from statistics for deriving near-compatible joint distributions from incompatible conditionals (AG; Arnold and Gokhale, 1998).
While these methods give statistically-principled approaches to deriving explicit joints from the MLM's unary conditionals, they are intractable to apply to derive distributions over full sequences.
We thus study a focused setting where it is tractable to compute the joints exactly, viz., the *pairwise* language model setting where we use the MLM's unary conditionals of two tokens to derive a joint over these two tokens (conditioned on all the other tokens). Experiments under this setup reveal that the AG method performs best in terms of perplexity, with the HCB and MRF methods performing similarly. Surprisingly, we also find that the unary conditionals of the near-compatible AG joint occasionally have lower perplexity than the original unary conditionals learnt by the MLM, suggesting that regularizing the conditionals to be compatible may be beneficial insofar as modeling the distribution of language.3

1MRFs derived this way are still not language models in the strictest sense (e.g., see Du et al., 2022) because the probabilities of sentences of a given length sum to 1, and hence the sum of probabilities of all strings is infinite (analogous to left-to-right language models trained without an [EOS] token; Chen and Goodman, 1998). This can be remedied by incorporating a distribution over sentence lengths.

2Recent work by Yamakoshi et al. (2022) has taken this view, focusing on sampling from the dependency network as a means to *implicitly* characterize the joint distribution of an MLM. Here we focus on an *explicit* characterization of the joint.

3Our code and data is available at: https://github.com/ltorroba/lms-from-mlms.
## 2 Joint Distributions From MLMs
Let $\mathcal{V}$ be a vocabulary, $T$ be the text length, and $\mathbf{w} \in \mathcal{V}^T$ be an input sentence or paragraph. We are particularly interested in the case when a subset $S \subseteq [T] \triangleq \{1, \ldots, T\}$ of the input $\mathbf{w}$ is replaced with [MASK] tokens; in this case we will use the notation $q_{\{t\}|\overline{S}}(\cdot \mid \mathbf{w}_{\overline{S}})$ to denote the output distribution of the MLM at position $t \in S$, where we mask out the positions in $S$, i.e., for all $k \in S$ we modify $\mathbf{w}$ by setting $w_k = \texttt{[MASK]}$. If $S = \{t\}$, then we call $q_{t|\overline{t}} \triangleq q_{\{t\}|\overline{\{t\}}}$ a *unary conditional*. Our goal is to use these conditionals to construct joint distributions $q_{S|\overline{S}}(\cdot \mid \mathbf{w}_{\overline{S}})$ for any $S$.
Direct MLM construction. The simplest approach is to simply mask out the tokens over which we want a joint distribution, and define it to be the product of the MLM conditionals,
$$q_{S|\overline{S}}^{\rm MLM}({\bf w}_{S}\mid{\bf w}_{\overline{S}})\stackrel{{\Delta}}{{=}}\prod_{i\in S}q_{\{i\}|\overline{S}}(w_{i}\mid{\bf w}_{\overline{S}}).\tag{1}$$
This joint assumes that the entries of $\mathbf{w}_S$ are conditionally independent given $\mathbf{w}_{\overline{S}}$. Since one can show that MLM training is equivalent to learning the conditional marginals of language (App. A), this can be seen as approximating conditionals with a (mean field-like) factorizable distribution.
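In the pairwise case, Eq. (1) is simply an outer product of the two masked-position distributions. A minimal sketch with made-up numbers over a toy vocabulary of size 3:

```python
import numpy as np

# q_a and q_b: MLM output distributions at the two masked positions
# (with both positions masked simultaneously), over a toy vocabulary of size 3.
q_a = np.array([0.7, 0.2, 0.1])
q_b = np.array([0.5, 0.3, 0.2])

q_mlm = np.outer(q_a, q_b)   # q^MLM(w_a, w_b | context), Eq. (1)
assert np.isclose(q_mlm.sum(), 1.0)
```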
MRF construction. To address the conditional independence limitation of MLMs, prior work (Wang and Cho, 2019; Goyal et al., 2022)
has proposed deriving joints by defining an MRF
using the unary conditionals of the MLM. Accordingly, we define
$$q_{S|\overline{{{S}}}}^{\mathrm{MRF}}(\mathbf{w}_{S}\mid\mathbf{w}_{\overline{{{S}}}})\propto\prod_{t\in S}q_{t|\overline{{{t}}}}(w_{t}\mid\mathbf{w}_{\overline{{{t}}}}),\quad\quad(2)$$
which can be interpreted as a fully connected MRF whose log potential is given by the sum of the unary log probabilities. One can similarly define a variant of this MRF where the log potential is the sum of the unary *logits*. MRFs defined this way have a single fully connected clique and thus do not make any conditional independence assumptions. However, such MRFs can have unary conditionals that deviate from the MLM's unary conditionals even if those are compatible (App. B). This is potentially undesirable since the MLM unary conditionals could be close to the true unary conditionals,4 which means the MRF construction could be worse than the original MLM in terms of unary perplexity.
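For two masked positions, Eq. (2) can be computed explicitly by elementwise-multiplying the two unary conditional tables and renormalizing over all token pairs. A sketch (the conditional tables here are random stand-ins for MLM outputs):

```python
import numpy as np

V = 3
# cond_a[i, j] = q_{a|b}(i | w_b = j, context);  cond_b[j, i] = q_{b|a}(j | w_a = i, context)
rng = np.random.default_rng(0)
cond_a = rng.dirichlet(np.ones(V), size=V).T   # each column is a distribution over w_a
cond_b = rng.dirichlet(np.ones(V), size=V).T   # each column is a distribution over w_b

unnorm = cond_a * cond_b.T        # entry (i, j): q_{a|b}(i|j) * q_{b|a}(j|i)
q_mrf = unnorm / unnorm.sum()     # Eq. (2), normalized over all (w_a, w_b) pairs
```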
Hammersley–Clifford–Besag construction.
The Hammersley–Clifford–Besag theorem (HCB;
Besag, 1974) provides a way of reconstructing a joint distribution from its unary conditionals. Without loss of generality, assume that $S = \{1, \ldots, k\}$ for some $k \leq T$. Then given a *pivot point* $\mathbf{w}' = (w'_1, \ldots, w'_k) \in \mathcal{V}^k$, we define

$$q_{S|\overline{S}}^{\rm HCB}({\bf w}_{S}\mid{\bf w}_{\overline{S}})\propto\prod_{t\in S}\frac{q_{t|\overline{t}}(w_{t}\mid{\bf w}_{>t},{\bf w}_{<t}^{\prime})}{q_{t|\overline{t}}(w_{t}^{\prime}\mid{\bf w}_{>t},{\bf w}_{<t}^{\prime})},\tag{3}$$

where $\mathbf{w}'_{<i} \triangleq (w'_1, \ldots, w'_{i-1})$, and similarly $\mathbf{w}_{>i} \triangleq (w_{i+1}, \ldots, w_T)$. Importantly, unlike the MRF approach, if the unary conditionals of the MLM are compatible, then HCB will recover the true joint, irrespective of the choice of pivot.
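In the pairwise case, Eq. (3) reduces to two ratio terms that can be evaluated directly from the same conditional tables. A sketch, assuming the `cond_a`/`cond_b` convention from the MRF example above and an arbitrary pivot pair:

```python
import numpy as np

def hcb_pairwise(cond_a, cond_b, pivot=(0, 0)):
    """Pairwise HCB joint from unary conditionals, Eq. (3).

    cond_a[i, j] = q_{a|b}(i | w_b = j), cond_b[j, i] = q_{b|a}(j | w_a = i).
    pivot = (w'_a, w'_b) is the pivot point.
    """
    ia, ib = pivot
    V = cond_a.shape[0]
    unnorm = np.zeros((V, V))
    for wa in range(V):
        for wb in range(V):
            term_a = cond_a[wa, wb] / cond_a[ia, wb]   # q_{a|b}(w_a | w_b) / q_{a|b}(w'_a | w_b)
            term_b = cond_b[wb, ia] / cond_b[ib, ia]   # q_{b|a}(w_b | w'_a) / q_{b|a}(w'_b | w'_a)
            unnorm[wa, wb] = term_a * term_b
    return unnorm / unnorm.sum()
```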
Arnold–Gokhale construction. If we assume that the unary conditionals are not compatible, then we can frame our goal as finding a near-compatible joint, i.e., a joint such that its unary conditionals are close to the unary conditionals of the MLM.
Formally, for any $S$ and fixed inputs $\mathbf{w}_{\overline{S}}$, we can define this objective as

$$q_{S|\overline{S}}^{\rm AG}(\cdot\mid{\bf w}_{\overline{S}})=\mathop{\rm argmin}_{\mu}\sum_{t\in S}\sum_{{\bf w}^{\prime}\in{\cal V}^{|S|-1}}J(t,{\bf w}^{\prime}),\tag{4}$$

where $J(t,{\bf w}^{\prime})$ is defined as:

$${\rm KL}(q_{t|S\setminus\{t\},\overline{S}}(\cdot\mid{\bf w}^{\prime},{\bf w}_{\overline{S}})\mid\mid\mu_{t|S\setminus\{t\},\overline{S}}(\cdot\mid{\bf w}^{\prime},{\bf w}_{\overline{S}})).$$
We can solve this optimization problem using Arnold and Gokhale's (1998) algorithm (App. C).
## 2.1 Pairwise Language Model
In language modeling we are typically interested in the probability of a sequence $p(\mathbf{w})$. However, the above methods are intractable to apply to full sequences (except for the baseline MLM). For example, the lack of any independence assumptions in the MRF means that the partition function requires full enumeration over $\mathcal{V}^T$ sequences.5 We thus focus our empirical study on the pairwise setting where $|S| = 2$.6 In this setting, we can calculate $q_{S|\overline{S}}(\cdot \mid \mathbf{w}_{\overline{S}})$ with $O(V)$ forward passes of the MLM for all methods.

4As noted by https://machinethoughts.wordpress.com/2019/07/14/a-consistency-theorem-for-bert/
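Concretely, a unary conditional table such as $q_{a|b,\overline{S}}(\cdot \mid w_b, \mathbf{w}_{\overline{S}})$ for every value of $w_b$ can be filled in with $V$ forward passes of the MLM (one per substituted token at position $b$, with position $a$ masked). A hedged sketch using Hugging Face `transformers` (the sentence and positions are illustrative, the loop would be batched in practice, and this is not the authors' released code):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased").eval()

text = "The weather in Zurich is nice today ."
enc = tokenizer(text, return_tensors="pt")
a, b = 4, 5   # positions of the two tokens of interest (0 is [CLS]); illustrative only

V = model.config.vocab_size
cond_a = torch.zeros(V, V)        # cond_a[:, wb] = q_{a|b,rest}(. | w_b = wb, context)
with torch.no_grad():
    for wb in range(V):           # O(V) forward passes; batch these in practice
        ids = enc["input_ids"].clone()
        ids[0, a] = tokenizer.mask_token_id
        ids[0, b] = wb
        logits = model(input_ids=ids, attention_mask=enc["attention_mask"]).logits
        cond_a[:, wb] = logits[0, a].softmax(dim=-1)
```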
## 3 Evaluation
We compute two sets of metrics that evaluate the resulting joints in terms of (i) how good they are as probabilistic models of language and (ii)
how faithful they are to the original MLM conditionals (which are trained to approximate the true conditionals of language, see App. A). Let $\mathcal{D} = \{(\mathbf{w}^{(n)}, S^{(n)})\}_{n=1}^{N}$ be a dataset where $\mathbf{w}^{(n)}$ is an English sentence and $S^{(n)} = (a^{(n)}, b^{(n)})$ are the two positions being masked. We define the following metrics to evaluate a distribution $q'$:
Language model performance. We consider two performance metrics. The first is the pairwise perplexity (**P-PPL**) over two tokens,
$$\exp\!\left(\frac{-1}{2N}\sum_{n=1}^{N}\log q_{a^{(n)},b^{(n)}|\overline{S}^{(n)}}^{\prime}(w_{a^{(n)}}^{(n)},w_{b^{(n)}}^{(n)}\mid\mathbf{w}_{\overline{S}^{(n)}}^{(n)})\right)$$

We would expect a good joint to obtain lower pairwise perplexity than the original MLM, which
(wrongly) assumes conditional independence. The second is unary perplexity (**U-PPL**),
$$\exp\!\left(\frac{-1}{2N}\sum_{n=1}^{N}\sum_{(i,j)\in\{S^{(n)},S_{r}^{(n)}\}}\log q_{i|j,\overline{S}^{(n)}}^{\prime}(w_{i}^{(n)}\mid w_{j}^{(n)},\mathbf{w}_{\overline{S}^{(n)}}^{(n)})\right)$$

where for convenience we let $S_{r}^{(n)}\triangleq(b^{(n)},a^{(n)})$ denote the reverse of the masked positions tuple $S^{(n)}$. Note that this metric uses the unary conditionals derived from the pairwise joint, i.e., $q^{\prime}_{i|j,\overline{S}}$, except in the MLM construction case, which uses the MLM's original unary conditionals.
Faithfulness. We also assess how faithful the new unary conditionals are to the original unary conditionals by calculating the average conditional KL divergence (**A-KL**) between them,
$$\sum_{n=1}^{N}\sum_{w^{\prime}\in{\mathcal{V}}}{\frac{\mathrm{D}(S^{(n)},w^{\prime},{\bf w}_{\overline{{{S}}}^{(n)}})+{\bf D}(S_{r}^{(n)},w^{\prime},{\bf w}_{\overline{{{S}}}^{(n)}})}{2N|{\mathcal{V}}|}}$$
where we define $\mathrm{D}(S, w', \mathbf{w}_{\overline{S}}) \triangleq \mathrm{KL}(q_{a|b,\overline{S}}(\cdot \mid w', \mathbf{w}_{\overline{S}}) \,||\, q'_{a|b,\overline{S}}(\cdot \mid w', \mathbf{w}_{\overline{S}}))$ for $S = (a, b)$. If the new joint is completely faithful to the MLM, this number should be zero. The above metric averages the KL across the entire vocabulary $\mathcal{V}$, but in practice we may be interested in assessing closeness only when conditioned on the gold tokens. We thus compute a variant of the above metric where we only average over the conditionals for the gold token (**G-KL**):

$$\sum_{n=1}^{N}\frac{\mathrm{D}(S^{(n)},w_{b^{(n)}}^{(n)},\mathbf{w}_{\overline{S}^{(n)}}^{(n)})+\mathrm{D}(S_{r}^{(n)},w_{a^{(n)}}^{(n)},\mathbf{w}_{\overline{S}^{(n)}}^{(n)})}{2N}.$$
This metric penalizes unfaithfulness in common contexts more than in uncommon contexts. Note that if the MLM's unary conditionals are compatible, then both the HCB and AG approach should yield the same joint distribution, and their faithfulness metrics should be zero.
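Given a pairwise joint table for one example, the per-example quantities behind P-PPL and U-PPL are simple table operations; averaging over the dataset and exponentiating is omitted in this sketch, and the function and variable names are our own:

```python
import numpy as np

def pairwise_metrics(q_joint, gold_a, gold_b):
    """Per-example quantities behind P-PPL and U-PPL for a joint table q_joint[w_a, w_b]."""
    # Pairwise NLL of the two gold tokens; P-PPL is exp of the dataset mean of p_nll / 2.
    p_nll = -np.log(q_joint[gold_a, gold_b])
    # Unary conditionals derived from the joint, evaluated at the gold tokens (used by U-PPL).
    cond_a_given_b = q_joint[:, gold_b] / q_joint[:, gold_b].sum()
    cond_b_given_a = q_joint[gold_a, :] / q_joint[gold_a, :].sum()
    u_nll = -(np.log(cond_a_given_b[gold_a]) + np.log(cond_b_given_a[gold_b])) / 2
    return p_nll, u_nll
```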
## 3.1 Experimental Setup
We calculate the above metrics on 1000 examples7 from a natural language inference dataset (SNLI; Bowman et al., 2015) and a summarization dataset (XSUM; Narayan et al., 2018). We consider two schemes for selecting the tokens to be masked for each sentence: masks over two tokens chosen uniformly at random (**Random pairs**),
and also over random *contiguous* tokens in a sentence (**Contiguous pairs**). Since inter-token dependencies are more likely to emerge when adjacent tokens are masked, the contiguous setup magnifies the importance of deriving a good pairwise joint. In addition, we consider both BERT$_{\mathrm{BASE}}$ and BERT$_{\mathrm{LARGE}}$ (cased) as the MLMs from which to obtain the unary conditionals.8 For the AG joint, we run $t = 50$ steps of Arnold and Gokhale's (1998)
algorithm (App. C), which was enough for convergence. For the HCB joint, we pick a pivot using the mode of the pairwise joint of the MLM.9
## 4 Results
The results are shown in Tab. 1. Comparing the PPLs of MRF and MRFL (i.e., the MRF using logits), the former consistently outperforms the latter, indicating that using the raw logits generally results in a worse language model. Comparing the MRFs to the MLM, we see that the unary perplexity
(U-PPL) of the MLM is lower than those of the MRFs, and that the difference is most pronounced in the contiguous masking case. More surprisingly, we see that the pairwise perplexity (P-PPL) is often
(much) higher than the MLM's, even though the MLM makes unrealistic conditional independence assumptions. These results suggest that the derived MRFs are in general worse unary/pairwise probabilistic models of language than the MLM itself, implying that the MRF heuristic is inadequate (see App. D for a qualitative example illustrating how this can happen). Finally, we also find that the MRFs' unary conditionals are not faithful to those of the MLM based on the KL measures. Since one can show that the MRF construction can have unary conditionals that have nonzero KL to the MLM's unary conditionals even if they are compatible (App. B), this gives both theoretical and empirical arguments against the MRF construction.
The HCB joint obtains comparable performance to MRF in the random masking case. In the contiguous case, it exhibits similar failure modes as the MRF in producing extremely high pairwise perplexity (P-PPL) values. The faithfulness metrics are similar to the MRF's, which suggests that the conditionals learnt by MLMs are incompatible.
The AG approach, on the other hand, outperforms the MRFL, MRF and HCB approaches in virtually all metrics. This is most evident in the contiguous masking case, where AG attains lower pairwise perplexity than all models, including the MLM itself.
In some cases, we find that the AG model even outperforms the MLM in terms of unary perplexity, which is remarkable since the unary conditionals of the MLM were *trained* to approximate the unary conditionals of language (App. A). This indicates that near-compatibility may have a regularizing effect that leads to improved MLMs. Since AG
was optimized to be near-compatible, its joints are unsurprisingly much more faithful to the original MLM's conditionals. However, AG's G-KL tends to be on par with the other models, which suggests that it is still not faithful to the MLM in the contexts that are most likely to arise. Finally, we analyze the effect of masked position distance on language modeling performance, and find that improvements are most pronounced when the masked tokens are close to each other (see App. E).
| Model | Dataset | Scheme | U-PPL (Rand.) | P-PPL (Rand.) | A-KL (Rand.) | G-KL (Rand.) | U-PPL (Contig.) | P-PPL (Contig.) | A-KL (Contig.) | G-KL (Contig.) |
|---|---|---|---|---|---|---|---|---|---|---|
| B | SNLI | MLM | 11.22 | 19.01 | 1.080 | 0.547 | 13.78 | 74.68 | 4.014 | 1.876 |
| B | SNLI | MRFL | 13.39 | 71.44 | 0.433 | 0.267 | 23.45 | 13568.17 | 1.543 | 0.607 |
| B | SNLI | MRF | 12.30 | 21.65 | 0.658 | 0.179 | 18.35 | 126.05 | 1.967 | 0.366 |
| B | SNLI | HCB | 12.51 | 22.62 | 0.593 | 0.168 | 17.71 | 589.02 | 2.099 | 0.416 |
| B | SNLI | AG | 10.76 | 12.68 | 0.007 | 0.085 | 13.26 | 21.59 | 0.018 | 0.181 |
| B | XSUM | MLM | 4.88 | 6.12 | 0.404 | 0.227 | 4.91 | 39.33 | 4.381 | 2.128 |
| B | XSUM | MRFL | 5.17 | 9.12 | 0.148 | 0.085 | 6.55 | 2209.94 | 1.561 | 0.383 |
| B | XSUM | MRF | 5.00 | 6.23 | 0.262 | 0.049 | 5.53 | 47.62 | 2.242 | 0.185 |
| B | XSUM | HCB | 5.08 | 6.21 | 0.256 | 0.052 | 6.46 | 174.32 | 2.681 | 0.328 |
| B | XSUM | AG | 5.00 | 5.29 | 0.003 | 0.044 | 5.27 | 8.42 | 0.016 | 0.143 |
| L | SNLI | MLM | 9.50 | 18.57 | 1.374 | 0.787 | 10.42 | 104.12 | 4.582 | 2.463 |
| L | SNLI | MRFL | 11.52 | 76.23 | 0.449 | 0.276 | 15.43 | 8536.92 | 1.470 | 0.543 |
| L | SNLI | MRF | 10.57 | 19.54 | 0.723 | 0.193 | 13.07 | 93.33 | 1.992 | 0.359 |
| L | SNLI | HCB | 10.71 | 20.70 | 0.797 | 0.215 | 14.43 | 458.25 | 2.563 | 0.552 |
| L | SNLI | AG | 8.57 | 10.11 | 0.007 | 0.097 | 9.64 | 15.64 | 0.019 | 0.173 |
| L | XSUM | MLM | 3.80 | 5.67 | 0.530 | 0.413 | 3.91 | 103.86 | 5.046 | 3.276 |
| L | XSUM | MRFL | 3.94 | 7.06 | 0.156 | 0.068 | 4.62 | 1328.20 | 1.441 | 0.290 |
| L | XSUM | MRF | 3.87 | 4.94 | 0.322 | 0.036 | 4.16 | 36.66 | 2.258 | 0.145 |
| L | XSUM | HCB | 3.91 | 5.14 | 0.346 | 0.059 | 5.67 | 164.15 | 2.954 | 0.400 |
| L | XSUM | AG | 3.88 | 4.13 | 0.003 | 0.042 | 4.21 | 6.62 | 0.016 | 0.126 |

Table 1: Results for BERT-Base (B) and BERT-Large (L) on SNLI and XSUM, under random and contiguous masking.

## 5 Related Work

Probabilistic interpretations of MLMs. In one of the earliest works about sampling from MLMs, Wang and Cho (2019) propose to use unary condi-
tionals to sample sentences. Recently Yamakoshi et al. (2022) highlight that, while this approach only constitutes a pseudo-Gibbs sampler, the act of re-sampling positions uniformly at random guarantees that the resulting Markov chain has a unique, stationary distribution (Bengio et al., 2013, 2014).
Alternatively, Goyal et al. (2022) propose defining an MRF from the MLM's unary conditionals, and sample from this via Metropolis-Hastings. Concurrently, Young and You (2023) conduct an empirical study of the compatibility of BERT's conditionals.
Compatible distributions. The statistics community has long studied the problem of assessing the compatibility of a set of conditionals (Arnold and Press, 1989; Gelman and Speed, 1993; Wang and Kuo, 2010; Song et al., 2010). Arnold and Gokhale (1998) and Arnold et al. (2002) explore algorithms for reconstructing near-compatible joints from incompatible conditionals, which we leverage in our work. Besag (1974) also explores this problem, and defines a procedure (viz., eq. 3) for doing so when the joint distribution is strictly positive and the conditionals are compatible. Lowd
(2012) applies a version of HCB to derive Markov networks from incompatible dependency networks
(Heckerman et al., 2000).
## 6 Conclusion
In this paper, we studied four different methods for deriving explicit joint distributions from MLMs, focusing on the pairwise language model setting where it is possible to compute exact distributional properties. We find that the Arnold–Gokhale (AG)
approach, which finds a joint whose conditionals are closest to the unary conditionals of an MLM,
works best. Indeed, our results indicate that said conditionals can attain lower perplexity than the unary conditionals of the original MLM. It would be interesting to explore whether explicitly regularizing the conditionals to be compatible during MLM training would lead to better modeling of the distribution of language.
## 7 Limitations
Our study illuminates the deficiencies of the MRF
approach and applies statistically-motivated approaches to craft more performant probabilistic models. However, it is admittedly not clear how these insights can immediately be applied to improve downstream NLP tasks. We focused on models over pairwise tokens in order to avoid sampling and work with exact distributions for the various approaches (MRF, HCB, AG). However, this limits the generality of our approach (e.g., we cannot score full sentences). We nonetheless believe that our empirical study is interesting on its own and suggests new paths for developing efficient and faithful MLMs.
## Ethics Statement
We foresee no ethical concerns with this work.
## Acknowledgements
We thank the anonymous reviewers for their helpful comments. This research is supported in part by funds from the MLA@CSAIL initiative and the MIT-IBM Watson AI Lab. LTH acknowledges support from the Michael Athans fellowship fund.
## References
Barry C. Arnold, Enrique Castillo, and José María Sarabia. 2002. Exact and near compatibility of discrete conditional distributions. *Computational Statistics &*
Data Analysis, 40(2):231–252.
Barry C. Arnold and Dattaprabhakar V. Gokhale. 1998.
Distributions most nearly compatible with given families of conditional distributions. *Test*, 7(2):377–390.
Barry C. Arnold and James S. Press. 1989. Compatible conditional distributions. *Journal of the American* Statistical Association, 84(405):152–156.
Yoshua Bengio, Éric Thibodeau-Laufer, Guillaume Alain, and Jason Yosinski. 2014. Deep generative stochastic networks trainable by backprop. In *Proceedings of the 31st International Conference on Machine Learning*, volume 32 of *Proceedings of Machine Learning Research*, pages 226–234, Bejing, China. PMLR.
Yoshua Bengio, Li Yao, Guillaume Alain, and Pascal Vincent. 2013. Generalized denoising auto-encoders as generative models. In *Proceedings of the 26th* International Conference on Neural Information Processing Systems, NIPS, page 899–907, Red Hook, New York, USA. Curran Associates Inc.
Julian Besag. 1974. Spatial interaction and the statistical analysis of lattice systems. Journal of the Royal Statistical Society, 36(2):192–236.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference.
In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics.
Stanley F. Chen and Joshua Goodman. 1998. An empirical study of smoothing techniques for language modeling. Technical report, Harvard University.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota, USA. Association for Computational Linguistics.
Li Du, Lucas Torroba Hennigen, Tiago Pimentel, Clara Meister, Jason Eisner, and Ryan Cotterell. 2022. A
measure-theoretic characterization of tight language models.
Andrew Gelman and Terence P. Speed. 1993. Characterizing a joint probability distribution by conditionals.
Journal of the Royal Statistical Society, 55(1):185–
188.
Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. 2019. Mask-Predict: Parallel decoding of conditional masked language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6112–
6121, Hong Kong, China. Association for Computational Linguistics.
Kartik Goyal, Chris Dyer, and Taylor Berg-Kirkpatrick.
2022. Exposing the implicit energy networks behind masked language models via Metropolis–Hastings.
In *International Conference on Learning Representations*.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. DeBERTa: Decoding-enhanced BERT with Disentangled Attention. In *International* Conference on Learning Representations.
David Heckerman, Max Chickering, Chris Meek, Robert Rounthwaite, and Carl Kadie. 2000. Dependency networks for inference, collaborative filtering, and data visualization. *Journal of Machine Learning* Research, 1:49–75.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. *CoRR*.
Daniel Lowd. 2012. Closed-form learning of markov networks from dependency networks. In *Proceedings* of the 28th Conference on Uncertainty in Artificial Intelligence, pages 533–542, Catalina Island, California, USA. Association for Uncertainity in Artificial Intelligence.
Shashi Narayan, Shay B. Cohen, and Mirella Lapata.
2018. Don't give me the details, just the summary!
Topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium. Association for Computational Linguistics.
Julian Salazar, Davis Liang, Toan Q. Nguyen, and Katrin Kirchhoff. 2020. Masked language model scoring. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 2699–2712, Online. Association for Computational Linguistics.
Chwan-Chin Song, Lung-An Li, Chong-Hong Chen, Thomas J. Jiang, and Kun-Lin Kuo. 2010. Compatibility of finite discrete conditional distributions.
Statistica Sinica, 20(1):423–440.
Alex Wang and Kyunghyun Cho. 2019. BERT has a mouth, and it must speak: BERT as a Markov random field language model. In *Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation*, pages 30–36, Minneapolis, Minnesota, USA. Association for Computational Linguistics.
Yuchung J. Wang and Kun-Lin Kuo. 2010. Compatibility of discrete conditional distributions with structural zeros. *Journal of Multivariate Analysis*, 101(1):191–
199.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Takateru Yamakoshi, Thomas Griffiths, and Robert Hawkins. 2022. Probing BERT's priors with serial reproduction chains. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3977–
3992, Dublin, Ireland. Association for Computational Linguistics.
Tom Young and Yang You. 2023. On the inconsistencies of conditionals learned by masked language models.
## A MLMs As Learning Conditional Marginals
One can show that the MLM training objective corresponds to learning to approximate the conditional marginals of language, i.e., the (single-position) marginals of language when we condition on any particular set of positions. More formally, consider an MLM parameterized by a vector θ ∈ Θ and some distribution µ(·) over positions to mask S ⊆ [T]. Then the MLM learning objective is given by:
$${\hat{\boldsymbol{\theta}}}={\underset{\boldsymbol{\theta}}{\operatorname{argsup}}}\operatorname*{\mathbb{E}}_{S\sim\mu(\cdot)}\operatorname*{\mathbb{E}}_{\mathbf{w}\sim p(\cdot)}\left[{\frac{1}{|S|}}\sum_{t\in S}\log q_{t|{\overline{{S}}}}(w_{t}\mid\mathbf{w}_{{\overline{{S}}}};{\boldsymbol{\theta}})\right],$$
where $p(\cdot)$ denotes the true data distribution. Analogously, let $p_{S|\overline{S}}(\cdot \mid \mathbf{w}_{\overline{S}})$ and $p_{\overline{S}}(\cdot)$ denote the conditionals and marginals of the data distribution, respectively. Then the above can be rewritten as:

$$\hat{\boldsymbol{\theta}} = \operatorname*{argsup}_{\boldsymbol{\theta}} \mathop{\mathbb{E}}_{S\sim\mu(\cdot)} \mathop{\mathbb{E}}_{\mathbf{w}_{\overline{S}}\sim p_{\overline{S}}(\cdot)}\left[\frac{1}{|S|}\sum_{t\in S} \mathop{\mathbb{E}}_{\mathbf{w}_{S}\sim p_{S|\overline{S}}(\cdot)}\left[\log q_{t|\overline{S}}(w_{t} \mid \mathbf{w}_{\overline{S}}; \boldsymbol{\theta})\right]\right] = \operatorname*{arginf}_{\boldsymbol{\theta}} \mathop{\mathbb{E}}_{S\sim\mu(\cdot)} \mathop{\mathbb{E}}_{\mathbf{w}_{\overline{S}}\sim p_{\overline{S}}(\cdot)}\left[\frac{1}{|S|}\sum_{t\in S} \mathrm{KL}\bigl(p_{t|\overline{S}}(\cdot \mid \mathbf{w}_{\overline{S}}) \,||\, q_{t|\overline{S}}(\cdot \mid \mathbf{w}_{\overline{S}}; \boldsymbol{\theta})\bigr)\right].$$

Thus, we can interpret MLM training as learning to approximate the conditional marginals of language, i.e., $\forall S \subseteq [T]$ and $\forall t \in S$, in the limit we would expect that, for any observed context $\mathbf{w}_{\overline{S}}$, we have $q_{t|\overline{S}}(\cdot \mid \mathbf{w}_{\overline{S}}) \approx p_{t|\overline{S}}(\cdot \mid \mathbf{w}_{\overline{S}})$.
## B Unfaithful MRFs
Here we show that even if the unary conditionals used in the MRF construction are compatible (Arnold and Press, 1989), the unary conditionals of the probabilistic model implied by the MRF construction can deviate (in the KL sense) from the true conditionals. This is important because (i) it suggests that we might do better (at least in terms of U-PPL) by simply sticking to the conditionals learned by MLM, and
(ii) this is not the case for either the HCB or the AG constructions, i.e., if we started with the correct conditionals, HCB and AG's joint would be compatible with the MLM. Formally,

**Proposition B.1.** Let $w_1, w_2 \in \mathcal{V}$ and further let $p_{1|2}(\cdot \mid w_2), p_{2|1}(\cdot \mid w_1)$ be the true (i.e., population) unary conditional distributions. Define an MRF as $q_{1,2}(w_1, w_2) \propto p_{1|2}(w_1 \mid w_2)\, p_{2|1}(w_2 \mid w_1)$, and let $q_{1|2}(\cdot \mid w_2), q_{2|1}(\cdot \mid w_1)$ be the conditionals derived from the MRF. Then there exist $p_{1|2}, p_{2|1}$ such that
$$\mathrm{KL}(p_{1|2}(\cdot\mid w_{2})\mid\mid q_{1|2}(\cdot\mid w_{2}))>0.$$
*Proof.* Let $w_2 \in \mathcal{V}$ be arbitrary. We then have:

$$q_{1|2}(w_{1}\mid w_{2})=\frac{p_{1|2}(w_{1}\mid w_{2})\,p_{2|1}(w_{2}\mid w_{1})}{\sum_{w^{\prime}\in\mathcal{V}}p_{1|2}(w^{\prime}\mid w_{2})\,p_{2|1}(w_{2}\mid w^{\prime})}$$

Now, consider the KL between the true unary conditionals and the MRF unary conditionals:

$$\mathrm{KL}(p_{1|2}(\cdot\mid w_{2})\,||\,q_{1|2}(\cdot\mid w_{2}))=\sum_{w\in\mathcal{V}}p_{1|2}(w\mid w_{2})\log\frac{p_{1|2}(w\mid w_{2})}{q_{1|2}(w\mid w_{2})}=\sum_{w\in\mathcal{V}}p_{1|2}(w\mid w_{2})\log\frac{\sum_{w^{\prime}\in\mathcal{V}}p_{1|2}(w^{\prime}\mid w_{2})\,p_{2|1}(w_{2}\mid w^{\prime})}{p_{2|1}(w_{2}\mid w)}$$
$$=\log\mathbb{E}_{w\sim p_{1|2}(\cdot\mid w_{2})}[p_{2|1}(w_{2}\mid w)]-\mathbb{E}_{w\sim p_{1|2}(\cdot\mid w_{2})}[\log p_{2|1}(w_{2}\mid w)]$$

This term is the Jensen gap, and in general it can be non-zero. To see this, suppose $\mathcal{V}=\{a,b\}$ and consider the joint

$$p_{1,2}(w_{1},w_{2})={\begin{cases}{\frac{97}{100}}&{w_{1},w_{2}=a}\\ {\frac{1}{100}}&{{\mathrm{otherwise}}}\end{cases}}$$

with corresponding conditionals $p_{2|1}(x\mid b)=p_{1|2}(x\mid b)=\frac{1}{2}$ for all $x\in\mathcal{V}$ and

$$p_{2|1}(x\mid a)=p_{1|2}(x\mid a)={\begin{cases}{\frac{97}{98}}&{x=a}\\ {\frac{1}{98}}&{x=b}\end{cases}}$$

Now, take $w_2=b$. We then have

$$\mathrm{KL}(p_{1|2}(\cdot\mid b)\,||\,q_{1|2}(\cdot\mid b))=\log\mathbb{E}_{w\sim p_{1|2}(\cdot\mid b)}[p_{2|1}(b\mid w)]-\mathbb{E}_{w\sim p_{1|2}(\cdot\mid b)}[\log p_{2|1}(b\mid w)]$$
$$=\log\!\left(\tfrac{1}{2}\cdot\tfrac{1}{98}+\tfrac{1}{2}\cdot\tfrac{1}{2}\right)-\tfrac{1}{2}\!\left(\log\tfrac{1}{98}+\log\tfrac{1}{2}\right)=\log\!\left(\tfrac{1}{196}+\tfrac{1}{4}\right)-\tfrac{1}{2}\log\tfrac{1}{196}\approx1.27$$

which demonstrates that the KL can be non-zero.
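The counterexample above is easy to check numerically; the following short script (our own verification, not part of the paper's code) reproduces the KL of roughly 1.27 nats:

```python
import numpy as np

p_joint = np.array([[0.97, 0.01],
                    [0.01, 0.01]])                      # p_{1,2}(w1, w2) with indices {a: 0, b: 1}

p_1g2 = p_joint / p_joint.sum(axis=0, keepdims=True)    # p_{1|2}(w1 | w2), columns sum to 1
p_2g1 = p_joint / p_joint.sum(axis=1, keepdims=True)    # p_{2|1}(w2 | w1), rows sum to 1

q_unnorm = p_1g2 * p_2g1                                # MRF potential: p_{1|2}(w1|w2) * p_{2|1}(w2|w1)
q_1g2 = q_unnorm / q_unnorm.sum(axis=0, keepdims=True)  # MRF conditional q_{1|2}(w1 | w2)

w2 = 1  # condition on w2 = b
kl = np.sum(p_1g2[:, w2] * np.log(p_1g2[:, w2] / q_1g2[:, w2]))
print(kl)  # ~1.27
```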
## C Arnold–Gokhale Algorithm
Arnold and Gokhale (1998) study the problem of finding a near-compatible joint from unary conditionals, and provide an algorithm for the case of $|S| = 2$. The algorithm initializes the starting pairwise distribution $q^{\mathrm{AG}(1)}_{a,b|\overline{S}}(\cdot, \cdot \mid \mathbf{w}_{\overline{S}})$ to be uniform, and performs the following update until convergence:
$$q_{a,b}^{\text{AG(t+1)}}(w_{a},w_{b}\mid\mathbf{w}_{\overline{S}})\propto\frac{q_{a|b,\overline{S}}(w_{a}\mid w_{b},\mathbf{w}_{\overline{S}})+q_{b|a,\overline{S}}(w_{b}\mid w_{a},\mathbf{w}_{\overline{S}})}{\left(q_{a}^{\text{AG(t)}}(w_{a}\mid\mathbf{w}_{\overline{S}})\right)^{-1}+\left(q_{b}^{\text{AG(t)}}(w_{b}\mid\mathbf{w}_{\overline{S}})\right)^{-1}}.\tag{5}$$
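A possible NumPy rendering of this fixed-point update for the pairwise case is sketched below (our own implementation sketch, not the authors' code); `cond_a[i, j]` and `cond_b[j, i]` store the two unary conditional tables:

```python
import numpy as np

def arnold_gokhale(cond_a, cond_b, num_iters=50):
    """Near-compatible pairwise joint from two conditional tables, Eq. (5).

    cond_a[i, j] = q_{a|b}(i | w_b = j, context)
    cond_b[j, i] = q_{b|a}(j | w_a = i, context)
    Returns q[i, j] ~= q^AG_{a,b}(w_a = i, w_b = j | context).
    """
    V = cond_a.shape[0]
    q = np.full((V, V), 1.0 / V**2)                # uniform initialization
    numer = cond_a + cond_b.T                      # q_{a|b}(i|j) + q_{b|a}(j|i)
    for _ in range(num_iters):
        marg_a = q.sum(axis=1, keepdims=True)      # current marginal q^AG(t)_a(w_a)
        marg_b = q.sum(axis=0, keepdims=True)      # current marginal q^AG(t)_b(w_b)
        q = numer / (1.0 / marg_a + 1.0 / marg_b)  # Eq. (5) update (unnormalized)
        q /= q.sum()                               # renormalize to a proper joint
    return q
```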
## D Qualitative Example Of MRF Underperformance
This example from SNLI qualitatively illustrates a case where both the unary and pairwise perplexities from the MRF underperform the MLM: "The [MASK]$_1$ [MASK]$_2$ at the casino", where the tokens "man is" are masked. In this case, both MRFs assign virtually zero probability mass to the correct tokens, while the MLM assigns orders of magnitude more (around 0.2% of the mass of the joint). Upon inspection, this arises because $q_{2|1,\overline{S}}(\text{is} \mid \text{man}) \approx 0.02$ and $q_{1|2,\overline{S}}(\text{man} \mid \text{is}) \approx 2 \times 10^{-5}$, which makes the numerator of $q^{\mathrm{MRF}}_{1,2|\overline{S}}(\text{man}, \text{is})$ be $\approx 0$. The MRF could still assign high probability to this pair if the denominator is also $\approx 0$, but in this case we have $q_{2|1,\overline{S}}(\text{was} \mid \text{man}) \approx 0.33$ and $q_{1|2,\overline{S}}(\text{man} \mid \text{was}) \approx 0.03$, which makes the denominator well above 0. This causes the completion "man is" to have disproportionately little mass in the joint compared to other combinations ("man was") that were ascribed more mass by BERT's unary conditionals.
## E Token Distance Analysis
We also explore the effect of the distance between masked tokens on the pairwise negative log-likelihood
(PNLL, lower is better; note this is equivalent to the log of the P-PPL) of the joints built using the different approaches we considered. We considered two different kinds of distance functions between tokens: (i)
the absolute difference in the positions between the two masked tokens, and (ii) their syntactic distance
(obtained by running a dependency parser on unmasked sentences).
We plot the results in Fig. 1 (SNLI) and Fig. 2 (XSUM). Note that the black bars denote the number of datapoints with that distance between the two masked tokens, where a syntactic distance of 0 means that the two masked tokens belong to the same word, whereas a token distance of 0 means that the two masked tokens are adjacent. The graphs indicate that the language modeling performance improvement
(compared to using the MLM joint) is most prominent when masked tokens are close together, which is probably because when the masked tokens are close together they are more likely to be dependent. In this case, AG tends to do best, HCB and MRF tend to do similarly, followed by MRF-L and, finally, the conditionally independent MLM, which follows the trends observed in the paper.
[Figure 1: PNLL vs. token distance and syntactic distance between the masked tokens on SNLI; black bars show the number of datapoints at each distance.]

[Figure 2: PNLL vs. token distance and syntactic distance between the masked tokens on XSUM; black bars show the number of datapoints at each distance.]
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 6
✓ A2. Did you discuss any potential risks of your work?
Section 7
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3
✓ B1. Did you cite the creators of artifacts you used?
Section 3
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Available online
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Consistent with intended use

B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 3
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 3
## C ✓ **Did You Run Computational Experiments?** Section 4
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Available online
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3 C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Not applicable. Left blank.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 3
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
mao-etal-2023-unitrec | {U}ni{TR}ec: A Unified Text-to-Text Transformer and Joint Contrastive Learning Framework for Text-based Recommendation | https://aclanthology.org/2023.acl-short.100 | Prior study has shown that pretrained language models (PLM) can boost the performance of text-based recommendation. In contrast to previous works that either use PLM to encode user history as a whole input text, or impose an additional aggregation network to fuse multi-turn history representations, we propose a unified local- and global-attention Transformer encoder to better model two-level contexts of user history. Moreover, conditioned on user history encoded by Transformer encoders, our framework leverages Transformer decoders to estimate the language perplexity of candidate text items, which can serve as a straightforward yet significant contrastive signal for user-item text matching. Based on this, our framework, UniTRec, unifies the contrastive objectives of discriminative matching scores and candidate text perplexity to jointly enhance text-based recommendation. Extensive evaluation shows that UniTRec delivers SOTA performance on three text-based recommendation tasks. | # Unitrec: A Unified Text-To-Text Transformer And Joint Contrastive Learning Framework For Text-Based Recommendation
Zhiming Mao1,2, Huimin Wang1,3, Yiming Du1,2**, Kam-Fai Wong**1,2 1The Chinese University of Hong Kong, Hong Kong, China 2MoE Key Laboratory of High Confidence Software Technologies, China 3Jarvis Lab, Tencent, Shenzhen, China
{zmmao,ydu,kfwong}@se.cuhk.edu.hk [email protected]
## Abstract
Prior study has shown that pretrained language models (PLM) can boost the performance of text-based recommendation. In contrast to previous works that either use PLM to encode user history as a whole input text, or impose an additional aggregation network to fuse multiturn history representations, we propose a unified local- and global-attention Transformer encoder to better model two-level contexts of user history. Moreover, conditioned on user history encoded by Transformer encoders, our framework leverages Transformer decoders to estimate the language perplexity of candidate text items, which can serve as a straightforward yet significant contrastive signal for user-item text matching. Based on this, our framework, UniTRec, unifies the contrastive objectives of discriminative matching scores and candidate text perplexity to jointly enhance text-based recommendation. Extensive evaluation shows that UniTRec delivers SOTA performance on three text-based recommendation tasks.1
## 1 Introduction
Text-based recommendation (Li et al., 2010; Gu et al., 2016; Okura et al., 2017; Malkiel et al., 2020)
aims to recommend relevant textual content (e.g.,
news articles, Twitter posts) to people based on their behaviors as represented in historical log texts.
For instance, engagement recommendation (Cheng et al., 2022) on social media (e.g., Twitter and Reddit) helps users discover and engage with interested threads by modeling their browsing history.
Pretrained language models (Devlin et al., 2019; Brown et al., 2020) have made waves in recent text-based recommendation research (Zhang et al.,
2021; Qi et al., 2022; Geng et al., 2022). The most common practice is using PLM encoders
(BERT family) to learn representations of user history and candidate item texts. Recommendation matching scores are computed over the user and item representations and finally optimized by noise contrastive estimation (NCE) loss (Gutmann and Hyvärinen, 2010) for ranking multiple candidates.
Unlike encoding a single text, using a PLM to encode the multi-turn texts of user history is nontrivial.
Existing works (Malkiel et al., 2020; Qi et al., 2022; Geng et al., 2022) concatenate multi-turn history texts as a whole input text, then use one PLM encoder to learn the holistic user representation. This is a standard PLM encoding manner but ignores the relation among history turns, as all word tokens from different history turns are *equally attended*2.
In contrast, previous studies point out that learning the relation among user history turns is also beneficial (Zeng et al., 2020; Qi et al., 2021). Another approach is using PLM encoders to learn representations from multi-turn history texts, followed by an additional aggregation network to fuse the multi-turn representations (Wu et al., 2021; Li et al., 2022). However, the imposed aggregation networks (with newly initialized parameters)
weaken the representation power of PLM encoders which are already pretrained on large-scale corpora.
This work introduces UniTRec, a Unified text-totext Transformer framework for text-based Recommendation. In the encoder component of UniTRec, we design local- and global-attention to learn user history representations through tailored attention masking, which aims to jointly model word-level and turn-level relations of user history. UniTRec can utilize the full power of PLM encoders because it preserves the intact structure of PLM encoders without newly imposed parameters.
Different from most previous works that predict user-candidate matching scores solely based on the representations learned by Transformer encoders, we argue that conditioned on user representations 1Our code is available at https://github.com/Veasonsilverbullet/UniTRec.
2There is no inductive bias of turn-level and history-level relations introduced to Transformer self-attention computation, where each token plays an equal role.
![1_image_0.png](1_image_0.png)
learned by Transformer encoders, candidate text perplexity (PPL) estimated by pretrained Transformer decoders is also a straightforward yet significant signal for text-based recommendation. As shown in Figure 1, we hypothesize that the candidate text perplexity estimated by pretrained LM
decoders can directly measure the text matching degree between user history and candidate texts. It is because the perplexity estimates the likelihood of candidate texts based on encoder outputs, which naturally indicates the probabilities of candidate texts given the user history. Besides, UniTRec can use the last hidden states of Transformer decoders to directly predict matching scores. Hence, this work unifies the contrastive objectives of discriminative matching scores and candidate text perplexity to jointly enhance text-based recommendation.
The contributions of this work are: (1) We propose local- and global-attention to model two-level relation of user history without additional parameters, which enjoys the full power of PLM encoders.
(2) We introduce PLM perplexity to measure user-candidate text matching and unify the objectives of discriminative matching scores and candidate text perplexity to enhance text-based recommendation.
(3) Experiments on three text-based recommendation datasets validate the effectiveness of UniTRec.
## 2 Approach
## 2.1 Unified User-History Modeling
Formally, the multi-turn history of a user is represented as $H = [t_1, t_2, \ldots, t_N]$, and each turn text $t_i$ contains $|t_i|$ words as $t_i = [x_i^1, x_i^2, \ldots, x_i^{|t_i|}]$. UniTRec aims to unify learning word- and turn-level context representations in one Transformer encoder.
Local attention on word-level context. We first concatenate the multi-turn history texts as the input tokens $X = [x_1^1, x_1^2, \ldots, x_1^{|t_1|}, \ldots, x_N^1, x_N^2, \ldots, x_N^{|t_N|}]$.
Inspired by Dong et al. (2019), we tailor the attention masking in Transformer self-attention to learn the word-level context of each turn. Specifically, we allow word tokens from the same turn to attend to each other, while tokens from different turns are excluded from self-attention computation:
$$\mathbf{M}_{i,j}=\begin{cases}0,&\text{tokens }x_i\text{ and }x_j\text{ in the same turn}\\ -\infty,&\text{otherwise}\end{cases}$$
$$\mathrm{Attention}(Q,K,V)=\mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}}+\mathbf{M}\right)V\tag{1}$$
where $Q, K, V$ are the self-attention query, key, and value in Vaswani et al. (2017), and $\mathbf{M}$ is the mask matrix to achieve local attention inside each turn text.
The local self-attention blocks consist of L1 layers, by which original PLM encoders can be adapted to learn word-level context representations of turns.
Global attention on turn-level context. Over the local self-attention layers, we leverage global self-attention to model the relation among history turns. Specifically, tokens from all turns attend to each other in self-attention computation (by setting the mask matrix M = 0). In this way, Transformer encoders can perform global interaction among each token (and turn) to learn turn-level context representations of user history. There are L2 layers in the global self-attention blocks, which can also be inherited from PLM encoders directly.
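To make the two masking patterns concrete, the following sketch builds the additive attention masks from per-token turn indices. This is a minimal illustration rather than the released implementation: the turn_ids tensor, the absence of padding handling, and the shapes are our own assumptions.

```python
import torch

def local_attention_mask(turn_ids: torch.Tensor) -> torch.Tensor:
    """turn_ids: (batch, seq_len) integer turn index of each token.
    Returns an additive mask of shape (batch, seq_len, seq_len):
    0 for token pairs within the same turn, -inf across turns, as in Eq. (1)."""
    same_turn = turn_ids.unsqueeze(2) == turn_ids.unsqueeze(1)   # (B, L, L) bool
    mask = torch.zeros(same_turn.shape, dtype=torch.float)
    mask.masked_fill_(~same_turn, float("-inf"))
    return mask

def global_attention_mask(turn_ids: torch.Tensor) -> torch.Tensor:
    """All tokens attend to each other, i.e., the additive mask is all zeros."""
    bsz, seq_len = turn_ids.shape
    return torch.zeros(bsz, seq_len, seq_len)

# Example: one sequence made of two turns of lengths 3 and 2 (padding omitted).
turn_ids = torch.tensor([[0, 0, 0, 1, 1]])
print(local_attention_mask(turn_ids)[0])
```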
## 2.2 Joint Contrastive Ranking Objectives
Conditioned on the history representation, we input the candidate text to Transformer decoders to predict how likely it should be recommended. It is worth noting that Transformer decoders can naturally perform effective **cross-attention** interaction between history and candidate hidden states.
## 2.2.1 Objective On Discriminative Scores
Motivated by Lewis et al. (2020), we feed the last hidden state of the decoder output $h_T$ to an MLP score head which predicts the user-candidate matching score $S^d = \mathrm{ScoreHead}(h_T)$. The matching score is discriminative, as higher scores indicate higher user-candidate matching probabilities.
![2_image_0.png](2_image_0.png)
Following previous works (Li et al., 2022; Qi et al., 2022), we adopt negative sampling with NCE loss to optimize matching score prediction. Given the user history and its ground truth matched candidate $C_i$, UniTRec predicts the matching score as $S^{d+}_i$. In addition, $K$ unmatched negative candidates $\{C_j\}_{j=1}^{K}$ are sampled from the candidate set, and their matching scores are $\{S^{d-}_j\}_{j=1}^{K}$. The NCE loss is represented in a contrastive form:
$${\mathcal{L}}_{i}^{d}=-\log\frac{\exp(S_{i}^{d+})}{\exp(S_{i}^{d+})+\sum_{j=1}^{K}\exp(S_{j}^{d-})}\quad(2)$$
## 2.2.2 Objective On Candidate Text Perplexity
As aforementioned, UniTRec leverages perplexity to rank candidate texts. Since lower perplexity indicates higher user-candidate matching probability, regarding the candidate text $Y = [y_1, y_2, \ldots, y_T]$, we define the perplexity-based matching score $S^p$ as its negative perplexity3:
$$S^{p}=-{\rm PPL}(Y)=\frac{1}{T}\sum_{i=1}^{T}\log p_{\theta}(y_{i}|y_{<i})\tag{3}$$
where $p_{\theta}(\cdot)$ denotes the target probability output from the UniTRec Transformer decoder. Similar to Eq. (2), we optimize the perplexity-based matching score $S^p$ in the NCE loss form. As perplexity empirically varies in a wide range, we introduce a temperature parameter $\tau$ to balance the joint NCE loss gradients, following Radford et al. (2021).
$$\mathcal{L}_{i}^{p}=-\log\frac{\exp(\tau\cdot S_{i}^{p+})}{\exp(\tau\cdot S_{i}^{p+})+\sum_{j=1}^{K}\exp(\tau\cdot S_{j}^{p-})}\tag{4}$$
where $\tau$ is learnable and initialized to 1. On the training dataset $\mathcal{D}$, the joint contrastive learning objective is formulated as:
$${\mathcal{L}}=\sum\nolimits_{i=1}^{|{\mathcal{D}}|}\left({\mathcal{L}}_{i}^{d}+{\mathcal{L}}_{i}^{p}\right)\tag{5}$$
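The joint objective of Eqs. (2)-(5) can be sketched as follows for a single training instance, assuming the discriminative scores come from the score head and the perplexity-based score of each candidate is its mean token log-likelihood as in Eq. (3); the variable names and shapes are illustrative and not taken from the released code.

```python
import torch
import torch.nn.functional as F

def joint_nce_loss(disc_scores, token_logprobs, token_mask, tau):
    """disc_scores: (K+1,) score-head outputs, index 0 is the matched candidate.
    token_logprobs: (K+1, T) per-token log p_theta(y_i | y_<i) of each candidate.
    token_mask: (K+1, T) with 1 for real tokens and 0 for padding.
    tau: learnable temperature (scalar tensor)."""
    # Eq. (3): perplexity-based score as the mean token log-likelihood.
    ppl_scores = (token_logprobs * token_mask).sum(-1) / token_mask.sum(-1)
    target = torch.tensor([0])                                         # positive candidate
    loss_d = F.cross_entropy(disc_scores.unsqueeze(0), target)         # Eq. (2)
    loss_p = F.cross_entropy((tau * ppl_scores).unsqueeze(0), target)  # Eq. (4)
    return loss_d + loss_p                                             # Eq. (5), per instance

# Example with one matched candidate and K = 3 negatives:
disc = torch.randn(4)
logp = torch.log_softmax(torch.randn(4, 6), dim=-1)
mask = torch.ones(4, 6)
print(joint_nce_loss(disc, logp, mask, tau=torch.tensor(1.0)))
```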
## 2.3 Model Initialization And Inference
As UniTRec is a standard text-to-text Transformer, we initialize the parameters from pretrained BART (Lewis et al., 2020). In inference, UniTRec predicts the discriminative and perplexity-based scores for each candidate item, respectively. The two separate scores $S^d$ and $S^p$ are normalized, averaged, and finally ranked as the output. The detailed ranking process is provided in Appendix B.
## 3 Experiments
We evaluate UniTRec on three text-based recommendation tasks: 1) *NewsRec*, to recommend news articles to users based on their browsing history.
We use the *MIND-small* dataset (Wu et al., 2020)
for experiments. 2) *QuoteRec*, to recommend quotations to users based on their conversation history.
We use the *Reddit-quotation* dataset (Wang et al.,
2021) for experiments. 3) *EngageRec*, to recommend social media posts for users to engage with based on their comment history. We use the dataset released by Zeng et al. (2020) for experiments. Detailed dataset statistics are provided in Appendix A.
Implementation Details. The UniTRec encoder and decoder both consist of 6 Transformer layers with 768-dimensional hidden states and 12 attention heads. We set L1 = 3 and L2 = 3. We use AdamW optimizer (Loshchilov and Hutter, 2019)
to train UniTRec with cosine learning rate decay.
Baselines. We compare UniTRec with competitive baselines: 1) GRU4Rec (Balázs et al., 2016)
utilizes a GRU network to learn multi-turn history.
2) SASRec (Kang and McAuley, 2018) encodes user history with a self-attention based sequential model. 3) BERT4Rec (Sun et al., 2019) employs bidirectional self-attention to model user history. 4)
RoBERTa-Sim, a simple yet strong baseline
| Model | NewsRec MRR | NewsRec NDCG@5/10 | NewsRec HR@5/10 | QuoteRec MRR | QuoteRec NDCG@5/10 | QuoteRec HR@5/10 | EngageRec MRR | EngageRec NDCG@5/10 | EngageRec HR@5/10 |
|---|---|---|---|---|---|---|---|---|---|
| GRU4Rec | 32.91 | 36.20/42.53 | 50.33/68.35 | 34.08 | 34.65/37.93 | 44.45/54.63 | 2.12 | 1.04/1.51 | 1.27/2.65 |
| SASRec | 32.60 | 36.03/42.37 | 50.63/68.64 | 33.63 | 34.30/37.49 | 44.32/54.20 | 2.40 | 1.49/1.95 | 2.16/3.47 |
| BERT4Rec | 32.87 | 36.18/42.40 | 50.21/67.97 | 33.59 | 34.26/37.27 | 43.76/53.05 | 3.04 | 1.98/3.23 | 2.81/6.67 |
| RoBERTa-Sim | 32.96 | 36.47/42.81 | 51.06/69.08 | 37.13 | 37.96/41.18 | 48.14/58.06 | 3.74 | 2.66/3.75 | 4.42/**7.70** |
| UNBERT | 33.09 | 36.53/42.84 | 50.87/68.82 | 39.75 | 40.74/43.69 | 50.90/60.04 | 2.83 | 1.96/2.67 | 3.11/5.24 |
| UniTRec | 33.76 | 37.63/43.74 | 52.61/69.89 | 41.24 | 42.38/45.31 | 52.87/61.88 | 4.06 | 3.23/**4.29** | **4.58**/7.68 |

Table 1: Experiment results on the three text-based recommendation tasks (NewsRec, QuoteRec, EngageRec). MRR denotes mean reciprocal rank, NDCG denotes normalized discounted cumulative gain, and HR denotes hit ratio (presented in percentage). The overall performance of UniTRec is better than other baseline models with p-value < 0.05, validated by unpaired t-test.
| Model | NewsRec MRR | NewsRec NDCG@5/10 | NewsRec HR@5/10 | QuoteRec MRR | QuoteRec NDCG@5/10 | QuoteRec HR@5/10 | EngageRec MRR | EngageRec NDCG@5/10 | EngageRec HR@5/10 |
|---|---|---|---|---|---|---|---|---|---|
| UniTRec | 33.76 | 37.63/43.74 | 52.61/69.89 | 41.24 | 42.38/45.31 | 52.87/61.88 | 4.06 | 3.23/4.29 | 4.58/7.68 |
| w/o BART Init | 30.31 | 33.32/39.69 | 47.55/65.78 | 19.02 | 17.66/20.80 | 22.45/32.16 | 2.24 | 0.86/1.61 | 1.27/3.62 |
| w/o Local-Att | 33.34 | 37.22/43.32 | 52.28/69.54 | 40.44 | 41.63/44.56 | 52.09/61.15 | 3.92 | 3.19/4.15 | 4.38/7.36 |
| w/o Global-Att | 33.22 | 37.06/43.17 | 52.14/69.47 | 40.25 | 41.47/44.26 | 52.07/60.76 | 3.64 | 2.78/3.59 | 3.89/6.35 |
| Disc-Score only | 33.07 | 36.76/43.03 | 51.68/69.46 | 40.59 | 41.81/44.65 | 52.39/61.14 | 3.82 | 2.99/3.60 | 4.49/6.85 |
| PPL-Score only | 32.83 | 36.39/42.59 | 51.05/68.67 | 40.31 | 41.43/44.47 | 52.13/61.20 | 3.29 | 2.39/3.03 | 3.86/5.66 |

Table 2: Recommendation performance of ablation model variants.
mentioned in Qi et al. (2022), uses the hidden states of [CLS] tokens to measure user-candidate similarity.
5) UNBERT, implemented as Zhang et al. (2021),
concatenates history and candidate texts as the input to BERT and predicts matching scores from the final hidden states of [CLS] tokens.
Note that we do not consider other methods that use non-text inputs (e.g., user profile, text topic labels). For fair comparison, all baseline models use pretrained 12-layer RoBERTa-base (Liu et al.,
2019) as text encoders to learn embeddings of texts.
## 3.1 Main Results
Table 1 shows the performance of experiment models. From the results of *NewsRec* and *QuoteRec*,
we can see that UniTRec outperforms all baseline models by a clear margin. Also, RoBERTa-Sim and UNBERT, which directly use the [CLS] hidden states to represent user history, surpass other baselines that build additional aggregation networks upon the whole RoBERTa outputs. As displayed in the results, *EngageRec* is the most difficult task.
We inspect the dataset and find that the texts on social media contain too much noise (e.g., URLs and emojis), and the user history contains fewer turns. Nevertheless, UniTRec achieves better overall performance than other baseline models, validating its robustness to noisy text inputs and limited user history.
## 3.2 Ablation Studies And Analyses
We further conduct ablation studies on UniTRec.
The experiment results are reported in Table 2.
Initialization of UniTRec. We train UniTRec from scratch without initialization from pretrained BART (refer to w/o BART Init). The recommendation performance significantly drops in all three tasks, which indicates that acquiring effective text understanding ability from PLM is a necessary key to UniTRec performance.
Local and global attention. We investigate the function of two-level attention modules of the UniTRec history encoder. Concretely, we set L1 = 0 in w/o Local-Att and L2 = 0 in w/o Global-Att, where L1 + L2 = 6. We can observe that removing local and global attention from the original UniTRec history encoder both lead to suboptimal performance, while the performance drop is more significant in w/o Global-Att. The results justify the effectiveness of jointly modeling two-level history contexts through adapted Transformer attention masking without additional parameters.
## Discriminative And Perplexity-Based Objectives.
We probe into training UniTRec with standalone discriminative (Disc-Score only) and perplexitybased (PPL-Score only) contrastive objectives, respectively. We can see that the discriminative objective yields better performance than the perplexitybased objective. Besides, the model performance on both standalone objectives declines compared to the original joint objective. The results indicate that the discriminative and perplexity-based matching scores are complementary and can jointly provide more accurate signals of user history and candidate text matching for text-based recommendation.
## 4 Conclusion
We present a unified Transformer UniTRec for textbased recommendation. UniTRec learns two-level contexts of multi-turn user history and jointly exploits discriminative matching scores and candidate text perplexity as matching objectives. Empirical experiments on three text-based recommendation datasets corroborate the effectiveness of UniTRec.
## 5 Limitations
Our model only focuses on utilizing text information for recommendation, which is a key limitation of this work. In real-world settings, recommender systems are usually required to handle heterogeneous information inputs. UniTRec is a pure textbased recommender modeling user history and candidate texts as inputs. However, incorporating additional side information (e.g., user profile, text topic labels, and dwell time of user behaviors) could further improve the recommendation performance and alleviate the *cold start* problem. Furthermore, UniTRec only models two-level relations of user behavior history. Nonetheless, incorporating more user behavior information, such as implicit and negative feedback, could further enhance the recommendation performance.
## Acknowledgements
We appreciate constructive comments from anonymous reviewers. The research described in this paper is partially supported by CUHK under Project No. 3230366.
## References
Hidasi Balázs, Karatzoglou Alexandros, Baltrunas Linas, and Tikk Domonkos. 2016. Session-based recommendations with recurrent neural networks. In 4th International Conference on Learning Representations ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In *Advances in Neural Information Processing Systems*,
volume 33, pages 1877–1901. Curran Associates, Inc.
Daniel Cheng, Kyle Yan, Phillip Keung, and Noah A.
Smith. 2022. The engage corpus: A social media dataset for text-based recommender systems. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 1885–1889, Marseille, France. European Language Resources Association.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA,
June 2-7, 2019, Volume 1 (Long and Short Papers),
pages 4171–4186. Association for Computational Linguistics.
Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In *33rd Conference on Neural Information Processing Systems (NeurIPS 2019)*.
Shijie Geng, Shuchang Liu, Zuohui Fu, Yingqiang Ge, and Yongfeng Zhang. 2022. Recommendation as language processing (rlp): A unified pretrain, personalized prompt & predict paradigm (p5). In Proceedings of the 16th ACM Conference on Recommender Systems, RecSys '22, page 299–315, New York, NY,
USA. Association for Computing Machinery.
Youyang Gu, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Learning to refine text based recommendations. In *Proceedings of the 2016 Conference* on Empirical Methods in Natural Language Processing, pages 2103–2108, Austin, Texas. Association for Computational Linguistics.
Michael Gutmann and Aapo Hyvärinen. 2010. Noisecontrastive estimation: A new estimation principle for unnormalized statistical models. In *Proceedings* of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of *Proceedings of Machine Learning Research*, pages 297–304, Chia Laguna Resort, Sardinia, Italy. PMLR.
Wang-Cheng Kang and Julian McAuley. 2018. Selfattentive sequential recommendation. In *2018 IEEE*
International Conference on Data Mining (ICDM),
pages 197–206.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Jian Li, Jieming Zhu, Qiwei Bi, Guohao Cai, Lifeng Shang, Zhenhua Dong, Xin Jiang, and Qun Liu. 2022.
MINER: Multi-interest matching network for news recommendation. In *Findings of the Association for* Computational Linguistics: ACL 2022, pages 343–
352, Dublin, Ireland. Association for Computational Linguistics.
Yize Li, Jiazhong Nie, Yi Zhang, Bingqing Wang, Baoshi Yan, and Fuliang Weng. 2010. Contextual recommendation based on text mining. In *Coling* 2010: Posters, pages 692–700, Beijing, China. Coling 2010 Organizing Committee.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. In *arXiv preprint arXiv: 1907.11692*. arXiv.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *7th International* Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
Itzik Malkiel, Oren Barkan, Avi Caciularu, Noam Razin, Ori Katz, and Noam Koenigstein. 2020. RecoBERT:
A catalog language model for text-based recommendations. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, Online. Association for Computational Linguistics.
Shumpei Okura, Yukihiro Tagami, Shingo Ono, and Akira Tajima. 2017. Embedding-based news recommendation for millions of users. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, page 1933–1942, New York, NY, USA. Association for Computing Machinery.
Fanchao Qi, Yanhui Yang, Jing Yi, Zhili Cheng, Zhiyuan Liu, and Maosong Sun. 2022. QuoteR: A
benchmark of quote recommendation for writing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 336–348, Dublin, Ireland. Association for Computational Linguistics.
Tao Qi, Fangzhao Wu, Chuhan Wu, Peiru Yang, Yang Yu, Xing Xie, and Yongfeng Huang. 2021. HieRec:
Hierarchical user interest modeling for personalized news recommendation. In *Proceedings of the 59th* Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 5446–5456, Online. Association for Computational Linguistics.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. In Proceedings of the 38th International
Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 8748–8763. PMLR.
Fei Sun, Jun Liu, Jian Wu, Changhua Pei, Xiao Lin, Wenwu Ou, and Peng Jiang. 2019. Bert4rec: Sequential recommendation with bidirectional encoder representations from transformer. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM '19, page 1441–1450, New York, NY, USA. Association for Computing Machinery.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30, pages 5998–6008. Curran Associates, Inc.
Lingzhi Wang, Xingshan Zeng, and Kam-Fai Wong.
2021. Quotation recommendation and interpretation based on transformation from queries to quotations.
In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th* International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 754–758, Online. Association for Computational Linguistics.
Chuhan Wu, Fangzhao Wu, Tao Qi, and Yongfeng Huang. 2021. Empowering news recommendation with pre-trained language models. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '21, page 1652–1656, New York, NY, USA.
Association for Computing Machinery.
Fangzhao Wu, Ying Qiao, Jiun-Hung Chen, Chuhan Wu, Tao Qi, Jianxun Lian, Danyang Liu, Xing Xie, Jianfeng Gao, Winnie Wu, and Ming Zhou. 2020.
MIND: A large-scale dataset for news recommendation. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 3597–3606, Online. Association for Computational Linguistics.
Xingshan Zeng, Jing Li, Lu Wang, Zhiming Mao, and Kam-Fai Wong. 2020. Dynamic online conversation recommendation. In *Proceedings of the 58th Annual* Meeting of the Association for Computational Linguistics, pages 3331–3341, Online. Association for Computational Linguistics.
Qi Zhang, Jingjie Li, Qinglin Jia, Chuyuan Wang, Jieming Zhu, Zhaowei Wang, and Xiuqiang He. 2021. Unbert: User-news matching bert for news recommendation. In *Proceedings of the Thirtieth International* Joint Conference on Artificial Intelligence, IJCAI-21, pages 3356–3362. International Joint Conferences on Artificial Intelligence Organization. Main Track.
| Dataset | NewsRec | QuoteRec | EngageRec |
|-----------------------|-----------|------------|-------------|
| Avg. history turns | 26.09 | 4.24 | 3.29 |
| Avg. history tokens | 414.40 | 279.82 | 286.82 |
| Avg. candidates | 37.23 | 1111 | 7163 |
| Avg. candidate tokens | 16.15 | 19.11 | 102.42 |
Table 3: Statistics of three text-based recommendation training datasets. History and candidate **tokens** denote the number of BPE-tokenized tokens. The test set distribution is close to the training sets (except candidates of *EngageRec*) and hence omitted. Note that the max length of each history log is truncated to 1024 tokens.
## A Dataset Statistics
The detailed statistics of the three text-based recommendation datasets are displayed in Table 3. Note that we use news titles as the text inputs for *NewsRec* following Qi et al. (2021). *NewsRec* regards the user clicked and non-clicked news as candidate texts, while *QuoteRec* and *EngageRec* regard all potential quotation texts and post texts as candidates.
Different from Zeng et al. (2020) that formulates the task as recommending candidate users to given posts based on post content, we formulate the task as recommending candidate posts to given users based on user history.
## Algorithm 1 Candidate Ranking Process
Input: discriminative scores $S^d = \{S^d_1, S^d_2, \ldots, S^d_M\}$, perplexity-based scores $S^p = \{S^p_1, S^p_2, \ldots, S^p_M\}$.
Output: final averaged ranking $\bar{R}$.
1: Derive the normalized discriminative scores $S^d_{\mathrm{norm}} = \mathrm{softmax}(S^d)$.
2: Derive the normalized perplexity-based scores $S^p_{\mathrm{norm}} = \mathrm{softmax}(S^p)$.
3: Derive the geometric average scores $\bar{S} = \log(S^d_{\mathrm{norm}}) + \log(S^p_{\mathrm{norm}})$.
4: Sort the averaged scores $\bar{S}$ in descending order to derive the final ranking: $\bar{R} \leftarrow \mathrm{Rank}_{\mathrm{des}}(\bar{S})$.
5: **return** $\bar{R}$
## B Inference Ranking
Given the user history and $M$ candidate texts, UniTRec first predicts the discriminative ranking scores $S^d = \{S^d_1, S^d_2, \ldots, S^d_M\}$ and perplexity-based ranking scores $S^p = \{S^p_1, S^p_2, \ldots, S^p_M\}$ of the candidates. Algorithm 1 outlines an approach to aggregate the final ranking based on $S^d$ and $S^p$. Note that the function $\mathrm{Rank}(S)$4 denotes outputting the sorted order of elements in a score list $S$. There exist other ways to average the rankings of $S^d$ and $S^p$, which we leave for future work to explore.
4Rank(S) works similarly to scipy.stats.rankdata(). For example in ascending order, Rankasc({0.2, 0.6, 0.7, 0.4}) =
scipy.stats.rankdata([0.2, 0.6, 0.7, 0.4]) = [1, 3, 4, 2]
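A compact sketch of Algorithm 1 is given below, using scipy.stats.rankdata (mentioned in the footnote) and scipy.special.softmax for the normalization; the array names are ours and the snippet only illustrates the aggregation steps.

```python
import numpy as np
from scipy.special import softmax
from scipy.stats import rankdata

def aggregate_ranking(disc_scores, ppl_scores):
    """disc_scores, ppl_scores: arrays of shape (M,) over M candidates.
    Returns ranks of the averaged scores in descending order (rank 1 = best)."""
    s_d = softmax(disc_scores)                 # step 1: normalize S^d
    s_p = softmax(ppl_scores)                  # step 2: normalize S^p
    s_bar = np.log(s_d) + np.log(s_p)          # step 3: geometric average in log space
    return rankdata(-s_bar, method="ordinal")  # step 4: sort in descending order

print(aggregate_ranking(np.array([2.0, 0.5, 1.0]), np.array([1.5, 0.2, 2.0])))  # [1 3 2]
```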
## C Qualitative Analysis
We show randomly sampled outputs of UniTRec, for instance, demonstrated on the news recommendation and quote recommendation tasks. Table 4 and 5 showcase the qualitative samples.
![7_image_0.png](7_image_0.png)
![7_image_1.png](7_image_1.png)
| Candidate News Texts | $S^d$ | $S^p$ | $\bar{R}$ | Clicked |
|-----------------------------------------------------------------------------------------------------------------|-------|-------|----|----|
| Taylor Swift Rep Hits Back at Big Machine, Claims She's Actually Owed $7.9 Million in Unpaid Royalties | 0.095 | 0.069 | 4 | ✗ |
| Former North Carolina State, NBA player Anthony Grundy dies in stabbing, police say | 0.172 | 0.155 | 3 | ✗ |
| 13 Reasons Why's Christian Navarro Slams Disney for Casting "the White Guy" in The Little Mermaid | 0.048 | 0.065 | 7 | ✗ |
| Opinion: Colin Kaepernick is about to get what he deserves: a chance | 0.303 | 0.250 | 1 | ✓ |
| 3 Indiana judges suspended after a night of drinking turned into a White Castle brawl | 0.076 | 0.059 | 5 | ✗ |
| 66 Cool Tech Gifts Anyone Would Be Thrilled to Receive | 0.009 | 0.005 | 9 | ✗ |
| Police find 26 children behind false wall at Colorado day care | 0.034 | 0.116 | 6 | ✗ |
| I've been writing about tiny homes for a year and spent 2 nights in a 300-foot home to see what it is all about | 0.029 | 0.019 | 8 | ✗ |
| Report: Police investigating woman's death after Redskins' player Montae Nicholson took her to hospital | 0.235 | 0.261 | 2 | ✓ |
| (i) Qualitative Example-A from news recommendation. | | | | |
![7_image_2.png](7_image_2.png)
![7_image_3.png](7_image_3.png)
Table 4: Case analyses of news recommendation. *History News Texts* are sorted by user-clicked timestamps. $S^d$, $S^p$, and $\bar{R}$ are the normalized discriminative scores, perplexity-based scores, and average ranking as described in Appendix B. Clicked denotes the ground truth user-click labels. Note that the experiment history logs are anonymized and delinked, which is always the first priority of the recommendation study.
| Turn | History News Texts | | | |
|--------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------|----|----|
| #1 | Toddler dancing to celebrate 11 months cancer-free goes viral | | | |
| #2 | NFL Week 8 Power Rankings: Old-school football rules the day | | | |
| #3 | The 25 US cities where it's easiest to get a mortgage | | | |
| #4 | Burning questions for Cowboys vs Giants on "Monday Night Football" | | | |
| #5 | Who's the favorite to win 2019 NFL rushing title? | | | |
| #6 | Grading all 32 NFL teams heading into the last eight weeks of the 2019 season | | | |
| #7 | Jennifer Aniston looks amazing in a makeup-free selfie, plus more news | | | |
| #8 | This $12 million "mansion yacht" is made entirely of stainless steel and it's a first for the industry. Take a peek inside | | | |

| Candidate News Texts | $S^d$ | $S^p$ | $\bar{R}$ | Clicked |
|---|---|---|---|---|
| Opinion: Colin Kaepernick is about to get what he deserves: a chance | 0.330 | 0.400 | 1 | ✓ |
| U.S. Troops Will Die If They Remain in Syria, Bashar Al-Assad Warns | 0.024 | 0.011 | 10 | ✗ |
| Pete Davidson, Kaia Gerber Are Dating, Trying to Stay "Low Profile" | 0.064 | 0.033 | 6 | ✗ |
| The Hottest Tech Gifts This Holiday Season | 0.050 | 0.027 | 8 | ✗ |
| Taylor Swift Rep Hits Back at Big Machine, Claims She's Actually Owed $7.9 Million in Unpaid Royalties | 0.046 | 0.038 | 7 | ✗ |
| 13 Reasons Why's Christian Navarro Slams Disney for Casting "the White Guy" in The Little Mermaid | 0.060 | 0.096 | 4 | ✓ |
| Some believe Mason Rudolph, hit in head with his own helmet, isn't getting enough blame | 0.154 | 0.179 | 2 | ✓ |
| South Carolina teen gets life in prison for deadly elementary school shooting | 0.066 | 0.046 | 5 | ✗ |
| The Unlikely Star of My Family's Thanksgiving Table | 0.047 | 0.021 | 9 | ✗ |
| Police investigating woman's death after Redskins' player Montae Nicholson took her to hospital | 0.158 | 0.149 | 3 | ✗ |
| (ii) Qualitative Example-B from news recommendation. | | | | |
```
Turn Conversation Threading History
#1 I own an FJ. It's a great car and even on stockies. It s great offroad.
#2 I feel bad for you that you run the risk of being associated with the typical FJ owner.
#3 What is a typical FJ owner? I've not heard anything bad about FJ owners.
#4 It's like someone who drives a jeep wrangler in NYC. There's no need. Tons of FJ owners do that have it and not use it for what it's made for.
#5 God forbid someone likes the design of a car and doesn't use it offroad.
#6 Then buy a much more economic environmentalist friendly version. If you buy something and always use it for much less than it's purpose,
why buy it?
#7 Or people can buy whatever the hell they want because it's their money and not yours.
#8 You're entirely right. Just like people can be rude just because you can do it, because you have the ability but why should you ass.
#9 I wasn't aware that somebody buying a vehicle that they like and you don't was morally wrong.
#10 I love FJs. It's perfectly fine to buy whatever you think looks nice.
```
| Candidate Quote Texts | $S^d$ | $S^p$ | $\bar{R}$ | Ground truth |
|--------------------------------------------------------------------------------------------|-------|-------|----|----|
| Beauty is in the eye of the beholder. | 0.480 | 0.471 | 1 | ✓ |
| A fool and his money are soon parted. | 0.176 | 0.140 | 2 | |
| Form follows function. | 0.051 | 0.046 | 3 | |
| Everything is worth what its purchaser will pay for it. | 0.040 | 0.058 | 4 | |
| Because it's there. | 0.038 | 0.029 | 5 | |
| You can't fix stupid. | 0.021 | 0.034 | 6 | |
| The lady doth protest too much, methinks. | 0.022 | 0.013 | 7 | |
| It's all about the money. | 0.020 | 0.013 | 8 | |
| Anybody driving slower than you is an idiot, and anyone going faster than you is a maniac? | 0.012 | 0.018 | 9 | |
| Opportunity is missed by most people. | 0.018 | 0.008 | 10 | |
| (iii) Qualitative Example-C from quote recommendation. | | | | |
Turn *Conversation Threading History*
\#1 Society is becoming more efficient, which is a good thing. People should realize there's no point in holding back this technology just for the sake of keeping people employed. If this were beneficial, then calculators and computers shouldn't exist either.
\#2 One small problem is that people need to pay rent and eat. \#3 So we should ditch computers and go back to the typing pool? Should we get rid of heavy earth moving equipment and just use hundreds of guys with hand tools to build everything? It would employ a hell of a lot more people.
\#4 No one's saying that. I don't think anyone is really against automation, but as it increases, there are soon going to be more people that there are jobs that actually need doing. I actually believe we've already passed this point. So what do we do with the people, who can't get jobs simply because there are none? It's an issue that need assessed immediately.
\#5 Tons and tons and tons of American jobs have been replaced by new jobs created by technology or in support of technology years ago. An office might have needed people to handle filing paperwork, keeping it in order, and retrieving, where now a document management system has made them completely redundant. The upshot is that to access that DMS, people are out there selling computers, installing computers, servicing computers, and supporting end users building the servers installing, supporting monitoring backing them up, and all that jobs that come in support of those progress is progress. And it advances human efficiency and knowledge. These are just one or two examples, but the answer is not to kill progress. Other countries simply won't. The answer is to push education to the forefront, so people are prepared for these jobs and whatever other challenges the future may bring.
\#6 This is true. But it s unfortunate technological advances tend to reduce low skill jobs and replace them with high skill jobs. It would feel more fair if the low skilled workers could all do training programs and become high skilled workers. But this isn't really the case. Those jobs end up being taken by someone who had better educational opportunities or someone younger who still has time to take advantage of education.
\#7 The reality is the reality. Unfortunate or not educating people will create more educated people to handle high skill jobs, and I'll tell you being a desktop support technician isn't high skill. As that's where we push in the future, any amount of hand wringing won't change the facts. We must educate our people if we want to be a global leader in more than homelessness poverty.
\#8 Education won't matter. We are at the end of the job age at some point in the near future. We are going to have to deal with the fact that getting a job isn't a reality for a significant percentage of the population. Society will have to radically change as it did during the industrial revolution.
\#9 Much cheaper to heavily discourage having more children free abortions. Then in years there won't be so many useless people who can apparently be replaced by a simple robot.
\#10 Virtually every job will be replaced by automation name skilled trades that can't be automated. I imagine you'd be surprised at how hard this is. Are pharmacists useless, surgeons, accountants? I'd bet that your job is just as replaceable as these.
Table 5: Case analyses of quote recommendation. We demonstrate the candidate quotes of the top 10 rankings out of all candidates. Note that there is only one ground truth quote for each conversation history.
| Candidate Quote Texts | $S^d$ | $S^p$ | $\bar{R}$ | Ground truth |
|-------------------------------------------------------------------|-------|-------|------|----------------|
| There's no such thing as a free lunch. | 0.365 | 0.417 | 1 | |
| I can't predict the future. | 0.185 | 0.210 | 2 | ✓ |
| I have never let my schooling interfere with my education. | 0.104 | 0.059 | 3 | |
| Prevention is better than cure. | 0.044 | 0.083 | 4 | |
| Knowledge is power. | 0.059 | 0.052 | 5 | |
| Don't let schooling interfere with your education. | 0.044 | 0.043 | 6 | |
| Nature abhors a vacuum. | 0.036 | 0.024 | 7 | |
| There is no substitute for hard work. | 0.024 | 0.017 | 8 | |
| There are three kinds of lies: lies, damned lies, and statistics. | 0.022 | 0.013 | 9 | |
| You can't fix stupid. | 0.019 | 0.010 | 10 | |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 5
✗ A2. Did you discuss any potential risks of your work?
We see no concern about potential risks.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** The Abstract Provides The Link To Our Code.
B1. Did you cite the creators of artifacts you used?
Not applicable. Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
In the Abstract, a Github repository with documentation is released.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix A
## C ✓ **Did You Run Computational Experiments?** Section 3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 3
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 3 C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
fei-etal-2023-reasoning | Reasoning Implicit Sentiment with Chain-of-Thought Prompting | https://aclanthology.org/2023.acl-short.101 | While sentiment analysis systems try to determine the sentiment polarities of given targets based on the key opinion expressions in input texts, in implicit sentiment analysis (ISA) the opinion cues come in an implicit and obscure manner. Thus detecting implicit sentiment requires the common-sense and multi-hop reasoning ability to infer the latent intent of opinion. Inspired by the recent chain-of-thought (CoT) idea, in this work we introduce a Three-hop Reasoning (THOR) CoT framework to mimic the human-like reasoning process for ISA. We design a three-step prompting principle for THOR to step-by-step induce the implicit aspect, opinion, and finally the sentiment polarity. Our THOR+Flan-T5 (11B) pushes the state-of-the-art (SoTA) by over 6{\%} F1 on supervised setup. More strikingly, THOR+GPT3 (175B) boosts the SoTA by over 50{\%} F1 on zero-shot setting. |
## Reasoning Implicit Sentiment With Chain-Of-Thought Prompting∗
Hao Fei1, Bobo Li2, Qian Liu3, Lidong Bing4**, Fei Li**2†
, Tat-Seng Chua1 1Sea-NExT Joint Lab, School of Computing, National University of Singapore 2Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University 3Sea AI Lab, 4DAMO Academy, Alibaba Group [email protected], [email protected], [email protected], [email protected], [email protected], [email protected]
## Abstract
While sentiment analysis systems try to determine the sentiment polarities of given targets based on the key opinion expressions in input texts, in implicit sentiment analysis (ISA) the opinion cues come in an implicit and obscure manner. Thus detecting implicit sentiment requires the common-sense and multi-hop reasoning ability to infer the latent intent of opinion. Inspired by the recent chain-of-thought
(CoT) idea, in this work we introduce a *Threehop Reasoning* (THOR) CoT framework to mimic the human-like reasoning process for ISA. We design a three-step prompting principle for THOR to step-by-step induce the implicit aspect, opinion, and finally the sentiment polarity. Our THOR+Flan-T5 (11B) pushes the state-of-the-art (SoTA) by over 6% F1 on supervised setup. More strikingly, THOR+GPT3
(175B) boosts the SoTA by over 50% F1 on zero-shot setting. Our code is open at https:
//github.com/scofield7419/THOR-ISA.
## 1 Introduction
Sentiment analysis (SA) aims to detect the sentiment polarity towards a given target based on the input text. SA can be classified into explicit SA (ESA) and implicit SA (ISA), where the former type is the current mainstream task, in which the emotional expressions explicitly occur in texts
(Pontiki et al., 2014). Different from ESA, ISA
is much more challenging, because in ISA the inputs contain only factual descriptions with no explicit opinion expression directly given (Russo et al., 2015). For example, given a text '*Try the tandoori salmon!*', having no salient cue word, almost all existing sentiment classifier1 predicts a neutral polarity towards '*the tandoori salmon*'. Human can easily determine the sentiment states accurately, because we always grasp the real intent or opinion
![0_image_0.png](0_image_0.png)
Figure 1: Detecting the explicit and implicit sentiment polarities towards targets. Explicit opinion expression helps direct inference, while detecting implicit sentiment requires common-sense and multi-hop reasoning.
behind the texts. Thus, without truly understanding how the sentiment is aroused, traditional SA
methods are ineffective to ISA.
In fact, it is critical to first discover the hidden opinion contexts to achieve accurate ISA. For the explicit case\#1 in Fig. 1, it is effortless to capture the overall sentiment picture (e.g., '*environment*'
is the aspect, '*great*' is the opinion), and thus can precisely infer the *positive* polarity towards the given target *hotel*. Inspired by such fine-grained sentiment spirit (Xue and Li, 2018; Zhang et al.,
2021; Xu et al., 2020), we consider mining the implicit aspect and opinion states. For the implicit case\#2 in Fig. 1, if a model can first infer the key sentiment components, e.g., the latent aspect
'*taste*', latent opinion '*good and worth trying*', the inference of final polarity can be greatly eased. To reach the goal, the capabilities of **common-sense**
reasoning (i.e., infer what is '*tandoori salmon*')
and **multi-hop reasoning** (i.e., infer the aspect and then the opinion) are indispensable.
Fortunately, the recent great triumph of pretrained large-scale language models (LLMs) offers a promising solution. On the one hand, LLMs have been found to carry very rich world knowledge, showing extraordinary ability on commonsense understanding (Paranjape et al., 2021; Liu et al., 2022). On the other hand, the latest chain-of-
![1_image_0.png](1_image_0.png)
thought (CoT) idea has revealed the great potential of LMs' multi-hop reasoning (Wei et al., 2022; Zhou et al., 2022; Zhang et al., 2023), where an LLM with some prompts can do chain-style reasoning impressively. Built on top of all these successes, in this work we implement a Three-hop Reasoning CoT framework (namely THOR) for ISA. Based on an LLM, we design three prompts for three steps of reasoning, each of which respectively infers 1) the fine-grained aspect of the given target, 2) the underlying opinion towards the aspect, and 3) the final polarity. With such easy-to-hard incremental reasoning, the hidden contexts of the overall sentiment picture can be elicited step by step to achieve an easier prediction of final polarity, which effectively alleviates the difficulties of the task prediction.
To ensure the correctness of each reasoning step, we consider a self-consistency mechanism for CoT
inspired by Wang et al. (2022b), which is to select the candidate answers (at each step) with high voting consistency of inferred aspect and opinion. For supervised fine-tuning setup, we further propose a reasoning revising method. We use the intermediate reasoning answers as model inputs to predict the final labels, where the supervision from gold labels will teach LLM to generate more correct reasoning. On supervised fine-tuning setup, our Flan-T5 based THOR improves the current best-performing baseline by more than 6% in F1 score, and such margins are further magnified on zero-shot setup.
Most strikingly, our GPT3-based THOR with 175B
parameters boosts the baseline to a high-to 51.10%
increase of F1 score.
To sum up, this work contributes a multi-hop reasoning solution for implicit sentiment detection, which helps to achieve impressive improvement over the traditional non-reasoning methods. To our knowledge, this is the first attempt to successfully extend the CoT idea to the sentiment analysis community. Our method is simple yet effective, and can be broadly applied to other similar NLP problems without much effort.
## 2 Three-Hop Reasoning Framework
The task of SA (either ESA or ISA) is defined as: given a sentence X with a target term t ⊂
X, a model determines the sentiment polarity y towards t, i.e., positive, neutral or *negative*. We solve the task using an off-the-shelf LLM with prompt. For the standard prompt-based method, we can construct the following prompt template as LLM's input:
Given the sentence $X$, what is the sentiment polarity towards $t$?
The LLM should return the answer via:
$\hat{y}=\arg\max p(y|X, t)$.
## 2.1 Chain-Of-Thought Prompting
Now we consider the CoT-style prompt (Wei et al.,
2022; Fu et al., 2022) method for multi-step reasoning. Instead of directly asking LLM the final result of y, in our THOR (cf. Fig. 2) we hope the LLM infer the latent aspect and opinion information before answering the finale y. We here define the intermediate aspect term a and latent opinion expression o. We construct the three-hop prompts as follows.
Step 1. We first ask the LLM what aspect a is mentioned with the following template:
C1[Given sentence X], which specific aspect of t is possibly mentioned?
C1 is the first-hop prompt context. This step can be formulated as $A=\arg\max p(a|X, t)$, where A is the output text which explicitly mentions the aspect a.
Step 2. Now based on X, t and a, we ask LLM
to answer in detail what would be the underlying opinion o towards the mentioned aspect a:
C2[C1,A]. Based on the common sense, what is the implicit opinion towards the mentioned aspect of t, and why?
C2 is the second-hop prompt context which concatenates C1 and A. This step can be written as $O=\arg\max p(o|X, t, a)$, where O is the answer text containing the possible opinion expression o.
Step 3. With the complete sentiment skeleton (X,
t, a and o) as context, we finally ask the LLM to infer the final sentiment polarity towards t:
C3[C2,O]. Based on the opinion, what is the sentiment polarity towards t?
C3 is the third-hop prompt context. We note this step as $\hat{y}=\arg\max p(y|X, t, a, o)$.
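The three-hop prompting procedure above can be sketched as a simple chain of generation calls. Here llm_generate is a hypothetical stand-in for whatever decoding interface is used, and the prompt strings paraphrase C1-C3; this is an illustration, not the released implementation.

```python
def thor_three_hop(llm_generate, X: str, t: str) -> str:
    """Chain the three prompts; llm_generate(prompt) -> generated text (hypothetical)."""
    c1 = f'Given the sentence "{X}", which specific aspect of {t} is possibly mentioned?'
    A = llm_generate(c1)                 # step 1: infer the fine-grained aspect a
    c2 = (f"{c1} {A} Based on the common sense, what is the implicit opinion "
          f"towards the mentioned aspect of {t}, and why?")
    O = llm_generate(c2)                 # step 2: infer the latent opinion o
    c3 = f"{c2} {O} Based on the opinion, what is the sentiment polarity towards {t}?"
    return llm_generate(c3)              # step 3: infer the final polarity
```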
## 2.2 Enhancing Reasoning via Self-consistency
We further leverage the self-consistency mechanism (Wang et al., 2022b; Li et al., 2022b) to consolidate the reasoning correctness. Specifically, for each of the three reasoning steps, we set the LLM decoder to generate multiple answers, each of which is likely to give varied predictions of the aspect a, the opinion o, as well as the polarity y. At each step, those answers with high voting consistency of the inferred a, o or y are kept, and we select the one with the highest confidence as the context for the next step.
## 2.3 Reasoning Revising with Supervision
We can also fine-tune our THOR when the on-demand training set is available, i.e., the supervised fine-tuning setup. We devise a reasoning revising method. Technically, at each step we construct a prompt by concatenating 1) the initial context, 2) this step's reasoning answer text and 3) the final question, and feed it into the LLM to predict the sentiment label instead of going on to the next reasoning step. For example, at the end of step 1, we can assemble a prompt:
[C1,A, '*what is the sentiment polarity towards* t?'].
Under the supervision of gold labels, the LLM is taught to generate more correct intermediate reasoning that is helpful to the final prediction.
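Both mechanisms admit simple realizations on top of the chaining above. The sketch below is one plausible implementation, not the paper's exact procedure: majority voting stands in for the confidence-based selection, and `query_llm_sample` is a hypothetical wrapper that performs stochastic decoding.

```python
from collections import Counter

def self_consistent_answer(prompt: str, query_llm_sample, n: int = 5) -> str:
    # Sample n answers with stochastic decoding and keep the most consistent one.
    answers = [query_llm_sample(prompt) for _ in range(n)]
    votes = Counter(a.strip().lower() for a in answers)
    best, _ = votes.most_common(1)[0]
    return next(a for a in answers if a.strip().lower() == best)

def revising_prompt(context: str, step_answer: str, target: str) -> str:
    # Reasoning revising (supervised setup): after a reasoning step, jump directly
    # to the polarity question so that the gold label can supervise this step.
    return f"{context} {step_answer} What is the sentiment polarity towards {target}?"
```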
| Model | Restaurant All | Restaurant ISA | Laptop All | Laptop ISA |
|---|---|---|---|---|
| *State-of-the-art baselines* | | | | |
| BERT+SPC† (110M) | 77.16 | 65.54 | 73.45 | 69.54 |
| BERT+ADA† (110M) | 80.05 | 65.92 | 74.18 | 70.11 |
| BERT+RGAT† (110M) | 81.35 | 67.79 | 74.07 | 72.99 |
| BERTAsp+CEPT† (110M) | 82.07 | 67.79 | 78.38 | 75.86 |
| BERT+ISAIV† (110M) | 81.40 | 69.66 | 77.25 | 78.29 |
| BERTAsp+SCAPT† (110M) | 83.79 | 72.28 | 79.15 | 77.59 |
| *Prompt-based methods* | | | | |
| BERT+Prompt (110M) | 81.34 | 70.12 | 78.58 | 75.24 |
| Flan-T5+Prompt (250M) | 81.50 | 70.91 | 79.02 | 76.40 |
| Flan-T5+Prompt (11B) | 84.72 | 75.10 | 82.44 | 78.91 |
| *CoT-based methods* | | | | |
| Flan-T5+THOR (250M) | 82.98 | 71.70 | 79.75 | 67.63 |
| Flan-T5+THOR (11B) | **87.45** | **79.73** | **85.16** | **82.43** |
| w/o SelfConsistency | 86.03 | 77.68 | 84.39 | 80.27 |
| w/o Reason-Revising | 86.88 | 78.42 | 84.83 | 81.69 |

Table 1: F1 results on the supervised fine-tuning setup. Best results are marked in bold. Scores of models marked with † are copied from Li et al. (2021).
## 3 Experiments
Setups We experiment on the benchmark SemEval14 Laptop and Restaurant datasets (Pontiki et al., 2014), where all the instances are split into explicit and implicit sentiment by Li et al. (2021).
Since encoder-style BERT cannot generate text to support CoT, we use the encoder-decoder style Flan-T5 as our backbone LLM. We also test with GPT3 (Brown et al., 2020) and ChatGPT (Ouyang et al., 2022). We used four versions of Flan-T5: 250M (base), 780M (large), 3B (xl) and 11B (xxl), and four versions of GPT3: 350M, 1.3B, 6.7B and 175B. Note that GPT3 does not release its model parameters, and we use it in the prompting manner via the API. This also means that we cannot perform supervised fine-tuning with GPT3. We compare with the current best-performing baselines, including: BERT+SPC (Devlin et al., 2019),
BERT+ADA (Rietzler et al., 2020), BERT+RGAT
(Wang et al., 2020), BERTAsp+CEPT (Li et al.,
2021), BERT+ISAIV (Wang et al., 2022a) and BERTAsp+SCAPT (Li et al., 2021). We consider both the supervised fine-tuning and zero-shot setups, and adopt F1 as the evaluation metric. For the zero-shot setup, we re-implement the baselines using their released source code. Our experiments are conducted with 4 NVIDIA A100 GPUs.
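As a reference point, querying a Flan-T5 checkpoint with such prompts can be done with the HuggingFace transformers library roughly as follows. This is a minimal sketch; the checkpoint name and decoding settings are illustrative, not necessarily the configuration used in the experiments.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Illustrative checkpoint; the paper uses Flan-T5 base/large/xl/xxl variants.
name = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

def query_llm(prompt: str) -> str:
    inputs = tokenizer(prompt, return_tensors="pt")
    # Greedy decoding for a single deterministic answer; sampling (do_sample=True)
    # would be used when drawing multiple answers for self-consistency.
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```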
| Model | Restaurant All | Restaurant ISA | Laptop All | Laptop ISA |
|---|---|---|---|---|
| *State-of-the-art baselines* | | | | |
| BERT+SPC (110M) | 21.76 | 19.48 | 25.34 | 17.71 |
| BERT+RGAT (110M) | 27.48 | 22.04 | 25.68 | 18.26 |
| BERTAsp+SCAPT (110M) | 30.02 | 25.49 | 25.77 | 13.70 |
| *Prompt-based methods* | | | | |
| BERT+Prompt (110M) | 33.62 | 31.46 | 35.17 | 22.86 |
| Flan-T5+Prompt (250M) | 54.38 | 41.57 | 52.06 | 31.43 |
| Flan-T5+Prompt (11B) | 57.12 | 45.31 | 54.14 | 33.71 |
| *CoT-based methods* | | | | |
| Flan-T5+THOR (250M) | 55.86 | 41.84 | 52.52 | 32.40 |
| Flan-T5+THOR (3B) | 57.33 | 42.61 | 56.36 | 38.16 |
| Flan-T5+THOR (11B) | 61.87 | 52.76 | 58.27 | 40.75 |
| Flan-T5+ZeroCoT (11B) | 56.58 | 47.41 | 55.53 | 35.67 |
| GPT3+THOR (175B) | **81.96** | **76.55** | **76.04** | **73.12** |

Table 2: F1 results on the zero-shot setup.
Results on Supervised Fine-tuning The comparisons are shown in Table 1. It is interesting to see that BERT with prompt learning underperforms the SoTA baseline BERTAsp+SCAPT. Even Flan-T5-base (250M), with more than double the parameters, fails to beat the SoTA. BERTAsp+SCAPT is pre-trained on large-scale sentiment aspect-aware annotation data and thus shows a strong capability on SA. But with our THOR CoT prompting, Flan-T5-base clearly outperforms the SoTA. Further, when using the larger LLM, i.e., with 11B parameters, the vanilla prompt-based Flan-T5 surpasses the best baseline. More prominently, Flan-T5-11B with THOR shows significant boosts for ISA, i.e., 7.45% (=79.73−72.28) on Restaurant and 5.84% (=82.43−77.59) on Laptop, an average improvement of 6.65% (=(7.45+5.84)/2) F1. The ablations of the self-consistency and reasoning revising mechanisms also indicate their importance in our THOR method.
Results on Zero-shot Reasoning In Table 2 we compare the zero-shot performances. We find that the improvement of both the prompt-based and CoT-based methods over the current SoTA baseline increases dramatically. Overall, the CoT-based methods with our THOR show much more significant improvement on ISA. For example, our Flan-T5-11B THOR system gives over 30% average F1 improvement over the best-performing baseline (BERTAsp+SCAPT) on the two datasets. Most strikingly, when THOR is equipped in a super-large LLM, i.e., GPT3-175B, we observe an impressive improvement, close to the level reached by Flan-T5-11B THOR in the supervised setting (Table 1). Specifically, it boosts the SoTA results by 51.94% (=81.96−30.02) on Restaurant and 50.27% (=76.04−25.77) on Laptop, an average F1 leap of 51.10% (=(51.94+50.27)/2).

![3_image_0.png](3_image_0.png)

Figure 3: Influences of LLM scales (curves compare +Prompt and +THOR on the Restaurant and Laptop datasets).
Influence of Different Model Sizes of LLMs In Tables 1 and 2 we have witnessed the power of (very) large LLMs. In Fig. 3 we study the influence of different LLM scales. We see that with increasing model scale, the efficacy of our multi-hop reasoning prompting is exponentially amplified. This coincides with existing findings on CoT prompting methods (Wei et al., 2022; Zhou et al., 2022; Fu et al., 2022), i.e., the larger the LM, the more significant the improvement brought by CoT: when the LLM is sufficiently large, its capabilities for common-sense and multi-hop reasoning are greatly developed and strengthened.
Improving ChatGPT with THOR The recent release of ChatGPT has brought revolutionary advancement to the NLP and AI community. Here we compare the improvement brought by our THOR on GPT3 (175B) and ChatGPT, respectively. In Fig. 4 we show the results on 100 test instances. We can see that both LLMs show very high performance on ESA, and the enhancements by THOR are very limited there. However, prompt-based GPT3 and ChatGPT still fail frequently on ISA, where our THOR improves them very considerably.

![4_image_0.png](4_image_0.png)
Failure Analysis In Fig. 5 we show the error rates of the failure cases when using THOR, summarized into three error types. The Flan-T5-11B LLM gives a 48.27% error rate in the zero-shot setup, which goes down to 12.79% when fine-tuned with supervision. Unsupervised GPT3 (175B) gives a similarly low error rate to supervised Flan-T5, but the latter fails much more frequently due to an incapability of reasoning. In contrast to supervised Flan-T5, the majority of failures of unsupervised GPT3 come from problematic data annotations. Since supervised Flan-T5 is fine-tuned under the supervision of 'false' labels, it may actually learn spurious correlations while achieving higher testing accuracy.
## 4 Related Work
Sentiment analysis has long been a hot research topic in the NLP community (Pang and Lee, 2007; Dong et al., 2014; Shi et al., 2022). While explicit SA models can make predictions based on opinion expressions effortlessly, implicit SA is much trickier due to the hidden opinion characteristics (Li et al., 2021; Wang et al., 2022a), and ISA is ubiquitous in realistic scenarios. Although efforts have been made towards ISA (Li et al., 2021; Wang et al., 2022a), existing work is still limited to the traditional paradigm of inference. As aforementioned, ISA should be addressed via reasoning, i.e., common-sense and multi-hop reasoning. This work follows that intuition, targeting ISA with a multi-hop reasoning mechanism.
As a key branch of SA, fine-grained SA has been well explored (Wang et al., 2017; Li et al., 2018, 2022a). The idea of fine-grained SA is to break SA down into several key sentiment elements, including *target*, *aspect*, *opinion* and *sentiment polarity*, which together form a complete and detailed sentiment picture (Peng et al., 2020; Fei et al., 2022). This work draws on the same spirit as fine-grained SA: we believe the reasoning over implicit sentiment should be an incremental process, inferring the sentiment elements step by step and finally understanding the sentiment polarity in an easy-to-hard manner.
Language model pre-training has received increasing research attention for enhancing the utility of downstream applications (Raffel et al., 2020). Most recently, large-scale language models (LLMs) have shown great potential towards human-level intelligence, e.g., ChatGPT (Ouyang et al., 2022). LLMs have been extensively shown to exhibit extraordinary abilities in common-sense understanding (Paranjape et al., 2021; Liu et al., 2022) and multi-hop reasoning (Wei et al., 2022; Zhou et al., 2022). This work implements implicit sentiment reasoning on top of LLMs, based on the recently proposed chain-of-thought (CoT) idea.
CoT prompting is a gradient-free technique that induces large LMs to produce intermediate reasoning steps leading to the final answer. Wei et al. (2022) formally study CoT prompting in language models, eliciting LMs to generate a coherent series of intermediate reasoning steps that lead to the final answer to the original question.
## 5 Conclusion

In this paper, we present a *Three-hop Reasoning* prompting framework to achieve a chain-of-thought reasoning process for implicit sentiment analysis. Based on an existing LLM, we design three prompts for the three reasoning steps, which respectively infer the fine-grained aspect, the underlying opinion and the final polarity. On the ISA datasets, different LLMs equipped with our THOR show impressive performance over the existing best-performing baselines in both the supervised and zero-shot setups. We show that the larger the LLM, the more significant the improvement brought by our THOR method.
## Acknowledgments
The work is also partially supported by the National Key Research and Development Program of China
(No. 2022YFB3103602) and the Sea-NExT Joint Lab at National University of Singapore.
## Limitations
THOR helps unleash the full power of LLMs only when integrated into sufficiently large models; on medium- or small-sized LLMs, the improvement brought by THOR is limited to a certain extent, due to the emergent nature of LLM abilities.
## References
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.
2020. Language models are few-shot learners. In Proceedings of the Annual Conference on Neural Information Processing Systems.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186.
Li Dong, Furu Wei, Chuanqi Tan, Duyu Tang, Ming Zhou, and Ke Xu. 2014. Adaptive recursive neural network for target-dependent Twitter sentiment classification. In *Proceedings of ACL*, pages 49–54.
Hao Fei, Fei Li, Chenliang Li, Shengqiong Wu, Jingye Li, and Donghong Ji. 2022. Inheriting the wisdom of predecessors: A multiplex cascade framework for unified aspect-based sentiment analysis. In *Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI*, pages 4096–
4103.
Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and Tushar Khot. 2022. Complexity-based prompting for multi-step reasoning. *CoRR*, abs/2210.00720.
Bobo Li, Hao Fei, Fei Li, Yuhan Wu, Jinsong Zhang, Shengqiong Wu, Jingye Li, Yijiang Liu, Lizi Liao, Tat-Seng Chua, and Donghong Ji. 2022a. Diaasq : A
benchmark of conversational aspect-based sentiment quadruple analysis. *CoRR*, abs/2211.05705.
Xin Li, Lidong Bing, Wai Lam, and Bei Shi. 2018.
Transformation networks for target-oriented sentiment classification. In *Proceedings of ACL*, pages 946–956.
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. 2022b. On the advance of making language models better reasoners.
CoRR, abs/2206.02336.
Zhengyan Li, Yicheng Zou, Chong Zhang, Qi Zhang, and Zhongyu Wei. 2021. Learning implicit sentiment in aspect-based sentiment analysis with supervised contrastive pre-training. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 246–256.
Jiacheng Liu, Alisa Liu, Ximing Lu, Sean Welleck, Peter West, Ronan Le Bras, Yejin Choi, and Hannaneh Hajishirzi. 2022. Generated knowledge prompting for commonsense reasoning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3154–3169.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al.
2022. Training language models to follow instructions with human feedback. *arXiv preprint* arXiv:2203.02155.
Bo Pang and Lillian Lee. 2007. Opinion mining and sentiment analysis. *Foundations and Trends in Information Retrieval*, 2(1-2):1–135.
Bhargavi Paranjape, Julian Michael, Marjan Ghazvininejad, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2021. Prompting contrastive explanations for commonsense reasoning tasks. In Findings of the Association for Computational Linguistics:
ACL-IJCNLP 2021, pages 4179–4192.
Haiyun Peng, Lu Xu, Lidong Bing, Fei Huang, Wei Lu, and Luo Si. 2020. Knowing what, how and why: A
near complete solution for aspect-based sentiment analysis. In *Proceedings of the AAAI Conference on* Artificial Intelligence, pages 8600–8607.
Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. SemEval-2014 task 4: Aspect based sentiment analysis. In *Proceedings of the 8th* International Workshop on Semantic Evaluation (SemEval 2014), pages 27–35.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*,
21:140:1–140:67.
Alexander Rietzler, Sebastian Stabinger, Paul Opitz, and Stefan Engl. 2020. Adapt or get left behind: Domain adaptation through BERT language model finetuning for aspect-target sentiment classification.
In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 4933–4941.
Irene Russo, Tommaso Caselli, and Carlo Strapparava.
2015. SemEval-2015 task 9: CLIPEval implicit polarity of events. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 443–450.
Wenxuan Shi, Fei Li, Jingye Li, Hao Fei, and Donghong Ji. 2022. Effective token graph modeling using a novel labeling strategy for structured sentiment analysis. In *Proceedings of the 60th Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 4232–4241.
Kai Wang, Weizhou Shen, Yunyi Yang, Xiaojun Quan, and Rui Wang. 2020. Relational graph attention network for aspect-based sentiment analysis. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 3229–
3238.
Siyin Wang, Jie Zhou, Changzhi Sun, Junjie Ye, Tao Gui, Qi Zhang, and Xuanjing Huang. 2022a. Causal intervention improves implicit sentiment analysis. In Proceedings of the 29th International Conference on Computational Linguistics, pages 6966–6977.
Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2017. Coupled multi-layer attentions for co-extraction of aspect and opinion terms. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 3316–3322.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V.
Le, Ed H. Chi, and Denny Zhou. 2022b. Selfconsistency improves chain of thought reasoning in language models. *CoRR*, abs/2203.11171.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed H. Chi, Quoc Le, and Denny Zhou. 2022.
Chain of thought prompting elicits reasoning in large language models. *CoRR*, abs/2201.11903.
Lu Xu, Hao Li, Wei Lu, and Lidong Bing. 2020.
Position-aware tagging for aspect sentiment triplet extraction. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing
(EMNLP), pages 2339–2349.
Wei Xue and Tao Li. 2018. Aspect based sentiment analysis with gated convolutional networks. In *Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long* Papers), pages 2514–2523.
Wenxuan Zhang, Yang Deng, Xin Li, Yifei Yuan, Lidong Bing, and Wai Lam. 2021. Aspect sentiment quad prediction as paraphrase generation. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 9209–
9219.
Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. 2023. Automatic chain of thought prompting in large language models. In *The Eleventh International Conference on Learning Representations*.
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, and Ed H. Chi. 2022.
Least-to-most prompting enables complex reasoning in large language models. *CoRR*, abs/2205.10625.
## A Appendix
Here we present several real testing examples. We compare THOR with the vanilla prompting method and the zero-shot CoT method (Prompt + '*Let's think step by step*'). We perform the comparisons with ChatGPT.

Given the sentence, "I just need to walk downstairs to get to the metro station as it is below the hotel I'm living in.", what is the sentiment polarity towards the metro station?
![7_image_0.png](7_image_0.png)

Figure 6: Result by the vanilla prompting method for testing case-I.
Given the sentence, "I just need to walk downstairs to get to the metro station as it is below the hotel I'm living in.'', What is the sentiment polarity towards the metro station? Lets think step by step.
![7_image_4.png](7_image_4.png)
![7_image_5.png](7_image_5.png)
Figure 7: Result by zero-shot CoT method for testing case-I.
## Case-I

Input Text:
I just need to walk downstairs to get to the metro station as it is below the hotel I'm living in.
The gold sentiment label is *positive* towards the metro station.

In Fig. 6, 7 and 8, we show that our THOR successfully induces ChatGPT to finally give a correct decision on the sentiment polarity, where the other two methods fail.
## Case-II

Input Text:
Lunch came with pickels and slaw, no extra charge.
The gold sentiment label is *positive* towards Lunch.

Fig. 9, 10 and 11 show the results and the LLM's responses, respectively. Our THOR induces ChatGPT to draw a correct decision on the sentiment polarity, but the other two methods still fail.
Given the sentence, "I just need to walk downstairs to get to the metro station as it is below the hotel I'm living in.", which specific aspect of the metro station is possibly mentioned?
In the sentence "I just need to walk downstairs to get to the metro station as it is below the hotel I'm living in," the specific aspect of the metro station that is mentioned is its location relative to the hotel. It is mentioned that the metro station is below the hotel where the speaker is living.
Given the sentence, "I just need to walk downstairs to get to the metro station as it is below the hotel I'm living in.'', the location of the metro station is mentioned. Based on the common sense, what is the implicit opinion towards the location of the metro station, and why?
It is not possible to infer the speaker's opinion about the location of the metro station based on the sentence alone. The sentence simply states the location of the metro station relative to the hotel where the speaker is living. It does not express any opinion about the location. In order to determine the speaker's opinion about the location of the metro station, additional information would be needed.
Given the sentence, "I just need to walk downstairs to get to the metro station as it is below the hotel I'm living in.", the location of the metro station is mentioned. Based on the common sense, it could be inferred that the location of the metro station being below the hotel is seen as convenient. Based on such opinion, what is the polarity towards the metro station?
Based on the assumption that the location of the metro station being below the hotel is seen as convenient, it could be inferred that the speaker has a positive attitude towards the metro station. However, this is purely speculative and cannot be definitively determined based on the information provided in the sentence.
Figure 8: Result by our THOR method for testing case-I.
Given the sentence, "Lunch came with pickels and slaw, no extra charge.", what is the sentiment polarity towards the lunch? The sentiment polarity towards the lunch is neutral.
![8_image_0.png](8_image_0.png)

Figure 9: Result by the vanilla prompting method for testing case-II.
Given the sentence, "Lunch came with pickels and slaw, no extra charge.", what is the sentiment polarity towards the lunch? Lets think step by step.
It is difficult to determine the sentiment polarity towards the lunch based on the given sentence alone. The sentence simply states that the lunch came with pickles and slaw, and does not express any positive or negative feelings towards the lunch itself. To determine the sentiment towards the lunch, we would need more information or context about the lunch, such as the quality, taste, or appearance of the food.
Figure 10: Result by the zero-shot CoT method for testing case-II.
Given the sentence, "Lunch came with pickels and slaw, no extra charge.", which specific aspect of lunch is possibly mentioned?
The specific aspect of lunch that is mentioned is the cost.
Given the sentence, "Lunch came with pickels and slaw, no extra charge.", the specific aspect of lunch that is mentioned is the cost. Based on the common sense, what is the implicit opinion towards the mentioned aspect of lunch, and why?
The implicit opinion towards the cost of the lunch is that it is included in the price and there is no additional charge. This is based on the phrase "no extra charge,"
which implies that the cost is not a burden or inconvenience to the customer. Given the sentence, "Lunch came with pickels and slaw, no extra charge.", the specific aspect of lunch that is mentioned is the cost. The implicit opinion towards the cost of the lunch is that it is included in the price and there is no additional charge. This is based on the phrase "no extra charge," which implies that the cost is not a burden or inconvenience to the customer. Based on such opinion, what is the polarity towards lunch?
Based on the opinion that the cost of lunch is included and there is no additional charge, the polarity towards lunch would likely be positive. This means that the speaker has a favorable opinion of lunch.

Figure 11: Result by our THOR method for testing case-II.
## ACL 2023 Responsible NLP Checklist

A For every submission:
✓ A1. Did you describe the limitations of your work?
5
✓ A2. Did you discuss any potential risks of your work?
5
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 3
✓ B1. Did you cite the creators of artifacts you used?
3 B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
3
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C ✓ **Did You Run Computational Experiments?** 3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix B
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix B
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Appendix B
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix B
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
chi-etal-2023-latent | Latent Positional Information is in the Self-Attention Variance of Transformer Language Models Without Positional Embeddings | https://aclanthology.org/2023.acl-short.102 | The use of positional embeddings in transformer language models is widely accepted. However, recent research has called into question the necessity of such embeddings. We further extend this inquiry by demonstrating that a randomly initialized and frozen transformer language model, devoid of positional embeddings, inherently encodes strong positional information through the shrinkage of self-attention variance. To quantify this variance, we derive the underlying distribution of each step within a transformer layer. Through empirical validation using a fully pretrained model, we show that the variance shrinkage effect still persists after extensive gradient updates. Our findings serve to justify the decision to discard positional embeddings and thus facilitate more efficient pretraining of transformer language models. | # Latent Positional Information Is In The Self-Attention Variance Of Transformer Language Models Without Positional Embeddings
Ta-Chung Chi† (Carnegie Mellon University), Ting-Han Fan (Princeton University), Li-Wei Chen (Carnegie Mellon University), Alexander I. Rudnicky (Carnegie Mellon University), Peter J. Ramadge (Princeton University)
## Abstract
The use of positional embeddings in transformer language models is widely accepted.
However, recent research has called into question the necessity of such embeddings. We further extend this inquiry by demonstrating that a randomly initialized and frozen transformer language model, devoid of positional embeddings, inherently encodes strong positional information through the shrinkage of self-attention variance. To quantify this variance, we derive the underlying distribution of each step within a transformer layer. Through empirical validation using a fully pretrained model, we show that the variance shrinkage effect still persists after extensive gradient updates. Our findings serve to justify the decision to discard positional embeddings and thus facilitate more efficient pretraining of transformer language models.
## 1 Introduction & Related Work
Transformer models have become the backbone of natural language processing applications (Vaswani et al., 2017; Devlin et al., 2019; Radford et al., 2019). Within the transformer architecture, there are two main categories: 1) bidirectional models, such as BERT (Devlin et al., 2019), that are trained using the masked language modeling objective, and 2) (causal) language models, such as GPT (Radford et al., 2019), that are trained using the traditional language modeling objective. Both of these categories share the common feature of using positional embeddings for encoding token distance.
Whether positional embeddings are truly essential has been a subject of ongoing research. While they have been considered necessary for bidirectional transformer models (Lee et al., 2019; Luo et al., 2021; Sinha et al., 2021; Haviv et al., 2022),
the situation is different for transformer language models (Irie et al., 2019; Yang et al., 2019; Tsai et al., 2019; Scao et al., 2022; Haviv et al., 2022).

†Correspondence to: [email protected]

![0_image_0.png](0_image_0.png)
In transformer language models, the removal of positional embeddings results in only a marginal decline in performance, while enabling more efficient training (Haviv et al., 2022). In addition to empirical evidence, it has been proven (Bhattamishra et al., 2020) that transformer language models without positional embeddings are Turingcomplete and able to model sequences akin to recurrent neural networks (Rumelhart and McClelland, 1987; Jordan, 1986). Despite this, it remains an open question where positional information is stored in the absence of positional embeddings.
This motivates further investigation into individual operations within a transformer layer.
![1_image_0.png](1_image_0.png)

The example architecture of a pre-LN (Xiong et al., 2020) multi-layer transformer language model with no positional embeddings used in this work is shown in Figure 1.¹ We hereinafter refer to this configuration as TLM. Our primary focus is on the multi-head attention (MHA) module of a randomly initialized TLM, as it is the only module that allows inter-token information exchange. To gain a deeper understanding, we compute the mean and variance of MHA outputs. To our surprise, we discover that the variance already encodes latent positional information, with later tokens in a sequence displaying smaller variance. This motivates us to quantify the variance by deriving the output distribution after MHA operations. Finally, through empirical validation using a fully pre-trained TLM, we confirm that the same variance shrinkage effect persists after extensive gradient updates.
To the best of our knowledge, we are the first to identify and quantify the latent positional information in TLMs. Our results provide theoretical insights into the removal of positional embeddings, enabling more efficient pretraining of future TLMs.
## 2 Probing Experiments
Given BERT and TLM (GPT) with positional embeddings removed, prior work (Haviv et al., 2022)
shows that only TLM is able to maintain the same language modeling performance as its original version with positional embeddings. The discrepancy might be explained by the fact that only TLM encodes positional information within its layers, as shown by the position probing experiment in Haviv et al. (2022). Since both BERT and TLM have access to the same semantic input and the only difference is the use of causal attention masks in TLM, we hypothesize that the positional information may be attributed to the interaction between causal attention masks and the TLM architecture.

¹Post-LN places layer norm at different positions. It is the configuration used in BERT (Devlin et al., 2019).
To further explore this hypothesis, we use a randomly initialized and frozen TLM to eliminate any semantic influence and focus solely on the architectural design. Additionally, to prevent the model from memorizing the order of input sequences, we do not perform embedding lookups and feed the model with randomly sampled input vectors. A
trainable two-layer linear classifier with a ReLU activation in between was appended to the TLM to probe the position of each token (further details can be found in Appendix B). We plot the mean absolute error (MAE) w.r.t. the number of transformer layers in Figure 2. The plot indicates that a randomly initialized and frozen TLM with randomly sampled input vectors inherently provides positional information, with an increase in the number of layers resulting in higher probing performance. This surprising outcome prompts further investigation into how latent positional information is encoded inside the TLM architecture.
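A minimal sketch of such a probe in PyTorch is shown below. The dimensions follow the hyperparameters in Appendix B; whether the probe is trained as a position classifier or a regressor is not fully specified, so this sketch assumes a regression head evaluated with the MAE reported in Figure 2.

```python
import torch
import torch.nn as nn

d, L = 768, 512  # hidden size and sequence length used in the paper

# Two-layer probe with a ReLU in between, applied on top of the frozen TLM.
probe = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, 1))

def probe_mae(hidden_states: torch.Tensor) -> torch.Tensor:
    # hidden_states: (batch, L, d) outputs of the frozen, randomly initialized TLM
    # fed with randomly sampled input vectors (no embedding lookup).
    positions = torch.arange(L, dtype=torch.float32).expand(hidden_states.size(0), L)
    pred = probe(hidden_states).squeeze(-1)   # (batch, L) predicted positions
    return (pred - positions).abs().mean()    # mean absolute error
```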
## 3 Theoretical Analysis
We dissect the inner workings of a TLM by deriving the distribution of TLM operations in the hope that they elucidate where the latent positional information is stored. The derivation is made possible thanks to the usage of a randomly initialized and frozen TLM. We adopt the initialization settings in accordance with those employed in GPT (Radford et al., 2019). WLOG, our derivation is limited to the operations of the first layer in a TLM and the FFN component is omitted (justified in §3.4).
The hyperparameters utilized in the simulations are: hidden dimension d = 768, number of attention heads H = 12, head dimension d/H = 64, sequence length L = 512, standard deviation for initialization σ = 0.02. All proofs of lemmas are deferred to Appendix A.
Given a sequence of randomly sampled input embeddings $\{\mathbf{x}_m\}_{m=1}^{L}$, where each element of $\mathbf{x}_m \in \mathbb{R}^d$ is sampled i.i.d. from $N(0, \sigma^2)$, a TLM consists of the following operations:
## 3.1 Layer Normalization
For each input embedding xm, it computes the sample mean and (biased) sample variance:
$$\overline{\mathbf{x}}_{m,:}=\frac{\sum_{i=1}^{d}\mathbf{x}_{mi}}{d},\quad S(\mathbf{x}_{m,:})=\frac{\sum_{i=1}^{d}(\mathbf{x}_{mi}-\overline{\mathbf{x}}_{m,:})^{2}}{d}$$
![2_image_0.png](2_image_0.png)
Then each entry i of xm, denoted as xmi, is normalized by mean and variance to emi:
$$e_{mi}=\frac{\mathbf{x}_{mi}-\overline{\mathbf{x}}_{m,:}}{\sqrt{S(\mathbf{x}_{m,:})}}\cdot\gamma+\beta\;\stackrel{(*)}{\approx}\;\frac{\mathbf{x}_{mi}-\mathbb{E}[\mathbf{x}_{mi}]}{\sqrt{\mathbb{V}[\mathbf{x}_{mi}]}}\sim N(0,1),$$
where V[x] denotes the variance of x. Since the initialization scheme sets γ = 1 and β = 0, (∗)
holds with sufficiently large d by the Law of large numbers and the continuous mapping theorem.
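A quick numerical check of this normalization step (a sketch with NumPy, using the paper's d and σ):

```python
import numpy as np

d, sigma = 768, 0.02
x = np.random.normal(0.0, sigma, size=d)     # one input embedding x_m

mean = x.mean()
var = ((x - mean) ** 2).mean()               # biased sample variance S(x_m,:)
e = (x - mean) / np.sqrt(var)                # gamma = 1, beta = 0 at initialization

# By construction e has zero sample mean and unit sample variance; for large d
# each entry behaves approximately like a draw from N(0, 1).
print(e.mean(), e.var())
```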
## 3.2 Self Attention
Each attention head computes query, key, and value vectors in $\mathbb{R}^{\frac{d}{H}}$:

$$\mathbf{q}_{m}=\mathbf{W}_{q}\mathbf{e}_{m},\quad\mathbf{k}_{m}=\mathbf{W}_{k}\mathbf{e}_{m},\quad\mathbf{v}_{m}=\mathbf{W}_{v}\mathbf{e}_{m},$$

where $\mathbf{W}_q, \mathbf{W}_k, \mathbf{W}_v \in \mathbb{R}^{\frac{d}{H}\times d}$ are matrices with each element sampled i.i.d. from $N(0, \sigma^2)$.
To be precise, most matrices ($\mathbf{W}_q^{(h)}, \mathbf{W}_k^{(h)}, \mathbf{W}_v^{(h)}$), vectors ($\mathbf{q}_m^{(h)}, \mathbf{k}_m^{(h)}, \mathbf{v}_m^{(h)}$), and scalars ($l_{mn}^{(h)}, a_{mn}^{(h)}$) are associated with a head number $h$. For notation simplicity, we only show the dependency on $h$ when we need it.
**Lemma 1.** *$\mathbf{q}_m$, $\mathbf{k}_m$, and $\mathbf{v}_m$ have zero mean and $(d\sigma^2)\cdot I$ covariance matrix.*
The resulting vectors are processed by the self-attention module for pre-softmax logits:
$$l_{m n}=\begin{cases}\langle\mathbf{q}_{m},\mathbf{k}_{n}\rangle,&{\mathrm{if}}\;m\geq n\\ -\operatorname*{inf},&{\mathrm{otherwise}}\end{cases}$$
followed by the scaled softmax normalization:
$$a_{m n}={\frac{\exp\left(l_{m n}/{\sqrt{d/H}}\right)}{\sum_{i=1}^{L}\exp\left(l_{m i}/{\sqrt{d/H}}\right)}}$$
**Lemma 2.** *$l_{mn}$ has zero mean and $\frac{d^3\sigma^4}{H^2}$ variance. $l_{mn}/\sqrt{d/H}$ has $\frac{d^2\sigma^4}{H}$ variance.*
The numerical variance of $l_{mn}/\sqrt{d/H}$ in our case is $\frac{768^2\cdot 0.02^4}{12}\approx 0.0079$. Lemma 2 suggests the following approximation:

**Property 1.** *When $\sigma^4 \ll \frac{H}{d^2}$, $l_{m,:}$ has small variance, making the attention weights $a_{m,:}$ almost evenly distributed among all positions.*²

In Figure 3, we verify Property 1 by showing that $a_{mn}$ is almost evenly distributed in simulation.
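The near-uniformity of the attention weights can also be reproduced with a short simulation. The sketch below follows the stated hyperparameters and treats the layer-normalized inputs as i.i.d. standard normal vectors; it is an illustration, not the authors' simulation code.

```python
import numpy as np

d, H, L, sigma = 768, 12, 512, 0.02
dh = d // H
rng = np.random.default_rng(0)

e = rng.standard_normal((L, d))                    # layer-normalized inputs
Wq = rng.normal(0, sigma, (dh, d))
Wk = rng.normal(0, sigma, (dh, d))

logits = (e @ Wq.T) @ (e @ Wk.T).T / np.sqrt(dh)   # scaled pre-softmax logits
logits = np.where(np.tril(np.ones((L, L))) > 0, logits, -np.inf)  # causal mask

a = np.exp(logits - logits.max(axis=-1, keepdims=True))
a = a / a.sum(axis=-1, keepdims=True)              # attention weights a_mn

m = 255                                            # a middle position
print(a[m, :m + 1].std(), 1.0 / (m + 1))           # tiny spread around the uniform weight
```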
Observe that the output vector om at position m is:
$$\mathbf{o}_{m}=\mathbf{W}_{o}\left(\oplus_{h=1}^{H}\sum_{n=1}^{L}a_{mn}^{(h)}\mathbf{v}_{n}^{(h)}\right),$$

where $\oplus$ denotes the concatenation of vectors from all $H$ attention heads. Assuming that Property 1 is valid and that $\mathbf{W}_o \in \mathbb{R}^{d\times d}$ has elements i.i.d. sampled from $N(0, \sigma^2)$, we derive the distribution of $\mathbf{o}_m$ below.

**Lemma 3.** *$\mathbf{o}_m$ has zero mean and $\frac{d^{2}\sigma^{4}}{m}\,I$ covariance matrix.*
2This approximation was also used in Xiong et al. (2020)
except that they made a stronger assumption that Wq and Wk have to be initialized as zero matrices.
![3_image_0.png](3_image_0.png)
Figure 4 is a simulation that verifies Lemma 3 under the assumption of Property 1. We can see that the variance of om *already encodes the positional information* m.
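The decay of the output variance with position can be simulated directly, as in the sketch below. It replaces the attention weights by uniform averaging over the visible prefix (as justified by Property 1) and, for simplicity, merges all heads into single value and output projections; this is an illustration, not the paper's exact simulation.

```python
import numpy as np

d, sigma, L = 768, 0.02, 512
rng = np.random.default_rng(0)

Wv = rng.normal(0, sigma, (d, d))   # value projection (all heads concatenated)
Wo = rng.normal(0, sigma, (d, d))   # output projection

e = rng.standard_normal((L, d))     # layer-normalized inputs
v = e @ Wv.T                        # value vectors v_n

for m in (1, 8, 64, 512):
    o_m = Wo @ v[:m].mean(axis=0)   # uniform attention over the first m tokens
    # Lemma 3 predicts Var[o_m] ~ d^2 * sigma^4 / m
    print(m, o_m.var(), d**2 * sigma**4 / m)
```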
## 3.3 Residual Connection
As denoted by the *Addition* block of Figure 1, the residual connection sets the output as ym = xm +
om. It allows the model to pass the first MHA
output to later MHA modules as well as the final classifier. As the positional information has been passed by the residual connection, we omit the FFN
part in our analysis.
## 3.4 The Final Layer Normalization
Layer normalization is an operation that might eliminate the positional information derived in Lemma 3, which happens before the MHA modules and position classifier. As mentioned in §3.1, LN(ym) gives:
$$\mathbf{y}_{mi}^{\prime}\approx\frac{\mathbf{y}_{mi}-\mathbb{E}[\mathbf{y}_{mi}]}{\sqrt{\mathbb{V}[\mathbf{y}_{mi}]}}\approx\frac{\mathbf{x}_{mi}+\mathbf{W}_{o}\mathbf{W}_{v}\frac{\sum_{n}^{m}\mathbf{e}_{ni}}{m}}{\sqrt{\sigma^{2}+\frac{d^{2}\sigma^{4}}{m}}},$$

where

$$\mathbb{E}[\mathbf{y}_{mi}]=0,\quad\mathbb{V}[\mathbf{y}_{mi}]=\mathbb{V}[\mathbf{x}_{mi}]+\mathbb{V}[\mathbf{o}_{mi}]=\sigma^{2}+\frac{d^{2}\sigma^{4}}{m}.$$

**Lemma 4.** *The variance of the $j$-th dimension of $\mathbf{y}'_{m}$ is:*

$$\frac{m\sigma^{2}+\sum_{i}(\mathbf{W}_{o,j:}\mathbf{W}_{v,:i})^{2}}{m\sigma^{2}+d^{2}\sigma^{4}},$$

where $\mathbf{W}_{o,j:}\in\mathbb{R}^{1\times d}$ is the $j$-th row of $\mathbf{W}_o$ and $\mathbf{W}_{v,:i}\in\mathbb{R}^{d\times 1}$ is the $i$-th column of $\mathbf{W}_v$. As long as $\sum_{i}(\mathbf{W}_{o,j:}\mathbf{W}_{v,:i})^{2}\neq d^{2}\sigma^{4}$, the classifier should be able to exploit the discrepancy to derive $m$.
Readers might wonder why $\mathbf{W}_{o,j:}$ and $\mathbf{W}_{v,:i}$ in the numerator cannot be treated as random variables. The reason is that we only focus on one dimension (the $j$-th) at a time. This means we cannot use the law of large numbers to approximate the sample variance of $\mathbf{y}_{mj}$ as we did for the denominator.
## 3.5 Relaxing The Assumptions
We discuss possible relaxation of the assumptions used in §3.2.
What if Property 1 **does not hold?** Or equivalently, what if $\sigma^4 \not\ll \frac{H}{d^2}$? This prompts us to vary the value of σ. In Figure 5, we see that a smaller σ better aligns Lemma 3 with the simulations, which is unsurprising as Lemma 3 assumes small σ. Even when σ is not too small (i.e., σ = 0.2, 0.02), the variance still encodes the positional information, as the variance of $\mathbf{o}_m$ is negatively correlated with its position m.
Other Initialization Schemes So far we assume the weight matrices (Wq, Wk, Wv, Wo) are initialized i.i.d from N(0, σ2). However, we can relax the assumption to i.i.d. samples from a distribution with zero mean and finite variance. This is because the proof in Appendix A calculates the covariance.
The variance calculation relies on $\mathbb{E}[\mathbf{r}_i\mathbf{r}_i^{\top}]=\sigma^2 I$, where $\mathbf{r}_i^{\top}$ is the $i$-th row vector of a weight matrix. This property holds for any distribution with zero mean and variance $\sigma^2$.
## 4 Discussions
Why are the positions of later tokens in a sequence harder to predict in Figure 3 of Haviv et al. (2022)? Lemma 3 states the variance is inversely proportional to the position m, so the variance of later tokens (large m) plateaus, resulting in a harder numerical optimization problem.
This also suggests a potential downside of removing positional embeddings: It might be challenging for the model to infer positional information of the later tokens in extremely long input sequences.
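A quick illustration of this plateau, plugging the paper's d and σ into Lemma 3 (a sketch):

```python
d, sigma = 768, 0.02

def var_o(m: int) -> float:
    # Lemma 3: Var[o_m] = d^2 * sigma^4 / m
    return d**2 * sigma**4 / m

for m in (1, 2, 10, 100, 511, 512):
    print(m, var_o(m))
# Early positions are well separated, whereas the values at m = 511 and m = 512
# differ only marginally, which makes late positions hard to tell apart.
```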
Why do lower layers (closer to the input) give worse probing performances in both Figure 2 and Haviv et al. (2022)? This can be explained by Figure 4. Most of the positions at the 0th layer have tiny variance (exp(−10) ≈ 4.5e−5), which again poses a difficult numerical optimization problem.
Why does BERT fail to converge without positional embeddings? In a BERT model (Devlin et al., 2019), each token has access to all the other tokens, making the variance at all positions $\frac{d^2\sigma^4}{L}$.
Therefore, a BERT model cannot utilize variance differences as its positional indicator.
## 5 Post-Training Results
Our derivations only apply to the initial stage where the TLM and input embeddings are randomly initialized, which may not hold true after gradient updates. It is essential to verify the existence of variance properties and lemmas on a fully pre-trained TLM on OpenWebText2 (details in Appendix C).
We expect that the properties of the lower layers of a pre-trained TLM should align more closely with the theoretical results, for two reasons: 1) there are more steps between the lower layers and the final language modeling loss, resulting in smaller gradients and thereby fewer parameter updates, and 2) lower layers typically encode more low-level information dependent on positional information (Vulić et al., 2020; de Vries et al., 2020).

Figures 6 and 7 demonstrate that the 0th (lowest) layer exhibits cumulative attention probability and decay-with-position variance highly similar to the theoretical results. In contrast, higher layers deviate from the analyses in §3. We posit that the model learns to rely more heavily on semantic rather than positional information. This also explains why predicting positions using the outputs of higher transformer layers is more challenging, as demonstrated in Figure 2 of Haviv et al. (2022).

![4_image_0.png](4_image_0.png)

![4_image_1.png](4_image_1.png)
## 6 Conclusion
We mathematically analyzed a randomly initialized transformer language model without positional embeddings. We showed that the variance of the self-attention output decreases as the position increases, which serves as an indicator for positional information. We validated that, after extensive gradient updates, the lower layers of a pretrained language model still exhibit highly similar variance reduction behaviors. Our results pave the way for the pretraining of more efficient and positional-embedding-free transformer language models.
## Limitations
The limitations of this work mostly come from our assumptions: 1) A randomly initialized and frozen TLM, and 2) Input tokens are all different and randomly sampled. These two assumptions obviously do not hold true for human languages and pre-trained TLMs. Therefore, we attempted to empirically verify the existence of lemmas and properties on a pre-trained TLM without positional embeddings in §5.
That being said, several methods could be attempted to remove these assumptions. Firstly, we can analyze the training dynamics of a TLM to shed light on the model parameter distribution after pretraining. Secondly, Zipf's law or a simple n-gram language model could be used to quantify the degree of input token duplication in human languages.
This might give us a more accurate estimate of the variance at different positions. We leave these ideas as future work.
## Ethics Statement
Our work provides a deeper understanding of why a transformer language model can still perform well without positional embeddings, potentially enabling the development of future transformers that are greener and more cost-efficient.
Inappropriate usage of our technique might have negative societal impacts though. These include the ethical challenges of improper text generation and privacy issues inherent in the data collection process. These implications apply to any natural language processing research and are not unique to this specific work.
## Acknowledgment
The authors acknowledge the support from Boeing
(2019-STU-PA-259), Amazon (CC ADV 00474341 2021 TR), NSF MRI Award 1919452, and Princeton Research Computing.
## References
Alex Andonian, Quentin Anthony, Stella Biderman, Sid Black, Preetham Gali, Leo Gao, Eric Hallahan, Josh Levy-Kramer, Connor Leahy, Lucas Nestler, Kip Parker, Michael Pieler, Shivanshu Purohit, Tri Songz, Wang Phil, and Samuel Weinbach. 2021. GPT-NeoX: Large Scale Autoregressive Language Modeling in PyTorch.
Satwik Bhattamishra, Arkil Patel, and Navin Goyal.
2020. On the computational power of transformers
and its implications in sequence modeling. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 455–475, Online.
Association for Computational Linguistics.
Wietse de Vries, Andreas van Cranenburgh, and Malvina Nissim. 2020. What's so special about BERT's layers? a closer look at the NLP pipeline in monolingual and multilingual models. In Findings of the Association for Computational Linguistics: EMNLP
2020, pages 4339–4350, Online. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers),
pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2020. The Pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027.
Adi Haviv, Ori Ram, Ofir Press, Peter Izsak, and Omer Levy. 2022. Transformer language models without positional encodings still learn positional information. *arXiv preprint arXiv:2203.16634*.
Kazuki Irie, Albert Zeyer, Ralf Schlüter, and Hermann Ney. 2019. Language modeling with deep transformers. In *INTERSPEECH*.
M I Jordan. 1986. Serial order: a parallel distributed processing approach. technical report, june 1985march 1986.
Diederik P. Kingma and Jimmy Ba. 2014. Adam:
A method for stochastic optimization. Cite arxiv:1412.6980Comment: Published as a conference paper at the 3rd International Conference for Learning Representations, San Diego, 2015.
Juho Lee, Yoonho Lee, Jungtaek Kim, Adam Kosiorek, Seungjin Choi, and Yee Whye Teh. 2019.
Set transformer: A framework for attention-based permutation-invariant neural networks. In *Proceedings of the 36th International Conference on Machine Learning*, volume 97 of *Proceedings of Machine Learning Research*, pages 3744–3753. PMLR.
Ziyang Luo, Artur Kulmizev, and Xiaoxi Mao. 2021.
Positional artefacts propagate through masked language model embeddings. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5312–5327, Online. Association for Computational Linguistics.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch:
An imperative style, high-performance deep learning library. In *Advances in Neural Information Processing Systems 32*, pages 8024–8035. Curran Associates, Inc.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners.
OpenAI blog, 1(8):9.
David E. Rumelhart and James L. McClelland. 1987.
Learning Internal Representations by Error Propagation, pages 318–362.
Teven Le Scao, Thomas Wang, Daniel Hesslow, Lucile Saulnier, Stas Bekman, M Saiful Bari, Stella Biderman, Hady Elsahar, Jason Phang, Ofir Press, Colin Raffel, Victor Sanh, Sheng Shen, Lintang Sutawika, Jaesung Tae, Zheng Xin Yong, Julien Launay, and Iz Beltagy. 2022. What language model to train if you have one million GPU hours? In *Challenges &*
Perspectives in Creating Large Language Models.
Koustuv Sinha, Robin Jia, Dieuwke Hupkes, Joelle Pineau, Adina Williams, and Douwe Kiela. 2021.
Masked language modeling and the distributional hypothesis: Order word matters pre-training for little. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*,
pages 2888–2913, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Yao-Hung Hubert Tsai, Shaojie Bai, Makoto Yamada, Louis-Philippe Morency, and Ruslan Salakhutdinov.
2019. Transformer dissection: An unified understanding for transformer's attention via the lens of kernel. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4344–4353, Hong Kong, China. Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, pages 5998–6008.
Ivan Vulic, Edoardo Maria Ponti, Robert Litschko, ´
Goran Glavaš, and Anna Korhonen. 2020. Probing pretrained language models for lexical semantics. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 7222–7240, Online. Association for Computational Linguistics.
Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, and Tie-Yan Liu. 2020. On layer normalization in the transformer architecture. In *International Conference on Machine Learning*.
Baosong Yang, Longyue Wang, Derek F. Wong, Lidia S. Chao, and Zhaopeng Tu. 2019. Assessing the ability of self-attention networks to learn word order. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 3635–3644, Florence, Italy. Association for Computational Linguistics.
## A Proofs
The proofs of Lemmas 1 and 2 are head-dependent while that of Lemma 3 is head-independent. For notation simplicity, in Lemmas 1 and 2 we drop the head dependency on matrices ($\mathbf{W}_q^{(h)}, \mathbf{W}_k^{(h)}, \mathbf{W}_v^{(h)}$), vectors ($\mathbf{q}_m^{(h)}, \mathbf{k}_m^{(h)}, \mathbf{v}_m^{(h)}$), and scalars ($l_{mn}^{(h)}, a_{mn}^{(h)}$).
Proof of Lemma 1 Here, we use $\mathbf{r}_i^{\top}$ to denote the $i$-th row vector of $\mathbf{W}_v$.

$$\begin{aligned}
\mathrm{cov}(\mathbf{v}_m,\mathbf{v}_n) &= \mathbb{E}[\mathbf{v}_m\mathbf{v}_n^{\top}] = \mathbb{E}[\mathbf{W}_v\mathbf{e}_m\mathbf{e}_n^{\top}\mathbf{W}_v^{\top}] \\
&= \mathbb{E}\left[\begin{bmatrix}\mathbf{r}_1^{\top}\mathbf{e}_m\\ \vdots\\ \mathbf{r}_{d/H}^{\top}\mathbf{e}_m\end{bmatrix}\begin{bmatrix}\mathbf{e}_n^{\top}\mathbf{r}_1 & \dots & \mathbf{e}_n^{\top}\mathbf{r}_{d/H}\end{bmatrix}\right] \\
&= \Big[\mathbb{E}[\mathbf{r}_i^{\top}\mathbf{e}_m\mathbf{e}_n^{\top}\mathbf{r}_j]\Big]_{i,j=1}^{d/H} = \Big[\mathbb{E}[\mathrm{Tr}(\mathbf{r}_j\mathbf{r}_i^{\top}\mathbf{e}_m\mathbf{e}_n^{\top})]\Big]_{i,j=1}^{d/H} \\
&= \Big[\mathrm{Tr}(\mathbb{E}[\mathbf{r}_j\mathbf{r}_i^{\top}]\,\mathbb{E}[\mathbf{e}_m\mathbf{e}_n^{\top}])\Big]_{i,j=1}^{d/H} \\
&\stackrel{(*)}{=} \Big[\mathrm{Tr}((\mathbb{1}_{i=j}\sigma^2)\cdot I_d\cdot\mathbb{1}_{m=n}\cdot I_d)\Big]_{i,j=1}^{d/H} \\
&= \Big[\mathbb{1}_{i=j}\mathbb{1}_{m=n}\,d\sigma^2\Big]_{i,j=1}^{d/H} = (\mathbb{1}_{m=n}\,d\sigma^2)\cdot I_{d/H}
\end{aligned}$$

(∗) holds because $\mathbf{r}_i$ and $\mathbf{r}_j$ are independent when $i\neq j$ (similarly for $\mathbf{e}_m$ and $\mathbf{e}_n$) and the covariance of a Gaussian random vector is an identity matrix. $I_d$ and $I_{d/H}$ denote $d\times d$ and $\frac{d}{H}\times\frac{d}{H}$ identity matrices.
Proof of Lemma 2 Here, we use $\mathbf{r}_i^{\top}$ to denote the $i$-th row vector of $\mathbf{W}_q$ and $\mathbf{W}_k$.

$$\begin{aligned}
\mathrm{cov}(l_{mn}, l_{mp}) &= \mathbb{E}[(\mathbf{e}_m^{\top}\mathbf{W}_q^{\top}\mathbf{W}_k\mathbf{e}_n)(\mathbf{e}_m^{\top}\mathbf{W}_q^{\top}\mathbf{W}_k\mathbf{e}_p)^{\top}] \\
&= \mathbb{E}[\mathrm{Tr}(\mathbf{e}_m^{\top}\mathbf{W}_q^{\top}\mathbf{W}_k\mathbf{e}_n\mathbf{e}_p^{\top}\mathbf{W}_k^{\top}\mathbf{W}_q\mathbf{e}_m)] \\
&= \mathbb{E}[\mathrm{Tr}(\mathbf{e}_m\mathbf{e}_m^{\top}\mathbf{W}_q^{\top}\mathbf{W}_k\mathbf{e}_n\mathbf{e}_p^{\top}\mathbf{W}_k^{\top}\mathbf{W}_q)] \\
&= \mathrm{Tr}(\mathbb{E}[\mathbf{e}_m\mathbf{e}_m^{\top}]\,\mathbb{E}[\mathbf{W}_q^{\top}\mathbf{W}_k\mathbf{e}_n\mathbf{e}_p^{\top}\mathbf{W}_k^{\top}\mathbf{W}_q]) \\
&= \mathbb{E}[\mathrm{Tr}(\mathbf{e}_n\mathbf{e}_p^{\top}\mathbf{W}_k^{\top}\mathbf{W}_q\mathbf{W}_q^{\top}\mathbf{W}_k)] \\
&= \mathrm{Tr}(\mathbb{E}[\mathbf{e}_n\mathbf{e}_p^{\top}]\,\mathbb{E}[\mathbf{W}_k^{\top}\mathbf{W}_q\mathbf{W}_q^{\top}\mathbf{W}_k]) \\
&= (\mathbb{1}_{n=p})\,\mathrm{Tr}(\mathbb{E}[\mathbf{W}_q\mathbf{W}_q^{\top}]\,\mathbb{E}[\mathbf{W}_k\mathbf{W}_k^{\top}]) \\
&\stackrel{(*)}{=} (\mathbb{1}_{n=p})\,\mathrm{Tr}\left(\left(\tfrac{d}{H}\sigma^2\cdot I\right)\left(\tfrac{d}{H}\sigma^2\cdot I\right)\right) \\
&= (\mathbb{1}_{n=p})\,\frac{d^3\sigma^4}{H^2}
\end{aligned}$$

(∗) holds since:

$$\mathbb{E}[\mathbf{W}_q\mathbf{W}_q^{\top}]=\mathbb{E}\left[\begin{bmatrix}\mathbf{r}_1^{\top}\\ \vdots\\ \mathbf{r}_d^{\top}\end{bmatrix}\begin{bmatrix}\mathbf{r}_1 & \ldots & \mathbf{r}_d\end{bmatrix}\right]=\Big[\mathbb{E}[\mathbf{r}_i^{\top}\mathbf{r}_j]\Big]_{i,j=1}^{\frac{d}{H}}=\frac{d}{H}\sigma^{2}\cdot I$$

Proof of Lemma 3 Because $\mathbf{W}_o\in\mathbb{R}^{d\times d}$ is applied on a concatenation of vectors from all heads, we take $\mathbf{v}_i=\oplus_{h=1}^{H}\mathbf{v}_i^{(h)}$. $\mathbf{v}_i$ here is head-independent while $\mathbf{v}_i$ in Lemma 1 is head-dependent. Here, we use $\mathbf{r}_i^{\top}$ to denote the $i$-th row vector of $\mathbf{W}_o$.

$$\begin{aligned}
\mathrm{cov}(\mathbf{o}_m,\mathbf{o}_m) &\stackrel{\text{Property 1}}{\approx} \mathbb{E}\left[\mathbf{W}_o\frac{\sum_{i=1}^{m}\mathbf{v}_i}{m}\frac{\sum_{j=1}^{m}\mathbf{v}_j^{\top}}{m}\mathbf{W}_o^{\top}\right] \\
&= \frac{1}{m^2}\sum_{i,j=1}^{m}\mathbb{E}[\mathbf{W}_o\mathbf{v}_i\mathbf{v}_j^{\top}\mathbf{W}_o^{\top}] \\
&= \frac{1}{m^2}\sum_{i,j=1}^{m}\mathbb{E}\left[\begin{bmatrix}\mathbf{r}_1^{\top}\mathbf{v}_i\\ \vdots\\ \mathbf{r}_d^{\top}\mathbf{v}_i\end{bmatrix}\begin{bmatrix}\mathbf{v}_j^{\top}\mathbf{r}_1 & \dots & \mathbf{v}_j^{\top}\mathbf{r}_d\end{bmatrix}\right] \\
&= \frac{1}{m^2}\sum_{i,j=1}^{m}\Big[\mathbb{E}[\mathbf{r}_k^{\top}\mathbf{v}_i\mathbf{v}_j^{\top}\mathbf{r}_l]\Big]_{k,l=1}^{d} = \frac{1}{m^2}\sum_{i,j=1}^{m}\Big[\mathbb{E}[\mathrm{Tr}(\mathbf{r}_l\mathbf{r}_k^{\top}\mathbf{v}_i\mathbf{v}_j^{\top})]\Big]_{k,l=1}^{d} \\
&= \frac{1}{m^2}\sum_{i,j=1}^{m}\Big[\mathrm{Tr}(\mathbb{E}[\mathbf{r}_l\mathbf{r}_k^{\top}]\,\mathbb{E}[\mathbf{v}_i\mathbf{v}_j^{\top}])\Big]_{k,l=1}^{d} \\
&\stackrel{(*)}{=} \frac{1}{m^2}\sum_{i,j=1}^{m}\Big[\mathrm{Tr}((\mathbb{1}_{k=l}\sigma^2)\cdot I\cdot(\mathbb{1}_{i=j}d\sigma^2)\cdot I)\Big]_{k,l=1}^{d} \\
&= \frac{d^2\sigma^4}{m}\,I
\end{aligned}$$

(∗) follows from Lemma 1: because $\mathrm{cov}(\mathbf{v}_i^{(h)},\mathbf{v}_j^{(h)})=(\mathbb{1}_{i=j}d\sigma^2)\cdot I_{d/H}$, a concatenation over all $h\in H$ gives $\mathbb{E}[\mathbf{v}_i\mathbf{v}_j^{\top}]=(\mathbb{1}_{i=j}d\sigma^2)\cdot I_d$.
## B Probing Experiment Details
We train a randomly initialized and frozen TLM
with 12 layers, d = 768, H = 12, L = 512, and σ = 0.02. We use the Adam optimizer (Kingma and Ba, 2014) with learning rate 1e − 3 and 5000 gradient updates. The batch size is set to 32. We implement our model using PyTorch (Paszke et al.,
2019).
| # Layers | Hidden Size | # Attention Heads | Train Seq. Len. | # Trainable Params. |
|---|---|---|---|---|
| 12 | 64 | 12 | 512 | 162M |

| Optimizer | Batch Size | Train Steps | Precision | Dataset |
|---|---|---|---|---|
| Adam (lr 6e-4) | 32 | 50,000 | bfloat16 | OpenWebText2 |

Table 1: Pre-trained model configurations.
## C Pre-Trained Transformer Language Model Details
We use the gpt-neox library (Andonian et al., 2021)
to train a TLM with no positional embeddings. Detailed hyperparameters are listed in Table 1. The pretraining takes 5 hours on one NVIDIA A100 40GB.
## D Scientific Artifacts
We use the gpt-neox library (Andonian et al., 2021)
under Apache-2.0 license. OpenWebText2 (Gao et al., 2020) is released by the authors of gpt-neox.
The codebase and dataset are publicly released for research purposes. The steps taken to protect privacy and anonymization are discussed in Section 6 and 7 of Gao et al. (2020). The distribution and statistics of OpenWebext2 are also discussed in Gao et al. (2020).
## ACL 2023 Responsible NLP Checklist

A For every submission:
✓ A1. Did you describe the limitations of your work?
Limitations section
✓ A2. Did you discuss any potential risks of your work?
Ethics Statement
✓ A3. Do the abstract and introduction summarize the paper's main claims?
abstract and section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2 And 5
✓ B1. Did you cite the creators of artifacts you used?
appendix D
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
appendix D
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? appendix D
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? appendix D
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? appendix D
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. appendix D
## C ✓ **Did You Run Computational Experiments?** Section 2, 3, And 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
appendix C
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? appendix C
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
In figure captions scattered across all sections

C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
ait-saada-nadif-2023-anisotropy | Is Anisotropy Truly Harmful? A Case Study on Text Clustering | https://aclanthology.org/2023.acl-short.103 | In the last few years, several studies have been devoted to dissecting dense text representations in order to understand their effectiveness and further improve their quality. Particularly, the anisotropy of such representations has been observed, which means that the directions of the word vectors are not evenly distributed across the space but rather concentrated in a narrow cone. This has led to several attempts to counteract this phenomenon both on static and contextualized text representations. However, despite this effort, there is no established relationship between anisotropy and performance. In this paper, we aim to bridge this gap by investigating the impact of different transformations on both the isotropy and the performance in order to assess the true impact of anisotropy. To this end, we rely on the clustering task as a means of evaluating the ability of text representations to produce meaningful groups. Thereby, we empirically show a limited impact of anisotropy on the expressiveness of sentence representations both in terms of directions and L2 closeness. |
## Is Anisotropy Truly Harmful? A Case Study On Text Clustering
Mira Ait-Saada§† and **Mohamed Nadif** §
§Centre Borelli UMR9010, Université Paris Cité, 75006, Paris
†Caisse des Dépôts et Consignations, 75013, Paris
{mira.ait-saada,mohamed.nadif}@u-paris.fr
## Abstract
In the last few years, several studies have been devoted to dissecting dense text representations in order to understand their effectiveness and further improve their quality. Particularly, the anisotropy of such representations has been observed, which means that the directions of the word vectors are not evenly distributed across the space but rather concentrated in a narrow cone. This has led to several attempts to counteract this phenomenon both on static and contextualized text representations. However, despite this effort, there is no established relationship between anisotropy and performance.
In this paper, we aim to bridge this gap by investigating the impact of different transformations on both the isotropy and the performance in order to assess the true impact of anisotropy. To this end, we rely on the clustering task as a means of evaluating the ability of text representations to produce meaningful groups. Thereby, we empirically show a limited impact of anisotropy on the expressiveness of sentence representations both in terms of directions and L2 closeness.
## 1 Introduction
Contextualized pre-trained representations are now widely used as input to various tasks such as information retrieval (Lin et al., 2021), anomaly detection (Ait-Saada and Nadif, 2023) and document clustering (Boutalbi et al., 2022). In parallel, several studies have investigated the intrinsic properties of Transformers (Peters et al., 2018; Ait Saada et al., 2021; Ethayarajh, 2019; Kovaleva et al.,
2019) in order to demystify these black-box models and the reasons behind their impressive performance levels. Particularly, it has been observed that language models in general (Gao et al., 2019)
and Transformer word embedding models in particular (Ethayarajh, 2019; Wang et al., 2020) produce an anisotropic embedding space. This concretely means that the directions of trained dense word representations do not uniformly occupy the embedding space, which is suspected to limit their expressiveness and thus their expected performance on downstream tasks. The main question addressed in this paper is how harmful this anisotropy really is regarding the quality of text representations.

![0_image_0.png](0_image_0.png)
Several approaches have been proposed to increase the isotropy of dense representations, based on different strategies. In the context of static word embeddings like GloVe and word2vec, both Raunak et al. (2019) and Mu and Viswanath (2018)
propose a post-processing method that consists in removing the first principal components before reconstructing the word vectors as opposed to the traditional approach of removing the weakest components. This approach improves the quality of word vectors on several downstream tasks while reducing their anisotropy (Mu and Viswanath, 2018).
As to contextualized representations provided by Transformer models, several approaches have been proposed in order to alleviate the anisotropy problem. For instance, based on the idea that anisotropic representations tend to have high expected pairwise cosine similarity, Wang et al.
(2020) propose to apply a cosine similarity regularization term to the embedding matrix. In the same vein, Gao et al. (2019) propose a method named
"spectrum control" that allows for increasing the isotropy of Transformer representations and improving the performance of the machine translation task. To this purpose, they propose regularization 1194 terms that hamper the singular value decay of the embedding matrix. However, despite the success of these *optimization* tricks in lowering the anisotropy of Transformer representations, Ding et al. (2022)
have recently shown that they do not bring any improvement, relying on several tasks like summarization and sentence similarity (STS). They even observed a certain deterioration of the performance brought by anisotropy mitigation techniques.
In contrast, Rajaee and Pilehvar (2022, 2021)
show that *post-processing* methods made for increasing isotropy are also responsible for a performance increase in the STS task in both monolingual and cross-lingual settings. Similarly, the whitening operation, which consists in using the principal components normalized by their inertia, has shown an increase in isotropy as well as enhanced performance in STS (Su et al., 2021; Huang et al., 2021)
and document clustering (Ait-Saada et al., 2021). However, there is no evidence that the decrease of anisotropy brought by such transformations is directly responsible for the gain of performance, as shown in Figure 1, which gives an initial idea of the question addressed in this paper.
Indeed, despite the great energy devoted to studying and mitigating the anisotropy of dense text representations, there is no clear connection between isotropy and performance, which seems to depend, inter alia, on the sought task. In order to contribute to settling this question, we consider using a task that has never been used for this purpose: document clustering. The rationale behind this choice is to evaluate, under different degrees of isotropy, the capability of text representations to facilitate the clear separation and identification of meaningful groups of documents through clustering algorithms.
The main contributions of this paper are:
- We extend the isotropy study of word embeddings to document representations.
- We investigate the correlation between different isotropy measures.
- We assess the connection between isotropy and quality of representation.
## 2 Background

## 2.1 Isotropy Measures
Let X = {xi} be a set of n vector representations, characterizing n words or documents by d features.
In Mu and Viswanath (2018), the isotropy is assessed using the partition function ψ as follows:
$${\frac{\operatorname*{min}_{\|\mathbf{c}\|=1}\psi(\mathbf{c})}{\operatorname*{max}_{\|\mathbf{c}\|=1}\psi(\mathbf{c})}};\;{\mathrm{~where~}}\psi(\mathbf{c})=\sum_{i=1}^{n}e^{\langle\mathbf{x}_{i},\mathbf{c}\rangle}$$
This approach is inspired by the theoretical findings issued by Arora et al. (2016) who prove that, for isotropic representations X , the partition function ψ can be approximated by a constant for any unit vector c, thus leading to a min/max ratio score of 1.
As there is no analytic solution c that maximizes or minimizes ψ(c), Mu and Viswanath propose to use the eigenvectors of the covariance matrix as the set of unit vectors, which leads to:
$${\mathcal{I}}_{p f}({\mathcal{X}})={\frac{\operatorname*{min}_{{\mathbf{w}}_{j}}\,\psi({\mathbf{w}}_{j})}{\operatorname*{max}_{{\mathbf{w}}_{j}}\,\psi({\mathbf{w}}_{j})}}$$
where pf stands for *partition function*, wj is the jth eigenvector of X⊤X (X being the representation matrix). In our experiments, X contains representations of either words or sentences/documents. In addition to this measure, Wang et al. (2020) quantify the anisotropy by the standard deviation of the partition function normalized by the mean:
$$\mathcal{A}(\mathcal{X})=\sqrt{\frac{\sum_{j=1}^{d}(\psi(\mathbf{w}_{j})-\bar{\psi}))^{2}}{d\,\bar{\psi}^{2}}}$$
where ψ¯ is the average value of the partition function. Perfectly isotropic representations lead to A(X ) = 0 and greater values denote a higher anisotropy. For our purpose, we derive the isotropy score as the square root of the precision score τ = 1/σ, which leads to:
$$\mathcal{I}_{p f_{2}}(\mathcal{X})=\frac{1}{\sqrt{\sigma}}=\frac{1}{\mathcal{A}(\mathcal{X})}$$
$\sigma$ being the variance normalized by $d\bar{\psi}^{2}$.

On the other hand, the study of anisotropy provided in (Ethayarajh, 2019) has been applied to word representations, and the empirical results have been obtained using a high number of words picked randomly. The authors rely on the assumption that the expected similarity of two words uniformly randomly sampled from an isotropic embedding space is zero and that high similarities characterize an anisotropic embedding space. They hence use the expected pairwise cosine similarity in order to assess the anisotropy level of word representations. The *isotropy* is thus obtained by:

$$\mathcal{I}_{cos}:=\mathbb{E}_{i\neq i'}\big(1-\cos(\mathbf{x}_{i},\mathbf{x}_{i'})\big)$$

where the score is computed over $m$ random pairs $(\mathbf{x}_{i},\mathbf{x}_{i'})$ of vector representations.
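As an illustration, here is a minimal NumPy sketch (not the authors' implementation) of how the three isotropy scores above could be computed from a representation matrix $X$; numerical-stability tricks are omitted.

```python
import numpy as np

def isotropy_scores(X, n_pairs=5000, seed=0):
    """X: (n, d) matrix of word or sentence representations."""
    rng = np.random.default_rng(seed)
    n, d = X.shape

    # Eigenvectors of X^T X play the role of the unit vectors c.
    _, W = np.linalg.eigh(X.T @ X)                 # columns of W are the w_j
    psi = np.exp(X @ W).sum(axis=0)                # psi(w_j), j = 1..d

    i_pf = psi.min() / psi.max()                   # I_pf
    a = np.sqrt(((psi - psi.mean()) ** 2).sum() / (d * psi.mean() ** 2))
    i_pf2 = 1.0 / a                                # I_pf2 = 1 / A(X)

    # I_cos: expected (1 - cosine similarity) over random pairs with i != i'.
    i_idx, j_idx = rng.integers(0, n, n_pairs), rng.integers(0, n, n_pairs)
    keep = i_idx != j_idx
    u, v = X[i_idx[keep]], X[j_idx[keep]]
    cos = (u * v).sum(1) / (np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1))
    i_cos = float(np.mean(1.0 - cos))
    return i_pf, i_pf2, i_cos
```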
![2_image_0.png](2_image_0.png)
## 2.2 Quality Measures
In order to assess the quality of text representations X of size n, we rely on the document clustering task, with the aim of estimating the ability of a clustering algorithm to accurately distinguish groups of documents in a corpus represented by X . As the accuracy measure is not reliable when the classes are dramatically unbalanced, this is achieved using two well-known measures: Normalized Mutual Information (NMI, Strehl and Ghosh 2002), and the Adjusted Rand Index (ARI, Hubert and Arabie 1985; Steinley 2004).
Thereby, to compare two partitions $A$ and $B$ into $g$ clusters, the NMI metric takes the following form: $\mathrm{NMI}(A,B)=\frac{\mathrm{MI}(A,B)}{\sqrt{H(A)\,H(B)}}$, where $\mathrm{MI}(A,B)$ denotes the mutual information while $H(\cdot)$ denotes the entropy; $\mathrm{NMI}(A,B)$ is hence given by:

$$\frac{\sum_{k,\ell}\frac{n_{k\ell}}{n}\log\frac{n n_{k\ell}}{n_{k}\hat{n}_{\ell}}}{\sqrt{(\sum_{k}n_{k}\log\frac{n_{k}}{n})(\sum_{\ell}\hat{n}_{\ell}\log\frac{\hat{n}_{\ell}}{n})}}$$

where $n_{k}$ represents the number of samples contained in the class $A_{k}$ ($1\leq k\leq g$), $\hat{n}_{\ell}$ the number of samples belonging to the class $B_{\ell}$ ($1\leq\ell\leq g$), and $n_{k\ell}$ the number of samples that are at the intersection between the class $A_{k}$ and the class $B_{\ell}$.
The ARI metric is a measure of the similarity between two groups of data. From a mathematical point of view, $\mathrm{ARI}(A,B)$ is related to the precision and is given by:

$$\frac{\sum_{k,\ell}\binom{n_{k\ell}}{2}-\left[\sum_{k}\binom{n_{k}}{2}\sum_{\ell}\binom{\hat{n}_{\ell}}{2}\right]/\binom{n}{2}}{\frac{1}{2}\left[\sum_{k}\binom{n_{k}}{2}+\sum_{\ell}\binom{\hat{n}_{\ell}}{2}\right]-\left[\sum_{k}\binom{n_{k}}{2}\sum_{\ell}\binom{\hat{n}_{\ell}}{2}\right]/\binom{n}{2}}$$

where the binomial coefficient $\binom{u}{v}$ can be interpreted as the number of ways to choose $v$ elements from a $u$-element set.
Intuitively, NMI quantifies how much the estimated clustering is informative about the true clustering, while the ARI measures the degree of agreement between the estimated clustering and the reference partition. Both NMI and ARI are equal to 1 if the resulting clustering partition is identical to the ground truth.
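In practice, both scores can be obtained from standard library calls. The snippet below is a small illustration with toy label vectors (our example, not the paper's evaluation code); the `geometric` averaging matches the $\sqrt{H(A)\,H(B)}$ normalization above.

```python
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

true_labels = [0, 0, 1, 1, 2, 2]   # toy ground-truth classes
pred_labels = [0, 0, 1, 2, 2, 2]   # toy clustering output

# 'geometric' matches the sqrt(H(A) H(B)) normalization used above.
nmi = normalized_mutual_info_score(true_labels, pred_labels, average_method="geometric")
ari = adjusted_rand_score(true_labels, pred_labels)
print(nmi, ari)
```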
## 3 Experiments
In this study, we aim to determine to what extent the anisotropy actually affects the quality of the representations and their ability to discriminate data samples through separable clusters. To this end, we use three measures to evaluate the isotropy of the original embedding space before and after post-processing. Then, we compare the changes in isotropy with the corresponding clustering performance in order to establish a potential relationship between the two concepts.
Relying on several isotropy measures allows us to consolidate confidence in our conclusions and, at the same time, verify if the measures agree with each other. In the same spirit, using different clustering methods and performance measures ensures more rigorous assertions.
We make the code and data used publicly available1.
## 3.1 Datasets
The datasets used for clustering experiments are described in Table 1, where the balance is the ratio between the smallest and largest cluster sizes. We used the classic3 and classic4 datasets of Cornell University, the BBC news dataset proposed in (Greene and Cunningham, 2006), and random extracts of DBPedia (Lehmann et al., 2015) and AG-news (Zhang et al., 2015) of size 12,000 and 8,000, respectively.

1 https://github.com/miraaitsaada/anisotropy_clustering
| | classic3 | classic4 | DBPedia | AG-news | BBC |
|------------|------------|------------|-----------|-----------|-------|
| Clusters | 3 | 4 | 14 | 4 | 5 |
| Balance | 0.71 | 0.32 | 0.92 | 0.97 | 0.76 |
| Samples | 3 891 | 7 095 | 12 000 | 8 000 | 2 225 |
Table 1: Datasets' description.
In addition to the datasets used for clustering, we also make use of an external dataset in order to compute an independent score of isotropy: the dataset used by Rajaee and Pilehvar (2022), which contains sentences extracted from Wikipedia. We use this dataset to evaluate isotropy measures like Icos, computed between m = 5 000 pairs of words and sentences. The 10 000 resulting representations are also used to compute Ipf and Ipf2.
## 3.2 Post-Processing
In this study, we focus on post-processing operations based on dimension reduction, showing their effectiveness on text clustering and assessing their impact on isotropy. The objective here is to compute a reduced version of X(n×d) called Y(n×d′)
that comprises the most useful information present in X while using only d′ dimensions.
As an alternative to removing the dominant principal components (PCs) (Raunak et al., 2019; Mu and Viswanath, 2018), the whitening operation allows to normalize the PCs to unit variance, thus reducing the impact of the first components and producing vectors of better quality. It consists in building a reduced representation Y whereby each value is computed as:
$y_{ij}=\mathbf{x}_{i}\mathbf{w}_{j}/\sqrt{\delta_{j}}$, $\forall i=1,\ldots,n$; $j=1,\ldots,d'$
where $\mathbf{w}_j$ is the $j$th eigenvector of $X^{\top}X$ and $\delta_j$ its $j$th eigenvalue. We also compare the classical and whitened versions of PCA with a nonlinear dimension reduction technique called UMAP (McInnes et al.,
2018), a faster and more robust manifold technique than t-SNE (van der Maaten and Hinton, 2008)
that can be used as a post-processing tool with any d′(while d′ ≤ 3 for t-SNE). UMAP, like t-SNE,
is a graph-based method that aims at producing a reduced space that best preserves the (local) connections of a KNN graph. In order to respect the unsupervised context of text clustering, we avoid all kinds of hyperparameter tuning. We thus set d′
to 10 for all of the post-processing methods.
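A minimal sketch of the three post-processing routes described above (not the paper's code): it assumes scikit-learn and the `umap-learn` package, keeps every hyperparameter at its default except $d'=10$, and notes that scikit-learn's whitening matches the formula above up to centering and a constant scale.

```python
from sklearn.decomposition import PCA
import umap  # provided by the umap-learn package

def post_process(X, method="pca_w", d_prime=10, seed=0):
    """Reduce (n, d) representations X to d' = 10 dimensions."""
    if method == "pca":
        return PCA(n_components=d_prime, random_state=seed).fit_transform(X)
    if method == "pca_w":
        # Whitened PCA: components rescaled to unit variance, i.e. y_ij = x_i w_j / sqrt(delta_j).
        return PCA(n_components=d_prime, whiten=True, random_state=seed).fit_transform(X)
    if method == "umap":
        return umap.UMAP(n_components=d_prime, random_state=seed).fit_transform(X)
    raise ValueError(f"unknown method: {method}")
```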
Besides, two strategies are used to leverage Transformer models. The first consists simply in taking the last layer as usually performed in the literature (Reimers and Gurevych, 2019). The second strategy used all of the layers by averaging them together (Ait-Saada et al., 2021).
## 3.3 Euclidean Vs. Cosine
As a recall, anisotropic vector directions occupy a narrow cone in the geometrical space. Given this definition, we can expect directional techniques based on the angles between vectors to be particularly sensitive to the alleged lack of expressiveness induced by anisotropy. With this in mind, we use, in addition to k-means (MacQueen et al.,
1967), Spherical k-means (Dhillon and Modha, 2001), which is made for directional data and based on the cosine distance instead of the L2 metric. For both algorithms, we use 10 different initializations and keep the partition that yields the best within-cluster inertia. For more details about the datasets used, please refer to Appendix 3.1.
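The clustering step could look as follows. This is our sketch, in which spherical k-means is approximated by L2-normalizing the vectors before running k-means (for unit-norm vectors, squared Euclidean distance equals $2-2\cos$); it is therefore not identical to the algorithm of Dhillon and Modha (2001).

```python
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

def cluster(Y, n_clusters, spherical=False, seed=0):
    """k-means, or an approximation of spherical k-means via L2-normalized inputs."""
    if spherical:
        Y = normalize(Y)  # unit-norm rows: Euclidean distance becomes a function of cosine
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)  # best of 10 initializations
    return km.fit_predict(Y)
```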
## 3.4 Correlation Estimation
In order to assess the linear correlation between two continuous variables, we use the Pearson correlation coefficient ρ (Pearson, 1896) and test its significance. The ρ coefficient between two random variables X and Y indicates how much does one of the variables increase with the growth of the other. It is computed as:
$$\rho_{X,Y}={\frac{\operatorname{cov}(X,Y)}{\sqrt{\sigma_{X}\sigma_{Y}}}}$$
where $X$ and $Y$ are two random variables of variance $\sigma_X$ and $\sigma_Y$ respectively, and $\mathrm{cov}(X,Y)$ is the covariance between $X$ and $Y$.
In order to test the significance of ρ, we rely on the p-value, which is the probability of observing data at least as extreme as ours under the null hypothesis that the two variables are uncorrelated (ρX,Y = 0).
Thus, high ρX,Y values indicate a stronger linear relationship and the closer the p-value gets to zero, the more we consider significant the correlation between X and Y .
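Such a test is readily available, for example in SciPy; the values below are toy placeholders, not results from the paper.

```python
from scipy.stats import pearsonr

isotropy_values = [0.12, 0.45, 0.33, 0.80, 0.27]   # toy I_cos values of several configurations
quality_values  = [0.61, 0.58, 0.64, 0.60, 0.63]   # corresponding toy NMI values

rho, p_value = pearsonr(isotropy_values, quality_values)
# A small p-value would indicate a significant linear relationship between the two.
```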
![4_image_1.png](4_image_1.png)
![4_image_0.png](4_image_0.png)
## 4 Discussion
Figure 2 confronts one quality measure (NMI) and one isotropy measure (Icos) using different postprocessing techniques. We first observe that PCAw produces, by far, the most isotropic representations while increasing the performance of the raw vectors.
Indeed, an appealing explanation of the success of the whitening operation is that it considerably alleviates the anisotropy of the embedding space (Su et al., 2021). Applying that reasoning, PCA and UMAP should deteriorate the performance since they both exacerbate the anisotropy (in all cases for PCA and in most cases for UMAP). Nonetheless, the performance of PCA is comparable to that of the raw embeddings and UMAP achieves even better performance than PCAw even though it significantly reduces the isotropy. Overall, averaging the whole set of layer representations achieves better results, even though it clearly decreases the isotropy, compared to using the last layer, as traditionally performed. Also, it is worth noting that even when the *directions* of the vectors are used
(skm), the decrease of isotropy has a negligible impact on the performance. All these observations suggest that, although the anisotropy reduces the spectrum of directions taken by sentence vectors, it does not necessarily alter their expressiveness.
In order to confirm this supposition, we directly compare isotropy and quality measures in a wide range of situations. To this end, we compute the correlation (Table 2) between several isotropy measures and performance scores on 2 models (BERT
(Devlin et al., 2019) and RoBERTa (Liu et al.,
2019)) with 2 different strategies ("all layers" and
"last"), using 5 datasets and 4 transformations, lead-2Corresponding p-values are given in Table 3 in the Appendix ing to a total of 80 occurrences of each measure.
We first observe a high correlation (associated with a near-zero p-value in Table 3) between measures within the same family (e.g. Icos and Ipf ). This indicates that the selected measures agree with each other which denotes a certain coherence. However, when looking at the correlation between the two families of measures, it is clear that there is no significant relationship between isotropy and quality measures, since all the values of the correlation coefficient are close to zero, which is corroborated by relatively high p-values, denoting a non-significant correlation. Note that the same observations (not shown in this paper) can be made using the Spearman correlations of ranks (Spearman, 1987).
## 5 Conclusion
It has been known to happen that transformations that tend to decrease the anisotropy of text representations also improve the performance of downstream tasks. In stark contrast, we observe in the present study that transformations that exacerbate the anisotropy phenomenon may also improve the results, which calls into question the importance of isotropy in text representation. To draw this important conclusion, we relied on the clustering task and several empirical measures to assess the relationship between isotropy and quality of representations, using several datasets. Most importantly, we show that even a directional approach for clustering, which should be primarily affected by anisotropy, does not undergo any performance loss resulting from low-isotropy representations. In addition, we show the advantage of using UMAP as a post-processing step, which provides good-quality representations using only a handful of dimensions, despite a high resulting anisotropy.
## 6 Limitations
In this study, we focused on the clustering task in order to assess the real impact of anisotropy on the quality of representations. The conclusion is clear regarding Euclidean and directional clustering but investigating other tasks like information retrieval and anomaly detection would further strengthen the present findings. Also, the set of post-processing methods is not limited to the ones used in this study, and it would be interesting to conduct a more comprehensive study, including more transformation functions. Finally, an important future direction is to assess the impact of anisotropy on other languages, especially on embedding models trained on a restrained corpus, which can be the case of low-resource languages.
## References
Mira Ait-Saada and Mohamed Nadif. 2023. Unsupervised anomaly detection in multi-topic short-text corpora. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 1384–1395, Dubrovnik, Croatia. Association for Computational Linguistics.
Mira Ait-Saada, François Role, and Mohamed Nadif.
2021. How to leverage a multi-layered Transformer language model for text clustering: An ensemble approach. In *Proceedings of the 30th ACM International Conference on Information & Knowledge Management*, CIKM '21, page 2837–2841, New York, NY, USA. Association for Computing Machinery.
Mira Ait Saada, François Role, and Mohamed Nadif.
2021. Unsupervised methods for the study of Transformer embeddings. In Advances in Intelligent Data Analysis XIX, pages 287–300, Cham. Springer International Publishing.
Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2016. A latent variable model approach to PMI-based word embeddings. *Transactions of the Association for Computational Linguistics*, 4:385–399.
Rafika Boutalbi, Mira Ait-Saada, Anastasiia Iurshina, Steffen Staab, and Mohamed Nadif. 2022. Tensorbased graph modularity for text data clustering. In Proceedings of the 45th International ACM SIGIR
Conference on Research and Development in Information Retrieval, SIGIR '22, page 2227–2231, New York, NY, USA. Association for Computing Machinery.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional Transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for
Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Inderjit S Dhillon and Dharmendra S Modha. 2001.
Concept decompositions for large sparse text data using clustering. *Machine learning*, 42(1):143–175.
Yue Ding, Karolis Martinkus, Damian Pascual, Simon Clematide, and Roger Wattenhofer. 2022. On isotropy calibration of Transformer models. In *Proceedings of the Third Workshop on Insights from Negative Results in NLP*, pages 1–9, Dublin, Ireland.
Association for Computational Linguistics.
Kawin Ethayarajh. 2019. How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 55–65, Hong Kong, China. Association for Computational Linguistics.
Jun Gao, Di He, Xu Tan, Tao Qin, Liwei Wang, and Tieyan Liu. 2019. Representation degeneration problem in training natural language generation models.
In *International Conference on Learning Representations*.
Derek Greene and Pádraig Cunningham. 2006. Practical solutions to the problem of diagonal dominance in kernel document clustering. In Proceedings of the 23rd International Conference on Machine Learning, ICML '06, page 377–384, New York, NY, USA.
Association for Computing Machinery.
Junjie Huang, Duyu Tang, Wanjun Zhong, Shuai Lu, Linjun Shou, Ming Gong, Daxin Jiang, and Nan Duan. 2021. WhiteningBERT: An easy unsupervised sentence embedding approach. In Findings of the Association for Computational Linguistics: EMNLP
2021, pages 238–244, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Lawrence Hubert and Phipps Arabie. 1985. Comparing partitions. *Journal of classification*, 2(1):193–218.
Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the Dark Secrets of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4364–4373, Hong Kong, China. Association for Computational Linguistics.
Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick Van Kleef, Sören Auer, et al. 2015. DBpedia - a large-scale, multilingual knowledge base extracted from wikipedia.
Semantic web, 6(2):167–195.
Jimmy Lin, Rodrigo Nogueira, and Andrew Yates. 2021.
Pretrained transformers for text ranking: Bert and beyond.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692.
James MacQueen et al. 1967. Some methods for classification and analysis of multivariate observations.
In *Proceedings of the fifth Berkeley symposium on* mathematical statistics and probability, volume 1, pages 281–297. Oakland, CA, USA.
Leland McInnes, John Healy, and James Melville. 2018.
UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426.
Jiaqi Mu and Pramod Viswanath. 2018. All-but-the-top:
Simple and effective postprocessing for word representations. In *International Conference on Learning* Representations.
Karl Pearson. 1896. VII. mathematical contributions to the theory of evolution.–III. regression, heredity, and panmixia. Philosophical Transactions of the Royal Society of London. Series A, containing papers of a mathematical or physical character, (187):253–318.
Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018. Dissecting Contextual Word Embeddings: Architecture and Representation. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 1499–
1509, Brussels, Belgium. Association for Computational Linguistics.
Sara Rajaee and Mohammad Taher Pilehvar. 2021. How does fine-tuning affect the geometry of embedding space: A case study on isotropy. In *Findings of the* Association for Computational Linguistics: EMNLP
2021, pages 3042–3049, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Sara Rajaee and Mohammad Taher Pilehvar. 2022. An isotropy analysis in the multilingual BERT embedding space. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 1309–1316,
Dublin, Ireland. Association for Computational Linguistics.
Vikas Raunak, Vivek Gupta, and Florian Metze. 2019.
Effective dimensionality reduction for word embeddings. In *Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019)*,
pages 235–243, Florence, Italy. Association for Computational Linguistics.
Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence Embeddings using Siamese BERTNetworks. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3980–3990, Hong Kong, China. Association for Computational Linguistics.
Charles Spearman. 1987. The proof and measurement of association between two things. The American journal of psychology, 100(3/4):441–471.
Douglas Steinley. 2004. Properties of the HubertArable Adjusted Rand Index. *Psychological methods*,
9(3):386.
Alexander Strehl and Joydeep Ghosh. 2002. Cluster ensembles—a knowledge reuse framework for combining multiple partitions. *Journal of machine learning research*, 3(Dec):583–617.
Jianlin Su, Jiarun Cao, Weijie Liu, and Yangyiwen Ou. 2021. Whitening sentence representations for better semantics and faster retrieval. *CoRR*,
abs/2103.15316.
Laurens van der Maaten and Geoffrey Hinton. 2008.
Visualizing data using t-SNE. Journal of Machine Learning Research, 9(86):2579–2605.
Lingxiao Wang, Jing Huang, Kevin Huang, Ziniu Hu, Guangtao Wang, and Quanquan Gu. 2020. Improving neural language generation with spectrum control.
In *International Conference on Learning Representations*.
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015.
Character-level convolutional networks for text classification. Advances in neural information processing systems, 28:649–657.
A Appendix
| NMI | ARI | Dataset | External (word) | External (sentence) | | | | | | | | | | |
|-------------|-------|-----------|-------------------|-----------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| km | skm | km | skm | cos | pf | pf2 | cos | pf | pf2 | cos | pf | pf2 | | |
| NMI km | 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | 0.647 | 0.858 | 0.644 | 0.225 | 0.88 | 0.86 | 0.54 | 0.964 | 0.896 | |
| skm | ≈ 0.0 | 0.0 | ≈ 0.0 | ≈ 0.0 | 0.453 | 0.662 | 0.195 | 0.642 | 0.64 | 0.658 | 0.476 | 0.616 | 0.635 | |
| ARI km | ≈ 0.0 | ≈ 0.0 | 0.0 | ≈ 0.0 | 0.562 | 0.921 | 0.542 | 0.297 | 0.946 | 0.925 | 0.542 | 0.98 | 0.96 | |
| skm | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | 0.0 | 0.934 | 0.71 | 0.517 | 0.56 | 0.742 | 0.722 | 0.877 | 0.755 | 0.741 | |
| Dataset cos | 0.647 | 0.453 | 0.562 | 0.934 | 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | |
| pf | 0.858 | 0.662 | 0.921 | 0.71 | ≈ 0.0 | 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | |
| pf2 | 0.644 | 0.195 | 0.542 | 0.517 | ≈ 0.0 | ≈ 0.0 | 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | |
| cos | 0.225 | 0.642 | 0.297 | 0.56 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | |
| Ext-w | pf | 0.88 | 0.64 | 0.946 | 0.742 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 |
| pf2 | 0.86 | 0.658 | 0.925 | 0.722 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | |
| cos | 0.54 | 0.476 | 0.542 | 0.877 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | 0.0 | ≈ 0.0 | ≈ 0.0 | |
| Ext-s pf | 0.964 | 0.616 | 0.98 | 0.755 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | 0.0 | ≈ 0.0 | |
| pf2 | 0.896 | 0.635 | 0.96 | 0.741 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | ≈ 0.0 | 0.0 | |
## ACL 2023 Responsible NLP Checklist

## A. For Every Submission:
✓ A1. Did you describe the limitations of your work?
6
✗ A2. Did you discuss any potential risks of your work?
We did not identify any risk regarding our work.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** 3
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? Not applicable. Left blank.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Not applicable. Left blank.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
nguyen-duc-etal-2023-class | Class based Influence Functions for Error Detection | https://aclanthology.org/2023.acl-short.104 | Influence functions (IFs) are a powerful tool for detecting anomalous examples in large scale datasets. However, they are unstable when applied to deep networks. In this paper, we provide an explanation for the instability of IFs and develop a solution to this problem. We show that IFs are unreliable when the two data points belong to two different classes. Our solution leverages class information to improve the stability of IFs.Extensive experiments show that our modification significantly improves the performance and stability of IFs while incurring no additional computational cost. | # Class Based Influence Functions For Error Detection
Nguyen Duc-Thang∗† Hoang Thanh-Tung∗† **Quan Tran**∗‡
Huu-Tien Dang† Nguyen Ngoc-Hieu† Anh Dau† **Nghi Bui**†
† FPT Software AI Center ‡ Adobe Research
{nguyenducthang8a2, htt210, quanthdhcn}@gmail.com
## Abstract
Influence functions (IFs) are a powerful tool for detecting anomalous examples in large scale datasets. However, they are unstable when applied to deep networks. In this paper, we provide an explanation for the instability of IFs and develop a solution to this problem. We show that IFs are unreliable when the two data points belong to two different classes. Our solution leverages class information to improve the stability of IFs. Extensive experiments show that our modification significantly improves the performance and stability of IFs while incurring no additional computational cost.
## 1 Introduction
Deep learning models are data hungry. Large models such as transformers (Vaswani et al., 2017),
BERT (Devlin et al., 2019), and GPT-3 (Brown et al., 2020) require millions to billions of training data points. However, data labeling is an expensive, time consuming, and error prone process. Popular datasets such as the ImageNet (Deng et al., 2009)
contain a significant amount of errors - data points with incorrect or ambiguous labels (Beyer et al.,
2020). The need for automatic error detection tools is increasing as the sizes of modern datasets grow.
Influence function (IF) (Koh and Liang, 2017)
and its variants (Charpiat et al., 2019; Khanna et al.,
2019; Barshan et al., 2020; Pruthi et al., 2020) are a powerful tool for estimating the influence of a data point on another data point. Researchers leveraged this capability of IFs to design or detect adversarial
(Cohen et al., 2020), poisonous (Koh et al., 2022; Koh and Liang, 2017), and erroneous (Dau et al.,
2022) examples in large scale datasets. The intuition is that these harmful data points usually have a negative influence on other data points and this influence can be estimated with IFs.
Basu et al. (2021) empirically observed that IFs are unstable when they are applied to deep neural networks (DNNs). The quality of influence estimation deteriorates as networks become more complex. In this paper, we provide empirical and theoretical explanations for the instability of IFs.

∗Joint first authors
We show that IFs scores are very noisy when the two data points belong to two different classes but IFs scores are much more stable when the two data points are in the same class (Sec. 3). Based on that finding, we propose IFs-class, variants of IFs that use class information to improve the stability while introducing no additional computational cost.
IFs-class can replace IFs in anomalous data detection algorithms. In Sec. 4, we compare IFs-class and IFs on the error detection problem. Experiments on various NLP tasks and datasets confirm the advantages of IFs-class over IFs.
## 2 Background And Related Work
We define the notations used in this paper. Let $\mathbf{z}=(\mathbf{x},y)$ be a data point, where $\mathbf{x}\in\mathcal{X}$ is the input and $y\in\mathcal{Y}$ is the target output; $\mathcal{Z}=\{\mathbf{z}^{(i)}\}_{i=1}^{n}$ be a dataset of $n$ data points; $\mathcal{Z}_{-i}=\mathcal{Z}\setminus\mathbf{z}^{(i)}$ be the dataset $\mathcal{Z}$ with $\mathbf{z}^{(i)}$ removed; $f_{\theta}:\mathcal{X}\to\mathcal{Y}$ be a model with parameter $\theta$; $\mathcal{L}_{\mathcal{Z},\theta}=\frac{1}{n}\sum_{i=1}^{n}\ell(f_{\theta}(\mathbf{x}^{(i)}),y^{(i)})=\frac{1}{n}\sum_{i=1}^{n}\ell(\mathbf{z}^{(i)};\theta)$ be the empirical risk of $f_{\theta}$ measured on $\mathcal{Z}$, where $\ell:\mathcal{Y}\times\mathcal{Y}\to\mathbb{R}^{+}$ is the loss function; $\hat{\theta}=\arg\min_{\theta}\mathcal{L}_{\mathcal{Z},\theta}$ and $\hat{\theta}_{-i}=\arg\min_{\theta}\mathcal{L}_{\mathcal{Z}_{-i},\theta}$ be the optimal parameters of the model $f_{\theta}$ trained on $\mathcal{Z}$ and $\mathcal{Z}_{-i}$, respectively. In this paper, $f_{\theta}$ is a deep network and $\hat{\theta}$ is found by training $f_{\theta}$ with gradient descent on the training set $\mathcal{Z}$.
## 2.1 Influence Function And Variants
The influence of a data point $\mathbf{z}^{(i)}$ on another data point $\mathbf{z}^{(j)}$ is defined as the change in loss at $\mathbf{z}^{(j)}$ when $\mathbf{z}^{(i)}$ is removed from the training set

$$s^{(ij)}=\ell(\mathbf{z}^{(j)};\hat{\boldsymbol{\theta}}_{-i})-\ell(\mathbf{z}^{(j)};\hat{\boldsymbol{\theta}})\quad(1)$$

The absolute value of $s^{(ij)}$ measures the strength of the influence of $\mathbf{z}^{(i)}$ on $\mathbf{z}^{(j)}$. The sign of $s^{(ij)}$ shows the direction of influence. A negative $s^{(ij)}$ means that removing $\mathbf{z}^{(i)}$ decreases the loss at $\mathbf{z}^{(j)}$, i.e. $\mathbf{z}^{(i)}$ is harmful to $\mathbf{z}^{(j)}$. $s^{(ij)}$ has high variance because it depends on a single (arbitrary) data point $\mathbf{z}^{(j)}$.

To better estimate the influence of $\mathbf{z}^{(i)}$ on the entire data distribution, researchers average the influence scores of $\mathbf{z}^{(i)}$ over a reference set $\mathcal{Z}'$

$$s^{(i)}=\frac{1}{|\mathcal{Z}^{\prime}|}\sum_{\mathbf{z}^{(j)}\in\mathcal{Z}^{\prime}}s^{(ij)}=\mathcal{L}_{\mathcal{Z}^{\prime},\hat{\theta}_{-i}}-\mathcal{L}_{\mathcal{Z}^{\prime},\hat{\theta}}\quad(2)$$

$s^{(i)}$ is the influence of $\mathbf{z}^{(i)}$ on the reference set $\mathcal{Z}'$. $\mathcal{Z}'$ can be a random subset of the training set or a held-out dataset. Naive computation of $s^{(ij)}$ requires retraining $f_{\theta}$ on $\mathcal{Z}_{-i}$. Koh and Liang (2017) proposed the influence function (IF) to quickly estimate $s^{(ij)}$ without retraining

$$s^{(ij)}\approx IF(\mathbf{z}^{(i)},\mathbf{z}^{(j)})\approx\frac{1}{n}\nabla_{\hat{\theta}}\ell(\mathbf{z}^{(i)};\hat{\theta})^{\top}H_{\hat{\theta}}^{-1}\nabla_{\hat{\theta}}\ell(\mathbf{z}^{(j)};\hat{\theta})\quad(3)$$

where $H_{\hat{\theta}}=\partial^{2}\mathcal{L}_{\mathcal{Z},\hat{\theta}}/\partial\theta^{2}$ is the Hessian at $\hat{\theta}$. Exact computation of $H_{\hat{\theta}}^{-1}$ is intractable for modern networks. Koh and Liang (2017) developed a fast algorithm for estimating $H_{\hat{\theta}}^{-1}\nabla_{\hat{\theta}}\ell(\mathbf{z}^{(j)};\hat{\theta})$ and used only the derivatives w.r.t. the last layer's parameters to improve the algorithm's speed. Charpiat et al. (2019) proposed gradient dot product (GD) and gradient cosine similarity (GC) as faster alternatives to IF. Pruthi et al. (2020) argued that the influence can be better approximated by accumulating it throughout the training process (TracIn). The formulas for IFs are summarized in Tab. 3 in Appx. A.
IFs can be viewed as measures of the similarity between the gradients of two data points. Intuitively, gradients of harmful examples are dissimilar from that of normal examples (Fig. 1).
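As an illustration of this gradient-similarity view, here is a minimal PyTorch sketch (not the authors' implementation; their exact formulas are in their Tab. 3) that computes GD and GC between the last-layer gradients of two examples. It assumes a classifier `model` trained with cross-entropy whose final module `last_layer` holds the parameters of interest.

```python
import torch
import torch.nn.functional as F

def last_layer_grad(model, last_layer, z):
    """Flattened gradient of the loss at z = (x, y) w.r.t. the last layer's parameters."""
    x, y = z
    loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
    grads = torch.autograd.grad(loss, list(last_layer.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

def gradient_dot(model, last_layer, z_i, z_j):        # GD: dot product of the two gradients
    g_i = last_layer_grad(model, last_layer, z_i)
    g_j = last_layer_grad(model, last_layer, z_j)
    return torch.dot(g_i, g_j)

def gradient_cos(model, last_layer, z_i, z_j):        # GC: cosine similarity of the two gradients
    g_i = last_layer_grad(model, last_layer, z_i)
    g_j = last_layer_grad(model, last_layer, z_j)
    return F.cosine_similarity(g_i, g_j, dim=0)
```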
## 2.2 Influence Functions For Error Detection
In the error detection problem, we have to detect data points with wrong labels. Given a (potentially noisy) dataset Z, we have to rank data points in Z by how likely they are erroneous. Removing or correcting errors improves the performance and robustness of models trained on that dataset.
Traditional error detection algorithms that use hand designed rules (Chu et al., 2013) or simple statistics (Huang and He, 2018), do not scale well to deep learning datasets. Cohen et al. (2020);
Dau et al. (2022) used IFs to detect adversarial and erroneous examples in deep learning datasets. Dau et al. (2022) used IFs to measure the influence of each data point z ∈ Z on a clean reference set Z′. Data points in Z are ranked by how harmful they are to Z′. Most harmful data points are reexamined by human or are removed from Z (Alg. 2 in Appx. A). In this paper, we focus on the error detection problem but IFs and IFs-class can be used to detect other kinds of anomalous data.
## 3 Method
![1_image_0.png](1_image_0.png)
![1_image_1.png](1_image_1.png)
Basu et al. (2021) attributed the instability of IFs to the non-convexity of DNNs and the errors in Taylor's expansion and Hessian-Vector product approximation. In this section, we show that the learning dynamics of DNNs makes examples from different classes unrelated and can have random influence on each other.
Pezeshkpour et al. (2021); Hanawa et al. (2021)
empirically showed that IFs with last layer gradient perform as well as or better than IFs with all layers' gradient and variants of IF behave similarly. Therefore, we analyze the behavior of GD
with last layer's gradient and generalize our results to other IFs. Fig. 1 shows the last layer's gradient of an MLP on a 3-class classification problem.
In the figure, gradients of mislabeled data points have large magnitudes and are opposite to gradients of correct data points in the true class. However, gradients of mislabeled data points are not necessarily opposite to that of correct data points from other classes. Furthermore, gradients of two data points from two different classes are almost perpendicular. We make the following observation.
A mislabeled/correct data point often has a very negative/positive influence on data points of the same (true) class, but its influence on other classes is noisy and small.
We verify the observation on real-world datasets.
(Fig. 2). We compute GD scores of pairs of clean data points from 2 different classes and plot the score's distribution. We repeat the procedure for pairs of data points from each class. In the 2-class case, GD scores are almost normally distributed with a very sharp peak at 0. That means, in many cases, a clean data point from one class has no significant influence on data points from the other class. And when it has a significant effect, the effect could be positive or negative with equal probability. In contrast, GD scores of pairs of data points from the same class are almost always positive. A clean data point almost certainly has a positive influence on clean data points of the same class.
Our theoretical analysis shows that when the two data points have different labels, then the sign of GD depends on two random variables, the sign of inner product of the features and the sign of inner product of gradients of the losses w.r.t. the logits.
And as the model becomes more confident about the labels of the two data points, the magnitude of GD becomes smaller very quickly. Small perturbations to the logits or the features can flip the sign of GD. In contrast, if the two data points have the same label, then the sign of GD depends on only one random variable, the sign of the inner product of the feature, and the GD's magnitude remains large when the model becomes more confident. Mathematical details are deferred to Appx. D.
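A compact way to see this in the common case of a linear softmax output layer (our sketch; the paper's full argument is in its Appx. D): with last-layer weights $W$, features $\mathbf{f}$, predicted probabilities $\mathbf{p}$ and one-hot label $\mathbf{y}$, the cross-entropy gradient is $\nabla_{W}\ell(\mathbf{z})=(\mathbf{p}-\mathbf{y})\mathbf{f}^{\top}$, so

$$GD(\mathbf{z}^{(i)},\mathbf{z}^{(j)})=\big\langle\mathbf{f}^{(i)},\mathbf{f}^{(j)}\big\rangle\,\big\langle\mathbf{p}^{(i)}-\mathbf{y}^{(i)},\,\mathbf{p}^{(j)}-\mathbf{y}^{(j)}\big\rangle,$$

i.e. the sign of GD is the product of the signs of the two inner products named above.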
## 3.2 Class Based Ifs For Error Detection
Our class based IFs for error detection is shown in Alg. 1. In Sec. 3.1, we see that an error has a very strong negative influence on correct data points in the true class, and a correct data point has a positive influence on correct data points in the true class. Influence score on the true class is a stronger indicator of the harmfulness of a data point and is better at differentiating erroneous and correct data points. Because we do not know the true class of $\mathbf{z}^{(i)}$ in advance, we compute its influence score on each class in the reference set $\mathcal{Z}'$ and take the minimum of these influence scores as the indicator of the harmfulness of $\mathbf{z}^{(i)}$ (line 8-11). Unlike the original IFs, IFs-class are not affected by the noise from other classes and thus, have lower variances (Fig. 4 in Appx. A). In Appx. A, we show that our algorithm has the same computational complexity as the IFs based error detection algorithm.

**Algorithm 1** Class based influence function for error detection.

**Require:**
1: $\mathcal{Z}=\{\mathbf{z}^{(i)}\}_{i=1}^{n}$: a big noisy dataset
2: $C$: number of classes
3: $\mathcal{Z}'_{k}=\{\mathbf{z}'^{(j_k)}\}_{j_k=1}^{m_k}$: clean data from class $k$
4: $\mathcal{Z}'=\bigcup_{k=1}^{C}\mathcal{Z}'_{k}$: a clean reference dataset
5: $f_{\hat{\theta}}$: a deep model pretrained on $\mathcal{Z}$
6: $\mathrm{sim}(\cdot,\cdot)$: a similarity measure in Tab. 3
**Ensure:** $\hat{\mathcal{Z}}$: data points in $\mathcal{Z}$ ranked by score
7: **for** $\mathbf{z}^{(i)}\in\mathcal{Z}$ **do**
8: **for** $k=1,\ldots,C$ **do**
9: $s_{k}^{(i)}=\frac{1}{m_k}\sum_{j_k=1}^{m_k}\mathrm{sim}\big(\nabla_{\phi}\ell(\mathbf{z}^{(i)}),\nabla_{\phi}\ell(\mathbf{z}'^{(j_k)})\big)$
10: **end for**
11: $s^{(i)}=\min_{k}(s_{k}^{(i)})$
12: **end for**
13: $\hat{\mathcal{Z}}=\mathrm{sort}(\mathcal{Z},\ \mathrm{key}=s,\ \mathrm{ascending}=\mathrm{True})$
14: **return** $\hat{\mathcal{Z}}$
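A small Python sketch of this scoring loop (our rendering of Alg. 1, not the released code): `grad_fn`, `sim`, and the data containers are assumptions standing in for the model's last-layer gradient, the similarity measure of Tab. 3, and the datasets $\mathcal{Z}$ and $\mathcal{Z}'$.

```python
import torch

def class_based_scores(grad_fn, noisy_data, reference_by_class, sim=torch.dot):
    """Score each noisy example by its minimum per-class mean similarity to clean
    reference gradients; lower scores flag likely errors."""
    ref_grads = {k: [grad_fn(z) for z in zs] for k, zs in reference_by_class.items()}
    scores = []
    for z in noisy_data:
        g = grad_fn(z)
        per_class = [torch.stack([sim(g, r) for r in refs]).mean()
                     for refs in ref_grads.values()]
        scores.append(min(per_class))          # line 11: s^(i) = min_k s_k^(i)
    return scores

# ranked = sorted(range(len(noisy_data)), key=lambda i: scores[i])   # most harmful first
```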
## 4 Experiments

## 4.1 Error Detection On Benchmark Datasets
Experiment setup We evaluate the error detection performance of IFs-class on 2 NLP tasks, (1)
text classification on IMDB (Maas et al., 2011),
SNLI (Bowman et al., 2015), and BigCloneBench
(Svajlenko et al., 2014) datasets, and (2) NER on the CoNLL2003 (Tjong Kim Sang and De Meulder, 2003) dataset. For text classification tasks, we detect text segments with wrong labels. For the NER task, we detect tokens with wrong entity types. We use BERT (Devlin et al., 2019) and CodeBERT (Feng et al., 2020) in our experiments.
Implementation details are located in Appx. B.

![3_image_0.png](3_image_0.png)

To create benchmark datasets Z's, we inject random
noise into the above datasets. For text classification
datasets, we randomly select p% of the data points
and randomly change their labels to other classes. For the CoNLL-NER dataset, we randomly select
p% of the sentences and change the labels of r%
of the phrases in the selected sentences. All tokens in a selected phrase are changed to the same
class. The reference set Z′is created by randomly
selecting mk clean data points from each class in
Z. To ensure a fair comparison, we use the same
reference set Z′for both IFs and IFs-class algorithms. Models are trained on the noisy dataset
Z. To evaluate an error detection algorithm, we
select top q% most harmful data points from the
sorted dataset Zˆ and check how many percent of
the selected data points are really erroneous. Intuitively, increasing q allows the algorithm to find
more errors (increase recall) but may decrease the
detection accuracy (decrease precision).
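For reference, a small sketch of the noise-injection and evaluation protocol as we read it (hypothetical helper names, not the authors' code; `labels` and `scores` are assumed to be NumPy arrays):

```python
import numpy as np

def inject_label_noise(labels, num_classes, p, rng=None):
    """Flip a fraction p of the labels to a different, random class."""
    rng = rng or np.random.default_rng(0)
    noisy = labels.copy()
    idx = rng.choice(len(labels), size=int(p * len(labels)), replace=False)
    noisy[idx] = (noisy[idx] + rng.integers(1, num_classes, size=len(idx))) % num_classes
    return noisy, idx

def precision_at_q(scores, is_error, q):
    """Fraction of true errors among the top-q fraction of most harmful (lowest-score) points."""
    k = int(q * len(scores))
    top = np.argsort(scores)[:k]
    return float(np.mean(is_error[top]))
```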
Our code is available at https://github.com/Fsoft-AIC/Class-Based-Influence-Functions.
Result and Analysis Because results on all datasets
share the same patterns, we report representative
results here and defer the full results to Appx. C.
Fig. 3(a) shows the error detection accuracy on
the SNLI dataset and how the accuracy changes
with q. Except for the GC algorithm, our classbased algorithms have higher accuracy and lower
variance than the non-class-based versions. When
q increases, the performance of IFs-class does not
decrease as much as that of IFs. This confirms that IFs-class are less noisy than IFs. Class information
fails to improve the performance of GC. To understand this, let's reconsider the similarity measure
sim(·, ·). Let's assume that there exist some clean
data points z′(j) ∈ Z′ with a very large gradient
∇θˆℓ(z′(j)). If the similarity measure does not normalize the norm of ∇θˆℓ(z′(j)), then z′(j) will have the dominant effect on the influence score. The noise in the influence score is mostly caused by these data points. GC normalizes both gradients,
∇θˆℓ(z
(i)) and ∇θˆℓ(z′(j)), and effectively removes such noise. However, gradients of errors tend to be larger than that of normal data points (Fig. 1). By normalizing both gradients, GC removes the valuable information about magnitudes of gradients of errors ∇θˆℓ(z
(i)). That lowers the detection performance. In Fig. 3(a), we see that the performance of GC when q ≥ 15% is lower than that of other classbased algorithms. Similar trends are observed on other datasets (Fig. 6, 7, 8 in Appx. C).
Fig. 3(b) shows the change in detection accuracy as the level of noise p goes from 5% to 20%. For each value of p, we set q to be equal to p. Our class-based influence score significantly improves the performance and reduces the variance. We note that when p increases, the error detection problem becomes easier as there are more errors. The detection accuracy, therefore, tends to increase with p as shown in Fig. 3(b), 9, 10.
Fig. 3(c) shows that GD-class outperforms GD
on all entity types in CoNLL2003-NER. The performance difference between GD-class and GD is greater on the MISC and ORG categories. Intuitively, a person's name can likely be an organization's name but the reverse is less likely. Therefore, it is harder to detect that a PER or LOC tag has been changed to ORG or MISC tag than the reverse.
The result shows that IFs-class is more effective than IFs in detecting hard erroneous examples.
## 4.2 The Effect Of Data On Error Detection Algorithms
We study the effect of the size and the cleanliness of the reference set on the performance of error detection algorithms.
The size of the reference set. We changed the size of classes in the reference set from 10 to 1000 to study the effect of the reference set's size on the detection performance. We report the mean performance of GD and GC algorithms in Tab. 1.
We observe no clear trend in the performance as the size of the reference set increases. Our conjecture is that gradients of clean data points from the same class have almost the same direction. Averaging the gradient direction over a small set of data points already gives a very stable gradient direction. Therefore, increasing the size of the reference set does not have much impact on detection performance.
Table 2: The result of GD and GD-class on SNLI dataset when the reference set is a random (noisy) subset of the training set.
| Method | top 5% | top 10% | top 15% | top 20% |
|----------|----------|-----------|-----------|-----------|
| GD | 75.33 | 59.37 | 43.87 | 34.49 |
| GD-Class | 73.85 | 70.84 | 67.28 | 64.29 |
Our findings shed light of the development of new influence estimators and on the application of IFs in downstream tasks.
Method top 5% top 10% top 15% top 20%
GD@10 79.18 76.95 74.90 71.58
GD@50 82.10 72.35 72.82 64.92
GD@100 92.99 88.01 82.53 77.60 GD@200 83.05 83.55 81.04 77.10
GD@500 85.36 78.08 73.22 63.96
GD@1000 83.47 82.43 82.45 77.56
GC@10 83.48 87.52 85.06 78.15
GC@50 80.63 86.73 84.77 77.96 GC@100 81.90 84.45 84.42 77.98
GC@200 79.88 83.62 84.22 77.88
GC@500 82.48 83.63 84.25 77.95
GC@1000 85.09 82.30 83.81 77.85
Table 1: The effect of the reference set's size.
The cleanliness of the reference set. The result of GD and GD-class on SNLI dataset when the reference set is a random (noisy) subset of the training set is shown in table 2. When the reference set is noisy, the error detection performance of IF algorithms decreases significantly. IF-class algorithms are much more robust to noise in the reference set and their performance decreases only slightly. This experiment further demonstrates the advantage of IFs-class over IFs algorithms.
## 5 Conclusion
In this paper, we study influence functions and identify the source of their instability. We give a theoretical explanation for our observations. We introduce a stable variant of IFs and use it to develop a high-performance error detection algorithm.
## Limitations
Our paper has the following limitations:

1. Our class-based influence score cannot improve the performance of the GC algorithm. Although the class-based versions of GD, IF, and TracIn outperform the original GC, we aim to develop a stronger version of GC. From the analysis in Sec. 4, we believe that a partially normalized GC could have better performance. In partial GC, we normalize the gradient of the clean data point z′(j) only. That will remove the noise introduced by ∥∇θˆℓ(z′(j))∥ while retaining the valuable information about the norm of ∇θˆℓ(z^(i)).
## Ethics Statement
Our paper considers a theoretical aspect of influence functions. It does not have any biases toward any groups of people, and our findings do not cause any harm to any groups of people.
## References
Elnaz Barshan, Marc-Etienne Brunet, and Gintare Karolina Dziugaite. 2020. Relatif:
Identifying explanatory training samples via relative influence. In *International Conference on Artificial* Intelligence and Statistics, pages 1899–1909. PMLR.
Samyadeep Basu, Phil Pope, and Soheil Feizi. 2021.
Influence functions in deep learning are fragile. In International Conference on Learning Representations.
Lucas Beyer, Olivier J. Hénaff, Alexander Kolesnikov, Xiaohua Zhai, and Aäron van den Oord. 2020. Are we done with imagenet? *CoRR*, abs/2006.07159.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference.
In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In *Advances in Neural Information Processing Systems*,
volume 33, pages 1877–1901. Curran Associates, Inc.
Guillaume Charpiat, Nicolas Girard, Loris Felardos, and Yuliya Tarabalka. 2019. Input similarity from the neural network perspective. *Advances in Neural* Information Processing Systems, 32.
Xu Chu, Ihab F. Ilyas, and Paolo Papotti. 2013. Holistic data cleaning: Putting violations into context. In 2013 IEEE 29th International Conference on Data Engineering (ICDE), pages 458–469.
Gilad Cohen, Guillermo Sapiro, and Raja Giryes. 2020.
Detecting adversarial samples using influence functions and nearest neighbors. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 14453–14462.
Anh T. V. Dau, Nghi D. Q. Bui, Thang Nguyen-Duc, and Hoang Thanh-Tung. 2022. Towards using datainfluence methods to detect noisy samples in source code corpora. In *Proceedings of the 37th IEEE/ACM*
International Conference on Automated Software Engineering.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In *2009 IEEE conference*
on computer vision and pattern recognition, pages 248–255. IEEE.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. 2020. CodeBERT: A pre-trained model for programming and natural languages. In *Findings of the Association* for Computational Linguistics: EMNLP 2020, pages 1536–1547, Online. Association for Computational Linguistics.
Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, et al. 2020. Graphcodebert: Pre-training code representations with data flow.
arXiv preprint arXiv:2009.08366.
Kazuaki Hanawa, Sho Yokoi, Satoshi Hara, and Kentaro Inui. 2021. Evaluation of similarity-based explanations. In *International Conference on Learning* Representations.
Zhipeng Huang and Yeye He. 2018. Auto-detect: Datadriven error detection in tables. In Proceedings of the 2018 International Conference on Management of Data, pages 1377–1392.
Rajiv Khanna, Been Kim, Joydeep Ghosh, and Sanmi Koyejo. 2019. Interpreting black box predictions using fisher kernels. In *The 22nd International Conference on Artificial Intelligence and Statistics*, pages 3382–3390. PMLR.
Pang Wei Koh and Percy Liang. 2017. Understanding black-box predictions via influence functions. In International conference on machine learning, pages 1885–1894. PMLR.
Pang Wei Koh, Jacob Steinhardt, and Percy Liang. 2022.
Stronger data poisoning attacks break data sanitization defenses. *Machine Learning*, 111(1):1–47.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *International Conference on Learning Representations*.
Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin Clement, Dawn Drain, Daxin Jiang, Duyu Tang, Ge Li, Lidong Zhou, Linjun Shou, Long Zhou, Michele Tufano, MING GONG, Ming Zhou, Nan Duan, Neel Sundaresan, Shao Kun Deng, Shengyu Fu, and Shujie LIU. 2021. CodeXGLUE: A machine learning
benchmark dataset for code understanding and generation. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1).
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts.
2011. Learning word vectors for sentiment analysis.
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics.
Pouya Pezeshkpour, Sarthak Jain, Byron Wallace, and Sameer Singh. 2021. An empirical comparison of instance attribution methods for NLP. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 967–975, Online. Association for Computational Linguistics.
Garima Pruthi, Frederick Liu, Satyen Kale, and Mukund Sundararajan. 2020. Estimating training data influence by tracing gradient descent. *Advances in Neural* Information Processing Systems, 33:19920–19930.
Jeffrey Svajlenko, Judith F. Islam, Iman Keivanloo, Chanchal K. Roy, and Mohammad Mamun Mia.
2014. Towards a big data curated benchmark of interproject code clones. In *2014 IEEE International* Conference on Software Maintenance and Evolution, pages 476–480.
Erik F. Tjong Kim Sang and Fien De Meulder.
2003. Introduction to the CoNLL-2003 shared task:
Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142–
147.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.
## A Additional Algorithms And Formula
Table 3: Influence function and its variants. We drop the constant factor 1/n for clarity.
| Method | Influence score |
|--------|-----------------|
| IF | $\nabla_{\hat{\theta}}\ell(\mathbf{z}^{(i)};\hat{\theta})^{\top}H_{\hat{\theta}}^{-1}\nabla_{\hat{\theta}}\ell(\mathbf{z}^{(j)};\hat{\theta})$ |
| GD | $\left\langle\nabla_{\hat{\theta}}\ell(\mathbf{z}^{(i)}),\nabla_{\hat{\theta}}\ell(\mathbf{z}^{(j)})\right\rangle$ |
| GC | $\cos\left(\nabla_{\hat{\theta}}\ell(\mathbf{z}^{(i)}),\nabla_{\hat{\theta}}\ell(\mathbf{z}^{(j)})\right)$ |
| TracIn | $\sum_{t=1}^{T}\eta_{t}\left\langle\nabla_{\theta^{(t)}}\ell(\mathbf{z}^{(i)}),\nabla_{\theta^{(t)}}\ell(\mathbf{z}^{(j)})\right\rangle$ |
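To make the formulas in Table 3 concrete, the snippet below is a minimal sketch of the GD, GC, and TracIn scores for a pair of flattened loss gradients. The exact IF score additionally requires the inverse Hessian $H_{\hat{\theta}}^{-1}$ and is omitted here; function names are illustrative, not taken from the authors' code.

```python
import torch

def gd_score(g_i: torch.Tensor, g_j: torch.Tensor) -> torch.Tensor:
    """Grad-dot (GD): inner product of the two loss gradients."""
    return torch.dot(g_i, g_j)

def gc_score(g_i: torch.Tensor, g_j: torch.Tensor) -> torch.Tensor:
    """Grad-cos (GC): cosine similarity, i.e. GD with both gradients normalized."""
    return torch.dot(g_i, g_j) / (g_i.norm() * g_j.norm() + 1e-12)

def tracin_score(grads_i, grads_j, lrs) -> torch.Tensor:
    """TracIn: learning-rate-weighted sum of grad-dots over saved checkpoints t = 1..T."""
    return sum(eta * torch.dot(g_i, g_j) for eta, g_i, g_j in zip(lrs, grads_i, grads_j))
```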
## Computational Complexity Of Error Detection Algorithms
The inner for-loop in Alg. 1 calculates C influence scores. It calls the scoring function sim(·, ·) exactly |Z′| = m times. The complexity of the inner for-loop in Alg. 1 is equal to that of line 6 in Alg. 2. Thus, the complexity of Alg. 1 is equal to that of Alg. 2.

**Algorithm 2** Influence function based error detection (Dau et al., 2022)

**Require:**
1: Z = {z^(i)}, i = 1..n: a big noisy dataset
2: Z′ = {z′^(j)}, j = 1..m: a clean reference dataset
3: fθˆ: a deep model pretrained on Z
4: sim(·, ·): a similarity measure in Tab. 3
**Ensure:** Zˆ: data points in Z ranked by score
5: **for** z^(i) ∈ Z **do**
6: s^(i) = (1/m) Σ_{j=1}^{m} sim(∇θˆℓ(z^(i)), ∇θˆℓ(z′^(j)))
7: **end for**
8: Zˆ = sort(Z, key = s, ascending = True)
9: **return** Zˆ
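The following is a minimal Python sketch of Alg. 2: each training point is scored by its average gradient similarity to the clean reference set, and the points are ranked in ascending order of score. It assumes a standard PyTorch classifier and a per-example `loss_fn`; the helper names are illustrative, not the authors' implementation.

```python
import torch

def per_example_grad(model, loss_fn, z):
    """Flattened gradient of the loss at a single example z = (x, y)."""
    x, y = z
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, [p for p in model.parameters() if p.requires_grad])
    return torch.cat([g.reshape(-1) for g in grads])

def rank_by_influence(model, loss_fn, noisy_data, reference_data, sim):
    """Alg. 2: rank noisy training points by their mean similarity to the clean reference set."""
    ref_grads = [per_example_grad(model, loss_fn, z) for z in reference_data]
    scores = []
    for z in noisy_data:
        g = per_example_grad(model, loss_fn, z)
        scores.append((sum(sim(g, g_ref) for g_ref in ref_grads) / len(ref_grads)).item())
    # Ascending order: the most harmful (lowest-score) points come first.
    order = sorted(range(len(noisy_data)), key=lambda i: scores[i])
    return [noisy_data[i] for i in order], scores
```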
![7_image_0.png](7_image_0.png)
Figure 4: Distributions of GD and GD-class scores of erroneous tokens in the CoNLL2003 dataset. GD scores are more spread out and less negative; a significant portion of them are greater than 0, i.e., GD 'thinks' that these erroneous data points have a positive influence on the clean data points in Z′. In contrast, GD-class scores are more concentrated and almost always negative. This shows a clear advantage of GD-class over GD.
## B Implementation Details B.1 Experiment Setup
We used standard datasets and models, ran each experiment with 5 different random seeds, and reported the mean and standard deviation. An NVIDIA RTX 3090 was used to run our experiments. Models are trained with the AdamW optimizer (Loshchilov and Hutter, 2019) with learning rate η = 5e−5, the cross entropy loss function, and a batch size of 16. The epoch with the best classification accuracy on the validation set was used for error detection. Our source code and guidelines are attached in the supplementary materials.
## B.2 Datasets
IMDB (Maas et al., 2011) The dataset includes 50,000 reviews from the Internet Movie Database (IMDb) website. The task is binary sentiment analysis, and the dataset contains an equal number of positive and negative reviews. It is split into training, validation, and test sets of sizes 17,500, 7,500, and 25,000. The IMDB dataset can be found at https://ai.stanford.edu/~amaas/data/sentiment/
SNLI (Stanford Natural Language Inference) (Bowman et al., 2015) consists of 570k sentence pairs manually labeled as entailment, contradiction, and neutral. We convert these labels into numbers. It is geared towards serving as a benchmark for evaluating text representation systems. This dataset is available at https://nlp.stanford.edu/projects/snli/
BigCloneBench (Svajlenko et al., 2014) is a huge code clone benchmark that includes over 6,000,000 true clone pairs and 260,000 false clone pairs from 10 different functionalities. The task is to predict whether two pieces of code have the same semantics. This dataset is commonly used with language models for code (Feng et al., 2020; Lu et al., 2021; Guo et al., 2020). It is available at https://github.com/clonebench/BigCloneBench

CoNLL2003 (Tjong Kim Sang and De Meulder, 2003) is one of the most influential corpora for NER research. A large number of publications, including many landmark works, have used this corpus as a source of ground truth for NER tasks. The data covers two languages: English and German. In this paper, we use the English portion of CoNLL2003. The training, validation, and test sets contain 14,987, 3,466, and 3,684 sentences, corresponding to 203,621, 51,362, and 46,435 tokens, respectively. The dataset is available at https://www.clips.uantwerpen.be/conll2003/ner/
## B.3 Models
BERT (Devlin et al., 2019), short for Bidirectional Encoder Representations from Transformers, is based on the Transformer architecture and was pre-trained for natural language processing tasks. We use BERT for the IMDB and SNLI datasets, as well as for the NER problem on the CoNLL2003 dataset.

CodeBERT (Feng et al., 2020) is a bimodal pre-trained model for programming and natural languages. We use CodeBERT for the BigCloneBench dataset.
## C Additional Results C.1 3-Class Classification Experiment
We train an MLP with 2 input neurons, 100 hidden neurons in the first hidden layer, 2 hidden neurons in the second hidden layer, and 3 output neurons with SGD for 1000 epochs. The activation function is LeakyReLU and the learning rate is η = 1e − 3.
The last layer has 6 parameters organized into a 3 × 2 matrix. The gradient of the loss with respect to the last layer's parameters is also organized into a 3 × 2 matrix. We visualize 3 rows of the gradient matrix in 3 subfigures (Fig. 5).
## C.2 Result On Imdb, Snli, Bigclonebench, And Conll2003
To ensure a fair comparison between our class-based algorithm and Algorithm 2, we use the same reference dataset Z′ for both algorithms. The reference dataset Z′ consists of C classes. We have C = 2 for the IMDB dataset, C = 3 for the SNLI dataset, C = 2 for the BigCloneBench dataset, and C = 5 for the CoNLL2003-NER dataset. From each of the C classes, we randomly select mk = 50 (k = 1, ..., C) clean data points to form Z′. We tried varying mk from 10 to 1000 and observed no significant changes in performance.
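A small sketch of how such a per-class reference set can be assembled: sample mk clean points from each of the C classes (here mk = 50, as in the text). `clean_pool` is an assumed list of (example, label) pairs and is not part of the original artifacts.

```python
import random
from collections import defaultdict

def build_reference_set(clean_pool, num_per_class=50, seed=0):
    """Group clean examples by label and sample num_per_class points from each class."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for example, label in clean_pool:
        by_class[label].append((example, label))
    reference = []
    for label, items in by_class.items():
        reference.extend(rng.sample(items, min(num_per_class, len(items))))
    return reference
```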
## D **Explanation Of The Observation In Sec.** 3
Let's consider a classification problem with the cross entropy loss function

$$\ell(\hat{\mathbf{y}},\mathbf{y})=\sum_{i=1}^{d_{y}}y_{i}\log{\hat{y}}_{i}$$

where dy is the number of classes. Let z = (x, y) be a data point with label k, i.e. yk = 1, yi = 0 ∀ i ≠ k. The model fθ is a deep network with last layer's parameter W ∈ R^(dy×dh), where dh is the number of hidden neurons. Let u ∈ R^(dh) be the activation of the penultimate layer. The output is computed as follows

$$\mathbf{a}=W\mathbf{u},\qquad{\hat{\mathbf{y}}}=\delta(\mathbf{a})$$

where δ is the softmax output function. The derivative of the loss at z w.r.t. W is

$$\frac{\partial\ell(\mathbf{z})}{\partial W}=\nabla_{\mathbf{a}}\ell(\mathbf{z})\,\mathbf{u}^{\top}\qquad(4)$$

$$=\begin{bmatrix}\nabla_{\mathbf{a}}\ell(\mathbf{z})_{1}\mathbf{u}^{\top}\\ \vdots\\ \nabla_{\mathbf{a}}\ell(\mathbf{z})_{d_{y}}\mathbf{u}^{\top}\end{bmatrix}\qquad(5)$$

The gradient ∇aℓ(z) is

$$(\nabla_{\mathbf{a}}\ell)^{\top}=\frac{\partial\ell}{\partial\mathbf{a}}\qquad(6)$$

$$=\frac{\partial\ell}{\partial\hat{\mathbf{y}}}\,\frac{\partial\hat{\mathbf{y}}}{\partial\mathbf{a}}\qquad(7)$$

$$=\begin{bmatrix}\frac{\partial\ell}{\partial\hat{y}_{1}}&\cdots&\frac{\partial\ell}{\partial\hat{y}_{k}}&\cdots&\frac{\partial\ell}{\partial\hat{y}_{d_{y}}}\end{bmatrix}\begin{bmatrix}\frac{\partial\hat{y}_{1}}{\partial a_{1}}&\frac{\partial\hat{y}_{1}}{\partial a_{2}}&\cdots&\frac{\partial\hat{y}_{1}}{\partial a_{d_{y}}}\\ \vdots&\vdots&\ddots&\vdots\\ \frac{\partial\hat{y}_{k}}{\partial a_{1}}&\frac{\partial\hat{y}_{k}}{\partial a_{2}}&\cdots&\frac{\partial\hat{y}_{k}}{\partial a_{d_{y}}}\\ \vdots&\vdots&\ddots&\vdots\\ \frac{\partial\hat{y}_{d_{y}}}{\partial a_{1}}&\frac{\partial\hat{y}_{d_{y}}}{\partial a_{2}}&\cdots&\frac{\partial\hat{y}_{d_{y}}}{\partial a_{d_{y}}}\end{bmatrix}\qquad(8)$$

$$=\begin{bmatrix}\frac{\partial\ell}{\partial\hat{y}_{k}}\frac{\partial\hat{y}_{k}}{\partial a_{1}}&\cdots&\frac{\partial\ell}{\partial\hat{y}_{k}}\frac{\partial\hat{y}_{k}}{\partial a_{k}}&\cdots&\frac{\partial\ell}{\partial\hat{y}_{k}}\frac{\partial\hat{y}_{k}}{\partial a_{d_{y}}}\end{bmatrix}\qquad(9)$$

We go from Eqn. 8 to Eqn. 9 by using the following fact

$$\frac{\partial\ell}{\partial{\hat{y}}_{i}}=\begin{cases}0&\text{if }i\neq k\\ \frac{1}{{\hat{y}}_{i}}&\text{if }i=k\end{cases}$$

We also have

$$\frac{\partial{\hat{y}}_{k}}{\partial a_{i}}=\begin{cases}{\hat{y}}_{k}(1-{\hat{y}}_{k})&\text{if }i=k\\ -{\hat{y}}_{k}{\hat{y}}_{i}&\text{if }i\neq k\end{cases}$$

Substituting this into Eqn. 9, we have

$$\nabla_{\mathbf{a}}\ell=\begin{bmatrix}-{\hat{y}}_{1}\\ \vdots\\ 1-{\hat{y}}_{k}\\ \vdots\\ -{\hat{y}}_{d_{y}}\end{bmatrix}$$

Because 1 − ŷk = Σ_{j≠k} ŷj, 1 − ŷk is much greater than ŷj in general. Substituting this into Eqn. 5, we see that the magnitude of the k-th row is much larger than that of the other rows. We also note that the update for the k-th row of W has the opposite direction of the updates for the other rows.

Let's consider the inner product of the gradients of two data points z and z′ with labels k and k′. Let's consider the case where k′ ≠ k first.

$$\mathrm{vec}\left(\frac{\partial\ell(\mathbf{z})}{\partial W}\right)^{\top}\mathrm{vec}\left(\frac{\partial\ell(\mathbf{z}^{\prime})}{\partial W}\right)=(\nabla_{\mathbf{a}}\ell^{\top}\nabla_{\mathbf{a}^{\prime}}\ell)\,(\mathbf{u}^{\top}\mathbf{u}^{\prime})\qquad(10)$$

Intuitively, the product ∇aℓ⊤∇a′ℓ is small because the large element ∇aℓk = 1 − ŷk is multiplied by the small element ∇a′ℓk = ŷ′k, and the large element ∇a′ℓk′ = 1 − ŷ′k′ is multiplied by the small element ∇aℓk′ = ŷk′. To make this more concrete, let's assume that ŷk = α ≈ 1 and ŷi = (1 − α)/(dy − 1) = β for i ≠ k, and assume the same condition for ŷ′. Then

$$\begin{aligned}\nabla_{\mathbf{a}}\ell^{\top}\nabla_{\mathbf{a}^{\prime}}\ell&=({\hat{y}}_{k}-1){\hat{y}}^{\prime}_{k}+({\hat{y}}^{\prime}_{k^{\prime}}-1){\hat{y}}_{k^{\prime}}+\sum_{i=1,\,i\neq k,k^{\prime}}^{d_{y}}{\hat{y}}_{i}{\hat{y}}^{\prime}_{i}\\&=(d_{y}-2)\beta^{2}-2(d_{y}-1)\beta^{2}\\&=-d_{y}\beta^{2}\\&=-\frac{d_{y}(1-\alpha)^{2}}{(d_{y}-1)^{2}}\end{aligned}\qquad(11)$$

α ≈ 1 implies 1 − α ≈ 0 and β ≈ 0. Eqn. 11 implies that as the model becomes more confident about the labels of z and z′, the product ∇aℓ⊤∇a′ℓ tends toward 0 at a quadratic rate. That means that, as training progresses, data points from different classes become more and more independent. The gradients of data points from different classes also become more and more perpendicular.
The sign of the gradient product depends on the signs of ∇aℓ⊤∇a′ℓ and u⊤u′. These signs are random variables that depend on the noise in the features u and u′ and on the weight matrix W. If the model fθ cannot learn a good representation of the input, then the feature u and the sign of u⊤u′ can be very noisy. sign(u⊤u′) is even noisier if z and z′ are from different classes. Because ∇aℓ⊤∇a′ℓ is small in magnitude, a tiny amount of noise in the logits a and a′ can flip the sign of ∇aℓ⊤∇a′ℓ and change the direction of influence.
We now consider the case where k′ = k. When k′ = k, ∇aℓ⊤∇a′ℓ is always positive, so the sign of the gradient product depends only on u⊤u′. That explains why the product of the gradients of data points from the same class is much less noisy and almost always positive. Furthermore, the magnitude of ∇aℓ⊤∇a′ℓ is larger than in the case k′ ≠ k because the large element 1 − ŷk is multiplied by the large element 1 − ŷ′k. More concretely, under the same assumption as in the case k′ ≠ k, we have
$$\nabla_{\bf a}\ell^{\top}\nabla_{\bf a^{\prime}}\ell=(1-\hat{y}_{k})(1-\hat{y}_{k}^{\prime})+\sum_{i=1,i\neq k}^{d_{y}}\hat{y}_{i}\hat{y}_{i}^{\prime}$$ $$=(1-\alpha)^{2}+(d_{y}-1)\beta^{2}\tag{12}$$
From Eqn. 12, we see that when k′ = k, the magnitude of ∇aℓ⊤∇a′ℓ is approximately dy times larger than that when k′ ̸= k.
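As a quick numerical check of Eqns. 11 and 12, the sketch below builds the logit gradient ∇aℓ under the stated assumption (the labeled class gets probability α, all other entries equal β = (1 − α)/(dy − 1)) and compares the different-class and same-class inner products with the closed forms. The concrete values α = 0.9 and dy = 5 are arbitrary choices for illustration.

```python
import numpy as np

d_y, alpha = 5, 0.9
beta = (1 - alpha) / (d_y - 1)

def grad_logits(label):
    """∇_a ℓ for a softmax output with ŷ_label = alpha and ŷ_i = beta elsewhere."""
    y_hat = np.full(d_y, beta)
    y_hat[label] = alpha
    g = -y_hat                      # -ŷ_i for i != label ...
    g[label] = 1 - y_hat[label]     # ... and 1 - ŷ_label at the label position
    return g

diff = grad_logits(0) @ grad_logits(1)   # labels k = 0, k' = 1 (different classes)
same = grad_logits(0) @ grad_logits(0)   # k' = k (same class, same prediction)
print(diff, -d_y * (1 - alpha) ** 2 / (d_y - 1) ** 2)    # Eqn. 11: both ≈ -0.003125
print(same, (1 - alpha) ** 2 + (d_y - 1) * beta ** 2)    # Eqn. 12: both ≈ 0.0125
```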
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations section
✓ A2. Did you discuss any potential risks of your work?
Ethics Statement
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Not applicable. Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Not applicable. Left blank.
## C ✓ **Did You Run Computational Experiments?** Left Blank.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
chong-etal-2023-leveraging | Leveraging Prefix Transfer for Multi-Intent Text Revision | https://aclanthology.org/2023.acl-short.105 | Text revision is a necessary process to improve text quality. During this process, writers constantly edit texts out of different edit intentions. Identifying edit intention for a raw text is always an ambiguous work, and most previous work on revision systems mainly focuses on editing texts according to one specific edit intention. In this work, we aim to build a multi-intent text revision system that could revise texts without explicit intent annotation. Our system is based on prefix-tuning, which first gets prefixes for every edit intent, and then trains a prefix transfer module, enabling the system to selectively leverage the knowledge from various prefixes according to the input text. We conduct experiments on the IteraTeR dataset, and the results show that our system outperforms baselines. The system can significantly improve the SARI score with more than 3{\%} improvements, which thrives on the learned editing intention prefixes. |
## Leveraging Prefix Transfer For Multi-Intent Text Revision
Ruining Chong12*, Cunliang Kong12*, Liu Wu2, Zhenghao Liu3**, Ziye Jin**4, Liner Yang12†, Yange Fan5, Hanghang Fan5**, Erhong Yang**12 1National Language Resources Monitoring and Research Center for Print Media, Beijing Language and Culture University, China 2School of Information Science, Beijing Language and Culture University, China 3Department of Computer Science and Technology, Northeastern University, China 4School of Arts and Sciences, New York University Shanghai, China 5Kika Tech, China [email protected]
## Abstract
Text revision is a necessary process to improve text quality. During this process, writers constantly edit texts out of different edit intentions.
Identifying edit intention for a raw text is always an ambiguous work, and most previous work on revision systems mainly focuses on editing texts according to one specific edit intention. In this work, we aim to build a multiintent text revision system that could revise texts without explicit intent annotation. Our system is based on prefix-tuning, which first gets prefixes for every edit intent, and then trains a prefix transfer module, enabling the system to selectively leverage the knowledge from various prefixes according to the input text. We conduct experiments on the ITER-ATER dataset, and the results show that our system outperforms baselines. The system can significantly improve the SARI score with more than 3% improvements, which thrives on the learned editing intention prefixes.
## 1 Introduction
Revision is an essential process to improve the text quality (Vaughan and McDonald, 1986). During this process, writers perform various editing operations on the text with different editing intentions. As shown in Figure 1, the writer corrects misspelled words to improve text *fluency*, deletes redundant words to improve text *clarity*, adds connective words to improve text *coherence*, inserts adverbs to convey the writer's writing preferences
(*style*) and modifies data to update text information
(*meaning-changed*).
Lots of recent studies have focused on a text revision task corresponding to a specific edit intention, such as grammatical error correction (Omelianchuk
*Equal contribution
†Corresponding author: Liner Yang
![0_image_0.png](0_image_0.png)
et al., 2020; Kaneko et al., 2020; Liu et al., 2021; Yang et al., 2022), text simplification (Dong et al.,
2019; Jiang et al., 2020; Omelianchuk et al., 2021; Martin et al., 2022), and text style transfer (Malmi et al., 2020; Reid and Zhong, 2021). The work divides text revision into several independent problems. While some methods with strong universality can be applied to multiple tasks (Malmi et al.,
2019; Stahlberg and Kumar, 2020; Mallinson et al.,
2020), they train different models on various data sets. Real-world scenarios require addressing multiple types of editing errors at the same time, such as grammatical errors, spelling errors, etc. But these methods failed to integrate knowledge from these tasks into a unified model.
To solve the problem, Du et al. (2022) attempted to train one model using data with multiple editing intentions and leveraged edit intent information by simply appending it to the input. However, when adding a new intent, the entire model must be retrained. A more lightweight and scalable approach to multi-intent text revision is still required.
Li and Liang (2021) proposed a new kind of prompt tuning method to quickly adapt a pre-trained model to new tasks, which is called prefix-tuning. Prompt tuning can help the pre-trained language model to locate the task learned in pre-training and enable the related knowledge to model text revision with different edit intentions (Reynolds and McDonell, 2021). This method enables a model to handle multiple edit intentions in a lightweight and scalable way.

In this paper, we present our method: a prefix-tuning-based model which adapts to text revision with multiple edit intentions. This method involves a two-step training process. In the first step, we initialize a pre-trained language model (PLM) and train multiple prefixes on it. Each edit intention corresponds to a prefix. In the second step, a prefix transfer module is trained at each attention layer of the PLM. The prefix transfer module is configured as two attention units that act respectively on this layer's key states and value states. It enables our model to learn a tailored prefix for the given input with the help of prefix embeddings from the predefined tasks.
We conduct experiments on ITERATER (Du et al., 2022), an iterative text revision dataset. It mainly contains parallel sentences with five edit intentions: fluency, coherence, clarity, *style*, and meaning-changed. The results show that our approach performs better than the fully fine-tuned BART (Lewis et al., 2020) and PEGASUS (Zhang et al., 2020) baselines reported in Du et al. (2022)
with fewer training parameters.
## 2 Related Work 2.1 Iterative Text Revision
For the first time, Du et al. (2022) systematically studied the iterative revision phenomenon in human writing. They presented the ITERATER, an annotated dataset across multiple domains of formally human-written text, which includes Wikipedia, ArXiv, and Wikinews. And they trained several types of text revision models using ITERATER.
Dwivedi-Yu et al. (2022) presented EDITEVAL, an instruction-based benchmark, to evaluate the editing capabilities of models and they also included the test set of ITERATER in it. Based on Du et al.
(2022), our work further explores the method of text revision.
## 2.2 Transfer Learning Of Prompt Tuning
Transfer learning is a common and powerful technique in NLP (Raffel et al., 2020). Some recent studies have tried to improve prompt tuning performance by leveraging the knowledge of multiple related or unrelated tasks. Asai et al. (2022) used an attention module to make use of the knowledge in exiting soft prompts (Lester et al., 2021) while learning a new task. Chen et al. (2022) improved the few-shot text summarization by multi-task pretraining and prefix-tuning. Specifically, they pretrained a summarization model on a set of popular summarization datasets and then conducted prefixtuning for it on an unseen summarization task. Different from their modeling of a new task through existing tasks, our work aims to achieve the mutual utilization of knowledge between different edit intents in text revision.
## 3 Method
The revision task can be defined as the following process: given a source sentence $\mathbf{x} = [x_1, \ldots, x_m]$ and an optional edit intent $e \in E$, generate a revised sentence $\mathbf{y} = [y_1, \ldots, y_n]$, where $E$ is the set of all edit intentions. Note that $e$ is optional because it can be inferred from the input $\mathbf{x}$.
Our method is depicted in Figure 2. It includes two stages: the multi-prefix tuning stage and the prefix transfer stage.
## 3.1 Multi-Prefix Tuning Stage
The prefix is a set of parameters on every attention layer of the PLM. For an edit intention $e$, at each attention layer, the prefix can be described as $P_e = \{P^K_e, P^V_e\}$, where $P^K_e$ and $P^V_e$ are parameters added before the key states and value states in this attention layer. After adding these parameters, the calculation of the attention head in this layer becomes:

$$H = \mathrm{Attention}(Q, [P^K_e; K], [P^V_e; V]) \qquad (1)$$

where $H$ is the output vector sequence; $Q, K, V$ are the query states, key states, and value states, respectively; $\mathrm{Attention}$ means scaled dot-product attention. Only $P^K_e$ and $P^V_e$ are updated during the training process. Note that we ignore the layer index because the operation for each layer is the same.
As shown in the left part of Figure 2, for every edit intention e, we train a prefix Pe accordingly.
![2_image_0.png](2_image_0.png)
In this way, the model could revise an intention-annotated text by activating the corresponding prefix at inference.
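To make Eqn. 1 concrete, the following is a minimal sketch of how a trained prefix $P_e = \{P^K_e, P^V_e\}$ is prepended to the key and value states of one attention layer before standard scaled dot-product attention. Shapes and names are illustrative assumptions; batching and multi-head details are omitted.

```python
import torch
import torch.nn.functional as F

def prefix_attention(Q, K, V, P_K, P_V):
    """Eqn. 1: H = Attention(Q, [P_K; K], [P_V; V]) for a single attention head.

    Q: (len_q, d), K/V: (len_kv, d), P_K/P_V: (prefix_len, d).
    """
    K_hat = torch.cat([P_K, K], dim=0)             # prepend prefix keys
    V_hat = torch.cat([P_V, V], dim=0)             # prepend prefix values
    scores = Q @ K_hat.T / K_hat.shape[-1] ** 0.5  # scaled dot-product scores
    return F.softmax(scores, dim=-1) @ V_hat       # (len_q, d)
```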
## 3.2 Prefix Transfer Stage
Identifying the edit intention of a raw text is often ambiguous. In the prefix transfer stage, we aim to build a new prefix for an unannotated input instance by transferring existing prefixes. The new prefix $P_{\mathrm{new}}$ is instance-specific.
The prefix transfer stage is described in the right part of Figure 2. At each layer, we rearrange the prefixes $\{P_e \mid e \in E\}$ obtained in the last stage as $P^K = \{P^K_e \mid e \in E\}$ and $P^V = \{P^V_e \mid e \in E\}$ according to whether they are configured before the key states or before the value states. Then a pair of attention units $\mathcal{G}^K$ and $\mathcal{G}^V$ are trained for $P^K$ and $P^V$.

Take $\mathcal{G}^K$ as an example. It calculates the similarity between the key states $K$ and every $P^K_e$ in $P^K$ to get attention scores. The similarity cannot be calculated directly, because $K$ and $P^K_e$ have different lengths. So we perform a max-pool operation over the length dimension on $K$ and $P^K_e$. After that, we obtain $\hat{K} \in \mathbb{R}^d$ and $\hat{P}^K_e \in \mathbb{R}^d$, where $d$ is the dimension of the hidden states in the PLM.

To get attention scores, we train a fully connected layer to extract features from $\hat{K}$:
$$H=\mathrm{NonLinear}(W^{\top}({\hat{K}}))\qquad\qquad(2)$$
where $W \in \mathbb{R}^{d \times d}$ is a transfer matrix updated during training. Following Asai et al. (2022), we use SiLU (Elfwing et al., 2018) for the non-linear layer and add a Layer Norm (Ba et al., 2016) layer:

$$H_{\mathrm{norm}} = \mathrm{LayerNorm}(H) \qquad (3)$$

Then, we calculate the attention scores for intent $e$ as follows:
$$a_{e}=\frac{\exp\left(\hat{P}_{e}^{K}\cdot H_{\mathrm{norm}}/T\right)}{\sum_{i\in E}\exp\left(\hat{P}_{i}^{K}\cdot H_{\mathrm{norm}}/T\right)}\qquad(4)$$
where T is the softmax temperature (Radford et al.,
2021) which could avoid making the attention unit over-confident.
Finally, we use them to build $P^K_{\mathrm{new}}$ as follows:
$$P_{\mathrm{new}}^{K}=\sum_{e\in E}a_{e}P_{e}^{K}\qquad\qquad(5)$$
In the same way, we get $P^V_{\mathrm{new}}$ by $\mathcal{G}^V$. Using the new prefix $P_{\mathrm{new}} = \{P^K_{\mathrm{new}}, P^V_{\mathrm{new}}\}$, our system could revise the unannotated input instance with the knowledge from existing prefixes.
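The following is a minimal PyTorch sketch of one attention unit $\mathcal{G}^K$ (Eqns. 2–5): max-pool the key states and each prefix over the length dimension, extract features with a linear layer followed by SiLU and LayerNorm, compute temperature-scaled attention scores over the edit-intention prefixes, and mix them into $P^K_{\mathrm{new}}$. Module names and the temperature value are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrefixTransferUnit(nn.Module):
    """Attention unit that mixes per-intent prefixes into an instance-specific prefix."""

    def __init__(self, d: int, temperature: float = 1.0):
        super().__init__()
        self.W = nn.Linear(d, d, bias=False)   # transfer matrix W in Eqn. 2
        self.norm = nn.LayerNorm(d)            # Layer Norm in Eqn. 3
        self.T = temperature                   # softmax temperature in Eqn. 4

    def forward(self, K, prefixes):
        """K: (len_k, d); prefixes: dict {intent: (prefix_len, d)} -> P_new: (prefix_len, d)."""
        K_hat = K.max(dim=0).values                        # max-pool over the length dimension
        H_norm = self.norm(F.silu(self.W(K_hat)))          # Eqns. 2-3
        P_hat = torch.stack([P.max(dim=0).values for P in prefixes.values()])
        a = F.softmax(P_hat @ H_norm / self.T, dim=0)      # Eqn. 4: scores over intents
        P_stack = torch.stack(list(prefixes.values()))     # (num_intents, prefix_len, d)
        return (a[:, None, None] * P_stack).sum(dim=0)     # Eqn. 5: weighted mix of prefixes
```

An analogous unit $\mathcal{G}^V$ operates on the value-side prefixes to produce $P^V_{\mathrm{new}}$.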
## 4 Experimental Setup
We choose BART-large as the PLM for our system and use adapter-transformers (Pfeiffer et al., 2020) to implement prefix-tuning. More implementation details are in Appendix A.
## 4.1 Datasets
We conduct our experiments on the iterative text revision dataset: ITERATER (Du et al., 2022). We remove the *Other* class of the data as it essentially contains a variety of unrecognized edit intentions and accounts for a small proportion (1.44%). The entire dataset consists of two parts: ITERATER-HUMAN and ITERATER-FULL. The former is a smaller dataset with manual annotation of edit intentions, while the latter is a large dataset annotated by a classification model trained on ITERATER-HUMAN. We train our model on both of them.
Following Du et al. (2022), we report the results on the test set of ITERATER-HUMAN in Section 5,
| ITERATER-HUMAN | ITERATER-FULL | | | | | | | | |
|-------------------|-----------------|-------|-------|-------|-------|-------|-------|-------|-------|
| Model | Intent | SARI | BLEU | R-L | Avg. | SARI | BLEU | R-L | Avg. |
| BART-FineTune | % | 33.20 | 78.56 | 85.20 | 65.66 | 33.88 | 78.55 | 86.05 | 66.16 |
| PEGASUS-FineTune | % | 33.09 | 79.09 | 86.77 | 66.32 | 34.67 | 78.21 | 87.06 | 66.65 |
| BART-SinglePrefix | % | 30.97 | 81.82 | 87.57 | 66.79 | 36.81 | 79.65 | 86.37 | 67.61 |
| BART-FineTune | ✓ | 34.77 | 74.43 | 84.45 | 64.55 | 37.28 | 77.50 | 86.14 | 66.97 |
| PEGASUS-FineTune | ✓ | 34.43 | 78.85 | 86.84 | 66.71 | 37.11 | 77.60 | 86.84 | 67.18 |
| BART-SinglePrefix | ✓ | 31.23 | 81.66 | 87.39 | 66.76 | 36.54 | 77.46 | 85.80 | 66.60 |
| Multi-Prefix | ✓ | 33.12 | 82.00 | 87.57 | 67.56 | 37.25 | 78.25 | 86.18 | 67.23 |
| PrefixTransfer | % | 36.01 | 80.53 | 87.18 | 67.91 | 37.12 | 80.34 | 87.61 | 68.36 |
which is completely a human-created dataset and is reliable for evaluation. We show more details of the datasets in Appendix B.
## 4.2 Evaluation Metrics
Following previous work, we report three metrics: SARI (Xu et al., 2016), Rouge-L (Lin, 2004),
and BLEU (Papineni et al., 2002). Among them, SARI is considered an important metric in situations where input text and output text have a large overlap in words. It also indicates the positive impact of revisions on document quality.
The setting of evaluation metrics is the same as Du et al. (2022). We use the metrics package from Huggingface transformers (Wolf et al., 2020) to calculate the SARI, BLEU, and Rouge-L scores.
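As an illustration of this evaluation setup, the snippet below computes the three metrics with the Hugging Face `evaluate` package (the successor of the metric utilities bundled with earlier `transformers`/`datasets` releases). The metric names follow the Hub ("sari", "bleu", "rouge"); the toy sentences are invented, and the exact package version and arguments used by the authors may differ.

```python
import evaluate

sources     = ["She go to school yesterday ."]
predictions = ["She went to school yesterday ."]
references  = [["She went to school yesterday ."]]

sari  = evaluate.load("sari").compute(sources=sources, predictions=predictions, references=references)
bleu  = evaluate.load("bleu").compute(predictions=predictions, references=references)
rouge = evaluate.load("rouge").compute(predictions=predictions, references=[r[0] for r in references])
print(sari["sari"], bleu["bleu"], rouge["rougeL"])
```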
## 4.3 Models Setup And Baselines
Using our method, we train the models in two ways:
the model that only trains the multi-prefix tuning stage and that trains both the multi-prefix tuning stage and the prefix transfer stage.
We compare our method with three baselines:
full fine-tuning BART (BART-FineTune), full finetuning PEGASUS (PEGASUS-FineTune), and prefixtuning of BART with a single prefix (BARTSinglePrefix). Both BART and PEGASUS are generative models based on the transformer architecture.
Compared to the edit-based model FELIX, they perform better. We use the results reported by Du et al. (2022) for these two models. Furthermore, we compare BART-SinglePrefix as a possible technical solution as we choose BART as our backbone model. BART-SinglePrefix trains only one prefix on the entire dataset.
All three baselines are trained with two configurations. The first configuration is using the pure sentence pairs without edit intention annotations to train the model. The second configuration is appending an edit intent token at the beginning of the input text during the training process, which is the same as the approach of Du et al. (2022).
## 5 Results And Analysis 5.1 Main Results
The main results are shown in Table 1. Compared to training with a single prefix, the setting of multiple prefixes can improve the results, especially training on ITERATER-HUMAN. Meanwhile, with fewer training parameters, the multi-prefix setting could achieve a comparable SARI score and better average score than the fully fine-tuned BART and PEGASUS baselines.
Moreover, prefix transfer could further improve the model's performance. Training on ITERATER-HUMAN, prefix transfer significantly improves the SARI score from 33.12 to 36.01 and gets the highest average score of 67.91. Training on ITERATER-FULL, prefix transfer can also improve the average score from 67.23 to 68.36.
An interesting phenomenon is that training on different datasets results in different gains for prefix transfer in evaluation metrics. On ITERATER-HUMAN, prefix transfer improves the SARI score significantly. While on ITERATER-FULL, prefix transfer mainly improves the BLEU score and Rouge-L score. One possible explanation is that in situations when the training data is small, prefix transfer tends to learn more editing operations to improve text quality. In this way, the SARI score related to editing operations will be improved significantly. When the training data is sufficient, pre-
| Stage 1 | Stage 2 | SARI | BLEU | R-L | Avg. |
|-----------|-----------|--------|--------|-------|--------|
| HUMAN | HUMAN | 36.01 | 80.53 | 87.18 | 67.91 |
| FULL | FULL | 37.12 | 80.34 | 87.61 | 68.36 |
| FULL | HUMAN | 38.44 | 80.24 | 86.90 | 68.53 |
Table 2: Results on the test set of ITERATER-HUMAN.
Stage 1 indicates the training data used in the multiprefix tuning stage. Stage 2 indicates the training data used in the prefix transfer stage.
fix transfer will model the gold reference in more detail. So the BLEU score and the Rouge-L score will be improved.
## 5.2 Analysis
We further tried to use different training data at different stages of training to conduct experiments.
The results are shown in Table 2.
We find that the best practice is to train the model on ITERATER-FULL in the multi-prefix tuning stage and on ITERATER-HUMAN in the prefix transfer stage, which gets the highest SARI score and average score. This may be because of the different distributions of manually annotated edit intent and automatically annotated edit intent. The auto-annotated dataset ITERATER-FULL contains many incorrectly classified sentences, which may cause mismatched knowledge in prefixes. In the prefix transfer stage, due to the existence of mismatched knowledge and incorrectly classified sentences, the continued use of the same training data may finally cause a certain degree of negative transfer. However, if we use ITERATER-HUMAN in the prefix transfer stage, the impact of negative transfer will be mitigated, because ITERATER-HUMAN
only contains correctly classified sentences.
In Appendix C, we separately provide the performance results on different edit intentions of the best-performing model.
## 6 Conclusion
In this paper, we introduce a new method for multiintent text revision. The system is based on prefixtuning, which first obtains a prefix for every edit intention and then learns to transfer the knowledge in prefixes for every input instance by training a prefix transfer module. This prefix transfer module is configured as two attention units that act respectively on the key states and the value states at each attention layer of the PLM. In this way, our method can make full use of the knowledge of various edit intentions and does not need to annotate the intentions of the input. The experimental results show that our method significantly outperforms baselines, and both multi-prefix and prefix transfer settings could improve the performance.
## Limitations
Due to the lack of multi-intent text revision datasets, we only conduct experiments on ITERATER. Although it is a multi-domain dataset, we only use its sentence-level data, and each sentence pair only contains one editing operation. The robustness of our method is still to be verified by evaluating it on more types of datasets in future work.
Another limitation of our work is that we only made improvements at the model level. We have noticed that Kim et al. (2022) recently improved text revision by leveraging extra data from other text editing tasks and performing editable span detection before revising. Similar methods can also be applied to our model and will be tried in our future work.
## Ethics Statement
The PrefixTransfer method mainly aims at fusing multiple prefixes to obtain a unified model that can perform multi-intent text revision. The experiments are based on the ITERATER dataset, which is unlikely to include harmful content.
## Acknowledgments
This work was supported by the funds of the Research Project of the National Language Commission (No. ZDI145-24), Natural Science Foundation of China (No. 62206042), Fundamental Research Funds for the Central Universities (No.
21PT04, No. N2216013) and China Postdoctoral Science Foundation (No. 2022M710022). We would like to thank all anonymous reviewers for their valuable comments and suggestions on this work.
## References
Akari Asai, Mohammadreza Salehi, Matthew E Peters, and Hannaneh Hajishirzi. 2022. ATTEMPT:
Parameter-efficient multi-task tuning via attentional mixtures of soft prompts. In *Proceedings of the 2022* Conference on Empirical Methods in Natural Language Processing, pages 6655–6672.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450.
Yulong Chen, Yang Liu, Ruochen Xu, Ziyi Yang, Chenguang Zhu, Michael Zeng, and Yue Zhang. 2022.
Unisumm: Unified few-shot summarization with multi-task pre-training and prefix-tuning. *arXiv* preprint arXiv:2211.09783.
Yue Dong, Zichao Li, Mehdi Rezagholizadeh, and Jackie Chi Kit Cheung. 2019. EditNTS: An neural programmer-interpreter model for sentence simplification through explicit editing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3393–3402.
Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, and Dongyeop Kang. 2022.
Understanding iterative revision from human-written text. In *Proceedings of the 60th Annual Meeting of* the Association for Computational Linguistics, pages 3573–3590.
Jane Dwivedi-Yu, Timo Schick, Zhengbao Jiang, Maria Lomeli, Patrick Lewis, Gautier Izacard, Edouard Grave, Sebastian Riedel, and Fabio Petroni. 2022.
EditEval: An instruction-based benchmark for text improvements. *arXiv preprint arXiv:2209.13331*.
Stefan Elfwing, Eiji Uchibe, and Kenji Doya. 2018.
Sigmoid-weighted linear units for neural network function approximation in reinforcement learning.
Neural Networks, 107:3–11.
Chao Jiang, Mounica Maddela, Wuwei Lan, Yang Zhong, and Wei Xu. 2020. Neural CRF model for sentence alignment in text simplification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7943–7960.
Masahiro Kaneko, Masato Mita, Shun Kiyono, Jun Suzuki, and Kentaro Inui. 2020. Encoder-decoder models can benefit from pre-trained masked language models in grammatical error correction. In *Proceedings of the 58th Annual Meeting of the Association* for Computational Linguistics, pages 4248–4254.
Zae Myung Kim, Wanyu Du, Vipul Raheja, Dhruv Kumar, and Dongyeop Kang. 2022. Improving iterative text revision by learning where to edit from other revision tasks. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9986–9999.
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021.
The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045–3059.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880.
Xiang Lisa Li and Percy Liang. 2021. Prefix-Tuning:
Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, pages 4582–4597.
Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In *Text summarization* branches out, pages 74–81.
Zhenghao Liu, Xiaoyuan Yi, Maosong Sun, Liner Yang, and Tat-Seng Chua. 2021. Neural quality estimation with multiple hypotheses for grammatical error correction. In *Proceedings of the 2021 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5441–5452.
Jonathan Mallinson, Aliaksei Severyn, Eric Malmi, and Guillermo Garrido. 2020. FELIX: Flexible text editing through tagging and insertion. In Findings of the Association for Computational Linguistics: EMNLP
2020, pages 1244–1255.
Eric Malmi, Sebastian Krause, Sascha Rothe, Daniil Mirylenka, and Aliaksei Severyn. 2019. Encode, tag, realize: High-precision text editing. In *Proceedings* of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 5054–5065.
Eric Malmi, Aliaksei Severyn, and Sascha Rothe. 2020.
Unsupervised text style transfer with padded masked language models. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing, pages 8671–8680.
Louis Martin, Angela Fan, Éric Villemonte De La Clergerie, Antoine Bordes, and Benoît Sagot. 2022.
MUSS: Multilingual unsupervised sentence simplification by mining paraphrases. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 1651–1664.
Kostiantyn Omelianchuk, Vitaliy Atrasevych, Artem Chernodub, and Oleksandr Skurzhanskyi. 2020. GECToR - grammatical error correction: Tag, not rewrite. In *Proceedings of the Fifteenth Workshop* on Innovative Use of NLP for Building Educational Applications, pages 163–170.
Kostiantyn Omelianchuk, Vipul Raheja, and Oleksandr Skurzhanskyi. 2021. Text simplification by tagging.
In Proceedings of the 16th Workshop on Innovative Use of NLP for Building Educational Applications, pages 11–25.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311–318.
Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulic, Sebastian Ruder, Kyunghyun ´ Cho, and Iryna Gurevych. 2020. AdapterHub: A
framework for adapting transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 46–54.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In *International conference on machine learning*, pages 8748–8763.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *The Journal of Machine Learning Research*,
21(1):5485–5551.
Machel Reid and Victor Zhong. 2021. LEWIS: Levenshtein editing for unsupervised text style transfer.
In *Findings of the Association for Computational* Linguistics: ACL-IJCNLP 2021, pages 3932–3944.
Laria Reynolds and Kyle McDonell. 2021. Prompt programming for large language models: Beyond the few-shot paradigm. In *Extended Abstracts of the* 2021 CHI Conference on Human Factors in Computing Systems, pages 1–7.
Felix Stahlberg and Shankar Kumar. 2020. Seq2Edits:
Sequence transduction using span-level edit operations. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing, pages 5147–5159.
Marie M. Vaughan and David D. McDonald. 1986. A
model of revision in natural language generation. In Proceedings of the 24th Annual Meeting on Association for Computational Linguistics, pages 90–96, USA. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020* Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45.
Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification.
Transactions of the Association for Computational Linguistics, 4:401–415.
Liner Yang, Chengcheng Wang, Yun Chen, Yongping Du, and Erhong Yang. 2022. Controllable data synthesis method for grammatical error correction. *Frontiers of Computer Science*, pages 1–10.
Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020. PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. In *International Conference on Machine Learning*, pages 11328–11339.
## A **Details On Computational Experiments**
| Dataset | Train | Dev | Test |
|----------------|---------|--------|--------|
| ITERATER-HUMAN | 3,215 | 385 | 360 |
| ITERATER-FULL | 157,579 | 19,705 | 19,703 |

Table 3: Data split of ITERATER after removing the Other class.
Our system is built on BART-large (with 400 million parameters). We use the AdamW optimizer with weight decay and the Noam learning rate schedule, setting the initial learning rate to 5e−5 in the multi-prefix tuning stage and 1e−5 in the prefix transfer stage, with 4,000 warm-up steps. Regarding batch size, we use a max-token configuration and set the maximum number of tokens to 1024. The maximum number of epochs is set to 100, and we use an early stopping strategy with a patience of 20 epochs.
The length of the prefix (with 40 million parameters) is 12 and the prefix vectors are not optimized directly but reparameterized via a bottleneck MLP
which has a middle dimension of 512. We set the prefix dropout to 0.2.
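A minimal sketch of the prefix reparameterization described above: instead of optimizing the prefix vectors directly, a learned embedding is passed through a bottleneck MLP that produces the per-layer key/value prefixes. The numbers mirror the text (prefix length 12, bottleneck dimension 512, dropout 0.2); the module layout is an illustrative assumption, not the exact adapter-transformers implementation.

```python
import torch
import torch.nn as nn

class ReparameterizedPrefix(nn.Module):
    """Prefix vectors for one edit intention, produced through a bottleneck MLP."""

    def __init__(self, n_layers, d_model, prefix_len=12, bottleneck=512, dropout=0.2):
        super().__init__()
        self.emb = nn.Parameter(torch.randn(prefix_len, d_model))
        self.mlp = nn.Sequential(
            nn.Linear(d_model, bottleneck),
            nn.Tanh(),
            nn.Linear(bottleneck, 2 * n_layers * d_model),  # a key and a value prefix per layer
            nn.Dropout(dropout),
        )

    def forward(self):
        out = self.mlp(self.emb)                            # (prefix_len, 2 * n_layers * d_model)
        # Reshape to (prefix_len, n_layers, 2, d_model): per-layer P^K and P^V.
        return out.view(self.emb.shape[0], -1, 2, self.emb.shape[1])
```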
We do validation every epoch while training the model on ITERATER-HUMAN and every 200 steps while training the model on ITERATER-FULL.
We report descriptive statistics with a single run.
We deploy all our experiments on a slurm cluster. We train the prefixes on 4 Tesla V100-SXM2
(16GB) GPUs and train prefix transfer modules on an NVIDIA TITAN RTX (24GB) GPU.
## B Details Of Dataset B.1 Taxonomy
The taxonomy of edit intentions in ITERATER after removing **Other**:
- **Fluency** Fix grammatical errors in the text.
- **Coherence** Make the text more cohesive, logically linked, and consistent as a whole.
- **Clarity** Make the text more formal, concise, readable, and understandable.
- **Style** Convey the writer's writing preferences, including emotions, tone, voice, etc.
- **Meaning-changed** Update or add new information to the text.
## B.2 Data Split
The ITERATER dataset is split as shown in Table 3 after removing the **Other** class.
## B.3 License
The ITERATER dataset uses Apache License, and it allows the data for academic usage.
## C Model Performance Of Different Edit Intentions
Table 4: The performance results on different edit intentions of the best-performing model
| Edit Intention | SARI | BLEU | R-L | Avg. |
|------------------|--------|--------|-------|--------|
| CLARITY | 34.01 | 78.18 | 84.62 | 65.60 |
| FLUENCY | 48.91 | 90.81 | 97.30 | 79.01 |
| COHERENCE | 38.66 | 84.83 | 90.39 | 71.29 |
| STYLE | 32.12 | 76.34 | 87.49 | 65.32 |
| MEANING-CHANGED | 37.65 | 51.46 | 68.98 | 52.70 |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
The Limitation section after conclusion and before the references
✓ A2. Did you discuss any potential risks of your work?
The Ethics Statement section after Limitation
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section Abstract and Section 1
✓ A4. Have you used AI writing assistants when working on this paper?
We use Grammarly to correct the full text.
## B ✓ **Did You Use Or Create Scientific Artifacts?** In Section 4
✓ B1. Did you cite the creators of artifacts you used?
In Section 4
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
In Appendix B.3
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Our use of existing artifacts was consistent with their intended use. All for research purposes
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
The dataset we use is open source and does not involve such problems
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
In Section 1 Section 4 and Appendix B
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
In Appendix B
## C ✓ **Did You Run Computational Experiments?** In Section 4 And 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
In Appendix A
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
In Appendix A
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
In Appendix A
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
In Section 4 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
wang-lu-2023-learning | Learning Multi-Step Reasoning by Solving Arithmetic Tasks | https://aclanthology.org/2023.acl-short.106 | Mathematical reasoning is regarded as a necessary ability for Language Models (LMs). Recent works demonstrate large LMs{'} impressive performance in solving math problems. The success is attributed to their Chain-of-Thought (CoT) reasoning abilities, i.e., the ability to decompose complex questions into step-by-step reasoning chains, but such ability seems only to emerge from models with abundant parameters. This work investigates how to incorporate relatively small LMs with the capabilities of multi-step reasoning. We propose to inject such abilities by continually pre-training LMs on a synthetic dataset MsAT which is composed of Multi-step Arithmetic Tasks. Our experiments on four math word problem datasets show the effectiveness of the proposed method in enhancing LMs{'} math reasoning abilities. | # Learning Multi-Step Reasoning By Solving Arithmetic Tasks
Tianduo Wang and **Wei Lu**
StatNLP Research Group Singapore University of Technology and Design
{tianduo_wang,luwei}@sutd.edu.sg
## Abstract
Mathematical reasoning is regarded as a necessary ability for Language Models (LMs). Recent works demonstrate large LMs' impressive performance in solving math problems. The success is attributed to their Chain-of-Thought
(CoT) reasoning abilities, i.e., the ability to decompose complex questions into step-by-step reasoning chains, but such ability seems only to emerge from models with abundant parameters. This work investigates how to incorporate relatively small LMs with the capabilities of multi-step reasoning. We propose to inject such abilities by continually pre-training LMs on a synthetic dataset MSAT which is composed of Multi-step Arithmetic Tasks. Our experiments on four math word problem datasets show the effectiveness of the proposed method in enhancing LMs' math reasoning abilities.1
## 1 Introduction
Making Language Models (LMs) perform mathematical reasoning is a valuable, yet challenging research objective (Hendrycks et al., 2021; Cobbe et al., 2021). Recently, we have witnessed large-scale LMs' impressive performance on a series of reasoning tasks via *chain-of-thought* prompting (Wei et al., 2022). This method elicits large LMs' ability to decompose a complex problem into several intermediate steps. However, it is believed that such ability only emerges from sufficiently large models (empirically more than 100B parameters) (Wei et al., 2022). In this paper, we examine how to incorporate moderate-sized LMs, e.g.,
RoBERTa (Liu et al., 2019), with such multi-step reasoning ability via continual pre-training to improve the performance on math problems.
1 Our code and data are released at https://github.com/TianduoWang/MsAT.

Figure 1: A math word problem example with different kinds of answers. In **Question**, <Num0>, <Num1>, and <Num2> are special tokens used for masking numbers.

Correctly understanding numbers is a prerequisite of mathematical reasoning abilities. But Wallace et al. (2019) shows that medium-sized
LMs have a deficiency in numerical comprehension. To overcome this issue, previous works inject numerical reasoning skills into LMs following two approaches. The first is masking numbers with special tokens, and generating symbolic expressions with a structured neural decoder (Xie and Sun, 2019; Jie et al., 2022). An example of such expression is provided in Figure 1. The second strategy continually pre-trains LMs on synthetic numerical tasks, which requires models to learn how to perform computation involving numbers
(Geva et al., 2020; Pi et al., 2022).
However, both approaches suffer from critical limitations. For symbolic methods, they neglect the information carried by the numbers, which could provide crucial hints for solving math problems (Wu et al., 2021; Liang et al., 2022). As for continual pre-training methods, LMs' arithmetic skills are not reliable. Previous works indicate that such skills are highly influenced by the training data (Razeghi et al., 2022) and hard for extrapolation (Wallace et al., 2019).
Figure 2: Overview of the proposed pre-training method (example question shown in the figure: Y=8. Z=2. X−Y=Z. X=?).

Motivated by these shortcomings, we propose to first pre-train moderate-sized LMs on a synthetic dataset called MSAT (Multi-step Arithmetic Tasks) before downstream task fine-tuning. To make sure LMs capture the information carried by the numbers, we keep the numbers in the questions instead of masking them during both pre-training and fine-tuning. Instead of making LMs conduct computation internally, MSAT encourages LMs to generate a series of intermediate steps leading to the answer.
Experiments on four math word problem datasets with two backbone models demonstrate the effectiveness of our method in enhancing LMs' math reasoning performance.
## 2 Method
Our method essentially appends a continual pre-training stage before fine-tuning LMs on downstream tasks. The continual pre-training serves two purposes: first, we tokenize numbers digit-by-digit to improve LMs' numerical comprehension; second, we make LMs learn multi-step reasoning skills from the proposed synthetic task.
## 2.1 Digit Tokenization For Numbers
Sub-word tokenization methods, e.g., byte pair encoding (BPE) (Sennrich et al., 2016), are one of the reasons why moderate-sized LMs poorly understand numbers (Wallace et al., 2019). BPE-based tokenizers split text based on the token frequency in the training corpus, which can be counter-intuitive when dealing with numbers. For example, numbers
"520" and "521" will be tokenized into ["520"] and
["5", "21"] respectively by the RoBERTaTokenizer2 of the Transformers library (Wolf et al., 2020).
Such inconsistent tokenization strategy for numbers undermines LM's numerical understanding ability. Hence, we tokenize numbers digit-by-digit for both pre-training and fine-tuning.
2 https://huggingface.co/docs/transformers/model_doc/roberta
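To make the inconsistency and the digit-by-digit alternative concrete, a minimal sketch (ours, not the released implementation; the helper name is illustrative) using the HuggingFace tokenizer could look as follows:

```python
# Minimal sketch of digit-by-digit number tokenization (assumption: RoBERTa BPE
# tokenizer from HuggingFace; the helper below is our illustration, not MSAT code).
import re
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

def tokenize_with_digit_split(text):
    tokens = []
    for piece in re.split(r"(\d+)", text):
        if piece.isdigit():
            tokens.extend(list(piece))           # "520" -> ["5", "2", "0"]
        elif piece:
            tokens.extend(tokenizer.tokenize(piece))
    return tokens

print(tokenizer.tokenize("520 and 521"))         # numbers split inconsistently by BPE
print(tokenize_with_digit_split("520 and 521"))  # every number split into digits
```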
## 2.2 Multi-Step Arithmetic Tasks (MSAT)
The core of our method is the synthetic task MSAT
where LMs can learn multi-step reasoning skills.
Like MWP tasks, MSAT can be formulated as a Seq2Seq task: the input of a MSAT example describes an arithmetic question, while the output is a reasoning chain leading to the answer. Specifically, each input sequence is composed of three components: question context, *equation*, and *question variable*. Equation is a sequence of symbols and operators (+, −, ×, ÷, =) that builds equality relationship between symbols. Given an equation, only one of the symbols is set as the question variable, while other symbols will be listed in question context with their numerical values.
The output sequence of MSAT is constructed in a code-style multi-step reasoning format. Each step consists of two sub-steps: *variable assignment* and *calculation*. In variable assignment, numbers appearing in the input sequence are assigned to variable names that are exclusive to the decoder. In calculation, a new variable is generated from the calculation of the existing variables. This makes our outputs executable Python code so that the numerical answer can be calculated by an external Python interpreter. Both inputs and outputs of MSAT are generated purely automatically. Details about the construction of MSAT are provided in Appendix A.1.
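As a concrete illustration of this code-style format, the following hypothetical example (variable names and formatting are ours, not necessarily those of the released dataset) shows an MSAT input, a step-by-step output, and how an external Python interpreter recovers the numerical answer:

```python
# Hypothetical MSAT example in a code-style multi-step reasoning format.
msat_input = "A=1. C=3. A+B=C. B?"

# Each step is a variable assignment or a calculation over decoder-side variables.
msat_output = (
    "m0 = 1\n"       # assignment: value of A
    "m1 = 3\n"       # assignment: value of C
    "m2 = m1 - m0"   # calculation: B = C - A
)

# Because the output is plain Python, it can be executed to obtain the answer.
scope = {}
exec(msat_output, {}, scope)
print(scope["m2"])   # 2
```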
## 2.3 Pre-Training Via Adapter-Tuning
Directly training on synthetic data that are largely different from the natural language corpus harms LMs' language prowess (Geva et al., 2020). Therefore, we adopt a two-stage tuning strategy (Wang and Lu, 2022) to inject reasoning skills into LMs.
Specifically, we perform adapter-tuning (Houlsby et al., 2019) on MSAT and then jointly fine-tune adapter and LM backbone on downstream tasks.
It mitigates catastrophic forgetting because LM's original parameters are largely preserved during adapter-tuning (Houlsby et al., 2019).
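A simplified sketch of this two-stage strategy is given below; the parameter-name matching is our assumption, and only the idea of updating adapters while keeping the LM backbone frozen in the first stage comes from the description above.

```python
# Sketch of the two-stage tuning strategy (parameter selection by name is assumed).
def configure_trainable(model, stage):
    for name, param in model.named_parameters():
        if stage == "adapter_tuning_on_msat":
            # Stage 1: update only adapter (and task decoder) parameters;
            # the pre-trained encoder weights stay frozen.
            param.requires_grad = ("adapter" in name) or ("decoder" in name)
        else:
            # Stage 2: jointly fine-tune adapters and the LM backbone downstream.
            param.requires_grad = True
```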
We consider two backbone models to verify the effectiveness of our method. In particular, we select a sequence-to-sequence (Seq2Seq) model (Lan et al., 2021) and a directed acyclic graph (DAG) structured model (Jie et al., 2022) that both adopt RoBERTa-base to encode the input questions. More details of these models are provided in §3.1. Figure 2 shows an overview of the proposed pre-training method.
| Model | MAWPS Acc. (∆) | ASDiv-A Acc. (∆) | SVAMP Acc. (∆) | SVAMP (hard) Acc. (∆) |
|---|---|---|---|---|
| *Large language models* | (PaLM 540B) | (code-davinci-002) | (PaLM 540B) | |
| w/ Chain-of-Thought prompting | 93.3 | 80.4 | **79.0** | - |
| *Seq2Seq models: ROBERTAGEN (Lan et al., 2021)* | | | | |
| w/ symbolic masks | 88.4 | 72.1 | 30.3 | 30.3♡ |
| w/ digit tokenization | 84.1 (-4.3) | 71.9 (-0.2) | 27.6 (-2.7) | 19.6 (-10.7) |
| MSAT-ROBERTAGEN (OURS) | **91.6** (+3.2) | **81.8** (+9.7) | **39.8** (+9.5) | **36.2** (+5.9) |
| *DAG structured models: DEDUCTREASONER (Jie et al., 2022)* | | | | |
| w/ symbolic masks | 92.0 | 85.0 | 45.0 | 45.0♡ |
| w/ digit tokenization | 91.6 (-0.4) | 84.1 (-0.9) | 44.4 (-0.6) | 42.8 (-2.2) |
| MSAT-DEDUCTREASONER (OURS) | **94.3** (+2.3) | **87.5** (+2.5) | **48.9** (+3.9) | **48.2** (+3.2) |

Table 1: Results on MWP solving tasks (accuracy). ∆ denotes the change relative to the corresponding "w/ symbolic masks" baseline.
## 3 Experiments
Now we investigate whether our pre-training method facilitates models on Math Word Problem
(MWP) solving tasks. All results are averaged over three different runs.
## 3.1 Experimental Setup
Existing datasets We consider three commonly-used MWP datasets: MAWPS (Koncel-Kedziorski et al., 2016), ASDiv-A (Miao et al., 2020), and SVAMP (Patel et al., 2021). The statistics of these datasets are provided in Table 2. More details can be found in Appendix A.2. We report five-fold cross-validation results for both MAWPS and ASDiv-A
and test set accuracy for SVAMP following previous practice (Lan et al., 2021; Jie et al., 2022).
SVAMP (hard) We find more than 85% of the numbers in the above datasets are smaller than $10^2$.
To investigate the extrapolation performance of the models trained with MSAT, we create SVAMP
(hard) from the original SVAMP dataset by replacing the numbers with much larger ones inspired by Gao et al. (2022). More details about SVAMP (hard) and number distribution of the existing datasets are provided in Appendix A.3.
Table 2: Existing dataset statistics.
Models We consider both sequence-to-sequence
(Seq2Seq) models and directed acyclic graph
(DAG) structured models as our backbone models. For the Seq2Seq model, we choose ROBERTAGEN (Lan et al., 2021), an encoder-decoder model with RoBERTa-base as the encoder combined with a Transformer decoder. For the DAG structured model, we choose DEDUCTREASONER (Jie et al., 2022), which combines RoBERTa-base with a DAG decoder. In their original implementation, both models replace numbers with symbolic mask tokens. Hence, we additionally consider a baseline for each backbone model that uses actual numbers with digit tokenization. We name the models that are based on these two backbone models and pre-trained with our method as MSAT-ROBERTAGEN and MSAT-DEDUCTREASONER respectively. We also compare our models to large LMs, e.g., PaLM (Chowdhery et al., 2022) and Codex (Chen et al., 2021),
with chain-of-thought prompting (Wei et al., 2022).
All models are evaluated via greedy decoding.
More implementation details, e.g., training hyperparameters, are provided in Appendix B.
## 3.2 Main Results
| Dataset | # Data | Avg. input length | Avg. output reasoning steps |
|---------|--------|-------------------|-----------------------------|
| MAWPS   | 1,987  | 30.3              | 1.4                         |
| ASDiv-A | 1,217  | 32.3              | 1.2                         |
| SVAMP   | 1,000  | 34.7              | 1.2                         |
Table 1 compares our models with backbone model baselines and large LMs. On all datasets, digit tokenization baselines consistently perform worse than their symbolic mask counterparts, indicating the deficiency of the numeracy comprehension of the original RoBERTa model. However, the models trained with MSAT surpass both baselines by a large margin, which demonstrates the effectiveness of our pre-training method.
![3_image_0.png](3_image_0.png)
SVAMP (hard) We can observe that, on SVAMP
(hard), the accuracies of digit tokenization baselines decrease dramatically (a 10.7-point drop for ROBERTAGEN and a 2.2-point drop for DEDUCTREASONER) compared with the symbolic mask baselines, while the models trained with MSAT still outperform the symbolic mask baselines by 5.9 and 3.2 points respectively. This shows that not only do our models obtain better results than the baselines on the existing tasks, but they are also more robust in handling out-of-distribution numbers.
Comparison with large language models We also observe that, on relatively simple tasks, i.e.,
MAWPS and ASDiv-A, RoBERTa-based models can outperform large LMs. But for the more challenging task SVAMP, there is still a large performance gap. We believe this is because SVAMP
requires models to have a better understanding of natural languages. Jie et al. (2022) also reports that varying LM encoders results in significant performance disparities on SVAMP, indicating that SVAMP performance is closely tied to model's natural language capabilities.
## 4 Pre-Training Analysis
In this section, we provide a careful analysis of our pre-training method from various perspectives to understand why it works.
## 4.1 Pre-Training Task Performance
We visualize how the performance of pre-training task MSAT and one of the MWP tasks SVAMP
changes with pre-training steps in Figure 3. It can be observed that the performance on both synthetic and natural language tasks tends to improve gradually as the number of pre-training steps increases.
Figure 3 demonstrates that LMs are capable of learning multi-step reasoning gradually from the synthetic task MSAT. The acquired multi-step reasoning ability can subsequently be transferred to the downstream MWP solving tasks, enhancing performance during the fine-tuning phase.
## 4.2 Reasoning Format Of Msat
The reasoning format of MSAT dictates the specific reasoning skills that LMs will acquire during pre-training. We demonstrate the superiority of our code-style multi-step reasoning format by comparing it with two different reasoning expressions.
Effect of producing intermediate steps While it is a common practice to train LMs towards directly producing the numerical answers of the arithmetic questions (Geva et al., 2020; Pi et al., 2022), a recent work shows that LMs' arithmetic skills are not reliable (Razeghi et al., 2022). To explore whether LMs can learn reasoning skills from MSAT without intermediate steps, we pre-train LMs on a variant of MSAT by replacing step-by-step output sequences with only numerical answers. Figure 4 compares this model (answer only) with our model (codestyle). Its poor performance on both MSAT and SVAMP confirms the necessity of producing intermediate reasoning steps during pre-training.
Structured code-style expression We next investigate the importance of applying the structured code-style reasoning expressions by comparing it with the less formatted math expressions. We argue that, compared with math expressions that only contain numbers and operators, our code-style expressions are more suitable for multi-step reasoning due to the structure information in the output sequences.
Our experiments in Figure 4 demonstrate the superiority of the code-style output expressions. We can see that models with math expressions perform consistently worse than models with code-style multi-step reasoning format on both pre-training task MSAT and MWP solving task SVAMP.
## 4.3 Difficulty Level Of Msat
Leveraging synthetic data for pre-training provides the advantage of enabling highly customizable difficulty levels for the training data. Here we define the difficulty level of a reasoning task as the averaged reasoning steps that are required to solve the problems. From Figure 5, we see that pre-training LMs on MSATs that are harder than downstream tasks generally leads to better results. It's important to note that, broadly speaking, the difficulty level of a reasoning task, particularly those involving natural language, is not solely determined by the number of reasoning steps. One example is that, though both ASDiv-A and SVAMP have an averaged reasoning steps of 1.2 (see Table 2), SVAMP
is considered more difficult as it requires high-level natural language understanding (Patel et al., 2021).
## 4.4 Perform Adapter-Tuning On Msat
Tuning all parameters of LM encoders on synthetic data that are largely different from the pre-training corpus may lead to catastrophic forgetting (Geva et al., 2020). To explore the importance of performing adapter-tuning on MSAT, we create a variant of our method in which we perform full finetuning on MSAT. We compare this variant with our models in Figure 6. It can be observed that both full fine-tuning and adapter-tuning can achieve good performance on MSAT, but adapter-tuning outperforms fine-tuning on all downstream MWP
datasets, which demonstrates the benefits of performing adapter-tuning on MSAT.
## 5 Related Work
In this work, we focus on improving moderatesized LM's MWP performance by injecting multistep reasoning ability. Hence, our work closely relates to both reasoning ability injection (Geva et al., 2020; Pi et al., 2022) and MWP solving (Xie and Sun, 2019; Patel et al., 2021; Jie et al., 2022).
Reasoning skills injection This technique refers to continually pre-training LMs on certain intentionally-crafted tasks to enhance their reasoning abilities. GenBERT (Geva et al., 2020) pretrains LMs on templated-based synthetic data to inject numerical skills into the LMs. PoET (Pi et al., 2022) improves LMs' reasoning ability by pre-training them on tabular data towards imitating program executors. Both methods involve training LMs to produce numerical answers directly, which can be unreliable (Razeghi et al., 2022). Our work focuses on injecting into LMs the capability for solving complex arithmetic problems step-by-step.
## Solving MWP With Specialized Architectures
One of the research lines of MWP solving focuses on designing specialized architectures for math reasoning (Xie and Sun, 2019; Lan et al., 2021; Jie et al., 2022). For example, Lan et al. (2021) combines RoBERTa (Liu et al., 2019) with a Transformer (Vaswani et al., 2017) decoder, and Jie et al. (2022) augments encoder-only LMs with a directed acyclic graph decoder. One of the shortcomings of such models is the information loss caused by masking actual numbers in the questions with symbolic tokens (Wu et al., 2021). In this work, we propose to represent actual numbers with digit tokenization, and improve models' multi-step reasoning ability by pre-training them on a synthetic task MSAT.
## 6 Conclusion
We propose a novel synthetic pre-training task, MSAT, to incorporate LMs with multi-step reasoning skills that improve performance on MWP tasks.
This pre-training task encourages LMs to generate intermediate reasoning steps instead of predicting final numerical answers directly. Our experiments show that the proposed method is effective in improving the moderate-sized LM's performance on MWP solving tasks.
## Limitations
Limited number of operators considered Following previous methods (Lan et al., 2021), we only consider binary operators (+, −, ×, and ÷).
As we adopt a code-style output format, it is possible to introduce other non-binary operators supported by the Python interpreter, e.g., sum() and max(). However, obtaining labeled data with such operators may require laborious efforts. We believe it is an interesting research question on exploring how to teach models to solve practical questions e.g., math word problems, by writing code in a low-resource setting (Jie and Lu, 2023).
Limited performance due to greedy decoding All the results we report in this work are produced via greedy decoding. A recent work (Wang et al.,
2023) reports that making large LMs generate multiple answers and selecting the answer with the most votes can boost performance by a large margin. However, performing beam search for symbolic neural reasoners, e.g., DeductReasoner, can be challenging in that searching space increases exponentially with the number of variables in the question (Jie et al., 2022). Designing effective beam search strategies for symbolic neural reasoners is a promising direction.
## Acknowledgements
We would like to thank the anonymous reviewers, our meta-reviewer, and senior area chairs for their insightful comments and support with this work. We would also like to thank members of our StatNLP research group for helpful discussions. This research/project is supported by the National Research Foundation Singapore and DSO National Laboratories under the AI Singapore Program (AISG Award No: AISG2-RP-2020016), and Ministry of Education, Singapore, under its Academic Research Fund (AcRF) Tier 2 Programme (MOE AcRF Tier 2 Award No: MOET2EP20122-0011)
## References
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. *arXiv preprint arXiv:2110.14168*.
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2022. Pal: Program-aided language models. *arXiv preprint arXiv:2211.10435*.
Mor Geva, Ankit Gupta, and Jonathan Berant. 2020.
Injecting numerical reasoning skills into language models. In *Proceedings of ACL*.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the math dataset. In *Proceedings of NeurIPS*.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019.
Parameter-efficient transfer learning for nlp. In *Proceedings of ICML*.
Zhanming Jie, Jierui Li, and Wei Lu. 2022. Learning to reason deductively: Math word problem solving as complex relation extraction. In *Proceedings of ACL*.
Zhanming Jie and Wei Lu. 2023. Leveraging training data in few-shot prompting for numerical reasoning. In *Findings of ACL*.
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi. 2016. Mawps:
A math word problem repository. In Proceedings of NAACL.
Yihuai Lan, Lei Wang, Qiyuan Zhang, Yunshi Lan, Bing Tian Dai, Yan Wang, Dongxiang Zhang, and Ee-Peng Lim. 2021. Mwptoolkit: An open-source framework for deep learning-based math word problem solvers. *arXiv preprint arXiv:2109.00799*.
Zhenwen Liang, Jipeng Zhang, Lei Wang, Wei Qin, Yunshi Lan, Jie Shao, and Xiangliang Zhang. 2022.
MWP-BERT: Numeracy-augmented pre-training for math word problem solving. In *Findings of NAACL*.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *Proceedings of ICLR*.
Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su.
2020. A diverse corpus for evaluating and developing english math word problem solvers. In *Proceedings* of ACL.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. In *Proceedings of NeurIPS*.
Arkil Patel, Satwik Bhattamishra, and Navin Goyal.
2021. Are NLP models really able to solve simple math word problems? In *Proceedings of NAACL*.
Xinyu Pi, Qian Liu, Bei Chen, Morteza Ziyadi, Zeqi Lin, Yan Gao, Qiang Fu, Jian-Guang Lou, and Weizhu Chen. 2022. Reasoning like program executors. In Proceedings of EMNLP.
Yasaman Razeghi, Robert L Logan IV, Matt Gardner, and Sameer Singh. 2022. Impact of pretraining term frequencies on few-shot reasoning. In Proceedings of ICML.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Neural machine translation of rare words with subword units. In *Proceedings of ACL*.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Proceedings of NeurIPS*.
Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, and Matt Gardner. 2019. Do NLP models know numbers? probing numeracy in embeddings. In *Proceedings of EMNLP-IJCNLP*.
Tianduo Wang and Wei Lu. 2022. Differentiable data augmentation for contrastive sentence representation learning. In *Proceedings of EMNLP*.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. 2023. Self-consistency improves chain of thought reasoning in language models. In *Proceedings of ICLR*.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022.
Chain of thought prompting elicits reasoning in large language models. In *Proceedings of NeurIPS*.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, et al. 2020. Transformers:
State-of-the-art natural language processing. In *Proceedings of EMNLP*.
Qinzhuo Wu, Qi Zhang, Zhongyu Wei, and Xuanjing Huang. 2021. Math word problem solving with explicit numerical values. In *Proceedings of ACLIJCNLP*.
Zhipeng Xie and Shichao Sun. 2019. A goal-driven tree-structured neural model for math word problems.
In *Proceedings of IJCAI*.
## A Additional Information About Datasets
In this section, we provide additional details about the datasets that we used in the experiments.
## A.1 Construction Of Msat
The proposed MSAT is a synthetic Seq2Seq task where the inputs describe arithmetic questions and outputs are the solutions represented by a codestyle multi-step reasoning format. Both inputs and outputs of MSAT can be generated automatically.
To construct an example of MSAT, we first generate the input sequence and then produce the output solution accordingly. In all, we generate 85,000 examples and split them into 80,000 and 5,000 for training and evaluation respectively.
Input sequence construction We start by preparing a set of equation templates and each equation template contains no more than 3 binary operators
(+, −, ×, and ÷). By enumerating the possible combinations of operators, we obtain $4 + 4^2 + 4^3 = 84$ equation templates in total. The first step to construct an input arithmetic question is to instantiate an equation from an equation template. For example, given an equation template "<Num0> + <Num1> =
<Num2>", we assign each variable a value that makes the equality hold and a variable name selected from the capitalized letters. The numbers in the questions are sampled from 0 to 10,000. The last step is to randomly pick a variable as the question variable.
Therefore, the resulting input arithmetic question may look like: "A=1. C=3. A+B=C. B?"
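The template count can be reproduced with a short sketch (ours) that enumerates operator combinations:

```python
# Enumerate equation templates with 1-3 binary operators: 4 + 4**2 + 4**3 = 84.
from itertools import product

OPS = ["+", "-", "*", "/"]
templates = [ops for n in (1, 2, 3) for ops in product(OPS, repeat=n)]
print(len(templates))  # 84
```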
Output sequence construction Given an equation and a question variable, the output is first constructed as a math expression leading to the value of the question variable. Notice that an equation can be represented as a binary tree where the variables are the terminal nodes and operators are the non-terminal nodes. Hence, the output can be produced by a "tree inversion" algorithm (see Figure 7)
from an equation and a question variable.
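For the single-operator case, the idea behind the tree inversion can be sketched as follows (our own illustration; the actual templates contain up to three operators):

```python
# Solve A op B = C for the chosen question variable by inverting the operator.
def invert_single_op(a, op, b, c, target):
    inverse = {"+": "-", "-": "+", "*": "/", "/": "*"}[op]
    if target == c:            # C = A op B
        return f"{a} {op} {b}"
    if target == a:            # A = C inv B
        return f"{c} {inverse} {b}"
    if op in ("+", "*"):       # B = C inv A (commutative cases)
        return f"{c} {inverse} {a}"
    return f"{a} {op} {c}"     # B = A - C for "-", B = A / C for "/"

print(invert_single_op("A", "+", "B", "C", target="B"))  # "C - A"
```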
## A.2 Existing Datasets
MAWPS (Koncel-Kedziorski et al., **2016)** It is a popular benchmark dataset for math word problems. We use the five-fold split provided by Lan et al. (2021) for evaluation.
ASDiv-A (Miao et al., **2020)** This is an English math word problem task containing various linguistic patterns and problem categories. We obtain the data and five-fold split from Patel et al. (2021).
SVAMP (Patel et al., **2021)** It is a challenge set created for MWP model robustness evaluation. The examples in SVAMP are from ASDiv-A with deliberately designed variations. Such variations include: changing questions, adding irrelevant information, etc. Following the evaluation protocol suggested by Patel et al. (2021), we train our models over 3,138 training examples from a combination of MAWPS and ASDiv-A.
## A.3 Svamp (Hard)
SVAMP (hard) is used to evaluate models' extrapolation ability on the out-of-distribution numbers.
We sample numbers from 10 to 10,000, a significantly different range from the original one, to replace the original numbers in SVAMP. Every question in SVAMP (hard) corresponds to a question in SVAMP. Although it is straightforward to sample a large number and use it to replace the numbers, we expect the created questions to make sense. We achieve this by making sure the new numerical results have the same type as the original ones. For example, if the original numerical answer is a positive integer, then we make sure the new numerical answer is also a positive integer. We compare the number distribution of existing MWP
datasets and SVAMP (hard) in Figure 8.
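A rough sketch of such a type-preserving replacement is shown below; the rejection-sampling loop and the `solve` helper (which evaluates the question's equation on a given set of numbers) are our assumptions about how this check could be implemented.

```python
import random

def answer_type(x):
    # Coarse answer type: (is the value an integer?, is it positive?)
    return (float(x).is_integer(), x > 0)

def resample_numbers(numbers, solve, low=10, high=10000, max_tries=1000):
    original_type = answer_type(solve(numbers))
    for _ in range(max_tries):
        candidate = [random.randint(low, high) for _ in numbers]
        if answer_type(solve(candidate)) == original_type:
            return candidate
    return numbers  # fall back to the original numbers if no candidate fits
```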
## B Implementation Details
Our method is implemented in Python 3.8 with HuggingFace's Transformers (Wolf et al., 2020)
and PyTorch (Paszke et al., 2019) libraries. All experiments can be conducted on one NVIDIA
RTX 6000 GPU with 22 GB memory.
## B.1 Backbone Model Implementation
For our MSAT-ROBERTAGEN and MSAT-DEDUCTREASONER, we build the backbone models following the implementation provided by Lan et al. (2021) and Jie et al. (2022) respectively. The encoders for both models are initialized with the pre-trained weights of RoBERTa-base. The adapter modules (Houlsby et al., 2019) are added to each layer of the encoders with a bottleneck dimension of 64. More details about the model architectures are provided in Table 3.
|                   | ROBERTAGEN | DEDUCTREASONER |
|-------------------|------------|----------------|
| # Params.         | 139.71 M   | 142.40 M       |
| # Attention heads | 8          | -              |
| Hidden dim.       | 768        | 768            |
| Feedforward dim.  | 1024       | 768            |
| # Layers          | 2          | -              |
| Activation        | ReLU       | ReLU           |
| Dropout           | 0.1        | 0.1            |
| Label smoothing   | 0.05       | -              |
| # Constants       | 17         | 17             |

Table 3: Architecture details of the backbone models.
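For reference, a generic Houlsby-style adapter block with the bottleneck dimension of 64 mentioned above could be sketched as follows (a simplified illustration, not the exact module used in the experiments):

```python
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden_dim=768, bottleneck_dim=64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.ReLU()

    def forward(self, x):
        # Down-project, apply non-linearity, up-project, and add a residual.
        return x + self.up(self.act(self.down(x)))
```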
## B.2 Training Configurations
|               | PRE-TRAINING | FINE-TUNING |
|---------------|--------------|-------------|
| Batch size    | 32           | 16          |
| Max steps     | 10,000       | 50,000      |
| Optimizer     | AdamW (Loshchilov and Hutter, 2019) | AdamW |
| Weight decay  | 0.01         | 0.01        |
| Max grad norm | 0.1          | 1.0         |
| Learning rate | 3e-5         | 1e-5        |
| LR scheduler  | Linear       | Linear      |
## ACL 2023 Responsible NLP Checklist

A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations
✓ A2. Did you discuss any potential risks of your work?
Limitations
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 2, 3
✓ B1. Did you cite the creators of artifacts you used?
3, Appendix
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Appendix
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Appendix
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We mainly focus on dealing with mathematical problems in this work.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Appendix
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Table 2, Appendix
## C ✓ **Did You Run Computational Experiments?** 3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
3, 4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix

## D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
zhang-etal-2023-towards-adaptive | Towards Adaptive Prefix Tuning for Parameter-Efficient Language Model Fine-tuning | https://aclanthology.org/2023.acl-short.107 | Fine-tuning large pre-trained language models on various downstream tasks with whole parameters is prohibitively expensive. Hence, Parameter-efficient fine-tuning has attracted attention that only optimizes a few task-specific parameters with the frozen pre-trained model. In this work, we focus on prefix tuning, which only optimizes continuous prefix vectors (i.e. pseudo tokens) inserted into Transformer layers. Based on the observation that the learned syntax and semantics representation varies a lot at different layers, we argue that the adaptive prefix will be further tailored to each layer than the fixed one, enabling the fine-tuning more effective and efficient. Thus, we propose Adaptive Prefix Tuning (APT) to adjust the prefix in terms of both fine-grained token level and coarse-grained layer level with a gate mechanism. Experiments on the SuperGLUE and NER datasets show the effectiveness of APT. In addition, taking the gate as a probing, we validate the efficiency and effectiveness of the variable prefix. | # Towards Adaptive Prefix Tuning For Parameter-Efficient Language Model Fine-Tuning Zhen-Ru Zhang, Chuanqi Tan, Haiyang Xu, Chengyu Wang,
Jun Huang, Songfang Huang

Alibaba Group
{zhangzhenru.zzr,chuanqi.tcq,shuofeng.xhy}@alibaba-inc.com
{chengyu.wcy,huangjun.hj,songfang.hsf}@alibaba-inc.com
## Abstract
Fine-tuning large pre-trained language models on various downstream tasks with whole parameters is prohibitively expensive. Hence, Parameter-efficient fine-tuning has attracted attention that only optimizes a few task-specific parameters with the frozen pre-trained model.
In this work, we focus on prefix tuning, which only optimizes continuous prefix vectors (i.e.
pseudo tokens) inserted into Transformer layers. Based on the observation that the learned syntax and semantics representation varies a lot at different layers, we argue that the adaptive prefix will be further tailored to each layer than the fixed one, enabling the fine-tuning more effective and efficient. Thus, we propose Adaptive Prefix Tuning (APT) to adjust the prefix in terms of both fine-grained token level and coarse-grained layer level with a gate mechanism. Experiments on the SuperGLUE and NER datasets show the effectiveness of APT.
In addition, taking the gate as a probing, we validate the efficiency and effectiveness of the variable prefix.
## 1 Introduction
Vanilla fine-tuning strategy usually adjusts all the parameters to adapt the pre-trained language model to downstream tasks. Parameter-efficient learning
(He et al., 2022; Houlsby et al., 2019; Lester et al.,
2021; Guo et al., 2021; Ben Zaken et al., 2022) is an emerging framework that freezes the pre-trained model and only tunes a small number of task-specific parameters for downstream tasks. For instance, Prefix tuning (Li and Liang, 2021; Liu et al., 2022) prepends length-equivalent pseudo prefix tokens, i.e. continuous task-specific vectors, to each layer of the pre-trained model, achieving comparable or even superior performance with only 0.1–3% of the parameters.
In previous works, the length of prefix tokens
(or the number of trainable parameters) is usually the same at each layer. However, a key observation is that the structure information and representational capacity embedded in each layer are prone to be inconsistent (Jawahar et al., 2019). It is generally considered that the bottom layers of the language model tend to capture concrete and shallow phrase-level features, while the top layers are more concerned with abstract semantic information (Tenney et al., 2019). Based on this perspective, we assume that an adaptive prefix can grasp the emphasis of each layer more flexibly to adapt to various downstream tasks.
In light of the above motivation, we investigate the adaptive prefix in this work. We propose Adaptive Prefix Tuning (APT) with an adaptive gate mechanism at both the fine-grained token level and the coarse-grained layer level. Specifically, as shown in Figure 1, at the fine granularity, APT scores each individual prefix token via gated weight assignment. Then, at the coarse granularity, a scaled weight is utilized to balance the inserted task-specific prefix tokens and the original input tokens for the current layer.
Extensive experiments against prefix tuning on sentence and token classification tasks in full-data and low-resource settings validate the effectiveness of APT. In addition, the gate learned by APT can serve as a probe for the number of necessary parameters in different layers, guiding us to directly apply a variable prefix to the original prefix tuning. The probing experiment further demonstrates the effectiveness of the adaptive prefix.
## 2 Related Works
Since fine-tuning the whole model is prohibitively expensive, parameter-efficient language model finetuning becomes a lightweight alternative that only optimizes a small number of parameters while keeping most pre-trained parameters frozen (He et al.,
2022). Adapter tuning (Houlsby et al., 2019) inserts two tunable task-specific modules after multihead attention and feed-forward network, achieving comparable performance with only 2-4% of the parameters. Prompt tuning (Lester et al., 2021) and Prefix-Tuning (Li and Liang, 2021) only train soft prompts by adding prefix tokens to the input or hidden states. Recently, Liu et al. (2022) extend the prefix tuning to the natural language understanding tasks, which matches the performance of fine-tuning with only 0.1%-3% tuned parameters.
Furthermore, sharing our motivation that each layer of the pre-trained language model focuses on different aspects of features for various tasks (Jawahar et al., 2019; Clark et al., 2019b) and that extra parameters are probably not necessary for certain tasks (Houlsby et al., 2019; Fan et al., 2020; Rücklé et al., 2021), Adaptable Adapters (Moosavi et al., 2022) selects beneficial adapter layers and learns task-specific activation functions for downstream tasks to make the adapter dynamic for each task and layer. In addition to different frameworks
(adapter versa prefix tuning), our key difference from their work lies in that we aim to dynamically filter required information at each layer in a soft way, while they choose whether to add trainable modules at the layer level in a hard manner.
## 3 Methodology

## 3.1 Prefix Tuning
As prefix tuning is an extension on Transformer
(Vaswani et al., 2017), we first recap the structure of the Transformer. A Transformer block consists of a multi-head attention module, which concatenates the outputs of multiple self-attention functions, and a fully connected feed-forward network. Formally speaking, the Transformer block is calculated as follows:
$$\mathrm{Attn}(\mathbf{Q},\mathbf{K},\mathbf{V})=\mathrm{softmax}\left(\frac{\mathbf{Q}\mathbf{K}^{T}}{\sqrt{d}}\right)\mathbf{V}\qquad(1)$$

$$\mathrm{FFN}(\mathbf{x})=\mathrm{ReLU}(\mathbf{x}\mathbf{W}_{1}+\mathbf{b}_{1})\mathbf{W}_{2}+\mathbf{b}_{2}\qquad(2)$$

Here, $\mathbf{x}$ denotes the input hidden states, and $\mathbf{W}_{1}$, $\mathbf{b}_{1}$, $\mathbf{W}_{2}$, $\mathbf{b}_{2}$ are the parameters of the feed-forward network.
Prefix tuning prepends pseudo prefix tokens of length l to each layer of the language model, which is implemented by concatenating inserted key and value matrices with the original corresponding items in each multi-head attention. Specifically, let $\mathbf{P}_k, \mathbf{P}_v \in \mathbb{R}^{l \times d}$ be the keys and values of the engaged prefix respectively, where l denotes the length of the prefix and d corresponds to the hidden dimension; thus the self-attention function can be reformulated as:
$$\mathrm{Attn}(\mathbf{Q},\mathbf{K}^{\prime},\mathbf{V}^{\prime})=\mathrm{softmax}\left(\frac{\mathbf{Q}(\mathbf{K}^{\prime})^{T}}{\sqrt{d}}\right)\mathbf{V}^{\prime}\qquad(3)$$

$$\mathrm{where~}\mathbf{K}^{\prime}=[\mathbf{P}_{k};\mathbf{K}],\quad\mathbf{V}^{\prime}=[\mathbf{P}_{v};\mathbf{V}]$$
Here, [; ] denotes the concatenation function.
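A minimal PyTorch-style sketch of this concatenation (ours, not the P-Tuning v2 implementation; single head, no batching) is:

```python
import math
import torch

l, d, n = 8, 64, 16                           # prefix length, hidden dim, sequence length
P_k = torch.nn.Parameter(torch.randn(l, d))   # trainable prefix keys
P_v = torch.nn.Parameter(torch.randn(l, d))   # trainable prefix values

def prefix_attention(Q, K, V):
    K_prime = torch.cat([P_k, K], dim=0)      # [l + n, d]
    V_prime = torch.cat([P_v, V], dim=0)      # [l + n, d]
    scores = Q @ K_prime.T / math.sqrt(d)     # [n, l + n]
    return torch.softmax(scores, dim=-1) @ V_prime

Q, K, V = (torch.randn(n, d) for _ in range(3))
out = prefix_attention(Q, K, V)               # [n, d]
```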
## 3.2 Adaptive Prefix Tuning
The length of the prefix is usually a manually set hyper-parameter for each task and is fixed across the different layers of the model. However, existing work demonstrates that each layer of the language model pays attention to different aspects of the input features. We assume that a fixed-length prefix is insufficient to tailor different layers and tasks. To dynamically customize the prefix at each layer, APT performs a gate mechanism via fine-grained gated weight assignment and coarse-grained scaled weight specification.
Specifically, to capture the diversity of information utilization at different layers, we go down to the fine-grained token level. The token-level gate can indicate how many trainable parameters (i.e. pseudo tokens in prefix tuning) are required for a given layer, which will be discussed in Section 4.4. Thus, APT yields the gated weights of the l pseudo tokens at each layer. We use the hidden states to represent the information encoded in the layer and calculate the gated weights $\alpha_i = [\alpha_{i1}, \alpha_{i2}, \ldots, \alpha_{il}]$ for the i-th layer as:
$$\alpha_{i}=\operatorname{sigmoid}(h_{i-1}W_{i})\qquad(4)$$
Here, $h_{i-1}$ is the d-dimensional hidden states from the previous layer, and $W_i \in \mathbb{R}^{d \times l}$ corresponds to the parameters to be learned.
Besides, we also design a coarse-level gate to balance the information brought from task-specific prefix tokens and original input tokens by learning a layer-level weight. A learnable scaled weight $\lambda_i$ is added to the representation of pseudo prefix tokens at the i-th layer.
With the above strategy, the key-value pair $\mathbf{P}_i = [\mathbf{P}_{ik}, \mathbf{P}_{iv}]$ derived from the pseudo prefix tokens in the i-th layer is updated to $\hat{\mathbf{P}}_i$ as:
$$\hat{\mathbf{P}}_{i}=\lambda_{i}\,\boldsymbol{\alpha}_{i}\odot[\mathbf{P}_{ik},\mathbf{P}_{iv}]\qquad(5)$$
| Model | Method | BoolQ | COPA | RTE | WiC | WSC | Avg. | CoNLL03 | CoNLL04 | OntoNotes | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| BERT-base (110M) | FT | 72.9 | 67.0 | 68.4 | 71.1 | 63.5 | 68.6 | - | - | - | - |
| | PT-2 | 72.5 | 67.4 | 71.3 | 69.5 | 65.4 | 69.2 | 89.3 | 82.6 | 87.1 | 86.3 |
| | APT | 72.6 | 70.0 | 72.7 | 71.2 | 66.9 | 70.7 | 89.7 | 84.1 | 87.2 | 87.0 |
| BERT-large (335M) | FT | 77.7 | 69.0 | 70.4 | 74.9 | 68.3 | 72.1 | 92.8 | 85.6 | 89.2 | 89.2 |
| | PT-2 | 75.8 | 73.0 | 78.3 | 75.1 | 68.3 | 74.1 | 90.2 | 84.5 | 86.4 | 87.0 |
| | APT | 76.0 | 79.0 | 79.4 | 75.1 | 70.2 | 75.9 | 90.7 | 85.8 | 88.6 | 88.4 |
| RoBERTa-large (355M) | FT | 86.9 | 94.0 | 86.6 | 75.6 | 63.5 | 81.3 | 92.6 | 88.8 | 89.8 | 90.4 |
| | PT-2 | 84.8 | 93.0 | 89.5 | 73.4 | 63.5 | 80.8 | 92.8 | 88.4 | 89.8 | 90.3 |
| | APT | 84.8 | 94.0 | 89.9 | 74.6 | 68.3 | 82.3 | 92.7 | 89.0 | 89.8 | 90.5 |
| DeBERTa-xlarge (750M) | FT | - | - | - | - | - | - | 93.1 | 89.1 | 90.4 | 90.9 |
| | PT-2 | - | - | - | - | - | - | 93.1 | 86.5 | 90.4 | 90.0 |
| | APT | - | - | - | - | - | - | 93.0 | 89.1 | 90.5 | 90.8 |

Table 1: Results on SuperGLUE (BoolQ–WSC, first Avg.) and NER (CoNLL03–OntoNotes, second Avg.). FT: fine-tuning; PT-2: P-Tuning v2.
| Setting | Method | BoolQ | COPA | RTE | WiC | WSC | Avg. |
|---|---|---|---|---|---|---|---|
| BERT-base (16-shot) | FT | 47.2 (7.5) | 54.0 (6.5) | 49.4 (2.7) | 50.3 (2.3) | 46.2 (6.8) | 49.4 |
| | PT-2 | 52.4 (7.2) | 54.2 (3.3) | 50.8 (3.1) | 48.2 (3.3) | 48.5 (4.3) | 50.8 |
| | APT | 55.7 (6.5) | 57.4 (2.7) | 53.1 (4.4) | 53.7 (2.2) | 55.2 (3.8) | **55.0** |
| BERT-large (16-shot) | FT | **57.3** (9.7) | 52.0 (2.4) | 49.5 (2.7) | 50.0 (0.0) | 38.7 (2.2) | 49.5 |
| | PT-2 | 50.3 (5.7) | 58.2 (5.3) | 49.9 (3.4) | 49.3 (2.2) | 48.1 (4.2) | 51.2 |
| | APT | 51.7 (3.5) | 60.0 (6.3) | 53.9 (4.6) | 51.8 (4.8) | 55.4 (2.3) | **54.6** |
| BERT-base (32-shot) | FT | 48.1 (9.4) | 52.2 (6.4) | 49.5 (2.7) | 49.4 (0.9) | **60.4** (3.8) | 51.9 |
| | PT-2 | 50.1 (5.5) | 55.0 (3.2) | 53.8 (3.4) | 52.0 (4.1) | 51.5 (4.6) | 52.5 |
| | APT | 53.5 (5.3) | 57.6 (2.2) | 56.5 (1.6) | **54.8** (3.9) | 54.6 (6.5) | **55.4** |
| BERT-large (32-shot) | FT | 47.6 (11.9) | 45.0 (3.6) | 48.4 (2.2) | 50.0 (0.0) | 47.3 (13.2) | 47.6 |
| | PT-2 | 45.5 (5.1) | 57.4 (6.9) | 51.3 (2.3) | 53.3 (2.1) | 46.0 (7.1) | 50.7 |
| | APT | 49.9 (5.9) | 62.0 (5.0) | 55.5 (3.6) | 54.9 (2.8) | 49.0 (4.4) | **54.3** |

Table 2: Few-shot results on SuperGLUE; values in parentheses are standard deviations.
⊙ is the element-wise multiplication. Accordingly, the calculation of the self-attention function in APT
is similar to Eq.(3) without further elaboration.
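Putting Eqs. (4) and (5) together, the gate can be sketched as follows (our reading of the equations; shapes and broadcasting are assumptions, single example without batching):

```python
import torch

l, d = 8, 64
W_i = torch.nn.Parameter(torch.randn(d, l))          # token-level gate projection
lambda_i = torch.nn.Parameter(torch.ones(1))          # layer-level scaled weight
P_ik, P_iv = torch.randn(l, d), torch.randn(l, d)     # prefix keys and values

h_prev = torch.randn(d)                               # hidden state from layer i-1
alpha_i = torch.sigmoid(h_prev @ W_i)                 # Eq. (4), shape [l]
P_hat_k = lambda_i * alpha_i.unsqueeze(-1) * P_ik     # Eq. (5) applied to keys
P_hat_v = lambda_i * alpha_i.unsqueeze(-1) * P_iv     # ... and to values
```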
## 4 Experiments

## 4.1 Experimental Setup
We conduct experiments on 5 NLU tasks from the SuperGLUE (Wang et al., 2019) benchmark, including BoolQ (Clark et al., 2019a), COPA (Roemmele et al., 2011), RTE (Wang et al., 2018), WiC (Pilehvar and Camacho-Collados, 2019) and WSC (Levesque et al., 2012), as well as 3 Named Entity Recognition
(NER) tasks including CoNLL03 (Tjong Kim Sang and De Meulder, 2003), CoNLL04 (Carreras and Màrquez, 2004), and OntoNotes 5.0 (Weischedel et al., 2013). With BERT-base / large (Devlin et al.,
2019) and RoBERTa-large (Liu et al., 2019) instantiated by HuggingFace Transformers (Wolf et al.,
2020), we compare APT with vanilla fine-tuning and P-Tuning v2 (Liu et al., 2022), which is an implementation of prefix tuning, configured with the hyper-parameters made public in the released code1. We also verify our method with DeBERTa-xlarge (He et al., 2020) on NER tasks following P-Tuning v2.
## 4.2 Results
We report the main results in Table 1. For BERT-base, we can observe that APT achieves 1.5% and 0.7% improvements over P-Tuning v2 on SuperGLUE and NER tasks, respectively. For BERT-large, APT outperforms P-Tuning v2 by 1.8% on SuperGLUE tasks and 1.4% on NER tasks. For RoBERTa-large, APT surpasses P-Tuning v2 by 1.5% on SuperGLUE tasks and 0.2% on NER tasks.
On NER tasks with DeBERTa-xlarge, APT is superior to P-Tuning v2 by an average of 0.8%.

1 https://github.com/THUDM/P-tuning-v2
| Setting             | BoolQ | COPA | RTE  | WiC  | WSC  | Avg. | CoNLL03 | CoNLL04 | OntoNotes | Avg. |
|---------------------|-------|------|------|------|------|------|---------|---------|-----------|------|
| APT                 | 72.6  | 70.0 | 72.7 | 71.2 | 66.9 | 70.7 | 89.7    | 84.1    | 87.2      | 87.0 |
| w/o token-level α   | 72.6  | 69.0 | 69.9 | 70.8 | 65.8 | 69.6 | 89.5    | 83.7    | 87.2      | 86.8 |
| w/o layer-level λ   | 72.1  | 67.4 | 71.3 | 69.6 | 65.4 | 69.1 | 89.0    | 82.6    | 86.9      | 86.2 |
| w/o hidden states h | 72.0  | 68.8 | 68.7 | 70.2 | 64.6 | 68.9 | 89.1    | 83.6    | 87.1      | 86.6 |
Table 3: Ablation study on BERT-base for two different level gate mechanisms and the hidden states from the previous layer. **bold**: the best score.
| Model | BoolQ | COPA | RTE  | WiC  | WSC  | Avg. | CoNLL03 | CoNLL04 | OntoNotes | Avg. |
|-------|-------|------|------|------|------|------|---------|---------|-----------|------|
| PT-2  | 72.5  | 67.4 | 71.3 | 69.5 | 65.4 | 69.2 | 89.3    | 82.6    | 87.1      | 86.3 |
| PT-2* | 72.6  | 68.8 | 71.9 | 70.0 | 65.8 | 69.8 | 89.3    | 83.0    | 87.2      | 86.5 |
| PT-2+ | 72.8  | 65.4 | 69.1 | 71.1 | 65.8 | 68.8 | 89.4    | 83.2    | 87.1      | 86.6 |
| APT   | 72.6  | 70.0 | 72.7 | 71.2 | 66.9 | 70.7 | 89.7    | 84.1    | 87.2      | 87.0 |
Table 4: Comparison between PT-2 and PT-2∗, PT-2+ and APT on BERT-base. (PT-2: P-Tuning v2; PT-2∗: PT-2 with variable prefix; PT-2+: PT-2 with enlarged prefix)
Compared with vanilla fine-tuning, APT is comparable or even better on part of the tasks. In addition, we explore the performance under low-resource settings on the SuperGLUE benchmark. As shown in Table 2, APT is a better few-shot learner than P-Tuning v2, exceeding it by 4.2% and 3.4% in the 16-shot setting, and by 2.9% and 3.6% in the 32-shot setting, for BERT-base and BERT-large respectively.
## 4.3 Ablation Study
We conduct an ablation study in order to explore the separate effect of token-level gated weight α, layer-level scaled weight λ and the hidden states h from the previous layer which is used to calculate token-level gated weight α in Eq.(4). As shown in Table 3, it can be found that removing any strategy hurts the performance to varying degrees, demonstrating that they are all advantageous. Specifically, the beneficial effect of λ for APT is slightly greater than α overall. Besides, it is effective and meaningful to introduce the context (i.e. the hidden states h from the previous layer) when obtaining the gated weight, especially for SuperGLUE tasks.
## 4.4 Discussion

What is the prefix weight distribution learned by APT? The gate mechanism for the prefix serves as the key strategy of the proposed APT, where the learned prefix weight distribution turns out to be a critical point. Figure 2 illustrates the gate weights of the pseudo prefix tokens for COPA and CoNLL04,
respectively. It can be found that CoNLL04 is more concerned with the bottom layers of the language model, which are regarded as capturing phrase-level features, while COPA pays more attention to the higher layers, which encode more abstract semantic information. This observation is consistent with the characteristics of the corresponding tasks: NER is a token-level task, while COPA is a causal reasoning task sensitive to the semantics of sentences, which reminds us that it is worth placing varying numbers of prefix tokens on specific layers according to the task properties.
## Does Variable Prefix Work Better Than Fixed One?
To verify the effectiveness of the adaptive prefix under the proposed architecture, we ask whether the learned ratio at each layer can be directly transferred to P-Tuning v2. Taking the gate as a probing indicator, we reset the prefix length of P-Tuning v2 from fixed to variable across layers based on the observed learned ratio (e.g. the distribution shown in Figure 2). From the comparison between PT-2 and PT-2∗ in Table 4, we demonstrate that the variable prefix with fewer trainable parameters surprisingly outperforms the original implementation with a fixed prefix. Nonetheless, it is also worth noting that there is still a gap between P-Tuning v2 with a variable prefix and APT, since the latter continuously adjusts the prefix weights during training while the former is only initialized with a one-time mask probing.
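One way such a probing could be turned into per-layer prefix lengths is sketched below; the thresholding rule is our assumption, as the text only states that the lengths were set based on the observed gate distribution.

```python
import torch

def variable_prefix_lengths(alpha_per_layer, threshold=0.5, min_len=1):
    # alpha_per_layer: one tensor of learned gate weights (length l) per layer.
    lengths = []
    for alpha in alpha_per_layer:
        kept = int((alpha > threshold).sum().item())
        lengths.append(max(kept, min_len))
    return lengths

gates = [torch.rand(16) for _ in range(12)]   # dummy gates for a 12-layer encoder
print(variable_prefix_lengths(gates))
```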
Does the adaptive structure benefit the fine-tuning? Compared to P-Tuning v2, APT learns extra gated and scaled weights. To figure out whether the improvement of APT comes from more trainable parameters or from the adaptive model structure, we adjust the hyper-parameter, i.e., enlarge the prefix length of P-Tuning v2 by 1.5 times to align the number of parameters with our APT. As shown in the comparison between PT-2+ and APT in Table 4, we observe that APT still outperforms the enlarged P-Tuning v2 by 1.9% and 0.4% on average for SuperGLUE and NER tasks respectively, validating the superiority of the gate mechanism.
## 5 Conclusion
In this paper, we investigate prefix tuning and assume that an adaptive prefix is probably more efficient and effective than a fixed one. Firstly, we propose APT, which leverages token-level and layer-level gate mechanisms and achieves a performance improvement over the original prefix tuning. Then, we illustrate the weight distribution learned by APT and take it as a probe, which validates that the variable prefix can work better than the fixed one. The above experiments and analysis demonstrate that the adaptive prefix can serve as a promising strategy for parameter-efficient fine-tuning.
## Limitations
The proposed approach in this paper also suffers from certain limitations, i.e. we only adapt APT to encoder models and lack designs for other architectures such as decoder-only and encoder-decoder models.
In addition, it is better to generalize the key idea to other parameter-efficient learning approaches. A unified solution for existing work may be worth exploring in the future.
## References
Elad Ben Zaken, Yoav Goldberg, and Shauli Ravfogel.
2022. BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2:
Short Papers), pages 1–9, Dublin, Ireland. Association for Computational Linguistics.
Xavier Carreras and Lluís Màrquez. 2004. Introduction to the CoNLL-2004 shared task: Semantic role labeling. In *Proceedings of the Eighth Conference on* Computational Natural Language Learning (CoNLL2004) at HLT-NAACL 2004, pages 89–97, Boston, Massachusetts, USA. Association for Computational Linguistics.
Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019a. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long and Short Papers), pages 2924–2936, Minneapolis, Minnesota. Association for Computational Linguistics.
Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019b. What does BERT
look at? an analysis of BERT's attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP:
Analyzing and Interpreting Neural Networks for NLP,
pages 276–286, Florence, Italy. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Angela Fan, Edouard Grave, and Armand Joulin. 2020.
Reducing transformer depth on demand with structured dropout. In *International Conference on Learning Representations*.
Demi Guo, Alexander Rush, and Yoon Kim. 2021.
Parameter-efficient transfer learning with diff pruning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4884–4896, Online. Association for Computational Linguistics.
Junxian He, Chunting Zhou, Xuezhe Ma, Taylor BergKirkpatrick, and Graham Neubig. 2022. Towards a unified view of parameter-efficient transfer learning.
In *International Conference on Learning Representations*.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. arXiv preprint arXiv:2006.03654.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019.
Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 2790–2799.
PMLR.
Ganesh Jawahar, Benoît Sagot, and Djamé Seddah.
2019. What does BERT learn about the structure of language? In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*,
pages 3651–3657, Florence, Italy. Association for Computational Linguistics.
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021.
The power of scale for parameter-efficient prompt tuning. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 3045–3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Hector Levesque, Ernest Davis, and Leora Morgenstern.
2012. The winograd schema challenge. In Thirteenth international conference on the principles of knowledge representation and reasoning.
Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning:
Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582–
4597, Online. Association for Computational Linguistics.
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Tam, Zhengxiao Du, Zhilin Yang, and Jie Tang. 2022. P-tuning:
Prompt tuning can be comparable to fine-tuning across scales and tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 61–68, Dublin, Ireland. Association for Computational Linguistics.
Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021. Gpt understands, too. *arXiv:2103.10385*.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Nafise Sadat Moosavi, Quentin Delfosse, Kristian Kersting, and Iryna Gurevych. 2022. Adaptable Adapters.
In Proceedings of the 2022 Annual Conference of
the North American Chapter of the Association for Computational Linguistics, Seattle, WA, USA. Association for Computational Linguistics.
Mohammad Taher Pilehvar and Jose Camacho-Collados.
2019. WiC: the word-in-context dataset for evaluating context-sensitive meaning representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1267–1273, Minneapolis, Minnesota. Association for Computational Linguistics.
Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S Gordon. 2011. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In *AAAI spring symposium: logical formalizations of commonsense reasoning*, pages 90–95.
Andreas Rücklé, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, and Iryna Gurevych. 2021. AdapterDrop: On the efficiency of adapters in transformers. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 7930–7946, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019.
BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593–
4601, Florence, Italy. Association for Computational Linguistics.
Erik F. Tjong Kim Sang and Fien De Meulder.
2003. Introduction to the CoNLL-2003 shared task:
Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142–
147.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. In *Advances in Neural Information* Processing Systems, volume 32. Curran Associates, Inc.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE:
A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics.
Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, and Ann Houston. 2013. OntoNotes Release 5.0.
Abacus Data Network.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
## A Experimental Details
Datasets In the full-data setting, all train-dev-test splits follow P-Tuning v2 (Liu et al., 2022). In the low-resource setting, to generate k-shot (k = 16, 32) datasets on SuperGLUE, a fixed set of random seeds [11, 21, 42, 87, 100] is used to sample instances from the training and development sets, while the entire development set is treated as the test set; the average performance is reported in Table 2.
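The sampling procedure could be implemented roughly as follows; this is a hedged sketch under our reading of the setup, and the function names and data structures are illustrative rather than taken from the released code.

```python
import random

SEEDS = [11, 21, 42, 87, 100]

def sample_k_shot(examples, k, seed):
    """Draw k examples with a fixed seed so the split is reproducible."""
    rng = random.Random(seed)
    return rng.sample(list(examples), k)

def build_low_resource_splits(train_set, dev_set, k=16):
    """One k-shot train/dev subset per seed; the full dev set acts as test."""
    splits = []
    for seed in SEEDS:
        splits.append({
            "train": sample_k_shot(train_set, k, seed),
            "dev": sample_k_shot(dev_set, k, seed),
            "test": list(dev_set),
        })
    return splits  # downstream results are averaged over the five seeds
```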
Experimental Setting We grid-search the learning rate over [5e-3, 7e-3, 1e-2, 1e-4], the number of training epochs over [20, 40, 60, 80, 100, 120], the batch size over [8, 16, 32], and the random seed over [11, 21, 42, 87, 100]. For a fair comparison, the prefix length used by APT is consistent with P-Tuning v2. In the low-resource setting, we use a batch size of 2.
In Eq. (4), we take the hidden state of the first input token as the representation of the previous layer.
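For concreteness, the grid search over these ranges might look like the following sketch; `run_experiment` is a placeholder for training and evaluating one configuration and is not part of the original codebase.

```python
import itertools

LEARNING_RATES = [5e-3, 7e-3, 1e-2, 1e-4]
EPOCHS = [20, 40, 60, 80, 100, 120]
BATCH_SIZES = [8, 16, 32]
SEEDS = [11, 21, 42, 87, 100]

def grid_search(run_experiment):
    """Try every configuration and keep the best development score."""
    best_score, best_config = float("-inf"), None
    for lr, epochs, bs, seed in itertools.product(
            LEARNING_RATES, EPOCHS, BATCH_SIZES, SEEDS):
        score = run_experiment(lr=lr, epochs=epochs, batch_size=bs, seed=seed)
        if score > best_score:
            best_score, best_config = score, (lr, epochs, bs, seed)
    return best_score, best_config
```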
Experimental Computation We use the pre-trained models BERT-base (110M parameters), BERT-large (335M parameters), RoBERTa-large (355M parameters), and DeBERTa-xlarge (750M parameters). We conduct experiments on NVIDIA V100 or A100 GPUs for each task.
## B Further Ablation Results
We report further ablation results on BERT-large and RoBERTa-large in Table 5. The beneficial impact of the three strategies can again be observed, and the overall findings are consistent with those for BERT-base in Section 4.3.
![6_image_0.png](6_image_0.png)
## C Prefix Length
The prefix length is an important hyper-parameter for prefix tuning and APT. Figure 3 illustrates the performance of APT and P-Tuning v2 over a range of prefix lengths. APT is superior to P-Tuning v2 in most prefix-length settings, indicating that APT achieves better performance over a relatively wider range of prefix lengths.
## D Scientific Artifacts
We use datasets from the SuperGLUE benchmark (Wang et al., 2019), including BoolQ (Clark et al., 2019a), COPA (Roemmele et al., 2011), RTE (Wang et al., 2018), WiC (Pilehvar and Camacho-Collados, 2019), and WSC (Levesque et al., 2012), as well as three Named Entity Recognition (NER) tasks: CoNLL03 (Tjong Kim Sang and De Meulder, 2003), CoNLL04 (Carreras and Màrquez, 2004), and OntoNotes 5.0 (Weischedel et al., 2013). The pre-trained models we use are BERT-base/large (Devlin et al., 2019), RoBERTa-large (Liu et al., 2019), and DeBERTa-xlarge (He et al., 2020). We use HuggingFace Transformers (Wolf et al., 2020) and P-Tuning v2 (Liu et al., 2022), implemented in PyTorch, as the codebase. They are all open-source, and we use them only for academic research, which is consistent with their intended use.
Table 5: Ablation results on BERT-large and RoBERTa-large.

| Model | Setting | BoolQ | COPA | RTE | WiC | WSC | SuperGLUE Avg. | CoNLL03 | CoNLL04 | OntoNotes | NER Avg. |
|---------------|---------------------|-------|------|------|------|------|----------------|---------|---------|-----------|----------|
| BERT-large | APT | 76.0 | 79.0 | 79.4 | 75.1 | 70.2 | 75.9 | 90.7 | 85.8 | 88.6 | 88.4 |
| BERT-large | w/o token-level α | 75.8 | 77.0 | 77.3 | 74.8 | 68.3 | 74.6 | 91.1 | 84.4 | 88.5 | 88.0 |
| BERT-large | w/o layer-level λ | 75.4 | 74.0 | 76.9 | 74.6 | 68.3 | 73.8 | 90.7 | 83.7 | 88.4 | 87.6 |
| BERT-large | w/o hidden states h | 74.7 | 76.0 | 75.8 | 74.6 | 68.3 | 73.9 | 91.2 | 84.0 | 88.6 | 87.9 |
| RoBERTa-large | APT | 84.8 | 94.0 | 89.9 | 74.6 | 68.3 | 82.3 | 92.7 | 89.0 | 89.8 | 90.5 |
| RoBERTa-large | w/o token-level α | 84.3 | 88.0 | 88.1 | 73.0 | 65.4 | 79.8 | 92.2 | 88.7 | 89.5 | 90.1 |
| RoBERTa-large | w/o layer-level λ | 84.7 | 88.0 | 86.3 | 72.1 | 64.4 | 79.1 | 92.0 | 88.7 | 89.8 | 90.2 |
| RoBERTa-large | w/o hidden states h | 83.9 | 91.0 | 87.0 | 72.9 | 64.4 | 79.8 | 92.2 | 88.7 | 89.4 | 90.1 |
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
section limitations A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
section abstract and section 1 introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4.1
✓ B1. Did you cite the creators of artifacts you used?
section 4.1
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
section D Scientific Artifacts
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? section D Scientific Artifacts
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We use open-source datasets and do not change datasets for a fair comparison.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✗ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
It can be found in the cited paper.
## C ✓ **Did You Run Computational Experiments?** Section 4 Experiments
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Table 1 and section appendix A Experimental Computation
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? section appendix A Experimental Details
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Table 2 report the mean and std results.
✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
We follow the existing work and keep consistent with them.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
fatemi-etal-2023-improving | Improving Gender Fairness of Pre-Trained Language Models without Catastrophic Forgetting | https://aclanthology.org/2023.acl-short.108 | Existing studies addressing gender bias of pre-trained language models, usually build a small gender-neutral data set and conduct a second phase pre-training on the model with such data. However, given the limited size and concentrated focus of the gender-neutral data, catastrophic forgetting would occur during second-phase pre-training. Forgetting information in the original training data may damage the model{'}s downstream performance by a large margin. In this work, we empirically show that catastrophic forgetting occurs in such methods by evaluating them with general NLP tasks in GLUE. Then, we propose a new method, GEnder Equality Prompt (GEEP), to improve gender fairness of pre-trained models with less forgetting. GEEP freezes the pre-trained model and learns gender-related prompts with gender-neutral data. Empirical results show that GEEP not only achieves SOTA performances on gender fairness tasks, but also forgets less and performs better on GLUE by a large margin. | # Improving Gender Fairness Of Pre-Trained Language Models Without Catastrophic Forgetting
Zahra Fatemi1, Chen Xing2, Wenhao Liu2, Caiming Xiong2 1Department of Computer Science, University of Illinois Chicago 2Salesforce Research [email protected]
{cxing,wenhao.liu,cxiong}@salesforce.com
## Abstract
Existing studies addressing gender bias of pretrained language models, usually build a small gender-neutral data set and conduct a second phase pre-training on the model with such data.
However, given the limited size and concentrated focus of the gender-neutral data, catastrophic forgetting would occur during secondphase pre-training. Forgetting information in the original training data may damage the model's downstream performance by a large margin. In this work, we empirically show that catastrophic forgetting occurs in such methods by evaluating them with general NLP tasks in GLUE. Then, we propose a new method, GEnder Equality Prompt (GEEP), to improve gender fairness of pre-trained models with less forgetting. GEEP freezes the pre-trained model and learns gender-related prompts with genderneutral data. Empirical results show that GEEP
not only achieves SOTA performances on gender fairness tasks, but also forgets less and performs better on GLUE by a large margin.
## 1 Introduction
Pre-trained language models, e.g., BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), have shown competitive performance in a wide variety of NLP downstream applications. However, such models are often prone to exhibit gender bias
(de Vassimon Manela et al., 2021; Zhao et al., 2019; Webster et al., 2020), due to their large scale unsupervised training data from the web (Liu et al.,
2019; Brown et al., 2020). Gender bias refers to unbalanced model behaviors with respect to a specific gender (Cheng et al., 2020). Among various gender-biased behaviours of pre-trained models, bias on professions is the most prominent and wellstudied (de Vassimon Manela et al., 2021; Vig et al., 2020; Qian et al., 2019; Zhao et al., 2019). For example, in coreference resolution tasks, a pre-trained model would predict female pronoun and names for professions like "nurse" and "housekeeper",
while predicting male pronouns for "computer programmer" or "doctor" (Kurita et al., 2019). Pre-trained models also do not actively prefer gender-neutral pronouns, which is unfair to other gender identities beyond males/females (Deutsch and Buchholz, 2015).
Given the large model size and tremendous time complexity of language model pre-training, training a gender-neutral model from scratch with manually filtered data seems impossible for most organizations. Due to this limitation, existing studies usually build a relatively small gender-neutral data set (for example, a data set that has more balanced gender pronouns for profession names),
and conduct second phase pre-training on the pretrained model with such data (Webster et al., 2020; de Vassimon Manela et al., 2021). However, given the limited size of the gender-neutral data and its potential distributional mismatch with the original pre-training data, *catastrophic forgetting* can occur during second-phase pre-training of such methods. Catastrophic forgetting (Kirkpatrick et al.,
2017) is a long-standing problem which illustrates the tendency of a neural network to forget previously learned information upon learning new information. When it comes to further training a pretrained model, using the small gender-neutral data to update the entire massive model could make the model forget the diverse information from the original pre-training data, which damages the model's downstream performance by a large margin.
In this paper, we first empirically verify that further updating a pre-trained model (such as RoBERTa (Liu et al., 2019)) with manually-built gender-neutral data can cause catastrophic forgetting. We follow existing work and build our profession-related gender-neutral data set by filtering out Wikipedia sentences mentioning professions and swapping their gender related pronouns.
We find that although our gender-neutral data is from Wikipedia, which is part of RoBERTa's pre-training data, the model's performance on downstream tasks in GLUE (Wang et al., 2018) still drops by a considerable margin after second-phase pre-training, due to the smaller size and more concentrated focus of the gender-neutral data.
Therefore, we propose a new method, GEnder Equality Prompt (GEEP), to alleviate gender bias of pre-trained models without catastrophic forgetting. Specifically, inspired by recent prompt-tuning methods (Lester et al., 2021) for fine-tuning large pre-trained models, GEEP freezes the entire model and adds and updates new word embeddings of professions as gender equality prompts, instead of updating all model parameters during second-phase pre-training as in previous methods. Since all the pre-trained parameters are frozen during further training, the diverse information from the original training data preserved in the pre-trained parameters is not erased, so forgetting can be alleviated to a large extent. Moreover, since the embeddings of professions are re-initialized when debiasing training starts, the gender bias from previous data that is embedded in such representations is already removed before second-phase pre-training.
Therefore, GEEP also improves gender fairness of the model more effectively with much fewer iterations. Empirical results show that GEEP not only achieves state-of-the-art performances with fewer iterations on various gender fairness tasks such as pronoun coreference resolution, but also forgets less and achieves better results on GLUE tasks.
## 2 Related Work
Compared with the existing work focusing on quantifying and alleviating gender bias (Bolukbasi et al., 2016; Caliskan et al., 2017; Zhao et al., 2018b; Gonen and Goldberg, 2019; Sun et al., 2019; Garg et al., 2018; Zhao et al., 2018a; Bolukbasi et al.,
2016; Zhao et al., 2018b) in standard word embedding models, such as word2vec (Mikolov et al.,
2013) and GloVe (Pennington et al., 2014), gender bias in large pre-trained language models seems less studied. Recent work on gender fairness of pre-trained language models, such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019), mostly focus on showing and measuring the gender bias embedded in such models (Zhao et al.,
2019; Tan and Celis, 2019). These studies propose metrics to quantify gender bias in pre-trained language models (de Vassimon Manela et al., 2021; Tan and Celis, 2019; Webster et al., 2018; Kurita et al., 2019). In our work, we employ such methods to evaluate GEEP and baseline methods on improving gender fairness. Existing works focusing on mitigating gender bias of pre-trained models usually collect and build gender-neutral data on their own and conduct a second phase pre-training on the released pre-trained model (Webster et al., 2020; de Vassimon Manela et al., 2021; Cheng et al.,
2020). In this work, we demonstrate empirically that the performance of the debiased model on general downstream tasks such as GLUE, still drops by a considerable margin after such second-phase pre-training. Then, given this phenomenon, we propose GEEP to alleviate gender bias in pre-trained models without forgetting.
## 3 Improving Gender Fairness Without Forgetting
In this section, we first describe the gender-neutral collection method we adopt from existing methods and the forgetting issue in such methods. Then we describe the proposed method GEnder Equality Prompt (GEEP).
## 3.1 Profession-Related Gender-Neutral Data Collection
We follow existing work to build a professionrelated gender neutral data set since professionrelated gender bias is a relatively well-studied aspect of gender bias. To construct profession-related data with equal numbers of references to male and female genders, we adopt the data filtering method by (Zhao et al., 2018a) on the English Wikipedia corpus. Specifically, we filter Wikipedia for sentences containing at least one profession that is supposed to be gender-neutral but generally viewed with gender bias, e.g., nurse, defined by (Bolukbasi et al., 2016). For each of these sentences, we swap the gendered terms with their opposite genders (such as "Man" →"Woman", "he"→"she",
and vice-versa). We also provide an analysis of the processed data in Appendix B.8. Our dataset includes both the original profession-related sentences and their gender-swapped counterparts. We get 6.1GB of profession-related gender-neutral text data. Compared with the original pre-training data of RoBERTa (160GB in text size from various sources), the gender-neutral data we have is smaller and less diverse.
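A minimal sketch of the swapping step is shown below, assuming a small illustrative list of gendered word pairs and professions; the actual pipeline uses the full published lists, and pairs with ambiguous back-mappings (e.g., his/her) need extra care and are omitted here.

```python
import re

# Illustrative subsets only; the paper relies on the full published lists.
GENDER_PAIRS = [("he", "she"), ("man", "woman"), ("father", "mother"),
                ("uncle", "aunt"), ("husband", "wife")]
PROFESSIONS = {"nurse", "doctor", "housekeeper", "programmer"}

SWAP = {}
for a, b in GENDER_PAIRS:
    SWAP[a], SWAP[b] = b, a

def swap_gendered_terms(sentence):
    """Replace each gendered token with its opposite-gender counterpart.

    Capitalization handling is omitted for brevity.
    """
    tokens = re.findall(r"\w+|\W+", sentence)
    return "".join(SWAP.get(tok.lower(), tok) for tok in tokens)

def augment(sentences):
    """Keep profession-related sentences plus their gender-swapped copies."""
    out = []
    for s in sentences:
        if any(p in s.lower() for p in PROFESSIONS):
            out.extend([s, swap_gendered_terms(s)])
    return out

print(augment(["The nurse said he was tired."]))
# ['The nurse said he was tired.', 'The nurse said she was tired.']
```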
After the gender-neutral data set is built, a common approach to mitigate gender bias in pre-trained
![2_image_0.png](2_image_0.png)
language models is to conduct second-phase pretraining to update all model parameters with this data set. We refer to such methods as *SPPA*
(Second-Phase Pre-training for All parameters). In Section 4, we empirically show that SPPA methods lead to forgetting and the model's performance on NLP benchmark GLUE drops by a large margin.
## 3.2 Gender Equality Prompt Approach
To alleviate forgetting while mitigating gender bias in pre-trained language models, we propose GEnder Equality Prompt (GEEP). In GEEP, instead of updating all model parameters during secondphase pre-training, we freeze all of the pre-trained model parameters and add new trainable embeddings for profession names as gender equality prompts. Since all previous pre-trained parameters are frozen, diverse information from original massive pre-training data that are memorized by the pre-trained parameters wouldn't be erased. Therefore, the forgetting of information from the original training data can be alleviated to the fullest extent.
Let X = {x1, x2, ..., xn} denote the original vocabulary of the pre-trained model and Wx ∈ R^{n×d} be the original pre-trained token embedding matrix of the model with embedding dimension d. Given a set of m profession names, {p1, p2, ..., pm}, we build an embedding matrix Wp ∈ R^{m×d} where the embedding of each token is initialized randomly. To obtain an integrated word embedding matrix, we concatenate Wx and Wp as Wemb = Concat(Wx, Wp).
We note that we concatenate them along the word/token dimension rather than in the embedding space; after concatenation, the model's hidden representation size remains unchanged. During both second-phase pre-training and the training/inference after that, whenever a profession occurs, we only update/use its new embedding in Wp. We show the comparison between GEEP and other second-phase pre-training methods in Figure 1.
Given all the pre-trained model's frozen parameters Wwhole that contains Wx, the objective function of second-phase pre-training of GEEP is,
$$\mathcal{L}(\mathbf{x}_{\text{masked}}|\mathbf{x}_{\text{context}},\,\mathbf{W}_{\text{whole}})\tag{1}$$ $$=\frac{1}{N_{\text{mask}}}(\sum_{t=1}^{N_{\text{mask}}}-\log p_{\theta}(x_{t}|\mathbf{x}_{\text{context}},\,\mathbf{W}_{\text{whole}})).\tag{2}$$
Nmask is the number of masked positions in the input sequence x. With such an objective, Wp is updated with gender-neutral data. Moreover, since the embeddings of professions are re-initialized when debiasing training starts in GEEP, gender bias from previous data that is embedded in such representations is already erased before second-phase pre-training. Therefore, it is also easier for GEEP
to debias the model during further pre-training. We note that GEEP can lead to a slight increase of the original model's parameter size. We report the scale of the increase and its effect in Appendix B.7.
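To make the embedding-level change concrete, here is a minimal PyTorch-style sketch of how the frozen token embedding Wx can be wrapped with trainable profession rows Wp. The class and variable names are ours, and integration details with a specific RoBERTa implementation (e.g., whether the output softmax also uses the new rows) are simplified, so this is a sketch rather than the released implementation.

```python
import torch
import torch.nn as nn

class GeepEmbeddings(nn.Module):
    """Frozen token embedding (W_x) plus trainable profession rows (W_p)."""

    def __init__(self, frozen_embedding: nn.Embedding, profession_token_ids, std=0.2):
        super().__init__()
        self.frozen = frozen_embedding
        self.frozen.weight.requires_grad = False            # W_x stays fixed
        hidden = frozen_embedding.embedding_dim
        self.prompt = nn.Embedding(len(profession_token_ids), hidden)
        nn.init.normal_(self.prompt.weight, std=std)         # re-initialized W_p
        # Map each profession token id to its row in W_p (-1 means "not a profession").
        remap = torch.full((frozen_embedding.num_embeddings,), -1, dtype=torch.long)
        for row, tok_id in enumerate(profession_token_ids):
            remap[tok_id] = row
        self.register_buffer("remap", remap)

    def forward(self, input_ids):
        frozen_emb = self.frozen(input_ids)
        rows = self.remap[input_ids]
        prompt_emb = self.prompt(rows.clamp(min=0))           # safe lookup, masked next
        use_prompt = (rows >= 0).unsqueeze(-1)
        return torch.where(use_prompt, prompt_emb, frozen_emb)
```

In use, this module would replace the model's input word-embedding layer while every other parameter of the pre-trained encoder stays frozen, so only the new profession-prompt weights receive gradients during second-phase pre-training.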
## 4 Experiments
In this section, we present the results of GEEP and its baselines to show that GEEP achieves state-ofthe-art performances on gender fairness tasks while alleviating the forgetting issue of the baselines.
## 4.1 Experimental Setup
In our experiments, we mainly use the publicly released RoBERTa-base model as the pre-trained model. We have also conducted experiments on publicly released BERT during preliminary explorations. Details on BERT experiments are in Appendix B.9. Given a pre-trained RoBERTa-base model, we compare GEEP with two main baselines.
The first baseline is the pre-trained RoBERTa-base model without any further training.

Table 1: Results on the coreference resolution task.
| Data | RoBERTa | SPPA | GEEP |
|------------|-----------|--------|--------|
| Winogender | 50.9 | 57.3 | 64.5 |
| WSC | 50.1 | 50.9 | 52.7 |
| DPR/WSCR | 50.8 | 51.1 | 53.6 |

Table 2: Performance of RoBERTa, SPPA, and GEEP on GLUE tasks.

| Task | RoBERTa | SPPA | GEEP |
|-------|---------|------|----------|
| MNLI | 87.7 | 87.2 | **87.7** |
| QNLI | 92.4 | 92.4 | **92.4** |
| QQP | 91.8 | 91.3 | **91.7** |
| SST-2 | 95.4 | 94.7 | **95.4** |
| CoLA | 64.1 | 38.9 | **50.5** |
| MRPC | 91.4 | 88.8 | **89.8** |
| RTE | 78.4 | 60.2 | **68.7** |
| STS-B | 90.7 | 88.3 | **89.9** |
| AVG | 86.5 | 80.2 | **83.3** |
The other important type of baseline is SPPA. For a fair comparison, our SPPA baseline uses the same gender-neutral data set that we construct for GEEP
(details in Section 3.2) to further update all model parameters of the pre-trained RoBERTa-base. The detailed hyper-parameter settings of GEEP and SPPA can be found in Appendix B.1.
## 4.2 Evaluation Tasks

To assess gender fairness, we conduct pronoun coreference resolution experiments on different data sets: Winogender (Rudinger et al., 2018),
Winograd Schema Challenge (WSC) (Levesque et al., 2012), and Definite Pronoun Resolution
(DPR) (Rahman and Ng, 2012). Pronoun coreference resolution is the task of linking pronouns with their references in a text. In order to resolve a pronoun accurately, a model needs to overcome the biased link between gender and profession (e.g., the assumption that nurses are female) and instead make the decision based on available linguistic cues. Therefore, better performance on pronoun coreference resolution usually indicates less gender bias preserved in the model (Kurita et al., 2019).
Detailed setups of this experiment can be found in Appendix B.2. Additionally, we also qualitatively and quantitatively evaluate our method on direct pronoun prediction. Details of this experiment are in Appendix B.4. We note that given all existing tasks are designed for binary gender pronouns, we are unable to include all existing gender identities in our main experiments. We present an analysis on more gender identities in Appendix B.6.
To evaluate how much each debiased model forgets after second-phase pre-training, we report the performances of the debiased models on GLUE
benchmark (Wang et al., 2018). Detailed setups of this experiment can be found in Appendix B.3.
## 4.3 Results
We first show the pronoun coreference resolution results of different models on three datasets in Table 1. The results show that the GEEP model obtains the best accuracy compared to the other models, especially on the Winogender dataset, where the candidate nouns are professions. We also conduct an ablation study to show the effect of the total number of training iterations on the performance of both methods.
We find that GEEP can improve the model's performance with a significantly smaller number of training iterations. Details are in Appendix B.1.
Then we show in Table 2 the performance of different models on 8 GLUE tasks, to see how severe the forgetting issue is after the second-phase training of SPPA and GEEP. Compared with RoBERTa, SPPA suffers from forgetting on 7 out of 8 tasks, QNLI being the exception. For tasks like CoLA and RTE, the model's performance drops significantly (more than 10 points) after SPPA. Tasks with larger fine-tuning data sets, such as MNLI, QQP, and SST-2, are less vulnerable to the quality of pre-training (Wu et al., 2020; Joshi et al., 2020).
Therefore, SPPA's performance drop on such data sets is less significant. GEEP mitigates the forgetting issue of SPPA on all sub-tasks. Since GEEP discards the original pre-trained profession embeddings and uses a smaller data set to update the new profession embeddings, the forgetting problem cannot be fully avoided; still, GEEP achieves an average GLUE score of 83.3, significantly outperforming SPPA. We also include an empirical analysis of the reasons behind SPPA's GLUE performance drop in Appendix B.5.
## 5 Closing Remarks
In this paper, we proposed GEEP to improve gender fairness of pre-trained language models with less catastrophic forgetting. For a fair comparison with existing work in the current gender fairness literature, we mainly conduct experiments with profession-related gender-neutral data, because profession-related gender bias is relatively well studied so far. The good empirical results indicate that it is worth applying GEEP to other, more challenging and under-explored aspects of gender fairness, which we leave as future work.
## References
Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. *Advances* in neural information processing systems, 29:4349–
4357.
Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *arXiv preprint arXiv:2005.14165*.
Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan.
2017. Semantics derived automatically from language corpora contain human-like biases. *Science*,
356(6334):183–186.
Pengyu Cheng, Weituo Hao, Siyang Yuan, Shijing Si, and Lawrence Carin. 2020. Fairfil: Contrastive neural debiasing method for pretrained text encoders. In International Conference on Learning Representations.
Brandon Darr and Tyler Kibbey. 2016. Pronouns and thoughts on neutrality: Gender concerns in modern grammar. *Pursuit-The Journal of Undergraduate* Research at the University of Tennessee, 7(1):10.
Daniel de Vassimon Manela, David Errington, Thomas Fisher, Boris van Breugel, and Pasquale Minervini.
2021. Stereotype and skew: Quantifying gender bias in pre-trained and fine-tuned language models. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2232–2242.
Madeline B Deutsch and David Buchholz. 2015.
Electronic health records and transgender patients—practical recommendations for the collection of gender identity data. Journal of general internal medicine, 30(6):843–847.
J. Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL.
Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. *Proceedings* of the National Academy of Sciences, 115(16):E3635– E3644.
Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 609–614.
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert:
Improving pre-training by representing and predicting spans. *Transactions of the Association for Computational Linguistics*, 8:64–77.
James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. 2017. Overcoming catastrophic forgetting in neural networks.
Proceedings of the national academy of sciences, 114(13):3521–3526.
V Kocijan, O-M Camburu, A-M Cretu, Y Yordanov, P Blunsom, and T Lukasiewicz. 2019. Wikicrem:
A large unsupervised corpus for coreference resolution. volume D19-1, page 4294–4303. Association for Computational Linguistics.
Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring bias in contextualized word representations. In *Proceedings of the* First Workshop on Gender Bias in Natural Language Processing, pages 166–172.
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021.
The power of scale for parameter-efficient prompt tuning. *arXiv preprint arXiv:2104.08691*.
Hector Levesque, Ernest Davis, and Leora Morgenstern.
2012. The winograd schema challenge. In *Thirteenth International Conference on the Principles of* Knowledge Representation and Reasoning.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta:
A robustly optimized bert pretraining approach.
ArXiv, abs/1907.11692.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality.
In *Advances in neural information processing systems*, pages 3111–3119.
Jeffrey Pennington, Richard Socher, and Christopher D
Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing
(EMNLP), pages 1532–1543.
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In *Proceedings of the 2018 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237.
Yusu Qian, Urwa Muaz, Ben Zhang, and Jae Won Hyun.
2019. Reducing gender bias in word-level language models with a gender-equalizing loss function. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 223–228.
Altaf Rahman and Vincent Ng. 2012. Resolving complex cases of definite pronouns: the winograd schema challenge. In *Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language* Processing and Computational Natural Language Learning, pages 777–789.
Christina Richards, Walter Pierre Bouman, Leighton Seal, Meg John Barker, Timo O Nieder, and Guy T'Sjoen. 2016. Non-binary or genderqueer genders.
International Review of Psychiatry, 28(1):95–102.
Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, New Orleans, Louisiana. Association for Computational Linguistics.
Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang.
2019. Mitigating gender bias in natural language processing: Literature review. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1630–1640.
Tony Sun, Kellie Webster, Apu Shah, William Yang Wang, and Melvin Johnson. 2021. They, them, theirs: Rewriting with gender-neutral english. *arXiv* preprint arXiv:2102.06788.
Yi Chern Tan and L. Elisa Celis. 2019. Assessing social and intersectional biases in contextualized word representations. In *NeurIPS*.
Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Yaron Singer, and Stuart Shieber. 2020. Investigating gender bias in language models using causal mediation analysis. In Advances in Neural Information Processing Systems, volume 33, pages 12388–12401. Curran Associates, Inc.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE:
A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics.
Kellie Webster, Marta Recasens, Vera Axelrod, and Jason Baldridge. 2018. Mind the gap: A balanced corpus of gendered ambiguous pronouns. *Transactions of the Association for Computational Linguistics*, 6:605–617.
Kellie Webster, Xuezhi Wang, Ian Tenney, Alex Beutel, Emily Pitler, Ellie Pavlick, Jilin Chen, Ed Chi, and Slav Petrov. 2020. Measuring and reducing gendered correlations in pre-trained models. arXiv preprint arXiv:2010.06032.
Qiyu Wu, Chen Xing, Yatao Li, Guolin Ke, Di He, and Tie-Yan Liu. 2020. Taking notes on the fly helps language pre-training. In *International Conference* on Learning Representations.
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1
(Long and Short Papers), pages 629–634.
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018a. Gender bias in coreference resolution: Evaluation and debiasing methods. In *Proceedings of the 2018 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15–20, New Orleans, Louisiana. Association for Computational Linguistics.
Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and KaiWei Chang. 2018b. Learning gender-neutral word embeddings. In *Proceedings of the 2018 Conference* on Empirical Methods in Natural Language Processing, pages 4847–4853.
## A Limitations
In this paper, we focus only on investigating and improving gender fairness of pre-trained language models and do not address other fairness issues, given the length of the paper. However, we would like to note that once other fairness issues in human language are investigated more deeply and the biased words related to them are identified more specifically, GEEP can be directly applied to address those fairness problems in large pre-trained language models.
## B Appendix B.1 Hyper-Parameters For Sppa And Geep
For the main second-phase pre-training results of GEEP and SPPA presented in the paper, we further train RoBERTa-base for 100,000 steps with our gender-neutral data. We use an AdamW optimizer with a learning rate of 1e-5, a max_seq_length of 128, and a batch size of 256. For GEEP, we initialize the embedding of every profession prompt from a normal distribution with a standard deviation of 0.2.
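A hedged sketch of the corresponding optimizer setup is given below. It assumes a masked-LM model whose backbone parameters have already been frozen, so that only the new profession embeddings require gradients; the helper names are ours, not part of the released code.

```python
from torch.optim import AdamW

def make_geep_optimizer(model, lr=1e-5):
    """AdamW over the trainable parameters only; the frozen backbone is skipped."""
    trainable = [p for p in model.parameters() if p.requires_grad]
    return AdamW(trainable, lr=lr)

def training_step(model, optimizer, batch):
    """One second-phase pre-training step with the masked-LM objective."""
    outputs = model(**batch)       # batch holds input_ids, attention_mask, labels
    outputs.loss.backward()        # gradients reach only the profession embeddings
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()
```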
Alongside the final results, we also evaluate SPPA and GEEP during second-phase pre-training. In Table 3 we show SPPA and GEEP's performance on pronoun coreference resolution at 20k and 50k iterations. From Table 3 we can see that GEEP improves the pre-trained model's gender fairness with a much smaller number of iterations. At 20k iterations, GEEP's performance is already better than SPPA's final performance on all 3 tasks. At 50k iterations, GEEP's performance has almost converged to its final scores on all 3 tasks, while SPPA's performance is still far behind its final performance on Winogender and WSC.
## B.2 Pronoun Coreference Resolution Experiment Setup
Pronoun Coreference Resolution is the task of linking the pronouns with their references in a text.
Studies show that BERT's performance decreases in a text where the gender pronoun is female and the topic is biased towards the male gender (Kurita et al., 2019). To assess the performance of different models in pronoun coreference, we fine-tune our models with the GAP data set (Webster et al., 2018).
We fine-tune each model for one epoch with a training batch size of 64 and a learning rate of 5.0e-6.
After fine-tuning, we evaluate the performance of different models on three data sets:
- Winogender: This dataset includes 1,584 sentences with three mentions: a profession, a participant, and a pronoun, where the pronoun refers to either the profession or the participant (Rudinger et al., 2018).
- WSC: The Winograd Schema Challenge
(WSC) incorporates 273 sentences used for commonsense reasoning for resolution
(Levesque et al., 2012).
- DPR: The Definite Pronoun Resolution (DPR)
corpus with 131 test sentences contains exam-
ples with two noun phrases and a pronoun or possessive adjective referring to one of the noun phrases (Rahman and Ng, 2012).
## B.3 GLUE Experiment Setup
To evaluate how much each debiased model forgets after second-phase pre-training, we fine-tune the pre-trained models on GLUE (General Language Understanding Evaluation) to evaluate the performance of the pre-trained models. We follow previous work to use eight tasks in GLUE, including CoLA, RTE, MRPC, STS, SST, QNLI, QQP, and MNLI. For evaluation metrics, we report Matthews correlation for CoLA, Pearson correlation for STSB, and accuracy for other tasks. We use the same optimizer (Adam) with the same hyper-parameters as in pre-training. Following previous work, we search the learning rates during the fine-tuning for each downstream task. For a fair comparison, we do not apply any published tricks for fine-tuning.
Each configuration is run five times with different random seeds, and the *average* of these five results on the validation set is calculated as the final performance of one configuration. We report the best number over all configurations for each task.
## B.4 Pronoun Prediction Experiment Setup And Results
Different approaches have been proposed to quantify and analyze the gender bias in contextual language models (de Vassimon Manela et al., 2021; Webster et al., 2020; Kurita et al., 2019). For BERT,
we choose one approach that can be directly applied to a model pre-trained with the Masked Language Modeling (MLM) loss without further fine-tuning. In this approach, we first define a template containing a pronoun and a profession. The profession is supposed to be gender-neutral; however, it is currently viewed with gender bias to a large extent.
By masking the pronoun, the model is queried to predict the pronoun at the masked position given the context, including the profession. Here is an example: "[MASK] is a registered nurse." The difference between the probabilities of filling the masked position in each sentence with "he" and "she" is used to show gender bias in the model,
$$\text{Pronoun Bias Score} = \text{Prob}(\text{"he"}) - \text{Prob}(\text{"she"}). \tag{3}$$
To assess fairness in the BERT model, we consider 303 professions used by Bolukbasi et al. (2016).
| Data | RoBERTa | SPPA-20k | GEEP-20k | SPPA-50k | GEEP-50k | SPPA-100k | GEEP-100k |
|------------|-----------|------------|------------|------------|------------|-------------|-------------|
| Winogender | 50.9 | 51.6 | 64.3 | 54.6 | 64.5 | 57.3 | 64.5 |
| WSC | 50.1 | 50.1 | 52.1 | 50.5 | 52.3 | 50.9 | 52.7 |
| DPR/WSCR | 50.8 | 50.9 | 52.1 | 51.1 | 53.4 | 51.1 | 53.6 |
| Avg GLUE | 86.5 | 82.7 | 85.9 | 80.7 | 84.5 | 80.2 | 83.3 |
Table 3: The average accuracy of different models on Coreference Resolution task. The best results are in bold.
In our study, we analyze a publicly available pre-trained BERT-base model (https://github.com/google-research/bert) that contains 12 layers, 768 hidden nodes, 12 heads, and 110M parameters. Figure 2 shows the gender bias of 60 such professions in the BERT-base model. Positive values mean that the professions are biased towards the male gender, and vice versa. As the plots show, the contextual representations of professions in the BERT-base model exhibit strong gender bias. Professions such as nurse and housekeeper are viewed as jobs for females, while surgeon and mathematician are assumed to be jobs for males.
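As an illustration, this probing can be reproduced roughly as follows with HuggingFace Transformers; the template string below generalizes the example above, and the exact template set used for the figure may differ.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pronoun_bias_score(profession):
    """Prob('he') minus Prob('she') at the masked pronoun position."""
    text = f"{tokenizer.mask_token} is a {profession}."
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    probs = logits.softmax(dim=-1)
    he_id = tokenizer.convert_tokens_to_ids("he")
    she_id = tokenizer.convert_tokens_to_ids("she")
    return (probs[he_id] - probs[she_id]).item()

print(pronoun_bias_score("nurse"))    # typically negative: biased towards "she"
print(pronoun_bias_score("surgeon"))  # typically positive: biased towards "he"
```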
To find the reference of each pronoun in the template sentences, we follow the approach of Kocijan et al. (2019). Specifically, during the evaluation for every data set, in each sentence there are two candidate nouns (such as "nurse" or "surgeon") and a pronoun. The pronoun is replaced with a [MASK]
token, and the model makes a prediction at the masked pronoun position from the two candidate nouns. In order to resolve a pronoun accurately, a model needs to overcome the biased link between gender and profession (e.g. a normative assumption that nurses are female) and instead make the decision based on the available linguistic cues. We report the prediction accuracy of all 3 methods on the aforementioned three data sets.
Figure 3 displays the pronoun prediction bias score (defined in Equation 5) of all methods for 60 biased professions defined in (Bolukbasi et al.,
2016). Specifically, in both sub-figures, blue dots show the pronoun prediction bias score from BERTbase model for each profession. In Figure 3 (a),
the pink dots are the bias scores from BERT-SPPA
model. We can see from this sub-figure that compared with BERT-base, the bias scores from BERTSPPA model are indeed closer to 0, indicating that BERT-SPPA can mitigate gender bias of such professions to some extent. In Figure 3 (b), the blue dots are the bias scores from GEEP model.
Compared with both BERT-SPPA and BERT-base, GEEP's bias scores are significantly closer to 0, indicating that GEEP is more effective than BERT-SPPA at removing gender bias from such biased professions. Moreover, we also calculate the average absolute pronoun prediction bias score for all 303 gender-neutral profession words in (Bolukbasi et al., 2016). We obtain 0.44 for BERT-base, 0.16 for BERT-SPPA, and 0.13 for GEEP. The GEEP model obtains the lowest average bias, a 70% reduction compared to the BERT-base model.
## B.5 Analysis Regarding SPPA's Performance Drop On GLUE
We conduct experiments to analyze the reasons behind the GLUE performance drop of SPPA demonstrated in Table 2. The performance drop of SPPA compared to RoBERTa can have two causes: 1) the model is further trained with a subset of Wikipedia significantly smaller than the RoBERTa pre-training data, which could force the model to forget the information embedded in the large RoBERTa pre-training data; 2) we processed the subset of Wikipedia to make it gender-neutral, which could introduce noise and a distribution mismatch with the downstream data. To provide a more detailed analysis, we conduct experiments as follows.
First, starting from a pre-trained RoBERTa, we further train the model with SPPA on the same subset of Wikipedia that we used in the main experiments, without making the data subset gender-neutral. We name this model SPPA-without-GN (Gender Neutralization). We also run GEEP-without-GN to see whether GEEP can still alleviate forgetting when the data is merely small but not debiased. For GEEP-without-GN, we further train RoBERTa with the same Wikipedia subset without gender neutralization. During this further training of GEEP-without-GN, we follow GEEP to add and update new profession embeddings while freezing the rest of the model. GLUE results of SPPA-without-GN and GEEP-without-GN are shown in Table 4.
By comparing SPPA, SPPA-without-GN, and the original RoBERTa, we find that SPPA-without-GN performs better than SPPA but worse than RoBERTa. This suggests that both the data subset selection and the gender neutralization contribute to the performance drop of SPPA compared to RoBERTa.
![9_image_0.png](9_image_0.png)
![9_image_1.png](9_image_1.png)
We would also like to note that GEEP-without-GN outperforms SPPA-without-GN as well and achieves a GLUE score similar to RoBERTa's. This indicates that GEEP can also effectively alleviate the forgetting introduced by data subset selection when no gender-neutralization procedure is applied.
## B.6 Discussions On Non-Binary Gender Identities
In this discussion, we would like to start with the pronoun choices for different gender identities, because in our submission we mainly address the unfair pronoun preference of pre-trained models. According to social research, gender-neutral pronouns are more appropriate for referring to transgender and non-binary individuals (Deutsch and Buchholz, 2015). 'Zie' and 'hir' are specific to the transgender community, but people outside of the community are not familiar with these pronouns. Deutsch and Buchholz (2015) have proposed a gender-ID to pronoun mapping for transgender and genderqueer individuals in electronic health records (EHR). In this system, transgender individuals are mapped to he/his or she/her, where gender bias exists, while genderqueer individuals are mapped to they/them. For people who prefer binary pronouns (he/she) regardless of their gender identities, our experiments still hold, because the pronoun coreference resolution tasks that we evaluate on, i.e., Winogender, WSC, and DPR/WSCR, are all binary-pronoun tasks.
However, an alternative to asking for preferred pronouns would be to use singular pronouns to address everyone until the individual indicates a preference for certain pronouns and/or reveals their gender identity (Darr and Kibbey, 2016). One option is a term that is already used as a singular pronoun, such as "they/their" (Darr and Kibbey, 2016; Richards et al., 2016; Sun et al., 2021). If such a singular pronoun can be promoted to a larger community, the pronoun unfairness issue can be fundamentally resolved at the data level.
## B.7 The Capacity Increase Of GEEP Compared To SPPA
By adding profession embeddings, the total number of model parameters does increase slightly. However, the size of the newly added parameters is 303*768 = 232k, which is only 0.21% of the original RoBERTa parameter size (110 million); 303 is the number of professions and 768 is the embedding size of RoBERTa. Therefore, even if we extend this method to other fairness problems in the future and add more new word embeddings, e.g., for 3,000 or 10,000 words, the newly added parameters would be only around 2% or 9% of the original parameter size, which would not cause a serious scaling issue.
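For reference, the parameter count works out as in this small back-of-the-envelope check (not part of the released code):

```python
num_professions, hidden_size = 303, 768
added = num_professions * hidden_size       # 232,704 new parameters
print(added, f"{added / 110_000_000:.2%}")  # 232704, about 0.21% of RoBERTa-base's 110M
```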
Moreover, we run a new SPPA variant that has the same capacity (the same number of parameters) as GEEP. In this variant, we conduct SPPA training after adding new word embeddings for the profession names, as in GEEP. We refer to this model as SPPA-with-NPE (new profession embeddings). The difference between SPPA-with-NPE and GEEP is GEEP's core mechanism for preventing forgetting: GEEP freezes the remaining parameters during further training and only updates the new profession embeddings, whereas SPPA-with-NPE updates all parameters, including the original model parameters and the newly added profession embeddings. When encountering the pre-defined profession names during training or fine-tuning, SPPA-with-NPE also updates their new embeddings instead of the old word/token embeddings. GLUE results are shown in Table 4. Compared with SPPA, SPPA-with-NPE alleviates forgetting slightly and achieves better debiasing results, while still significantly under-performing GEEP. Results on the pronoun coreference resolution tasks show the same trend: SPPA-with-NPE obtains 58.6 on Winogender, 51.3 on WSC, and 52.4 on DPR/WSCR, all slightly better than SPPA but significantly lower than GEEP.
## B.8 Quality Of Gender-Neutral Data
The relatively large performance drop of both our method and SPPA compared to the original RoBERTa motivates us to analyze the quality of our gender-neutral data in more detail.
First, we note that CoLA and RTE are known to be more sensitive to the quality of the pre-trained model than other tasks in GLUE, due to their small data sizes. In other words, if the pre-trained model is trained insufficiently or with less data, we see a larger performance drop on CoLA and RTE than on other tasks; conversely, if the pre-trained model's quality is better, we see larger improvements on them as well. This trend has been observed in BERT vs. RoBERTa, BERT vs. SpanBERT, and BERT vs. ELECTRA.
Therefore, the reason for the large performance drop on CoLA can partially be its natural sensitivity to the small size of the data we use to further train RoBERTa.

Table 4: GLUE results of RoBERTa, SPPA, GEEP, and the additional variants analyzed in the appendix.

| Task | RoBERTa | SPPA | GEEP | SPPA-without-GN | GEEP-without-GN | SPPA-with-NPE |
|-------|---------|------|----------|-----------------|-----------------|---------------|
| MNLI | 87.7 | 87.2 | **87.7** | 87.3 | 87.7 | 87.2 |
| QNLI | 92.4 | 92.4 | **92.4** | 92.3 | 92.4 | 92.3 |
| QQP | 91.8 | 91.3 | **91.7** | 91.4 | 91.8 | 91.5 |
| SST-2 | 95.4 | 94.7 | **95.4** | 95.0 | 95.4 | 94.7 |
| CoLA | 64.1 | 38.9 | **50.5** | 40.2 | 59.6 | 39.3 |
| MRPC | 91.4 | 88.8 | **89.8** | 88.8 | 90.5 | 88.8 |
| RTE | 78.4 | 60.2 | **68.7** | 66.4 | 73.1 | 61.0 |
| STS-B | 90.7 | 88.3 | **89.9** | 89.5 | 90.4 | 88.5 |
| AVG | 86.5 | 80.2 | **83.3** | 81.4 | 85.1 | 80.4 |
Second, the gender-neutralization process could cause a gender mismatch between pronouns and some very rare nouns. We sampled 500 sentences from the augmented dataset and manually checked whether there were grammar errors.
In these 500 sentences, there are no grammar errors such as mismatches between nouns and verb forms (e.g., "he are"), because during gender neutralization we follow previous work and only swap the gender-related pronouns (such as he/she) or nouns (such as uncle/aunt) when profession names occur, and such gender-related nouns share the same verb forms as their counterparts. We also share the full list of gender-related nouns in the appendix. However, when we sample more modified sentences, we find that if a rare gender-related noun that is not on the published gender-related noun list, such as "spinster", occurs, the gender-neutralization process changes the pronoun while leaving the noun unchanged. Although this happens quite rarely, the resulting pronoun misuse can introduce grammar errors into the pre-training data that contribute to the performance drop on CoLA.
## B.9 Experiment Results On BERT
During our preliminary exploration of this problem, we also applied SPPA and GEEP to the publicly
| Task | BERT-base | BERT-SPPA | GEEP |
|--------|-------------|-------------|--------|
| MNLI | 84.3 | 84.0 | 84.1 |
| QNLI | 91.4 | 90.0 | 91.3 |
| QQP | 90 | 90.1 | 90.4 |
| SST-2 | 93 | 92.2 | 92.4 |
| CoLA | 54.0 | 52.0 | 53.0 |
| MRPC | 85.7 | 84.1 | 84.9 |
| RTE | 69.4 | 69.8 | 69.1 |
| STS-B | 88.0 | 88.0 | 87.0 |
| AVG | 82.0 | 81.3 | 81.6 |
released BERT model and conducted pronoun coreference resolution and GLUE experiments on it. In this experiment, we only further trained the released BERT model for 10k iterations with our gender-neutral data. Moreover, our gender-neutral data set (7.1 GB) is not significantly smaller than the original pre-training data of BERT (16 GB), and both data sets come from Wikipedia. For these two reasons, the forgetting problem in this BERT experiment is not as obvious for SPPA.
Table 5 shows the performance of the different methods on 8 GLUE tasks. Although the forgetting is less severe, SPPA still suffers from the forgetting issue in 6 of the 8 tasks: CoLA, MRPC, STS-B, MNLI, QNLI, and SST-2.
As for the average GLUE score, SPPA is 0.7 points lower after its second-phase pre-training, which is not a small margin considering that it is the average score over 8 tasks. GEEP mitigates the forgetting issue of SPPA in all sub-tasks except RTE. GEEP also reaches an average GLUE score of 82.8, which outperforms SPPA and is similar to the original GLUE score of the pre-trained BERT.
Table 6 shows the coreference resolution results of the different models on three data sets. The results show that the GEEP model obtains the best accuracy among all models, especially on the Winogender dataset, where the candidate nouns are professions. We observe that the SPPA method can also help improve the coreference resolution performance of the pre-trained model, but it is not as effective as GEEP.
| Data | BERT-base | BERT-SPPA | GEEP |
|------------|-------------|-------------|--------|
| Winogender | 50 | 50.7 | 62.9 |
| WSC | 50.1 | 50.2 | 50.5 |
| DPR/WSCR | 50.7 | 50.9 | 52.8 |
## ACL 2023 Responsible NLP Checklist A For Every Submission:
A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Left blank.
A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Left blank. |
shao-etal-2023-class | Class-Incremental Learning based on Label Generation | https://aclanthology.org/2023.acl-short.109 | Despite the great success of pre-trained language models, it is still a challenge to use these models for continual learning, especially for the class-incremental learning (CIL) setting due to catastrophic forgetting (CF). This paper reports our finding that if we formulate CIL as a continual label generation problem, CF is drastically reduced and the generalizable representations of pre-trained models can be better retained. We thus propose a new CIL method (VAG) that also leverages the sparsity of vocabulary to focus the generation and creates pseudo-replay samples by using label semantics. Experimental results show that VAG outperforms baselines by a large margin. |
## Class-Incremental Learning Based On Label Generation
Yijia Shao1, Yiduo Guo1, Dongyan Zhao1,2,3 **and Bing Liu**4 1Wangxuan Institute of Computer Technology, Peking University 2National Key Laboratory of General Artificial Intelligence 3BIGAI, Beijing, China 4Department of Computer Science, University of Illinois at Chicago [email protected], [email protected], [email protected], [email protected]
## Abstract
Despite the great success of pre-trained language models, it is still a challenge to use these models for continual learning, especially for the *class-incremental learning* (CIL) setting due to *catastrophic forgetting* (CF). This paper reports our finding that if we formulate CIL as a *continual label generation* problem, CF is drastically reduced and the generalizable representations of pre-trained models can be better retained. We thus propose a new CIL method
(VAG) that also leverages the sparsity of vocabulary to focus the generation and creates pseudo-replay samples by using label semantics. Experimental results show that VAG outperforms baselines by a large margin.1
## 1 Introduction
Large pre-trained language models (PLMs) have become the *de facto* standard in building NLP systems. However, how to best use them for continual learning (CL) is still a significant question (Huang et al., 2021; Xia et al., 2022; Pasunuru et al., 2021; Ke et al., 2021). Many existing studies focus on task-incremental learning (TIL) where the model learns distinct tasks sequentially and is given the task identity for inference. These works usually keep the PLM unchanged and update a series of additional structures such as adapters (Gururangan et al., 2022) or prompts (Zhu et al., 2022; Qin and Joty, 2022). Though effective, these methods cannot be used in a more challenging setting of class-incremental learning (CIL) which does not provide task information at test time.
CIL aims to build a single model to make predictions over incrementally learned classes organized as tasks (formal definition in §2). Wu et al. (2022)
conducted a comparative study on PLM in CL and showed that PLMs perform extremely poorly in the
![0_image_0.png](0_image_0.png)
CIL setting due to *catastrophic forgetting* (CF)2.
Also, as the task information is unknown, CIL further requires the model to predict the task identity of each test instance correctly.
In this work, we re-examine the problem of using PLM for CIL and discovered that *formulating CIL*
as **continual label generation** can greatly improve PLMs' continual learning ability. As illustrated in Figure 1, a traditional classifier views the PLM
as a large feature extractor and uses a linear classification head to map the extracted features to a probability distribution on both old and new labels. However, we can also use a generation approach to directly fine-tune the PLM to generate a label sequence (indicating a label) for a test instance. The final label is retrieved from the label pool of the classes learned so far based on text similarity.
Some existing CL works have leveraged generation. For example, LAMOL (Sun et al., 2019) is a TIL system that uses generation to unify different types of tasks and creates pseudo replay samples; Zhang et al. (2022) focuses on the continual learning of different generation tasks.3 Different from these works, we are the first to directly use the generation objective to effectively ease the CF issue in the CIL process. Our experiments demonstrate that the generation objective is more suitable for the continual learning of PLMs. To study the inner working of the paradigm shift, in §3.1 we quantitatively show that the generation objective can prevent the PLM from representation collapse (Aghajanyan et al., 2021), thus preserving its ability to continually learn new classes.

1 Our code is publicly available at https://github.com/shaoyijia/VAG.

2 CF means that a neural network forgets previously learned knowledge when trained on new tasks, resulting in a decline in performance on earlier tasks (McCloskey and Cohen, 1989).
To further improve the generation approach, we propose the VAG (Vocabulary-Aware Label Generation) system for CIL. VAG modifies the generation loss by focusing on different vocabulary subsets when learning different tasks. Owing to the natural sparsity of vocabulary, the modified loss leads to a sparse model update that greatly eases the CF issue. Moreover, VAG exploits the label semantics to create pseudo replay data via a label-based augmentation. Extensive experiments on 5 datasets show that VAG drastically outperforms baselines in non-exemplar based CIL (*i.e.*, without saving any replay sample) and also achieves better results when a small amount of saved replay data is used.
## 2 Background
Class-Incremental Learning (CIL). CIL learns a sequence of tasks $\{1, ..., T\}$ incrementally (Kim et al., 2022). Each task $t$ learns a set of new classes $\mathcal{C}_t$. At task $t \in \{1, ..., T\}$, the system is given a training set $\mathcal{D}_t = (\mathcal{X}_t, \mathcal{Y}_t)$, where $\mathcal{X}_t = \{x_j^{(t)}\}_{j=1}^{N_t}$ is the input data, $\mathcal{Y}_t = \{y_j^{(t)}\}_{j=1}^{N_t}$ is the set of their class labels, and $y_j^{(t)} \in \mathcal{C}_t$. The classes in different tasks are disjoint, $\mathcal{C}_t \cap \mathcal{C}_{t'} = \emptyset,\ \forall t' \neq t$. At inference, given a test instance, the system selects a class label from $\bigcup_{t=1}^{T}\mathcal{C}_t$ *without knowing the task identity*. The performance of the system is evaluated as the accuracy on test samples from all seen classes.
Encoder-Decoder Model. Encoder-decoder models take a sequence of tokens $X = x_1, ..., x_n$ as input and generate the target sequence $Y = y_1, ..., y_m$ in an auto-regressive manner. Specifically, the encoder maps the input sequence to a vector representation $c = f_{\theta_{enc}}(X) \in \mathbb{R}^{d_{enc}}$. Suppose the auto-regressive decoder has already generated
![1_image_0.png](1_image_0.png)
$Y_{1:i-1} = y_1, ..., y_{i-1}$, the next-token probability is
$$P(y_{i}|c,Y_{1:i-1})=\frac{\exp(E_{y_{i}}^{\mathsf{T}}f_{\theta_{dec}}(c,Y_{1:i-1}))}{\sum_{w\in\mathcal{V}}\exp(E_{w}^{\mathsf{T}}f_{\theta_{dec}}(c,Y_{1:i-1}))}.\tag{1}$$
Here, $E_w \in \mathbb{R}^{d_{dec}}$ denotes the word embedding of token $w \in \mathcal{V}$, where $\mathcal{V}$ is the model vocabulary. The model parameters are optimized to minimize the negative log-likelihood of the ground truth $y_t$.
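As a concrete illustration of Equation (1), the short sketch below computes the next-token distribution for a partial target sequence using the HuggingFace Transformers API and BART-base (the backbone used later in this paper). The example input text and partial label are invented for illustration.

```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

inputs = tokenizer("how do I change the language of the app?", return_tensors="pt")
# Tokens standing in for a partial label sequence Y_{1:i-1} (includes special tokens).
partial_label = tokenizer("change", return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(**inputs, decoder_input_ids=partial_label).logits  # (1, |Y|, |V|)

# Softmax over the full vocabulary V, i.e., P(y_i | c, Y_{1:i-1}) in Eq. (1).
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
print(next_token_probs.shape)  # torch.Size([50265])
```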
## 3 Vag System
We present the proposed VAG system which reframes CIL as a continual label generation problem.
Figure 3 gives an overview of VAG with two major components.
## 3.1 Classification Via Generation
VAG solves classification via label generation and maintains a label pool $\mathcal{P}$ of label sequences. Each label $c \in \mathcal{C}_t$ is a sequence of tokens representing a class label. When training task $t$, instead of mapping $\mathcal{C}_t$ to integer indexes representing class labels, VAG retains the label semantics and fine-tunes the PLM $\mathcal{M}$ to generate the label sequence conditioned on the input sequence $x_j^{(t)}$. In the CIL process, $\mathcal{P}$ keeps growing to contain all distinct label sequences seen so far. At inference, the most relevant label sequence is retrieved from $\mathcal{P}$ based on the similarity between all the candidate labels and $y_{\text{gen}}$ generated by $\mathcal{M}$ given the input $x$:
$$y_{\text{gen}}=\text{generate}(\mathcal{M},x)$$
$$y_{\text{pred}}=\operatorname*{argmax}_{y\in\mathcal{P}}\,\cos(\text{embed}(y),\text{embed}(y_{\text{gen}}))\tag{2}$$
Here, embed(·) is parameterized by a SentenceBERT model (Reimers and Gurevych, 2019). Although the idea of solving CIL via generation is simple, the framework change yields a great performance boost. Figure 2 compares the classifier framework and the generation framework on CLINC150 (Larson et al., 2019) which contains 150 classes and is split into 15 tasks. With no additional mechanism to handle CF, using the same PLM, *i.e.* BARTbase (Lewis et al., 2020), the generation framework gives much better results.
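A minimal sketch of the retrieval step in Equation (2) is shown below. It assumes the Sentence-Transformers library and the paraphrase-MiniLM-L6-v2 encoder mentioned in Appendix B.2; the label pool and the generated sequence `y_gen` are illustrative placeholders rather than outputs of the actual fine-tuned model.

```python
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("paraphrase-MiniLM-L6-v2")

def retrieve_label(y_gen, label_pool):
    # Embed the generated sequence and every candidate label, then pick the
    # label with the highest cosine similarity (Eq. (2)).
    emb_gen = embedder.encode(y_gen, convert_to_tensor=True)
    emb_pool = embedder.encode(label_pool, convert_to_tensor=True)
    scores = util.cos_sim(emb_gen, emb_pool)       # shape: (1, |label_pool|)
    return label_pool[int(scores.argmax())]

label_pool = ["change language", "edit personal details", "transfer money"]
print(retrieve_label("switch app language", label_pool))  # likely "change language"
```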
Generation loss prevents PLMs from collapsing.
To understand the inner working of the framework change, we look into the PLM's representation ability in the CIL process. Unlike single-task learning, CIL requires the PLM to maintain the representation ability as much as possible for future classes, which is nontrivial because PLMs tend to have representation collapse4 during fine-tuning (Aghajanyan et al., 2021). Figure 2 (b) compares the change of the PLM's representation ability in the two frameworks by using the neural collapse metric
(N C) proposed in Zhu et al. (2021c):
$$\mathcal{NC}:=\frac{1}{K}\operatorname{trace}\left(\mathbf{\Sigma}_{W}\mathbf{\Sigma}_{B}^{\dagger}\right),\tag{3}$$
where $\mathbf{\Sigma}_W, \mathbf{\Sigma}_B \in \mathbb{R}^{d_{enc}\times d_{enc}}$ denote the within-class and between-class covariance matrices of the encoded sequences, $\mathbf{\Sigma}_B^{\dagger}$ denotes the pseudo-inverse of $\mathbf{\Sigma}_B$, and $K$ denotes the number of classes in the dataset. As clearly shown, when learning more and more tasks, both frameworks witness a drop in the PLM's representation ability. However, the PLM in the generation framework keeps a relatively steady representation ability in the CIL process, thus remaining capable of learning unseen classes.
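For reference, a small NumPy sketch of this metric is given below. The normalization details are our reading of Equation (3) and may differ slightly from the reference implementation of Zhu et al. (2021c); in this paper, a drop in N C over the course of CIL signals representation collapse.

```python
import numpy as np

def neural_collapse_metric(features, labels):
    # features: (N, d) array of encoder representations; labels: (N,) class ids.
    classes = np.unique(labels)
    K, d = len(classes), features.shape[1]
    global_mean = features.mean(axis=0)

    sigma_w = np.zeros((d, d))  # within-class covariance
    sigma_b = np.zeros((d, d))  # between-class covariance
    for c in classes:
        fc = features[labels == c]
        mu_c = fc.mean(axis=0)
        centered = fc - mu_c
        sigma_w += centered.T @ centered / len(fc)
        diff = (mu_c - global_mean)[:, None]
        sigma_b += diff @ diff.T
    sigma_w /= K
    sigma_b /= K

    # NC = trace(Sigma_W Sigma_B^+) / K, with the Moore-Penrose pseudo-inverse.
    return float(np.trace(sigma_w @ np.linalg.pinv(sigma_b)) / K)
```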
## 3.2 Vocabulary-Aware Generation Loss
One major challenge of CIL is that the previously learned decision boundaries may be corrupted when the model weights are updated to learn new classes (Zhu et al., 2021a). Beyond using the generation framework to retain the PLM's representation ability, we further propose a *vocabulary-aware generation loss* (VAG loss) to ease the task interference
(which causes catastrophic forgetting).
Note that although the PLM is pre-trained with a large vocabulary (*e.g.*, BART has a vocabulary size of 50,265), only a tiny subset will be used for the label generation in each task. VAG loss leverages this natural sparsity of vocabulary by masking the probability of tokens that will not be used in the current task before calculating the generation loss.
4Representation collapse refers to the degradation of generalizable representations of pre-trained models during finetuning (Aghajanyan et al., 2021).
![2_image_0.png](2_image_0.png)
Specifically, denoting the vocabulary set of $\mathcal{C}_t$ as $\mathcal{V}_t$, $P(y_i|c, Y_{1:i-1})$ in Equation (1) is changed to
$$P^{\prime}(y_{i}|c,Y_{1:i-1})=\frac{\exp(E_{y_{i}}^{\top}f_{\theta_{dec}}(c,Y_{1:i-1}))}{\sum_{w\in\mathcal{V}_{t}}\exp(E_{w}^{\top}f_{\theta_{dec}}(c,Y_{1:i-1}))}.\tag{4}$$
Since $|\mathcal{V}_{t}|\ll|\mathcal{V}|$, maximizing the modified probability leads to a sparse update of $E$ and effectively eases the forgetting of previous classes.
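A minimal PyTorch sketch of the masking idea behind Equation (4) is shown below. It is an illustrative re-implementation rather than the exact training code, and it uses a large negative constant instead of negative infinity for numerical stability.

```python
import torch
import torch.nn.functional as F

def vag_loss(logits, targets, task_vocab_ids, pad_id=-100):
    """Vocabulary-aware generation loss (sketch of Eq. (4)).

    logits: (batch, seq_len, |V|) decoder outputs of the PLM
    targets: (batch, seq_len) gold label-sequence token ids (pad_id positions ignored)
    task_vocab_ids: 1-D LongTensor of token ids appearing in the current task's
        label sequences (V_t); all other tokens are masked out of the softmax.
    """
    mask = torch.full_like(logits, -1e9)   # effectively removes tokens outside V_t
    mask[..., task_vocab_ids] = 0.0        # keep only the task vocabulary V_t
    restricted = logits + mask             # softmax denominator restricted to V_t
    return F.cross_entropy(
        restricted.transpose(1, 2), targets, ignore_index=pad_id
    )
```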
## 3.3 Label-Based Pseudo Replay
Another major challenge of CIL is that the system needs to separate new classes in task t and classes in previous tasks since the task identity is unknown at inference. To help construct decision boundaries across tasks and mitigate forgetting, VAG creates pseudo replay data by *augmenting the label sequences* in previous tasks.
Specifically, given the label sequence y, the augmented sequence aug(y) will be used as a pseudo replay data instance with label y. To preserve the label semantics as well as to create diverse samples, we implement aug(·) by randomly adding related tokens to the original label sequence based on contextual word embeddings (Ma, 2019):
$${\mathcal{D}}_{<t}^{LPR}=\{(\mathrm{aug}(y),y)\,|\,y\in\cup_{i=1}^{t-1}\mathcal{Y}_{i}\}\qquad\quad(5)$$
When training task $t$, we sample $\lambda|\mathcal{D}_t|$ pairs from $\mathcal{D}_{<t}^{LPR}$ ($\lambda$ is a hyper-parameter) and combine them with $\mathcal{D}_t$ as the training data. The VAG loss is also applied to the pseudo replay samples $(\mathrm{aug}(y), y)$, i.e., for each $y \in \mathcal{Y}_i$, its associated vocabulary subset $\mathcal{V}_i$ is used in the denominator in Equation (4).
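A minimal sketch of the label-based augmentation in Equation (5) is given below. It follows the nlpaug contextual word-embedding augmenter mentioned in Appendix B.2, but the model path and the exact parameter values are assumptions rather than the precise configuration used for the reported experiments.

```python
import nlpaug.augmenter.word as naw

# Insert related tokens into the label sequence using contextual word embeddings;
# "bert-base-uncased" is an assumed backbone for the augmenter.
aug = naw.ContextualWordEmbsAug(
    model_path="bert-base-uncased", action="insert", aug_p=0.3
)

def label_based_pseudo_replay(previous_labels):
    pairs = []
    for y in previous_labels:
        out = aug.augment(y)
        aug_y = out[0] if isinstance(out, list) else out  # return type differs by nlpaug version
        pairs.append((aug_y, y))  # pseudo replay instance (aug(y), y)
    return pairs

print(label_based_pseudo_replay(["change language", "transfer money"]))
```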
## 4 Experiments

## 4.1 Datasets And Baselines
Datasets. We use 5 datasets. Following Wu et al.
(2022), we randomly split each dataset into X
tasks with Y classes per task, expressed as (X/Y).
CLINC150 (Larson et al., 2019) (15/10) and Banking77 (Casanueva et al., 2020) (7/10) for intent classification, 20 Newsgroups (20News) (Lang, 1995)
(10/2) for topic classification, FewRel (Han et al.,
2018) (8/10) and TACRED (Zhang et al., 2017)
(8/5) for relation classification. Additional details about the datasets are given in Appendix B.1.
Baselines. We consider the following baselines:
(1) **Vanilla** fine-tunes the PLM sequentially. (2)
EWC (Kirkpatrick et al., 2017) is a regularization-based method. (3) KD (Hinton et al., 2015) uses knowledge distillation. (4) L2P (Wang et al., 2022)
dynamically prompts the PLM without the task identity. These baselines use the classifier framework, and we adapt them to the generation framework as another set of baselines (X-G). We also consider 3 methods which use generation for CL:
(5) **LAMOL** (Sun et al., 2019) fine-tunes GPT-2 continually with manual prompts and incorporates pseudo replay. Since LAMOL is a TIL system, we adapt it to CIL by using the same prompt. (6)
PAGeR (Varshney et al., 2022) extends LAMOL
with contrastive training and knowledge distillation.
(7) ACM (Zhang et al., 2022) extends LAMOL by adding compositional adapters. ACM is not designed for classification, so we adapt it by training the PLM to generate the class label.
Implementation details are in Appendix B.2.
## 4.2 Main Results
Table 1 shows the results in the non-exemplar (non-replay) based CIL setting. The reported results are averaged over 5 random seeds.
Baselines using the generation objective give better results. In accord with the findings in Wu et al. (2022), regularization-based methods (*e.g.*,
EWC, KD) perform poorly. For L2P, although it keeps the PLM fixed, the algorithm cannot converge in our experiments due to the randomness introduced by the error-prone prompt selection. Comparing the same method in two frameworks (*e.g.*,
EWC *v.s.* EWC-G), we can see that the framework switch is highly effective, which indicates the superiority of solving CIL via label generation. Moreover, the best-performing baseline ACM also adopts the generation objective.
![3_image_0.png](3_image_0.png)
Superiority of VAG. On all the datasets, VAG
achieves the best performance, even outperforming other baselines in the generation framework by a large margin (Table 1). Figure 4 also shows that VAG has less forgetting in the CIL process than the two best baselines. However, compared with the results in the non-continual learning setting (Non-CL
in Table 1) which represent the performance upper bound for each dataset, our method still has considerable room for improvement, thereby encouraging future endeavors.
Extending VAG to use real replay data. Notably, VAG can be directly extended to utilize real (saved) replay data when they are available. Since real replay data come from the training distribution, we optimize the original generation loss on the combination of $\mathcal{D}_t$ and the real replay data in addition to optimizing the VAG loss.5 We consider ER (Lopez-Paz and Ranzato, 2017), **DER++** (Buzzega et al., 2020) and **LDBR** (Huang et al., 2021) as replay-based baselines and experiment with different replay buffer sizes. Table 2 shows the comparison results. VAG still performs the best, especially when the buffer size is small (see the *Avg.* row)6.
## 4.3 Ablation Study And Analysis
We analyze the effect of each component in our VAG system and Figure 4 shows the ablation results. While the full VAG uniformly gives the best results, we further observe that: (1) Both VAG loss and label-based replay can benefit CIL independently. (2) Label-based replay has a relatively small effect especially when we have already adopted VAG loss.
5More details are included in Appendix B.3.
6When the buffer size is large, all the methods approach the non-CL results (performance upper bound), so the performance gap between VAG and other baselines gets smaller.
| | #Tasks | Vanilla | EWC | KD | L2P | Vanilla-G | EWC-G | KD-G | L2P-G | LAMOL | PAGeR | ACM | VAG | Non-CL |
|-----------|--------|---------|------|------|------|-----------|-------|-------|-------|-------|-------|-------|-----------|--------|
| CLINC150 | 15 | 7.37 | 7.67 | 9.39 | 3.32 | 37.63 | 44.23 | 36.51 | 43.84 | 42.56 | 39.39 | 48.78 | **65.69** | 94.66 |
| Banking77 | 7 | 14.43 | 14.51 | 14.59 | 1.98 | 26.88 | 29.99 | 21.36 | 34.42 | 39.51 | 43.85 | 54.72 | **55.19** | 88.61 |
| 20News | 10 | 9.96 | 9.96 | 10.00 | 6.84 | 44.17 | 49.81 | 30.84 | 25.47 | 52.05 | 49.61 | 60.79 | **73.51** | 86.81 |
| FewRel | 8 | 12.39 | 13.09 | 12.33 | 6.60 | 19.44 | 25.12 | 15.95 | 6.52 | 34.69 | 39.09 | 29.74 | **52.26** | 85.14 |
| TACRED | 8 | 10.96 | 10.41 | 12.04 | 4.85 | 23.44 | 24.36 | 17.44 | 10.18 | 16.46 | 27.99 | 18.67 | **46.15** | 70.38 |
| Avg. | \ | 11.02 | 11.13 | 11.67 | 4.72 | 30.31 | 34.70 | 24.42 | 24.09 | 37.05 | 39.99 | 42.54 | **58.56** | 85.12 |

Table 1: Final accuracy (%) of VAG and baseline methods for non-exemplar based CIL. The gray column shows the results in the non-continual learning setting which provides an upper bound. The reported results are averaged over 5 random seeds and the **standard deviations** are reported in Appendix B.4.
In Appendix C, we compare the confusion matrices of "VAG (full)" and "w/o VAG loss". We find VAG loss effectively prevents the model from biasing towards predicting the latest learned classes, thus effectively easing the forgetting issue. In Appendix D, we further analyze the impact of different label-based replay ratios (λ in §3.3). Figure 6 shows that a small amount of label-based replay data already improves the results markedly, indicating the usefulness of leveraging label semantics for pseudo replay.
As discussed in §3.1, the generation loss eases the drop of the PLM's representation power in the CIL process. Appendix E reports the neural collapse metric N C of different methods after CIL.
The VAG system preserves the representation ability of the PLM to the greatest extent.
## 5 Conclusion
We presented the VAG system which solves CIL
based on label generation. We showed that migrating to the generation framework gives a drastic performance boost and eases the representation collapse of the pre-trained model. Experimental results demonstrate the effectiveness of VAG.
## Limitations
One limitation of this work is that VAG does not achieve zero forgetting. Although we show solving CIL based on label generation can effectively
| | VAG (non-exemplar) | ER (1%) | DER++ (1%) | LDBR (1%) | VAG (1%) | ER (3%) | DER++ (3%) | LDBR (3%) | VAG (3%) | ER (5%) | DER++ (5%) | LDBR (5%) | VAG (5%) |
|-----------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| CLINC150 | 65.69 | 55.62 | 56.85 | 67.34 | 72.44 | 78.06 | 73.29 | 81.34 | 81.53 | 85.31 | 80.37 | 86.49 | 85.00 |
| Banking77 | 55.19 | 45.24 | 48.32 | 54.76 | 58.96 | 65.22 | 65.73 | 70.16 | 70.57 | 74.32 | 73.06 | 74.37 | 74.81 |
| 20News | 73.51 | 84.53 | 84.24 | 85.30 | 84.76 | 85.45 | 85.30 | 86.53 | 85.29 | 85.79 | 85.66 | 86.83 | 85.85 |
| FewRel | 52.26 | 60.77 | 63.21 | 51.26 | 68.56 | 74.20 | 72.92 | 65.21 | 75.99 | 78.08 | 78.09 | 70.48 | 78.42 |
| TACRED | 46.15 | 36.09 | 37.03 | 38.21 | 49.70 | 49.66 | 52.12 | 46.93 | 58.00 | 56.93 | 55.72 | 52.22 | 61.28 |
| Avg. | 58.56 | 56.45 | 57.93 | 59.37 | 66.88 | 70.52 | 69.87 | 70.03 | 74.28 | 76.09 | 74.58 | 74.08 | 77.07 |
ease forgetting and representation collapse of the pre-trained model, it is still interesting to further explore how to explicitly solve the forgetting issue in this new framework. The proposed techniques in VAG are a step in the exploration.
Another limitation is that we directly use the label sequences provided by the original dataset.
This may be suboptimal because the quality of the manually created label is hard to guarantee as it may fail to capture the semantic information of the samples in a class. A potential direction is to study creating label sequences automatically by summarizing the training samples. We leave this for future work.
## Ethics Statement
While our proposed VAG system involves generation, it does not have the general ethical concern of generation, *i.e.*, outputting biased or discriminatory texts, because the final output of the system is retrieved from the label pool, which is highly controllable. For our experiments, we use public datasets and believe that none of them contains offensive content. Also, although training the VAG system requires computational resources, the CIL paradigm is resource-efficient because the model preserves previously learned knowledge while continually learning new classes.
## References
Armen Aghajanyan, Akshat Shrivastava, Anchit Gupta, Naman Goyal, Luke Zettlemoyer, and Sonal Gupta.
2021. Better fine-tuning by reducing representational collapse. In International Conference on Learning Representations.
Pietro Buzzega, Matteo Boschini, Angelo Porrello, Davide Abati, and Simone Calderara. 2020. Dark experience for general continual learning: a strong, simple baseline. Advances in neural information processing systems, 33:15920–15930.
Iñigo Casanueva, Tadas Temčinas, Daniela Gerz, Matthew Henderson, and Ivan Vulić. 2020. Efficient intent detection with dual sentence encoders. In Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI, pages 38–45, Online. Association for Computational Linguistics.
Suchin Gururangan, Mike Lewis, Ari Holtzman, Noah A. Smith, and Luke Zettlemoyer. 2022. DEMix layers: Disentangling domains for modular language modeling. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5557–5576, Seattle, United States.
Association for Computational Linguistics.
Xu Han, Yi Dai, Tianyu Gao, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, and Jie Zhou. 2020. Continual relation learning via episodic memory activation and reconsolidation. In *Proceedings of the 58th Annual Meeting of the Association for Computational* Linguistics, pages 6429–6440, Online. Association for Computational Linguistics.
Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, and Maosong Sun. 2018. FewRel: A
large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4803–4809, Brussels, Belgium. Association for Computational Linguistics.
Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. 2015.
Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learning Workshop.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019.
Parameter-efficient transfer learning for nlp. In *International Conference on Machine Learning*, pages 2790–2799. PMLR.
Yufan Huang, Yanzhe Zhang, Jiaao Chen, Xuezhi Wang, and Diyi Yang. 2021. Continual learning for text classification with information disentanglement based regularization. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 2736–2746, Online. Association for Computational Linguistics.
Zixuan Ke, Bing Liu, Hu Xu, and Lei Shu. 2021. CLASSIC: Continual and contrastive learning of aspect sentiment classification tasks. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 6871–6883, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Gyuhak Kim, Changnan Xiao, Tatsuya Konishi, Zixuan Ke, and Bing Liu. 2022. A theoretical study on solving continual learning. In *Advances in Neural* Information Processing Systems.
James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. 2017. Overcoming catastrophic forgetting in neural networks.
Proceedings of the national academy of sciences, 114(13):3521–3526.
Ken Lang. 1995. Newsweeder: Learning to filter netnews. In *Proceedings of the Twelfth International* Conference on Machine Learning, pages 331–339.
Stefan Larson, Anish Mahendran, Joseph J. Peper, Christopher Clarke, Andrew Lee, Parker Hill, Jonathan K. Kummerfeld, Kevin Leach, Michael A.
Laurenzano, Lingjia Tang, and Jason Mars. 2019. An evaluation dataset for intent classification and out-ofscope prediction. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference* on Natural Language Processing (EMNLP-IJCNLP),
pages 1311–1316, Hong Kong, China. Association for Computational Linguistics.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Guodun Li, Yuchen Zhai, Qianglong Chen, Xing Gao, Ji Zhang, and Yin Zhang. 2022. Continual few-shot intent detection. In *Proceedings of the 29th International Conference on Computational Linguistics*,
pages 333–343, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Zhizhong Li and Derek Hoiem. 2017. Learning without forgetting. *IEEE transactions on pattern analysis* and machine intelligence, 40(12):2935–2947.
Qingbin Liu, Xiaoyan Yu, Shizhu He, Kang Liu, and Jun Zhao. 2021. Lifelong intent detection via multi-strategy rebalancing. *arXiv preprint* arXiv:2108.04445.
David Lopez-Paz and Marc'Aurelio Ranzato. 2017.
Gradient episodic memory for continual learning. *Advances in neural information processing systems*, 30.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *International Conference on Learning Representations*.
Edward Ma. 2019. Nlp augmentation.
https://github.com/makcedward/nlpaug.
Arun Mallya and Svetlana Lazebnik. 2018. Packnet:
Adding multiple tasks to a single network by iterative pruning. In *Proceedings of the IEEE conference* on Computer Vision and Pattern Recognition, pages 7765–7773.
Michael McCloskey and Neal J Cohen. 1989. Catastrophic interference in connectionist networks: The sequential learning problem. In *Psychology of learning and motivation*, volume 24, pages 109–165. Elsevier.
Fei Mi, Liangwei Chen, Mengjie Zhao, Minlie Huang, and Boi Faltings. 2020. Continual learning for natural language generation in task-oriented dialog systems. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 3461–3474, Online. Association for Computational Linguistics.
Natawut Monaikul, Giuseppe Castellucci, Simone Filice, and Oleg Rokhlenko. 2021. Continual learning for named entity recognition. In *Proceedings of* the AAAI Conference on Artificial Intelligence, volume 35, pages 13570–13577.
Ramakanth Pasunuru, Veselin Stoyanov, and Mohit Bansal. 2021. Continual few-shot learning for text classification. In *Proceedings of the 2021 Conference* on Empirical Methods in Natural Language Processing, pages 5688–5702, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Chengwei Qin and Shafiq Joty. 2022. LFPT5: A unified framework for lifelong few-shot language learning based on prompt tuning of t5. In *International Conference on Learning Representations*.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67.
Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.
Thomas Scialom, Tuhin Chakrabarty, and Smaranda Muresan. 2022. Fine-tuned language models are continual learners.
Joan Serra, Didac Suris, Marius Miron, and Alexandros Karatzoglou. 2018. Overcoming catastrophic forgetting with hard attention to the task. In *International* Conference on Machine Learning, pages 4548–4557.
PMLR.
Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. 2017. Continual learning with deep generative replay. *Advances in neural information processing* systems, 30.
Fan-Keng Sun, Cheng-Hao Ho, and Hung-Yi Lee. 2019.
Lamol: Language modeling for lifelong language learning. In *International Conference on Learning* Representations.
Vaibhav Varshney, Mayur Patidar, Rajat Kumar, Lovekesh Vig, and Gautam Shroff. 2022. Prompt augmented generative replay via supervised contrastive learning for lifelong intent detection. In *Findings of the Association for Computational Linguistics: NAACL 2022*, pages 1113–1127, Seattle, United States. Association for Computational Linguistics.
Zifeng Wang, Zizhao Zhang, Chen-Yu Lee, Han Zhang, Ruoxi Sun, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, and Tomas Pfister. 2022. Learning to prompt for continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 139–149.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Mitchell Wortsman, Vivek Ramanujan, Rosanne Liu, Aniruddha Kembhavi, Mohammad Rastegari, Jason Yosinski, and Ali Farhadi. 2020. Supermasks in superposition. *Advances in Neural Information Processing Systems*, 33:15173–15184.
Tongtong Wu, Massimo Caccia, Zhuang Li, Yuan-Fang Li, Guilin Qi, and Gholamreza Haffari. 2022. Pretrained language model in continual learning: A comparative study. In *International Conference on Learning Representations*.
Yu Xia, Quan Wang, Yajuan Lyu, Yong Zhu, Wenhao Wu, Sujian Li, and Dai Dai. 2022. Learn and review:
Enhancing continual named entity recognition via reviewing synthetic samples. In *Findings of the Association for Computational Linguistics: ACL 2022*,
pages 2291–2300, Dublin, Ireland. Association for Computational Linguistics.
Wenpeng Yin, Jia Li, and Caiming Xiong. 2022. ConTinTin: Continual learning from task instructions.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3062–3072, Dublin, Ireland.
Association for Computational Linguistics.
Yanzhe Zhang, Xuezhi Wang, and Diyi Yang. 2022.
Continual sequence generation with adaptive compositional modules. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3653–3667, Dublin, Ireland. Association for Computational Linguistics.
Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D. Manning. 2017. Position-aware attention and supervised data improve slot filling.
In *Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing*, pages 35–45, Copenhagen, Denmark. Association for Computational Linguistics.
Kang Zhao, Hua Xu, Jiangong Yang, and Kai Gao. 2022.
Consistent representation learning for continual relation extraction. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3402–
3411, Dublin, Ireland. Association for Computational Linguistics.
Fei Zhu, Zhen Cheng, Xu-Yao Zhang, and Cheng-lin Liu. 2021a. Class-incremental learning via dual augmentation. *Advances in Neural Information Processing Systems*, 34:14306–14318.
Fei Zhu, Xu-Yao Zhang, Chuang Wang, Fei Yin, and Cheng-Lin Liu. 2021b. Prototype augmentation and self-supervision for incremental learning. In *Proceedings of the IEEE/CVF Conference on Computer* Vision and Pattern Recognition, pages 5871–5880.
Qi Zhu, Bing Li, Fei Mi, Xiaoyan Zhu, and Minlie Huang. 2022. Continual prompt tuning for dialog state tracking. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1124–1137, Dublin, Ireland. Association for Computational Linguistics.
Zhihui Zhu, Tianyu Ding, Jinxin Zhou, Xiao Li, Chong You, Jeremias Sulam, and Qing Qu. 2021c. A geometric analysis of neural collapse with unconstrained features. *Advances in Neural Information Processing* Systems, 34:29820–29834.
## A Related Work
Continual Learning. Continual learning requires a model to sequentially learn a series of tasks. The main challenge that existing papers focus on is overcoming *catastrophic forgetting*
(CF) (McCloskey and Cohen, 1989). Previous works usually fall in the following categories: (1)
Regularization-based methods, which penalize the parameter update and preserve the previous task knowledge (Kirkpatrick et al., 2017; Huang et al.,
2021; Zhu et al., 2021b; Li and Hoiem, 2017).
(2) Parameter-isolation methods, which separate parameters for different tasks by finding subnetworks in the over-parameterized model (Wortsman et al., 2020; Serra et al., 2018; Mallya and Lazebnik, 2018) or adding additional task-specific modules (Houlsby et al., 2019; Ke et al., 2021). These methods need to know the task identity for inference. (3) Replay-based methods, which jointly train the model with new task data and some saved examples (Lopez-Paz and Ranzato, 2017; Buzzega et al., 2020) or generated pseudo data (Shin et al., 2017; Sun et al., 2019) of previous tasks. In real applications, storing replay samples may not be possible due to the data privacy issue or memory overhead (Zhu et al., 2021b).
Based on the differences in evaluation protocols, continual learning can be summarized into three major settings: class-incremental learning (CIL), task-incremental learning (TIL), and domain-incremental learning (DIL) (Yin et al., 2022). Among them, CIL which aims to build a single predictive model on all seen classes, is the most difficult one because the task identity is not available for inference. This requires the model to not only tackle catastrophic forgetting of the within-task prediction ability but also predict the task identity correctly (Kim et al., 2022). In the language domain, prior works have studied CIL for intent detection (Liu et al., 2021; Li et al., 2022),
relation classification (Han et al., 2020; Zhao et al.,
2022), named entity recognition (Monaikul et al.,
2021; Xia et al., 2022), *etc.* Despite the great success of pre-trained language models (PLMs), these models still suffer from a severe CF issue in continual learning. In a large-scale comparative study, Wu et al. (2022) concluded that PLMs perform extremely poorly in the CIL setting. In their study, a PLM is leveraged by fine-tuning the model with a classification head. However, in this work, we find that PLMs can show better CIL ability if we fine-tune the PLM in a generation framework.
| Dataset | Class | Task | Train | Validation | Test |
|-----------|---------|--------|---------|--------------|--------|
| CLINC150 | 150 | 15 | 15,000 | 3,000 | 4,500 |
| Banking77 | 77 | 7 | 7,191 | 1,800 | 2,800 |
| 20News | 20 | 10 | 10,000 | 3,998 | 5,999 |
| FewRel | 80 | 8 | 33,600 | 11,200 | 11,200 |
| TACRED | 42 | 8 | 5,909 | 1,482 | 1,259 |
Text Generation in Continual Learning Study.
With the success of natural language generation using PLMs (Radford et al., 2019; Lewis et al.,
2020; Raffel et al., 2020), some works on continual learning in NLP utilize the generation ability of PLMs to unify different potential tasks through prompting (Qin and Joty, 2022) or instruction tuning (Yin et al., 2022; Scialom et al., 2022). Text generation can also be used to create pseudo replay data for previous tasks. LAMOL (Sun et al.,
2019) is a typical system in this line of work which simultaneously learns to solve all the tasks in a unified question-answering manner and generates pseudo replay samples in the TIL setting. While LAMOL is closely related to our work which also leverages generation, the key difference is that we focus on CIL instead of TIL and show for the first time that the generation objective itself can effectively ease the CF issue. We also show that the generation objective bears a link with preventing the representation collapse of the PLM and further propose the VAG approach to exploit the generation framework for CIL. Some other works in the continual learning literature directly focus on generation tasks (not classification tasks) and study the problem of continual sequence generation (Zhang et al., 2022; Mi et al., 2020). These works naturally involve generation due to the property of their studied tasks.
## B Additional Details Of Experiments

## B.1 Dataset Details
As described in §4.1, we use 5 datasets for our experiments. **CLINC150** (Larson et al., 2019) and Banking77 (Casanueva et al., 2020) are two intent classification datasets with 150 classes and 77 classes respectively. Each intent class is described by a short phrase (*e.g.*, "change language", "edit personal details") in the original dataset, and we directly use these phrases as the label sequences.
20 Newsgroups (20News) is a topic classification dataset with 20 categories associated with hierarchical labels (*e.g.*, "comp.sys.ibm.pc.hardware" and
"misc.forsale"). We convert the hierarchical labels into label sequences by replacing "." with a whitespace and extending the abbreviations into complete words (*e.g.*, "computer system ibm pc hardware",
"miscellaneous forsale"). **FewRel** (Han et al., 2018)
is a relation classification dataset with 80 relations.
TACRED (Zhang et al., 2017) is another relation classification dataset with 42 relations and it has highly unbalanced samples for each relation. In these two datasets, each relation is described by a short phrase (*e.g.*, "exhibition history", "organization related: founded by") and we use them as the label sequences.
Following Wu et al. (2022), we randomly split CLINC150, Banking77, FewRel into disjoint tasks with 10 classes per task. We split 20News into 10 tasks with 2 classes per task and TACRED into 8 tasks with 5 classes per task for a more challenging evaluation. Table 3 summarizes the dataset statistics.
Note that among the datasets we used, CLINC1507, Banking778, FewRel9, TACRED10 are licensed. We ensure that we did not violate any license condition when conducting our experiments.
## B.2 Implementation Details
We implement VAG and baseline (1)-(4) with the Transformers library (Wolf et al., 2020) and use BARTbase 11 (\#parameters: 139M) as the backbone PLM. For LAMOL12 and ACM13, we directly use their official implementation and use the same question prompt for each task14 so that they do not need the task identity for inference any more and can suit the CIL setting. For PAGeR, we use our own implementation because its source code is not publicly available. Table 4 gives the hyper-parameters of baseline implementations.
For learning each task, we train the model for 10 epochs and use the validation set of the current task for early stopping. We set the batch size as 8 and the max sequence length as 128.
We use AdamW optimizer (Loshchilov and Hutter, 2019) with β1 = 0.9, β2 = 0.999 and the learning rate of 1e-5. For the label-based pseudo replay component of VAG, we implement aug(·)
using the ContextualWordEmbdsAug in the nlpaug library15 which adds 0.3 × token_num(y) related tokens to the original label sequence y and the hyper-parameter λ is set to 0.1. At inference, we use greedy decoding to decode the generated sequence and embed(·) in Equation (2) is parameterized by paraphrase-MiniLM-L6-v2 provided in the Sentence-Transformers library16. We use NVIDIA GeForce RTX 2080 Ti GPU to conduct all our experiments.
## B.3 Exemplar-Based Setting
As discussed in §4.2, we extend the VAG system to the exemplar-based CIL setting where real replay data are available. In exemplar-based CIL, the training objective of VAG at task $t$ is to minimize

$$\mathbb{E}_{\mathcal{D}_{<t}^{ER}\cup\mathcal{D}_{t}}[\ell_{normal}(x,y)]+\mu\,\mathbb{E}_{\mathcal{D}_{<t}^{LPR}\cup\mathcal{D}_{t}}[\ell_{VAG}(x,y)],\tag{6}$$

where $\mathcal{D}_{<t}^{ER}$ represents the real replay data of previous tasks, $\mathcal{D}_{<t}^{LPR}$ represents the label-based pseudo replay data (see Equation (5)), and $\mu$ is a hyper-parameter balancing the two replay terms. We set $\mu$ to 1 in our experiments.
For comparison, we consider 3 typical replay-based methods: (1) ER (Lopez-Paz and Ranzato, 2017) directly combines replay samples and current task samples in training batches to fine-tune the classifier. (2) **DER++** (Buzzega et al., 2020)
exploits replay data in training and adds a regularization term to prevent the logits of replay data from changing. (3) **LDBR** (Huang et al., 2021)
uses information disentanglement based regularization and selects replay samples through K-means clustering. We experiment with different buffer sizes by storing 1%, 3%, and 5% of previous training data. Other training hyper-parameters are in accord with the non-exemplar based setting.
| Method | Key | Value | Note |
|----------|-------|--------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------|
| EWC | λ | 5,000 | The weight for penalty, selected from [500, 1,000, 2,000, 5,000, 10,000, 20,000, 50,000]. |
| KD | λ | 0.1 | The weight for knowledge distillation loss, selected from [0.1, 0.5, 1.0]. |
| L2P | M | 10 | The total number of prompts, following the original paper. |
| N | 5 | The number of dynamically selected prompts, following the original paper. | |
| λ | 0.5 | The weight of key selection loss, following the original paper. | |
| LAMOL | γ | 0.2 | The sampling ratio of pseudo replay data, following the original paper. |
| PAGeR | λ1 | 1 | The weight of the generation loss and distillation loss, following the original paper. |
| λ2 | 0.25 | The weight of the replay data generation loss, following the original paper. | |
| λ3 | 0.25 | The weight of the supervised contrastive training loss, following the original paper. | |
| γ | 0.2 | Refer to γ in LAMOL. | |
| ACM | γ | 0.01 | The entropy coefficient, using the default value of the official implementation. |
| c | 0.15 | The initialization of the coefficient weights, using the default value of the official implementation. | |
![10_image_0.png](10_image_0.png)
## B.4 Standard Deviations
In §4.2, we evaluated our proposed system VAG in both non-exemplar and exemplar-based CIL setting.
Table 5 and Table 6 give the standard deviations of the reported results.
## C Confusion Matrices
In §4.3, we analyze the effectiveness of each component in the proposed VAG system. To study the effect of the VAG loss, we compare the confusion matrices of "VAG (full)" and "w/o VAG loss". As shown in Figure 5, the VAG loss effectively prevents the model from having a strong bias towards predicting the latest learned classes. Since the VAG loss limits the denominator to the vocabulary used by the current task, training with the VAG loss interferes less with previous task knowledge, thus yielding better final performance.
## D Analysis Of Label-Based Replay Ratio
As discussed in §3.3, VAG samples λ|Dt| pseudo replay data instances created by label-based data augmentation and combines them with Dt as the
![10_image_1.png](10_image_1.png)
training data. Here, we analyze the impact of different label-based replay ratios λ. Figure 6 shows the results. We observe that a small amount of label-based replay data can already yield improvements, and the results are similar when we further increase the label-based replay ratio λ. We set λ to 0.1 in our main experiments (see §4).
## E Neural Collapse With Different Methods
As discussed in §3.1, we find the generation framework can better preserve the representation ability of the pre-trained model in the CIL process. Table 7 gives the neural collapse metric N C of different methods after CIL. In general, after the continual learning process, all the models have lower N C
compared with the original PLM, especially when we fine-tuned the PLM using the traditional classifier framework. We also observe that while we modify the generation loss in the VAG system, its desired property is retained and our proposed CIL
| Softmax Classifier | Generation | | | | | | | | | | | | |
|----------------------|--------------|-------|-------|-----------|-------|-------|-------|-------|-------|-------|-------|--------|-------|
| Vanilla | EWC | KD | L2P | Vanilla-G | EWC-G | KD-G | L2P-G | LAMOL | PAGeR | ACM | VAG | Non-CL | |
| CLINC150 | ±0.56 | ±0.50 | ±1.50 | ±0.34 | ±2.95 | ±1.72 | ±1.44 | ±4.99 | ±0.74 | ±3.04 | ±2.50 | ±1.54 | ±0.67 |
| Banking77 | ±0.68 | ±0.51 | ±0.46 | ±0.40 | ±3.28 | ±2.02 | ±0.83 | ±3.01 | ±0.92 | ±2.78 | ±1.54 | ±0.37 | ±0.94 |
| 20News | ±0.02 | ±0.01 | ±0.04 | ±0.35 | ±3.43 | ±5.04 | ±2.02 | ±1.69 | ±2.80 | ±1.55 | ±2.55 | ±3.81 | ±0.35 |
| FewRel | ±0.30 | ±0.55 | ±1.06 | ±0.68 | ±1.26 | ±1.14 | ±1.13 | ±3.43 | ±1.41 | ±1.69 | ±1.88 | ±1.29 | ±0.73 |
| TACRED | ±1.09 | ±0.29 | ±1.33 | ±0.30 | ±1.08 | ±1.36 | ±1.30 | ±0.94 | ±0.26 | ±1.08 | ±1.76 | ±0.59 | ±0.33 |
Table 5: Standard deviations of the proposed VAG system and the baselines in non-exemplar based class-incremental learning setting. The corresponding averaged results are in Table 1.
| | VAG (non-exemplar) | ER (1%) | DER++ (1%) | LDBR (1%) | VAG (1%) | ER (3%) | DER++ (3%) | LDBR (3%) | VAG (3%) | ER (5%) | DER++ (5%) | LDBR (5%) | VAG (5%) |
|-----------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| CLINC150 | ±1.54 | ±8.42 | ±7.90 | ±1.75 | ±0.56 | ±3.35 | ±0.83 | ±1.52 | ±0.88 | ±1.40 | ±1.17 | ±0.50 | ±1.05 |
| Banking77 | ±0.37 | ±6.38 | ±2.78 | ±1.80 | ±1.95 | ±2.24 | ±1.21 | ±0.09 | ±1.72 | ±2.77 | ±1.69 | ±2.48 | ±1.18 |
| 20News | ±3.81 | ±1.01 | ±1.41 | ±0.04 | ±0.39 | ±0.28 | ±0.28 | ±0.35 | ±0.49 | ±0.28 | ±0.07 | ±0.34 | ±0.28 |
| FewRel | ±1.29 | ±3.37 | ±4.91 | ±1.46 | ±0.94 | ±0.92 | ±1.41 | ±1.41 | ±0.65 | ±0.72 | ±1.21 | ±1.74 | ±0.63 |
| TACRED | ±0.59 | ±3.85 | ±3.97 | ±0.71 | ±2.02 | ±2.61 | ±4.30 | ±1.47 | ±3.24 | ±2.96 | ±1.75 | ±1.43 | ±0.99 |
Table 6: Standard deviations of the proposed VAG system and the baselines for class-incremental learning setting with different buffer sizes. The corresponding averaged results are in Table 2.
Table 7: N C of models before and after class-incremental learning with different training methods.

| | PLM (before CIL) | Vanilla | Vanilla-G | VAG |
|-----------|--------|---------|-----------|--------|
| CLINC150 | 65.84 | 8.70 | 53.47 | 57.24 |
| Banking77 | 109.55 | 46.34 | 72.34 | 71.04 |
| 20News | 15.92 | 2.16 | 13.95 | 15.51 |
| FewRel | 321.09 | 77.31 | 170.25 | 190.09 |
| TACRED | 46.79 | 32.78 | 40.54 | 45.54 |
framework preserves the representation ability of the PLM to the greatest extent.
## ACL 2023 Responsible NLP Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
We include a "Limitations" section in the paper.
✓ A2. Did you discuss any potential risks of your work?
We include an "Ethics Statement" section in the paper.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1 summarize the paper's main claims and the main components of our proposed system.
✗ A4. Have you used AI writing assistants when working on this paper?
I didn't use AI writing assistants for this work.
## B ✓ **Did You Use Or Create Scientific Artifacts?** We use public datasets in Section 4.
✓ B1. Did you cite the creators of artifacts you used?
We cite the public datasets we use in Section 4.1.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
We discuss the license in Appendix B.1.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. We used public datasets only for model evaluation and did not create any artifact.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We use public datasets which haven't been reported to have any offensive content or ethics issue.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
We discuss the dataset details in Appendix B.1.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
We include the dataset statistics in Appendix B.1.
C ✓ **Did you run computational experiments?**
We run computational experiments in Section 4.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix B.2 report the number of parameters in the models used and the computing infrastructure.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix B.2 reports the experimental setup.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
We report average results in Section 4 and report the standard deviations in Appendix B.4.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
We report the implementation, model and parameter settings in Appendix B.2.
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
tsvilodub-franke-2023-evaluating | Evaluating pragmatic abilities of image captioners on {A}3{DS} | https://aclanthology.org/2023.acl-short.110 | Evaluating grounded neural language model performance with respect to pragmatic qualities like the trade off between truthfulness, contrastivity and overinformativity of generated utterances remains a challenge in absence of data collected from humans. To enable such evaluation, we present a novel open source image-text dataset {``}Annotated 3D Shapes{''} (A3DS) comprising over nine million exhaustive natural language annotations and over 12 million variable-granularity captions for the 480,000 images provided by Burgess {\&} Kim (2018).We showcase the evaluation of pragmatic abilities developed by a task-neutral image captioner fine-tuned in a multi-agent communication setting to produce contrastive captions. The evaluation is enabled by the dataset because the exhaustive annotations allow to quantify the presence of contrastive features in the model{'}s generations. We show that the model develops human-like patterns (informativity, brevity, over-informativity for specific features (e.g., shape, color biases)). | # Evaluating Pragmatic Abilities Of Image Captioners On A3Ds
Polina Tsvilodub and **Michael Franke**
Department of Linguistics University of Tübingen
{polina.tsvilodub, michael.franke}@uni-tuebingen.de
## Abstract
Evaluating grounded neural language model performance with respect to pragmatic qualities like the trade off between truthfulness, contrastivity and overinformativity of generated utterances remains a challenge in absence of data collected from humans. To enable such evaluation, we present a novel open source image-text dataset "Annotated 3D Shapes" (A3DS) comprising over nine million *exhaustive* natural language annotations and over 12 million variablegranularity captions for the 480,000 images provided by Burgess and Kim (2018). We showcase the evaluation of pragmatic abilities developed by a task-neutral image captioner fine-tuned in a multi-agent communication setting to produce *contrastive* captions. The evaluation is enabled by the dataset because the exhaustive annotations allow to quantify the presence of contrastive features in the model's generations. We show that the model develops human-like patterns (informativity, brevity, over-informativity for specific features (e.g.,
shape, color biases)).
## 1 Introduction And Related Work
In human communication, language is rarely used as a unimodal channel; rather, language is mostly used in reference to the surroundings, i.e., it is grounded in the physical world. Thus, in order to build artificial agents that could be potentially employed in scenarios requiring natural communication with humans, it is crucial to develop approaches for training such agents to communicate about the world in a human-like way (Lake et al., 2017). However, automatically evaluating the human-likeness of a trained system without costly human feedback is a recurring problem in NLP.
In this paper, we set out to provide tools for evaluating human-like pragmatic abilities of grounded models and evaluate a model trained interactively via reinforcement learning, which is commonly suggested to give rise to task-oriented behavior
(Lazaridou and Baroni, 2020).
Grounding of neural language models has been advanced greatly in recent years through *image* captioning models. Starting with the work by Vinyals et al. (2016) and Karpathy et al. (2014),
neural encoder-decoder architectures have been dominating the field, recently extending to unified architectures (Zhou et al., 2020). However, these approaches are *task neutral*, i.e., the models are trained to produce generally true image captions.
In contrast, humans are highly flexible and *pragmatic* in their use of language and, e.g., adapt the granularity of their utterances to the requirements of the communicative task (Searle, 1969). It is generally guided by conversational maxims, suggesting that cooperative speakers should only provide as much information as required in a given context, be truthful, relevant, and brief (Grice, 1975).
Therefore, faced with a simple referential task of picking out a target item among an array of distractors, humans tend to mention *contrastive* features of the target (e.g., Kramer and van Deemter, 2012), i.e., the ones setting it apart from distractors. On the other hand, *biases* towards producing shape and color descriptions even when these aren't contrastive have been identified (e.g., Degen et al., 2020). For grounded language models, the underlying pragmatic reasoning formalized as nested Bayesian inference about the behavior of speakers and listeners (Goodman and Frank, 2016)
inspired decoding schemes applied on top of standardly trained models (e.g., Cohn-Gordon et al.,
2018; Zarrieß et al., 2021; Shen et al., 2019; Vedantam et al., 2017; Andreas and Klein, 2016).
However, evaluating the pragmatic qualities of models' predictions when they are applied to specific tasks (e.g., referential tasks) remains a challenge. Currently standard metrics like BLEU-n, ROUGE, CIDEr and METEOR (Papineni et al., 2002; Banerjee and Lavie, 2005; Vedantam et al.,
![1_image_0.png](1_image_0.png)
2015; Lin, 2004) for evaluating models' generations make reference to the surface form of ground truth image annotations. They cannot provide insight into models' mechanics and possible biases based on *context-dependent functional aspects* like mentioning contrastive features or being overinformative. Given that model predictions might not always be syntactically well-formed and yet still count as functionally expedient for a human (e.g., see Fig. 1), evaluating pragmatic aspects of natural language image captions is important. We propose a new dataset and metrics facilitating such evaluation in the next sections.
## 2 Methods

## 2.1 A3DS
To enable such evaluation, we provide novel annotations for the dataset 3DShapes (Burgess and Kim, 2018) (introduced in Kim and Mnih (2018))
in the "Annotated 3D Shapes" (A3DS) dataset. The image dataset consists of 480,000 unique images of 3D geometric objects, constructed by varying six features (×number of distinct feature values): shape type (×4), shape color (×10), shape scale
(×8), shape orientation relative to the background
(×15), wall color (×10) and floor color (×10). For each image, two sets of ground truth captions were generated: *exhaustive* captions mentioning all six features and their values, and *short* captions, mentioning two or three features of the image only (see example annotation in Fig. 1). The captions were constructed with a hand-written grammar from the numeric labels shipped with the original dataset.
1 The last token was predicted nine times. This shows how the caption can be contrastive for the given task in spite of surface form artefacts.
For each distinct feature value, different natural language descriptions were created. In total, over nine million exhaustive captions and 12 million short captions are released as part of this work.2 The important advantage of this synthetic dataset for investigating referential language use of models trained on it is that the numeric labels allow to easily identify *contrastive* versus *redundant* features of the target image in any given context of distractor images. Furthermore, training with fully exhaustive captions allows to focus evaluations on models' contrastive abilities, excluding insufficient granularity of training data as a potential reason for a system's failure to be contrastive.
Because all natural language expressions for each label are known, it is possible to comprehensively evaluate model predictions by-feature.
Predictions of fine-tuned models which may deviate from ground truth captions in their surface form
(e.g., due to language drift; see, e.g., Lazaridou et al. (2020)) can also be evaluated. We consider a caption contrastive if at least one of the known contrastive features for a given context (target and distractors) is mentioned in the target's description.
For contrastive color features, a caption is considered contrastive if it mentions the respective color irrespective of other mentioned aspects, if the color is unique for the target. If several features in the target image have the same color, the description is considered contrastive only if the color name occurs together with the correct head noun (e.g.,
"floor", "wall", object shape). For other contrastive features like shape, the respective expression (e.g., "ball", "in the left corner") has to literally occur in the generated caption. For the example, in Fig. 1, we were able to identify that the caption is contrastive because the contrastive feature is the red color of the ball in the target image (left), there is only one red feature in the target image, and the generated caption contains the term "red".
We suggest informative metrics for evaluating pragmatic abilities of models on this dataset in the next section.
## 2.2 Evaluation Metrics
2 https://tinyurl.com/2p8w6rct. The repository also contains endpoints for running model evaluations described in the next section and a sandboxed version of the dataset and the pretrained model for easy exploration.

The metrics are informed by notions that are considered important in the cognitive science literature for cooperative and efficient pragmatic communication (e.g., Grice, 1975) and used commonly in the literature on computational generation of referring expressions (e.g., Kramer and van Deemter, 2012). In the context of a reference task, we define pragmatically relevant categories of features a model might mention. Given a target and distractor image, each feature falls in one of the following three categories:
- *Contrastive* feature: true of target and false of distractor.
- *Non-contrastive* feature: true of both the target and the distractor, and, therefore, redundant for the purpose of reference.
- *False* feature: false of the target.
From these categories, we derive the following metrics (higher values are better), where c is the number of contrastive features mentioned in a generated caption y, k is the total number of features mentioned in y, and z is the ground truth number of contrastive features between the images:
- *Discriminativity* d: d = 1 if c > 0 else 0, indicating if the caption successfully identifies the target, thus a binary measure of task success.
- *Contrastive efficiency* e (applies only to discriminative captions, i.e., for d = 1): e = 1 if k = c = 1, else e = 1 − (c − 1)/(k − 1), indicating whether the description avoids overmodification with contrastive features. This notion captures the extent to which the caption is economic and observes the communicative Maxim of Quantity, i.e., includes necessary details for the task but not more (Grice, 1975).
- *Relevance* r: r = 1 − (k − c)/(6 − z), indicating the propensity to avoid producing redundant non-contrastive features. This is formalized via the proportion of mentioned non-contrastive features (k − c) compared to all non-contrastive features (6 − z). It represents the communicative Maxim of Relevance (Grice, 1975) by measuring the degree to which details unnecessary for the task are excluded.
- *Optimal discriminativity* od: od = 1 if c = 1 else 0. It is a binary indicator summarizing d and e, by binarizing the observance of the Maxim of Quantity for contrastive captions only (Grice, 1975).
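As a compact reference, the four scores can be computed from per-caption counts as follows; this is a sketch based on the definitions above, and the guard for the degenerate case z = 6 is an added assumption not spelled out in the text.

```python
# c = mentioned contrastive features, k = all mentioned features,
# z = ground-truth contrastive features (out of the 6 annotated features).
def pragmatic_scores(c: int, k: int, z: int, n_features: int = 6):
    d = 1.0 if c > 0 else 0.0                                   # discriminativity
    # contrastive efficiency is only defined for discriminative captions (d = 1)
    if d == 1.0:
        e = 1.0 if k == c == 1 else 1.0 - (c - 1) / (k - 1)
    else:
        e = None
    # guard for z == n_features (assumption): no non-contrastive features exist, so r = 1
    r = 1.0 if z == n_features else 1.0 - (k - c) / (n_features - z)
    od = 1.0 if c == 1 else 0.0                                 # optimal discriminativity
    return d, e, r, od
```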
In the next section, we showcase how these metrics can be applied in order to evaluate the development of pragmatic abilities of an image captioner through fine-tuning in an interactive setting.
## 2.3 Experiment
The multi-agent communication setting wherein the image captioner is trained as the sender agent together with an artificial receiver agent to complete a communicative task (e.g., reference game) allows to fine-tune the sender's captioning behavior based directly on task performance, e.g., via deep reinforcement learning (e.g., Lazaridou et al.,
2020; Lazaridou and Baroni, 2020; Lazaridou et al.,
2016; Havrylov and Titov, 2017), without making use of a supervised task specific dataset. Applied to the reference task, the idea is that the sender agent will learn to produce more contrastive descriptions which are helpful for the receiver to complete the task. Lazaridou et al. (2020) compare sender agent architectures in terms of their taskspecific improvement, but they do not investigate properties like overinformativity that might have emerged during the multi-agent training.
To investigate these potential effects, following the "multi-task learning" training regime from Lazaridou et al. (2020), we pretrained a *baseline* image captioner (B) on 150,000 image-exhaustive caption pairs constructed from 30,000 images sampled from A3DS. It was then fine-tuned on another 150,000 pairs in a reference game together with a listener agent. In the reference game, both agents received concatenated pairs of images i = [i1; i2],
where it, t ∈ {1, 2} was the target known only to the sender. The sender was trained to produce a description of the target, so that the listener guesses the target correctly, given the same images in randomized order. The sender received the reward r = 1 if the guess was correct, and r = −1 otherwise. Both the sender and the listener consisted of a pretrained ResNet-50 image encoder which was not fine-tuned during the reference game, and a trainable linear layer projecting the ResNet image features to 512-dimensional features. These were input into one-layer LSTM language modules with the hidden layer size h = 512. Further architectural and training details followed Lazaridou et al.
(2020).3 We trained two sender-agent pairs in the reference game setting: in the *random pairs* setting (RP),
3The weight λs for the speaker loss was set to 0.75.
| one feature | two features | three features | | | | | | | |
|------------------------|----------------|------------------|-------|-------|-------|-------|-------|-------|-------|
| Score | B | RP | SP | B | RP | SP | B | RP | SP |
| Discriminativity | 0.999 | 0.822 | 0.824 | 0.997 | 0.576 | 0.586 | 0.984 | 0.527 | 0.541 |
| Contrastive efficiency | 0.198 | 0.879 | 0.875 | 0.203 | 0.963 | 0.955 | 0.251 | 0.856 | 0.875 |
| Relevance | 0.150 | 0.668 | 0.640 | 0.162 | 0.522 | 0.521 | 0.149 | 0.684 | 0.665 |
| Optimal contrastivity | 0.014 | 0.457 | 0.452 | 0.039 | 0.485 | 0.476 | 0.148 | 0.335 | 0.367 |
| Mentioned features # | 5.880 | 2.944 | 3.125 | 5.871 | 2.950 | 3.133 | 5.876 | 2.955 | 3.135 |
| Listener accuracy | - | 0.919 | 0.895 | - | 0.887 | 0.900 | - | 0.862 | 0.860 |
![3_image_0.png](3_image_0.png)
the agents saw pairs of (distinct) images selected at random. In the *similar* pairs setting (SP), they received images which had at least three overlapping features (e.g., target and distractor depicted the same shape of the same color with background of the same color).4
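To make the fine-tuning signal concrete, the sketch below shows one way the sender's reward-weighted term can be combined with a structural captioning loss, following the multi-task recipe of Lazaridou et al. (2020); treating λs = 0.75 as the weight on this speaker term and omitting a reward baseline are simplifying assumptions.

```python
# Schematic sender update for the reference game: the sampled caption's token
# log-probabilities are scaled by the ±1 listener reward (REINFORCE) and mixed
# with a structural captioning loss. Not the authors' exact training code.
import torch

def sender_loss(token_log_probs: torch.Tensor, reward: float,
                structural_loss: torch.Tensor, lambda_s: float = 0.75) -> torch.Tensor:
    # token_log_probs: (seq_len,) log-probs of the sampled caption tokens
    # reward: +1.0 if the listener picked the target, -1.0 otherwise
    policy_term = -(reward * token_log_probs.sum())
    return lambda_s * policy_term + (1.0 - lambda_s) * structural_loss
```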
## 3 Results
The agents were evaluated on three categories of test sets, each set containing 7500 image pairs. In the *one-feature* category, six sets were constructed where test pairs matched at least on one of each possible features. The *two-features* category included three sets of pairs matched on at least two object features and a set with two random matching features. The *three-features* category included sets where at least all object features, all background features, or three randomly sampled features matched. These sets allowed to evaluate in which conditions it was more difficult for the sender to produce appropriate captions. In the following, the fine-tuned sender models (RP and SP) are compared to the baseline model (B), which is the pretrained task-neutral image captioner. The average number of falsely named features was 0.720 for baseline, 0.139 (RP) and 0.316 (SP). Table 1 shows listener test accuracies on all test splits, showing that the agents successfully learned the reference task (0.5 is chance). In terms of discriminativity d, it was more difficult for the fine-tuned models to identify the correct feature when two or three features were identical across the pair (Table 1).
These average difficulties were driven by the failure on test sets where the non-contrastive features included shape (e.g., a pair showing a red vs. a blue block), indicating that the shape was easiest to pick up on for the models, although all features were mentioned in all training captions. For instance, d was 0.750 for SP on the object color-scale matched test set, and 0.724 on the random two-feature test set, but 0.501 on the shape-object color matched set.
The discriminativity on random and background feature matched three-feature test sets was 0.618 | 0.875 (RP) and 0.854 | 0.605 (SP), while it was only 0.087 (RP) and 0.164 (SP) on the object feature matched test set. The better contrastive performance of the baseline came at a cost of generally overmodifying the messages with contrastive features (see low contrastive efficiency, Table 1). Low relevance scores also show that the baseline did not identify functionally appropriate features well. In contrast, both fine-tuned models showed higher contrastive efficiency and relevance, indicating that the task based fine-tuning might have helped the models to learn contrastiveness. The fine-tuned models also showed higher optimal constrastivity which is, however, still far from perfect. In general, no qualitative differences between the two- and threefeature datasets or RP and SP settings are apparent.
Figure 2 shows how frequently the models' predictions mentioned a specific feature when it was contrastively *irrelevant* (i.e., it zooms in on predictions where r < 1). For the fine-tuned models, it suggests potential biases towards redundantly producing object-related features (shape, scale, color of object), matching human biases (see Section 1),
as opposed to background descriptions. The proportions slightly increase for object color and scale in the two- and three-feature test sets, potentially hinting at overmodification as the model's loophole behavior in a more complex setting. The SP
model has a stronger redundancy propensity than RP. The apparent trend towards mentioning shape is in line with the pattern of discriminativity results described above where models relied on the shape being the discriminative feature between target and distractor.
## 4 Conclusion
We provide the A3DS dataset alongside evaluation metrics for investigating referential pragmatic abilities acquired by grounded language models on this dataset. With this dataset, we identify that an image captioner fine-tuned interactively via reinforcement learning developed a strikingly human-like shape bias, while being less overinformative than a task-neutral model. Future research could expand such evaluations by including metrics which investigate additional aspects that might matter to human referential expression generation (e.g., the current metrics are agnostic to the surface order of discriminative features, while humans have preferences towards certain adjective ordering; Scontras et al. (2017)). Although these results are specific to the given architecture, with this work we hope to inspire research opening up black box language models—an important task in the age of LLMs.
## Limitations
The identified tendencies towards mentioning object-related features and the reliance on the shape as a contrastive feature might be driven by the grammatical structure of the annotations, mostly presenting object features in sentence-initial subject position, although 40% of exhaustive captions mention either the scale or the object color as the last word in the sentence. Therefore, these results call for investigating the biases of model architectures less sensitive to sentence length than LSTMs, as well as extending the annotations with additional grammars. Further, this evaluation provides descriptive results of the models' pragmatic abilities, leaving the question of whether it is indeed a pragmatic inductive bias or, e.g., structural language drift (Lazaridou et al., 2020) causing the observed patterns, unanswered. Finally, since the evaluation pertains to the surface form of the predictions, applying decoding schemes other than greedy decoding used in this work might provide different patterns, indicating to which degree potential biases are due to model mechanics in opposition to sampling parameters.
## Acknowledgements
We would like to thank Elia Bruni for his support of the work which led to this paper, and Xenia Ohmer and Leon Schmid for helpful discussions. We also acknowledge support by the state of Baden-Württemberg through the computing resources provided by bwHPC and the German Research Foundation (DFG) through grant INST
35/1597-1 FUGG. Michael Franke is a member of the Machine Learning Cluster of Excellence, EXC
number 2064/1 - Project number 39072764.
## References
Jacob Andreas and Dan Klein. 2016. Reasoning about pragmatics with neural listeners and speakers. In *Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing*, pages 1173–
1182, Austin, Texas. Association for Computational Linguistics.
Satanjeev Banerjee and Alon Lavie. 2005. METEOR:
An automatic metric for MT evaluation with improved correlation with human judgments. In *Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization*, pages 65–72, Ann Arbor, Michigan. Association for Computational Linguistics.
Chris Burgess and Hyunjik Kim. 2018. 3d shapes dataset. https://github.com/deepmind/3dshapesdataset/.
Reuben Cohn-Gordon, Noah Goodman, and Christopher Potts. 2018. Pragmatically informative image captioning with character-level inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 439–443, New Orleans, Louisiana. Association for Computational Linguistics.
Judith Degen, Robert D Hawkins, Caroline Graf, Elisa Kreiss, and Noah D Goodman. 2020. When redundancy is useful: A Bayesian approach to "overinformative" referring expressions. *Psychological Review*,
127(4):591.
Noah D Goodman and Michael C Frank. 2016. Pragmatic language interpretation as probabilistic inference. *Trends in cognitive sciences*, 20(11):818–829.
Herbert P Grice. 1975. Logic and conversation. In Speech acts, pages 41–58. Brill.
Serhii Havrylov and Ivan Titov. 2017. Emergence of language with multi-agent games: Learning to communicate with sequences of symbols. *Advances in* neural information processing systems, 30.
Andrej Karpathy, Armand Joulin, and Li F Fei-Fei. 2014.
Deep fragment embeddings for bidirectional image sentence mapping. Advances in neural information processing systems, 27.
Hyunjik Kim and Andriy Mnih. 2018. Disentangling by factorising. In *Proceedings of the 35th International Conference on Machine Learning*, volume 80 of *Proceedings of Machine Learning Research*, pages 2649–2658. PMLR.
Emiel Kramer and Kees van Deemter. 2012. Computational generation of referring expressions: A survey.
Computational Linguistics, 38(1):173–218.
Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. 2017. Building machines that learn and think like people. Behavioral and Brain Sciences, 40:e253.
Angeliki Lazaridou and Marco Baroni. 2020. Emergent multi-agent communication in the deep learning era.
ArXiv, abs/2006.02419.
Angeliki Lazaridou, Alexander Peysakhovich, and Marco Baroni. 2016. Multi-agent cooperation and the emergence of (natural) language. *arXiv preprint* arXiv:1612.07182.
Angeliki Lazaridou, Anna Potapenko, and Olivier Tieleman. 2020. Multi-agent communication meets natural language: Synergies between functional and structural language learning. In *Proceedings of the 58th* Annual Meeting of the Association for Computational Linguistics, pages 7663–7674, Online. Association for Computational Linguistics.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain.
Association for Computational Linguistics.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Gregory Scontras, Judith Degen, and Noah D Goodman. 2017. Subjectivity predicts adjective ordering preferences. *Open Mind*, 1(1):53–66.
John R Searle. 1969. Speech acts: An essay in the philosophy of language, volume 626. Cambridge university press.
Sheng Shen, Daniel Fried, Jacob Andreas, and Dan Klein. 2019. Pragmatically informative text generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4060–4067, Minneapolis, Minnesota. Association for Computational Linguistics.
Ramakrishna Vedantam, Samy Bengio, Kevin Murphy, Devi Parikh, and Gal Chechik. 2017. Context-aware captions from context-agnostic supervision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 251–260.
Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In *Proceedings of the IEEE* conference on computer vision and pattern recognition, pages 4566–4575.
Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2016. Show and tell: Lessons learned from the 2015 MSCOCO image captioning challenge.
IEEE transactions on pattern analysis and machine intelligence, 39(4):652–663.
Sina Zarrieß, Hendrik Buschmeier, Ting Han, and Simeon Schüz. 2021. Decoding, fast and slow: A case study on balancing trade-offs in incremental, character-level pragmatic reasoning. In *Proceedings* of the 14th International Conference on Natural Language Generation, pages 371–376, Aberdeen, Scotland, UK. Association for Computational Linguistics.
Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason Corso, and Jianfeng Gao. 2020. Unified visionlanguage pre-training for image captioning and VQA.
In *Proceedings of the AAAI Conference on Artificial* Intelligence, volume 34, pages 13041–13049.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations
✗ A2. Did you discuss any potential risks of your work?
The work is confined to a synthetic dataset depicting and describing geometric objects, such that the used tools cannot be directly applied to socially relevant scenarios which might pose risks. The used architectures were light-weight, such that training theoretically doesn't require resources beyond modern laptop hardware.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
abstract, section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 2.1
✓ B1. Did you cite the creators of artifacts you used?
abstract, 2.1
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
The license and terms are provided in the online repository released with the paper. The original data source distributed the data under Apache 2.0 allowing reuse.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Given the original Apache 2.0 license, the original data allows both research, private and commercial use, therefore not imposing any limitations. The submitted online repository providing the data provides the same conditions for the derivatives.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
The data only contains descriptions of abstract synthetically generated geometric shapes.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
2.1
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Sections 2.3., 3 provide train/test split set sizes and construction statistics.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
## C ✓ **Did You Run Computational Experiments?** 2.3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
2.3
✗ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
No hyperparameter search was conducted. Since the computational experiment architecture replicates existing cited work, parameters reported there or single selected parameters were used.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
3
✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
The used Spacy model is reported in the supplementary online repository documentation exposing the newly created resource.
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
wang-etal-2023-art | The Art of Prompting: Event Detection based on Type Specific Prompts | https://aclanthology.org/2023.acl-short.111 | We compare various forms of prompts to represent event types and develop a unified framework to incorporate the event type specific prompts for supervised, few-shot, and zero-shot event detection. The experimental results demonstrate that a well-defined and comprehensive event type prompt can significantly improve event detection performance, especially when the annotated data is scarce (few-shot event detection) or not available (zero-shot event detection). By leveraging the semantics of event types, our unified framework shows up to 22.2{\%} F-score gain over the previous state-of-the-art baselines. | # The Art Of Prompting: Event Detection Based On Type Specific Prompts Sijia Wang♣, Mo Yu♠**, Lifu Huang**♣
♣Virginia Tech, ♠WeChat AI
♣{sijiawang,lifuh}@vt.edu,♠[email protected]
## Abstract
We compare various forms of prompts to represent event types and develop a unified framework to incorporate the event type specific prompts for supervised, few-shot, and zeroshot event detection. The experimental results demonstrate that a well-defined and comprehensive event type prompt can significantly improve event detection performance, especially when the annotated data is scarce (fewshot event detection) or not available (zero-shot event detection). By leveraging the semantics of event types, our unified framework shows up to 22.2% F-score gain over the previous stateof-the-art baselines1.
## 1 Introduction
Event detection (ED) (Grishman, 1997; Chinchor and Marsh, 1998; Ahn, 2006) is the task of identifying and typing event mentions from natural language text. Supervised approaches, especially deep neural networks (Chen et al., 2020; Du and Cardie, 2020; Lin et al., 2020; Liu et al., 2020; Li et al.,
2020; Lyu et al., 2021), have shown remarkable performance under a critical prerequisite of a large amount of manual annotations. However, they cannot be effectively generalized to new languages, domains or types, especially when the annotations are not enough (Huang et al., 2016; Huang and Ji, 2020; Lai et al., 2020b; Shen et al., 2021) or there is no annotation available (Lyu et al., 2021; Zhang et al., 2021b; Pasupat and Liang, 2014).
Recent studies have shown that both the accuracy and generalizability of ED can be improved via leveraging the semantics of event types based on various forms of prompts, such as event type specific queries (Lyu et al., 2021; Du and Cardie, 2020; Liu et al., 2020), definitions (Chen et al.,
2020), structures (Lin et al., 2020; Wang et al.,
1 The source code, model checkpoints and data are publicly available at https://github.com/VT-NLP/Event_APEX.
2019), or a few prototype event triggers (Wang and Cohen, 2009; Dalvi et al., 2012; Pasupat and Liang, 2014; Bronstein et al., 2015; Lai and Nguyen, 2019; Zhang et al., 2021b; Cong et al., 2021). These studies further encourage us to take another step forward and think about the following three questions:
(1) does the choice of prompt matter when the training data is abundant or scarce? (2) what's the best form of ED prompt? (3) how to best leverage the prompt to detect event mentions?
To answer the above research questions, we conduct extensive experiments with various forms of prompts for each event type, including (a) *event* type name, (b) *prototype seed triggers*, (c) *definition*, (d) *event type structure* based on both event type name and its predefined argument roles, (e) free parameter based *continuous soft prompt*, and
(f) a more comprehensive event type description
(named *APEX prompt*) that covers all the information of prompts (a)-(d). We observe that (1) by considering the semantics of event types with most forms of prompts, especially seed triggers and the comprehensive event type descriptions, the performance of ED under all settings can be significantly improved; (2) Among all forms of event representations, the comprehensive description based prompts show to be the most effective, especially for fewshot and zero-shot ED; (3) Different forms of event type representations provide complementary improvements, indicating that they capture distinct aspects and knowledge of the event types.
The contributions of this work are as follows:
- We investigate various prompts to represent event types for both supervised and weakly supervised ED, and prove that a well-defined and comprehensive event type prompt can dramatically improve the performance of ED and the transferability from old types to new types.
- A unified framework is developed to leverage the semantics of event types with prompts for supervised, few-shot, and zero-shot ED, and demonstrate state-of-the-art performance with up to 22.2% Fscore improvement over the strong baseline methods.
## 2 Related Work
Supervised ED: Most of the existing Event Detection studies follow a supervised learning paradigm (Ji and Grishman, 2008; Liao and Grishman, 2010; McClosky et al., 2011; Li et al.,
2013; Chen et al., 2015; Cao et al., 2015; Feng et al., 2016; Yang and Mitchell, 2016; Nguyen et al.,
2016; Zhang et al., 2017; Lin et al., 2020; Wang et al., 2021b). However, they cannot be directly applied to detect new types of events. Recently studies have shown that, by leveraging the semantics of event types based on type-specific questions (Du and Cardie, 2020; Liu et al., 2020; Li et al., 2020; Lyu et al., 2021) or seed event triggers (Bronstein et al., 2015; Lai and Nguyen, 2019; Wang et al.,
2021a), the event detection performance can be improved. However, it is still unknown whether they are the best choices for representing the semantics of event types.
Few-shot ED: Two primary learning strategies in few-shot classification tasks are MetaLearning (Kang et al., 2019; Li et al., 2021; Xiao and Marlet, 2020; Yan et al., 2019; Chowdhury et al., 2021) and Metric Learning (Sun et al., 2021; Wang et al., 2020b; Zhang et al., 2021a; Agarwal et al., 2021). Several studies have exploited metric learning to align the semantics of candidate events with a few examples of the novel event types for few-shot event detection (Lai et al., 2020a; Deng et al., 2020; Lai et al., 2020b; Cong et al., 2021; Chen et al., 2021; Shen et al., 2021).
Zero-shot ED: Huang et al. (2018) first exploited zero-shot event extraction by leveraging Abstract Meaning Representation (Banarescu et al.,
2013) to represent event mentions and types into a shared semantic space. Recent studies (Zhang et al.,
2021b; Lyu et al., 2021) further demonstrate that by leveraging a large external corpus with abundant anchor triggers, zero-shot event detection can also be achieved with decent performance without using any training data.
Prompt Learning Prompt learning aims to learn a task-specific prompt while keeping most of the model's parameters frozen (Li and Liang, 2021; Hambardzumyan et al., 2021; Brown et al., 2020).
It has shown competitive performance in many applications of natural language processing (Raffel et al., 2020; Brown et al., 2020; Shin et al., 2020; Jiang et al., 2020; Lester et al., 2021; Schick and Schütze, 2021b). Previous work either used a manual (Petroni et al., 2019; Brown et al., 2020; Schick and Schütze, 2021a) or automated approach (Jiang et al., 2020; Yuan et al., 2021; Li and Liang, 2021)
to create prompts.
## 3 Problem Formulation
Here, we first define each setting of the event detection task and then describe the various forms of event type prompts.
## 3.1 Settings Of Ed
For supervised ED (SED), we follow the conventional supervised event detection setting where the training, validation, and evaluation data sets cover the same set of event types. The goal is to learn a model f to identify and classify event mentions for the target event types.
For few-shot ED (FSED), there are two separate training data sets for few-shot event detection:
(1) A large-scale data set D_base = {(x_i, y_i)}_{i=1}^{M} that covers the old event types (named *base types*), where M denotes the number of base event types;
(2) a smaller data set D_novel = {(x_j, y_j)}_{j=1}^{N×K} that covers N novel event types, with K examples each. Note that the base and novel event types are disjoint except for the Other class. The model f will be first optimized on D_base, and then further fine-tuned on D_novel. The goal is to evaluate the generalizability and transferability of the model from base event types to new event types with few annotations.
For zero-shot ED (ZSED), the training data sets are the only difference between zero-shot and few-shot event detection. In zero-shot event detection, there is only a large-scale base training data set D_base = {(x_i, y_i)}_{i=1}^{M} for the base event types.
The model f will be only optimized on base event types and evaluated on the novel types.
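As a rough illustration of how the two training sets can be assembled from a labeled corpus, a sketch is given below; the variable names and the sampling routine are assumptions, not taken from the paper or its released code.

```python
# Build D_base (all base-type mentions) and D_novel (K mentions per novel type).
import random
from collections import defaultdict

def build_fsed_splits(corpus, base_types, novel_types, k=5, seed=0):
    """corpus: iterable of (sentence, event_type) mention pairs."""
    random.seed(seed)
    by_type = defaultdict(list)
    for sentence, event_type in corpus:
        by_type[event_type].append((sentence, event_type))
    # D_base: every annotated mention of a base type
    d_base = [ex for t in base_types for ex in by_type[t]]
    # D_novel: K sampled mentions per novel type (each type is required to
    # have enough instances, cf. the data-split conditions in Section 5)
    d_novel = [ex for t in novel_types for ex in random.sample(by_type[t], k)]
    return d_base, d_novel
```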
## 3.2 Event Type Prompts
We compare the following five forms of prompts to represent the event types: (a) **Event Type Name**
is the event class name, usually consisting of one to three tokens. (b) **Definition** can be a short sentence that formally describes the meaning of the event types. (c) **Prototype Seed Triggers** are a list of
![2_image_0.png](2_image_0.png)
tokens or phrases that are frequently identified as event triggers. (d) **Event Type Structure** consists of event key argument roles, indicating the core participants of the target event type. (e) Prompts can also be **Continuous Soft Prompt**, that is, a free vector of parameters to represent each event type.
(f) We further define a more comprehensive description **APEX Prompt** that is manually written and covers all previous prompts except soft prompts. Examples of all event type prompts are shown in Figure 1 and Appendix A. Detailed prompt token selection is in Appendix B.
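To make the contrast between these forms concrete, the snippet below lays out example prompts for one ACE type; the definition fragment echoes the guideline text quoted in the Limitations section, while the seed triggers and argument roles listed here are illustrative assumptions rather than the actual Appendix A prompts.

```python
# Illustrative layout of the prompt forms for one event type (not the released prompts).
transfer_money_prompts = {
    "type_name": "transfer money",
    "definition": "giving, receiving, borrowing, or lending money",
    "seed_triggers": "pay, donate, lend",
    "structure": "transfer money (giver, recipient, beneficiary, money)",
    # the APEX prompt merges the information of the forms above into one description
    "apex": "transfer money: a giver gives, lends, pays or donates money to a recipient",
}
```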
## 4 A Unified Framework For Ed
We adapt (Wang et al., 2021a) and design a unified event detection framework (as shown in Figure 1)
which leverages event type specific prompts to detect events under supervised, few-shot, and zero-shot settings. Formally, given an input sentence W = {w_1, w_2, . . . , w_n}, we take each event type prompt T^t = {τ^t_1, τ^t_2, . . . , τ^t_M} as a query of M tokens to extract triggers for event type t. Specifically, we first concatenate them into a sequence [CLS] τ^t_1 ... τ^t_M [SEP] w_1 ... w_n [SEP]. We use a pre-trained BERT encoder (Devlin et al., 2019) to get contextual representations for the input sentence W = {w_0, w_1, ..., w_n} as well as the event type prompt T^t = {τ^t_0, τ^t_1, ..., τ^t_M}.2

2 In our experiments, the representation of each w_i or τ_i is based on the contextual embedding of the first sub-token.

Given a prompt of each event type, we aim to extract corresponding event triggers from the input sentence. To achieve this goal, we need to capture the semantic correlation of each input token to the event type. Thus we learn a weight distribution over the sequence of contextual representations of the event type prompt, to obtain the event type t aware contextual representation A^t_i = Σ_{j=1}^{|T^t|} α_ij · τ^t_j, where α_ij = cos(w_i, τ^t_j), τ^t_j is the contextual representation of the j-th prompt token, and cos(·) is the cosine similarity function between two vectors.

With that, the event type aware contextual representation A^t_i will be concatenated with the original contextual representation w_i from the encoder, and classified into a binary label, indicating whether it is a candidate trigger of event type t or not: ỹ^t_i = U_o([w_i; A^t_i; P_i]), where [;] denotes the concatenation operation, U_o is a learnable parameter matrix for event trigger detection, and P_i is the one-hot part-of-speech (POS) encoding of word w_i. For continuous soft prompt based event detection, we follow Li and Liang (2021) where a prefix index q is prepended to the input sequence W′ = [q; W]. The prefix embedding is learned by q = MLP_θ(Q_θ[q]), where Q_θ ∈ R^{|Q|×k} denotes the embedding lookup table for the vocabulary of prefix indices. Both MLP_θ and Q_θ are trainable parameters. Detailed learning strategy is in Appendix C.
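The per-token scoring just described can be sketched as follows; this is an illustrative re-implementation, not the authors' released code, and the hidden size, POS dimension, and 2-way output layout are assumptions.

```python
# Prompt-aware trigger scorer: cosine attention from each sentence token to the
# prompt tokens, a weighted sum of prompt representations, then a per-token classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptTriggerScorer(nn.Module):
    def __init__(self, hidden: int = 768, pos_dim: int = 18):
        super().__init__()
        self.classifier = nn.Linear(2 * hidden + pos_dim, 2)   # plays the role of U_o

    def forward(self, W: torch.Tensor, T: torch.Tensor, pos_onehot: torch.Tensor):
        # W: (n, hidden) BERT representations of the sentence tokens
        # T: (m, hidden) BERT representations of the prompt tokens
        # pos_onehot: (n, pos_dim) one-hot POS encodings
        alpha = F.cosine_similarity(W.unsqueeze(1), T.unsqueeze(0), dim=-1)  # (n, m)
        A = alpha @ T                    # prompt-aware context: A_i = sum_j alpha_ij * tau_j
        logits = self.classifier(torch.cat([W, A, pos_onehot], dim=-1))
        return logits                    # per-token binary trigger scores for this event type
```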
## 5 Experiment Setup
We perform experiments on three public benchmark datasets, including ACE05-E+ (Automatic Content Extraction), ERE (Entity Relation Event) (Song et al., 2015),and MAVEN (Wang et al., 2020a). On each dataset, we conduct experiments for SED, FSED, and ZSED. For SED, we use the same data split as the previous studies (Li et al.,
2013; Wadden et al., 2019; Lin et al., 2020; Du and Cardie, 2020; Lin et al., 2020; Nguyen et al.,
2021; Wang et al., 2020a) on all the three benchmark datasets. For FSED and ZSED on MAVEN,
we follow the previous study (Chen et al., 2021)
and choose 120 event types with the most frequent mentions as the base event types and the rest 45 event types as novel ones. For FSED and ZSED on ACE and ERE, previous studies (Lai et al., 2020b,a;
Results of event detection (ED) on MAVEN (F1-score, %):

| Method | SED | FSED | ZSED |
|---|---|---|---|
| Previous SOTA | 68.5 (Wang et al., 2021b) | 57.0 (Chen et al., 2021) | 40.2∗ (Zhang et al., 2021b) |
| (a) Event type name | 68.8 | 63.4 | 58.8 |
| (b) Definition | 67.1 | 56.9 | 52.9 |
| (c) Seed triggers | 68.7 | 65.1 | 59.1 |
| (e) Soft prompt | 64.5 | 38.6 | - |
| Majority voting of (a-e) | 68.4 | 63.4 | 58.1 |
| (f) APEX Prompt | 68.8 | 68.4 | 59.9 |

| Method | SED | FSED | ZSED |
|---|---|---|---|
| Previous SOTA | 73.3 (Nguyen et al., 2021) | 35.2∗ (Lai et al., 2020b) | 49.1∗ (Zhang et al., 2021b) |
| (a) Event type name | 72.2 | 52.7 | 49.8 |
| (b) Definition | 73.1 | 46.7 | 45.5 |
| (c) Seed triggers | 73.7 | 53.8 | 49.6 |
| (d) Event structure | 72.8 | 50.4 | 48.0 |
| (e) Soft prompt | 68.1 | 48.2 | - |
| Majority voting of (a-e) | 73.9 | 52.1 | 48.7 |
| (f) APEX Prompt | 74.9 | 57.4 | 51.2 |
Table 1: Results of event detection (ED) on ACE05
(F1-score, %) ∗indicates evaluation on our data set split based on the authors' public implementations.
Results of event detection (ED) on ERE (F1-score, %):

| Method | SED | FSED | ZSED |
|---|---|---|---|
| Previous SOTA | 59.4 (Lu et al., 2021) | 33.0∗ (Lai et al., 2020b) | 41.2 |
| (a) Event type name | 58.2 | 44.8 | 40.5 |
| (b) Definition | 57.9 | 44.2 | 40.4 |
| (c) Seed triggers | 60.4 | 50.4 | 46.2 |
| (d) Event structure | 59.1 | 48.5 | 48.7 |
| (e) Soft prompt | 55.6 | 41.7 | - |
| Majority voting of (a-e) | 60.2 | 47.9 | 45.6 |
| (f) APEX Prompt | 63.4 | 52.6 | 48.9 |
Chen et al., 2021) follow different data splits and settings, making it hard for a fair comparison. Considering the research goals of FSED and ZSED, we define the following conditions to split the ACE and ERE datasets: (i) The base event types and novel event types should be disjoint except Other.
(ii) Each base or novel event type should contain at least 15 instances. (iii) The training set should contain sufficient annotated event mentions.
To meet the above conditions, for ACE, we define the event types of 5 main event categories:
Business, Contact, Conflict, *Justice* and *Movement* as the base event types, and types of the remaining 3 main categories: Life, *Personnel* and *Transaction* as the novel event types. In total, there are 18 qualified base types and 10 qualified novel types (the others do not satisfy the second condition). For ERE, we use the exact same 10 novel event types as ACE, and the rest 25 types as base event types.
Detailed data and hyperparameter descriptions are in Appendix D and Appendix E.
## 6 Results And Discussion
Overall Results The experimental results for SED, FSED, and ZSED on ACE05, ERE, and MAVEN are shown in Table 1-3, from which we see that (1) the APEX prompt achieves the best performance among all the forms of prompts under all the settings of the three benchmark datasets. Compared with the previous state of the art, the APEX
prompt shows up to 4% F-score gain for SED (on ERE), 22.2% F-score gain for FSED (on ACE),
and 19.7% F-score gain for ZSED (on MAVEN);
(2) All the forms of prompts provide significant improvement for FSED and ZSED, demonstrating the benefit of leveraging the semantics of event types via various forms of prompts. (3) Except APEX, seed triggers provide more improvements than other forms of event type prompts under most settings, suggesting its potential to represent the semantics of event types accurately. (4) Continuous soft prompt does not provide comparable performance as other forms of event type representations, which proves the necessity of leveraging event type specific prior knowledge to the representations; (5)
The majority voting does not show improvement over individual prompts since each prompt captures a particular aspect of the event type semantics.
Supervised Event Detection By carefully investigating the event mentions that are correctly detected by the APEX prompt while missed by other prompts, we find that the APEX prompt is more effective in detecting two types of event mentions:
homonyms (multiple-meaning words) and intricate words. General homonyms are usually hard to be detected as event mentions as they usually have dozens of meanings in different contexts. For example, consider the following two examples: (i)
Airlines are getting [Transport:Movement] flyers to destinations on time more often . (ii) *If the board* cannot vote to give [Transaction:Transfer-Money']
themselves present money. Here, "get" and "give"
![4_image_0.png](4_image_0.png)
are not detected based on the event type name or seed triggers but are correctly identified by the definition and APEX prompts. The definition and APEX prompts make 10% and 7% fewer false predictions than seed triggers on general homonyms.
For intricate words, their semantics usually cannot be captured with an individual prompt. In the following two examples: (i) It is reasonable, however, to reimburse board members for legitimate expenses (ii) · · · ever having discussed being compensated by the board in the future *· · ·*, "reimburse" and "compensated" indicate sophisticated meaning of *Transaction:Transfer-Money*, which may not be captured by prompts, such as seed triggers. With the event definition and the argument roles in the APEX prompt, the highly correlated contexts, such as "board members" and "legitimate expenses",
can help the model correctly detect *reimburse* as an event mention of *Transaction:Transfer-Money*.
Few-shot Event Detection Figure 2 shows the F-score distribution of all novel types based on various forms of event type prompts, from which we observe that: (1) The event type name, seed triggers, and APEX prompt generally perform better than definition and structure, as they carry more straightforward semantics of event types. (2) Event type name based prompts show lower performance on Personnel:End-Position, *Personnel:Start-Position* and *Transaction:Transfer-Money* than other event types, as the semantics of these event type names are less indicative than other event types. (3) Seed trigger based prompts perform worse than event type name and APEX prompts on two event types, Life:injure and *Life:die*, probably because the prototype seed triggers are not properly selected. (4)
The structure based prompt outperforms the other prompts on Life:Injure as *Life:Injure* events require the existence of a person or victim. (5)
APEX prompt shows consistently (almost) best performance on all the event types because it combines all the information of other prompts. (6) We also observe that the performance of *Life:Be-Born*,
Life:Die, *Life:Marry*, and *Personnel:Elect* based on various forms of prompts are consistently better than the other types as the intrinsic semantics of those types the corresponding event triggers are concentrated.
Zero-shot Event Detection The proposed prompt-based method is more affordable to be generalized compared with the prior state-ofthe-art zero-shot approach (Zhang et al., 2021b).
The average length of created APEX prompts is less than 20 tokens. Thus manually creating them will not take much human effort. On the contrary, Zhang et al. (2021b) requires an extensive collection of anchor sentences to perform zero-shot event detection, e.g., 4,556,237 anchor sentences for ACE and ERE. This process is time-consuming and expensive.
## 7 Conclusion
We investigate a variety of prompts to represent the semantics of event types, and leverage them with a unified framework for supervised, few-shot and zero-shot event detection. Experimental results demonstrate that, a well-defined and comprehensive description of event types can significantly improve the performance of event detection, especially when the annotations are limited (few-shot event detection) or even not available (zero-shot event detection), with up to 22.2% F-score gain over the prior state of the art.
## Limitations
We have demonstrated that an accurate description can perform better for both supervised and weakly supervised event detection. However, the event types from most existing ontologies are not properly defined. For example, in ACE annotation guideline (Linguistic Data Consortium, 2005),
transfer-money is defined as "*giving, receiving, borrowing, or lending money when it is not in the context of purchasing something*". However, it is hard for the model to interpret it accurately, especially the constraints "*not in the context of purchasing* something". In addition, many event types from MAVEN, e.g., Achieve, *Award*, and *Incident*, are not associated with any definitions. A potential future research direction is to leverage mining-based approaches or state-of-the-art generators to automatically generate a comprehensive event type description based on various sources, such as annotation guidelines, example annotations, and external knowledge bases.
## Acknowledgments
We thank the anonymous reviewers and area chair for their valuable time and constructive comments.
This research is based upon work supported by the Amazon Research Award.
## References
Ashutosh Agarwal, Anay Majee, Anbumani Subramanian, and Chetan Arora. 2021. Attention guided cosine margin for overcoming class-imbalance in fewshot road object detection.
David Ahn. 2006. The stages of event extraction. In Proceedings of the Workshop on Annotating and Reasoning about Time and Events, pages 1–8.
Collin F Baker, Charles J Fillmore, and John B Lowe.
1998. The berkeley framenet project. In *36th Annual Meeting of the Association for Computational* Linguistics and 17th International Conference on Computational Linguistics, Volume 1, pages 86–90.
Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178–186, Sofia, Bulgaria. Association for Computational Linguistics.
Ofer Bronstein, Ido Dagan, Qi Li, Heng Ji, and Anette Frank. 2015. Seed-based event trigger labeling: How
far can event descriptions get us? In *Proceedings* of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing
(Volume 2: Short Papers), pages 372–376.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In *Advances in Neural Information Processing Systems*,
volume 33, pages 1877–1901. Curran Associates, Inc.
Kai Cao, Xiang Li, Miao Fan, and Ralph Grishman.
2015. Improving event detection with active learning. In *Proceedings of the International Conference Recent Advances in Natural Language Processing*, pages 72–77, Hissar, Bulgaria. INCOMA Ltd.
Shoumen, BULGARIA.
Jiawei Chen, Hongyu Lin, Xianpei Han, and Le Sun.
2021. Honey or poison? solving the trigger curse in few-shot event detection via causal intervention.
Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multipooling convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 167–176.
Yunmo Chen, Tongfei Chen, Seth Ebner, Aaron Steven White, and Benjamin Van Durme. 2020. Reading the manual: Event extraction as definition comprehension. In *Proceedings of the Fourth Workshop on* Structured Prediction for NLP, pages 74–83, Online.
Association for Computational Linguistics.
Nancy Chinchor and Elaine Marsh. 1998. MUC-7 information extraction task definition. In *Proceedings of the Seventh Message Understanding Conference (MUC-7)*, Appendices, pages 359–367.
Arkabandhu Chowdhury, Mingchao Jiang, and Chris Jermaine. 2021. Few-shot image classification: Just use a library of pre-trained feature extractors and a simple classifier. *ArXiv*, abs/2101.00562.
Xin Cong, Shiyao Cui, Bowen Yu, Tingwen Liu, Yubin Wang, and Bin Wang. 2021. Few-shot event detection with prototypical amortized conditional random field. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP*.
Bhavana Dalvi, William W. Cohen, and Jamie Callan. 2012. WebSets: Extracting sets of entities from the web using unsupervised information extraction. *ArXiv*, abs/1307.0261.
Shumin Deng, Ningyu Zhang, Jiaojian Kang, Yichi Zhang, Wei Zhang, and Huajun Chen. 2020. Metalearning with dynamic-memory-based prototypical network for few-shot event detection. *Proceedings* of the 13th International Conference on Web Search and Data Mining.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Xinya Du and Claire Cardie. 2020. Event extraction by answering (almost) natural questions. In *Proceedings* of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 671–683, Online. Association for Computational Linguistics.
Xiaocheng Feng, Lifu Huang, Duyu Tang, Heng Ji, Bing Qin, and Ting Liu. 2016. A language-independent neural network for event detection. In *Proceedings* of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers),
pages 66–71, Berlin, Germany. Association for Computational Linguistics.
Ralph Grishman. 1997. Information extraction: Techniques and challenges. In International summer school on information extraction, pages 10–27.
Springer.
Karen Hambardzumyan, Hrant Khachatrian, and Jonathan May. 2021. WARP: Word-level Adversarial ReProgramming. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4921–4933, Online. Association for Computational Linguistics.
Lifu Huang, Taylor Cassidy, Xiaocheng Feng, Heng Ji, Clare Voss, Jiawei Han, and Avirup Sil. 2016.
Liberal event extraction and event schema induction.
In *Proceedings of the 54th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 258–268.
Lifu Huang and Heng Ji. 2020. Semi-supervised new event type induction and event detection. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 718–724.
Lifu Huang, Heng Ji, Kyunghyun Cho, Ido Dagan, Sebastian Riedel, and Clare Voss. 2018. Zero-shot transfer learning for event extraction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 2160–2170, Melbourne, Australia. Association for Computational Linguistics.
Heng Ji and Ralph Grishman. 2008. Refining event extraction through cross-document inference. In *Proceedings of ACL-08: Hlt*, pages 254–262.
Zhengbao Jiang, Frank F. Xu, J. Araki, and Graham Neubig. 2020. How can we know what language models know? Transactions of the Association for Computational Linguistics, 8:423–438.
Bingyi Kang, Zhuang Liu, Xin Wang, Fisher Yu, Jiashi Feng, and Trevor Darrell. 2019. Few-shot object detection via feature reweighting. In *2019* IEEE/CVF International Conference on Computer Vision (ICCV), pages 8419–8428.
Viet Dac Lai, Franck Dernoncourt, and Thien Huu Nguyen. 2020a. Exploiting the matching information in the support set for few shot event classification.
Pacific-Asia Conference on Knowledge Discovery and Data Mining, page 233–245.
Viet Dac Lai and Thien Huu Nguyen. 2019. Extending event detection to new types with learning from keywords. *arXiv preprint arXiv:1910.11368*.
Viet Dac Lai, Thien Huu Nguyen, and Franck Dernoncourt. 2020b. Extensively matching for few-shot learning event detection. In *Proceedings of the First* Joint Workshop on Narrative Understanding, Storylines, and Events, pages 38–45, Online. Association for Computational Linguistics.
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021.
The power of scale for parameter-efficient prompt tuning. In *EMNLP*.
Bohao Li, Boyu Yang, Chang Liu, Feng Liu, Rongrong Ji, and Qixiang Ye. 2021. Beyond max-margin: Class margin equilibrium for few-shot object detection.
2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 7359–7368.
Fayuan Li, Weihua Peng, Yuguang Chen, Quan Wang, Lu Pan, Yajuan Lyu, and Yong Zhu. 2020. Event extraction as multi-turn question answering. In Findings of the Association for Computational Linguistics:
EMNLP 2020, pages 829–838, Online. Association for Computational Linguistics.
Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In *Proceedings of the 51st Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 73–82, Sofia, Bulgaria.
Association for Computational Linguistics.
Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning:
Optimizing continuous prompts for generation. *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)*, abs/2101.00190.
Shasha Liao and Ralph Grishman. 2010. Using document level cross-event inference to improve event extraction. In *Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics*,
pages 789–797.
Ying Lin, Heng Ji, Fei Huang, and Lingfei Wu. 2020.
A joint neural model for information extraction with global features. In *Proceedings of the 58th Annual* Meeting of the Association for Computational Linguistics, pages 7999–8009, Online. Association for Computational Linguistics.
Linguistic Data Consortium. 2005. English annotation guidelines for events. https://www.ldc.upenn.edu/sites/www.ldc.upenn.edu/files/english-events-guidelines-v5.4.3.pdf.
Jian Liu, Yubo Chen, Kang Liu, Wei Bi, and Xiaojiang Liu. 2020. Event extraction as machine reading comprehension. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1641–1651, Online. Association for Computational Linguistics.
Yaojie Lu, Hongyu Lin, Jin Xu, Xianpei Han, Jialong Tang, Annan Li, Le Sun, Meng Liao, and Shaoyi Chen. 2021. Text2Event: Controllable sequence-tostructure generation for end-to-end event extraction.
In *Proceedings of the 59th Annual Meeting of the* Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2795–2806, Online. Association for Computational Linguistics.
Qing Lyu, Hongming Zhang, Elior Sulem, and Dan Roth. 2021. Zero-shot Event Extraction via Transfer Learning: Challenges and Insights. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 2: Short Papers), pages 322–332, Online.
Association for Computational Linguistics.
David McClosky, Mihai Surdeanu, and Christopher D
Manning. 2011. Event extraction as dependency parsing. In *Proceedings of the 49th Annual Meeting of the* Association for Computational Linguistics: Human Language Technologies, pages 1626–1635.
Minh Van Nguyen, Viet Dac Lai, and Thien Huu Nguyen. 2021. Cross-task instance representation interactions and label dependencies for joint information extraction with graph convolutional networks.
Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016. Joint event extraction via recurrent neural networks. In *Proceedings of the 2016 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 300–309, San Diego, California.
Association for Computational Linguistics.
Panupong Pasupat and Percy Liang. 2014. Zero-shot entity extraction from web pages. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 391–401.
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference* on Natural Language Processing (EMNLP-IJCNLP),
pages 2463–2473, Hong Kong, China. Association for Computational Linguistics.
Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *JMLR*.
Timo Schick and Hinrich Schütze. 2021a. Few-shot text generation with pattern-exploiting training.
Timo Schick and Hinrich Schütze. 2021b. It's not just size that matters: Small language models are also fewshot learners. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics, pages 2339–2352.
Shirong Shen, Tongtong Wu, Guilin Qi, Yuan-Fang Li, Gholamreza Haffari, and Sheng Bi. 2021. Adaptive knowledge-enhanced bayesian meta-learning for few-shot event detection. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 2417–2429. Association for Computational Linguistics.
Taylor Shin, Yasaman Razeghi, Robert L. Logan IV,
Eric Wallace, and Sameer Singh. 2020. AutoPrompt:
Eliciting knowledge from language models with automatically generated prompts. In Empirical Methods in Natural Language Processing (EMNLP).
Zhiyi Song, Ann Bies, Stephanie Strassel, Tom Riese, Justin Mott, Joe Ellis, Jonathan Wright, Seth Kulick, Neville Ryant, and Xiaoyi Ma. 2015. From light to rich ere: annotation of entities, relations, and events. In *Proceedings of the the 3rd Workshop on EVENTS:*
Definition, Detection, Coreference, and Representation, pages 89–98.
Bo Sun, Banghuai Li, Shengcai Cai, Ye Yuan, and Chi Zhang. 2021. Fsce: Few-shot object detection via contrastive proposal encoding. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition
(CVPR), pages 7348–7358.
David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. 2019. Entity, relation, and event extraction with contextualized span representations.
In *Proceedings of the 2019 Conference on Empirical* Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5784–
5789, Hong Kong, China. Association for Computational Linguistics.
Richard C Wang and William Cohen. 2009. Characterlevel analysis of semi-structured documents for set expansion. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1503–1512.
Sijia Wang, Mo Yu, Shiyu Chang, Lichao Sun, and Lifu Huang. 2021a. Query and extract: Refining event extraction as type-oriented binary decoding. arXiv preprint arXiv:2110.07476.
Xiao Wang, Houye Ji, Chuan Shi, Bai Wang, Yanfang Ye, Peng Cui, and Philip S Yu. 2019. Heterogeneous graph attention network. In *The World Wide Web* Conference, WWW '19, page 2022–2032, New York, NY, USA. Association for Computing Machinery.
Xiaozhi Wang, Ziqi Wang, Xu Han, Wangyi Jiang, Rong Han, Zhiyuan Liu, Juanzi Li, Peng Li, Yankai Lin, and Jie Zhou. 2020a. MAVEN: A massive general domain event detection dataset. In Proceedings of EMNLP 2020.
Xin Wang, Thomas E. Huang, Trevor Darrell, Joseph E
Gonzalez, and Fisher Yu. 2020b. Frustratingly simple few-shot object detection.
Ziqi Wang, Xiaozhi Wang, Xu Han, Yankai Lin, Lei Hou, Zhiyuan Liu, Peng Li, Juanzi Li, and Jie Zhou.
2021b. CLEVE: Contrastive Pre-training for Event Extraction. In *Proceedings of ACL-IJCNLP*, pages 6283–6297, Online. Association for Computational Linguistics.
Yang Xiao and Renaud Marlet. 2020. Few-shot object detection and viewpoint estimation for objects in the wild. In *ECCV*.
Xiaopeng Yan, Ziliang Chen, Anni Xu, Xiaoxi Wang, Xiaodan Liang, and Liang Lin. 2019. Meta r-cnn:
Towards general solver for instance-level low-shot learning. *2019 IEEE/CVF International Conference* on Computer Vision (ICCV), pages 9576–9585.
Bishan Yang and Tom M. Mitchell. 2016. Joint extraction of events and entities within a document context.
In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 289–299, San Diego, California. Association for Computational Linguistics.
Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021.
BARTScore: Evaluating generated text as text generation. In Advances in Neural Information Processing Systems.
Gongjie Zhang, Kaiwen Cui, Rongliang Wu, Shijian Lu, and Yonghong Tian. 2021a. Pnpdet: Efficient few-shot detection without forgetting via plug-andplay sub-networks. 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 3822–3831.
Hongming Zhang, Haoyu Wang, and Dan Roth. 2021b.
Zero-shot Label-aware Event Trigger and Argument Classification. In *Findings of the Association* for Computational Linguistics: ACL-IJCNLP 2021, pages 1331–1340, Online. Association for Computational Linguistics.
Tongtao Zhang, Spencer Whitehead, Hanwang Zhang, Hongzhi Li, Joseph Ellis, Lifu Huang, Wei Liu, Heng Ji, and Shih-Fu Chang. 2017. Improving event extraction via multimodal integration. In *Proceedings* of the 25th ACM international conference on Multimedia, pages 270–278.
## A APEX Prompt Examples For ACE
Table 4 and Table 5 show APEX prompt examples for ACE events.
## B Prompt Token Selection
In our experiments, the event type names and event type structures are automatically extracted from the target event ontology, such as ACE (Linguistic Data Consortium, 2005), ERE (Song et al., 2015)
and MAVEN (Wang et al., 2020a). The prototype seed triggers are automatically selected from the annotated data for supervised and few-shot event extraction. For zero-shot event extraction, we manually select R words from the NLTK synonyms of each event type as its prototype seed triggers.
The definitions and APEX prompts are based on the official annotation guidelines for each target event ontology (Linguistic Data Consortium, 2005; Song et al., 2015; Wang et al., 2020a) and the available definitions in FrameNet (Baker et al., 1998), with manual editing.
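As a rough illustration (not the released implementation; the function and argument names are assumptions), an APEX prompt can be assembled from the three components following the format shown in Tables 4 and 5:

```python
# Minimal sketch of APEX prompt assembly: event type name, prototype seed triggers,
# and an edited definition, joined with the [SEP] delimiter used in Tables 4 and 5.
def build_apex_prompt(event_type_name, seed_triggers, definition):
    """Return a single prompt string: NAME [SEP] trigger words [SEP] definition."""
    return " [SEP] ".join([event_type_name, " ".join(seed_triggers), definition])

# Example reproducing the Conflict:Attack row of Table 4.
prompt = build_apex_prompt(
    "Attack",
    ["invaded", "airstrikes", "overthrew", "ambushed"],
    "An Attacker physically attacks a Target with Instrument at a Place",
)
print(prompt)
```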
## C Learning Strategy
The learning strategy varies for supervised, few-shot, and zero-shot learning. For supervised learning, we optimize the following objective for event trigger detection:

$$\mathcal{L} = -\frac{1}{|T||N|}\sum_{t\in T}\sum_{i=1}^{|N|} y_i^t \cdot \log \tilde{y}_i^t,$$

where $T$ is the set of target event types, $N$ is the set of tokens from the training dataset, and $y_i^t$ denotes the ground-truth label vector. For few-shot event detection, we optimize the model on both the base training data set and the smaller training data set for novel event types:

$$\mathcal{L} = -\frac{1}{|T^B||N^B|}\sum_{t\in T^B}\sum_{i=1}^{|N^B|} y_i^t \cdot \log \tilde{y}_i^t \;-\; \frac{\beta}{|T^N||N^N|}\sum_{t\in T^N}\sum_{i=1}^{|N^N|} y_i^t \cdot \log \tilde{y}_i^t,$$

where $T^B$ and $N^B$ denote the set of base event types and the set of tokens from the base training data set, respectively, $T^N$ is the set of novel event types, $N^N$ is the set of tokens from the training data set for novel event types, and $\beta$ is a hyper-parameter that balances the two objectives. For zero-shot event detection, as we only have the base training data set, we minimize the following objective:

$$\mathcal{L} = -\frac{1}{|T^B||N^B|}\sum_{t\in T^B}\sum_{i=1}^{|N^B|} y_i^t \cdot \log \tilde{y}_i^t.$$
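As a rough PyTorch illustration of the objectives above (not the authors' released code; the tensor shapes and variable names are assumptions), the per-token cross-entropy averaged over event types can be computed as follows:

```python
# Sketch of the supervised and few-shot objectives.  `scores[t]` holds per-token
# prediction scores for event type t and `labels[t]` the corresponding one-hot
# gold label vectors, so both tensors have shape (|T|, |N|, num_classes).
import torch
import torch.nn.functional as F

def detection_loss(scores: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """-1/(|T||N|) * sum_t sum_i y_i^t . log(y~_i^t)."""
    log_probs = F.log_softmax(scores, dim=-1)
    return -(labels * log_probs).sum(dim=-1).mean()  # average over types and tokens

def few_shot_loss(base_scores, base_labels, novel_scores, novel_labels, beta: float):
    """Base-set loss plus a beta-weighted loss on the small novel-type training set."""
    return detection_loss(base_scores, base_labels) + beta * detection_loss(novel_scores, novel_labels)
```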
## D Dataset
After defining the base and novel event types, we create the training, validation, and evaluation split for all three datasets. We use the sentences with only base event type mentions as the base training data set for few-shot event detection, and randomly select 10 sentences with novel event type mentions as the additional smaller training data set. We use the sentences with both base and novel event type mentions as the development set and use the remaining sentences with only novel event type mentions as the evaluation dataset. We use the same development and evaluation set as few-shot event detection for zero-shot event detection and remove the instances with novel event mentions from the training set. We randomly split the sentences without any event annotations proportionally to the number of sentences with event mentions in each set for both zero-shot and few-shot event detection.
Table 6 shows the detailed data statistics for all three datasets under the few-shot and zero-shot event extraction settings.
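To make the split procedure concrete, the following is a minimal sketch under assumed data structures (each sentence record carries the set of event types it mentions); it is illustrative only and omits the sentences without event annotations, which are split proportionally as described above.

```python
# Sketch of the few-shot split: base-only sentences form the base training set,
# 10 randomly chosen novel-type sentences form the small extra training set,
# mixed sentences form the dev set, and the remaining novel-only sentences form
# the evaluation set.  For zero-shot, the extra novel-type sentences are dropped.
import random

def split_for_few_shot(sentences, base_types, novel_types, k=10, seed=0):
    base_only  = [s for s in sentences if s["types"] and s["types"] <= base_types]
    novel_only = [s for s in sentences if s["types"] and s["types"] <= novel_types]
    mixed      = [s for s in sentences if s["types"] & base_types and s["types"] & novel_types]

    random.seed(seed)
    extra = random.sample(novel_only, k)                # small novel-type training set
    train = base_only + extra
    dev   = mixed
    test  = [s for s in novel_only if s not in extra]
    return train, dev, test
```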
## E Hyperparameters And Evaluation
For a fair comparison with previous baseline approaches, we fine-tune the same pre-trained bert-large-uncased model and optimize it with BertAdam. For supervised event detection, we tune the hyper-parameters with grid search: the number of training epochs is 3, learning rate ∈ [3e-6, 1e-4], training batch size ∈ {8, 12, 16, 24, 32}, and dropout rate ∈ {0.4, 0.5, 0.6}. The running time is up to 3 hours on one Quadro RTX 8000. For evaluation, we use the same criteria as previous studies (Li et al., 2013; Chen et al., 2015; Nguyen et al., 2016; Lin et al., 2020): an event mention is correct if its span and event type match a reference event mention.
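A small helper illustrating this matching criterion (illustrative only; the tuple representation of a mention is an assumption):

```python
# Trigger detection P/R/F1: a predicted mention counts as correct only if both its
# span and its event type match a reference mention.  Mentions are (start, end, type).
def trigger_f1(predictions, references):
    pred, gold = set(predictions), set(references)
    correct = len(pred & gold)
    precision = correct / len(pred) if pred else 0.0
    recall = correct / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```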
| Event Rep Type | Comprehensive Prompt |
|------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Business:DeclareBankruptcy | Declare Bankruptcy [SEP] bankruptcy bankruptcies bankrupting [SEP] Organization request legal protection from debt collection at a Place |
| Business:End-Org | End Organization [SEP] dissolving disbanded [SEP] an Organization goes out of business at a Place |
| Business:Merge-Org | Merge Organization [SEP] merging merger [SEP] two or more Organizations come together to form a new organization at a Place |
| Business:Start-Org | Start Organization [SEP] founded [SEP] an Agent create a new Organization at a Place |
| Conflict:Attack | Attack [SEP] invaded airstrikes overthrew ambushed [SEP] An Attacker physically attacks a Target with Instrument at a Place |
| Conflict:Demonstrate | Demonstrate [SEP] demonstrations protest strikes riots [SEP] Entities come together in a Place to protest or demand official action |
| Contact:Meet | Meet [SEP] reunited retreats [SEP] two or more Entities come together at same Place and interact in person |
| Contact:Phone-Write | Phone Write [SEP] emailed letter [SEP] phone or written communication between two or more Entities |
| Justice:Acquit | Acquit [SEP] acquitted [SEP] a trial of Defendant ends but Adjudicator fails to produce a conviction at a Place |
| Justice:Appeal | Appeal [SEP] appeal [SEP] the decision for Defendant of a court is taken to a higher court for Adjudicator review with Prosecutor |
| Justice:Arrest-Jail | Arrest Jail [SEP] arrested locked [SEP] the Agent takes custody of a Person at a Place |
| Justice:Charge-Indict | Charge Indict [SEP] indictment [SEP] a Defendant is accused of a crime by a Prosecutor for Adjudicator |
| Justice:Convict | Convict [SEP] pled guilty convicting [SEP] an Defendant found guilty of a crime by Adjudicator at a Place |
| Justice:Execute | Execute [SEP] death [SEP] the life of a Person is taken by an Agent at a Place |
| Justice:Extradite | Extradite [SEP] extradition [SEP] a Person is sent by an Agent from Origin to Destination |
| Justice:Fine | Fine [SEP] payouts financial punishment [SEP] a Adjudicator issues a financial punishment Money to an Entity at a Place |
| Justice:Pardon | Pardon [SEP] pardoned lift sentence [SEP] an Adjudicator lifts a sentence of Defendant at a Place |
| Justice:Release-Parole | Release Parole [SEP] parole [SEP] an Entity ends its custody of a Person at a Place |
| Justice:Sentence | Sentence [SEP] sentenced punishment [SEP] the punishment for the defendant is issued by a state actor |
| Justice:Sue | Sue [SEP] lawsuits [SEP] Plaintiff initiate a court proceeding to determine the liability of a Defendant judge by Adjudicator at a Place |
| Justice:Trial-Hearing | Trial Hearing [SEP] trial hearings [SEP] a court proceeding initiated to determine the guilty or innocence of a Person with Prosecutor and Adjudicator at a Place |
| Life:Be-Born | Be Born [SEP] childbirth [SEP] a Person is born at a Place |
| Life:Die | Die [SEP] deceased extermination [SEP] life of a Victim ends by an Agent with Instrument at a Place |

Table 4: APEX templates for ACE event types
| Event Rep Type | Comprehensive Prompt |
|---------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Life:Divorce | Divorce [SEP] people divorce [SEP] two Person are officially divorced at a place |
| Life:Injure | Injure [SEP] hospitalised paralyzed dismember [SEP] a Victim experiences physical harm from Agent with Instrument at a Place |
| Life:Marry | Marry [SEP] married marriage marry [SEP] two Person are married at a Place |
| Movement:Transport | Transport [SEP] arrival travels penetrated expelled [SEP] an Agent moves an Artifact from Origin to Destination with Vehicle at Price |
| Personnel:Elect | Elect [SEP] reelected elected election [SEP] a candidate Person wins an election by voting Entity at a Place |
| Personnel:End-Position | End Position [SEP] resigning retired resigned [SEP] a Person stops working for an Entity or change office at a Place |
| Personnel:Nominate | Nominate [SEP] nominate [SEP] a Person is nominated for a new position by another Agent at a Place |
| Personnel:StartPosition | Start Position [SEP] hiring rehired recruited [SEP] a Person begins working for an Entity or change office at a Place |
| Transaction:TransferMoney | Transfer Money [SEP] donations reimbursing deductions [SEP] transfer Money from the Giver to the Beneficiary or Recipient at a Place |
| Transaction:TransferOwnership | Transfer Ownership [SEP] purchased buy sell loan [SEP] buying selling loaning borrowing giving receiving of Artifacts from Seller to Buyer or Beneficiary at a Place at Price |

Table 5: APEX templates for ACE event types (continued)
| Dataset | | ACE05-E+ | ERE-EN | MAVEN |
|------------|-----------|----------------|----------------|----------------|
| # Types | Base | 18 | 25 | 120 |
| | Novel | 10 | 10 | 45 |
| # Mentions | Base | 3572 | 5449 | 93675 |
| | Novel | 1724 | 3183 | 3201 |
| Train | Few-shot | 3216 | 3886 | 88085 |
| | Zero-shot | 3116 | 3786 | 87635 |
| Validation | | 900 (51%/49%) | 2797 (53%/47%) | 3883 (71%/23%) |
| Evaluation | | 1195 | 2012 | 1652 |
Table 6: Data statistics for ACE2005, ERE and MAVEN datasets under few-shot/zero-shot event detection settings.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 8
✓ A2. Did you discuss any potential risks of your work?
Section 8
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 5 And Appendix C
✓ B1. Did you cite the creators of artifacts you used?
Section 5 and Appendix C
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section 5
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 5
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 5
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix C
## C ✓ **Did You Run Computational Experiments?** Section 5 And Appendix D
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix D
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 5 and Appendix D
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Appendix D
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix B
## D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
mao-etal-2023-exploring | Exploring the Impact of Layer Normalization for Zero-shot Neural Machine Translation | https://aclanthology.org/2023.acl-short.112 | This paper studies the impact of layer normalization (LayerNorm) on zero-shot translation (ZST). Recent efforts for ZST often utilize the Transformer architecture as the backbone, with LayerNorm at the input of layers (PreNorm) set as the default. However, Xu et al. (2019) has revealed that PreNorm carries the risk of overfitting the training data. Based on this, we hypothesize that PreNorm may overfit supervised directions and thus have low generalizability for ZST. Through experiments on OPUS, IWSLT, and Europarl datasets for 54 ZST directions, we demonstrate that the original Transformer setting of LayerNorm after residual connections (PostNorm) consistently outperforms PreNorm by up to 12.3 BLEU points. We then study the performance disparities by analyzing the differences in off-target rates and structural variations between PreNorm and PostNorm. This study highlights the need for careful consideration of the LayerNorm setting for ZST. | # Exploring The Impact Of Layer Normalization For Zero-Shot Neural Machine Translation
Zhuoyuan Mao 1 Raj Dabre 2 **Qianying Liu** 1 Haiyue Song 1 Chenhui Chu 1 **Sadao Kurohashi** 1,3 1 Kyoto University, Japan 2 NICT, Japan 3 NII, Japan
{zhuoyuanmao, ying, song, chu, kuro}@nlp.ist.i.kyoto-u.ac.jp [email protected]
## Abstract
This paper studies the impact of layer normalization (LayerNorm) on zero-shot translation
(ZST). Recent efforts for ZST often utilize the Transformer architecture as the backbone, with LayerNorm at the input of layers (PreNorm) set as the default. However, Xu et al. (2019) has revealed that PreNorm carries the risk of overfitting the training data. Based on this, we hypothesize that PreNorm may overfit supervised directions and thus have low generalizability for ZST. Through experiments on OPUS, IWSLT,
and Europarl datasets for 54 ZST directions, we demonstrate that the original Transformer setting of LayerNorm after residual connections
(PostNorm) consistently outperforms PreNorm by up to 12.3 BLEU points. We then study the performance disparities by analyzing the differences in off-target rates and structural variations between PreNorm and PostNorm. This study highlights the need for careful consideration of the LayerNorm setting for ZST.
## 1 Introduction
Multilingual neural machine translation (MNMT)
enables translation between unseen language pairs, i.e., zero-shot translation (ZST) (Johnson et al.,
2017; Firat et al., 2017). Prior studies have explored techniques such as language tags (Wu et al.,
2021), residual connections (Liu et al., 2021), and novel training objectives (Al-Shedivat and Parikh, 2019; Pham et al., 2019; Arivazhagan et al., 2019; Gu et al., 2019; Zhu et al., 2020; Zhang et al., 2020; Wang et al., 2021; Yang et al., 2021) for improving ZST. They primarily used the Transformer architecture (Vaswani et al., 2017), which has two variations depending on the position of layer normalization (LayerNorm) (Ba et al., 2016), namely, PreNorm (applied at the input of layers) (Baevski and Auli, 2019) and PostNorm (applied after residual connections), as shown in Fig. 1. As previous studies showed that PreNorm can result in more stable training and faster convergence compared to
![0_image_0.png](0_image_0.png)
PostNorm for MNMT (Xiong et al., 2020), most ZST works (Pham et al., 2019; Wu et al., 2021; Liu et al., 2021) use PreNorm as the default setting following those MNMT studies. However, Xu et al.
(2019) revealed that PreNorm carries the risk of overfitting the training data. We thus hypothesize that in a multilingual scenario, PreNorm may overfit supervised directions and have poor ZST generalizability. We systematically explore PreNorm and PostNorm's effect on ZST to verify this.
Using the OPUS, IWSLT, and Europarl datasets and a total of 54 ZST directions, we show that PostNorm consistently outperforms PreNorm by up to 12.3 BLEU points. Following previous work, we also evaluate different language tag (Wu et al.,
2021) and residual connection (Liu et al., 2021) settings, as they have been shown to impact ZST, but we observe that PostNorm continues to be superior, thereby lending credibility to our hypothesis.
To better understand the performance differences, we introduce a novel analysis approach called layer-wise language recognition (LLR),
which tracks the off-target rates for each encoder and decoder layer by training token-level classifiers to recognize the source or target language.
This analysis shows that PreNorm is more sensitive to language tag settings than PostNorm, negatively impacting ZST performance. Additionally, by examining the unraveled view of PreNorm
(Fig. 1) inspired by Veit et al. (2016), we reveal structural flaws in PreNorm for ZST. Our analysis demonstrates that the order of LayerNorm and selfattention/feed-forward network in PreNorm is the main factor affecting its ZST performance.
Given the prevalent use of PreNorm as the default setting in ZST baselines and frameworks such as Fairseq (Ott et al., 2019)
and Tensor2Tensor (Vaswani et al., 2018), our study emphasizes the importance of careful consideration of the LayerNorm setting for ZST.
## 2 Background: Layernorm
LayerNorm (Ba et al., 2016) normalizes the input x by zero-centering and scaling to have a unit standard deviation, followed by an additional trainable transformation, including a gain and bias adjustment. Specifically, it is formulated as:
$$\mathrm{LayerNorm}(\mathbf{x})={\frac{\mathbf{x}-\mathbf{E}(\mathbf{x})}{\sqrt{\mathbf{V}(\mathbf{x})}}}\cdot\mathbf{g}+\mathbf{b},\quad(1)$$
where g and b are trainable gain and bias. E
and V indicate expectation and variance. LayerNorm is commonly used in two positions in the Transformer, as shown in Fig. 1. PostNorm, which is the originally proposed setting of the Transformer (Vaswani et al., 2017), involves applying LayerNorm after each sub-module (i.e., selfattention or feed-forward network) and residual connections. PreNorm (Baevski and Auli, 2019),
on the other hand, involves applying LayerNorm directly before each sub-module and is known to stabilize Transformer training. While variants of Transformer LayerNorm like RMSNorm (Zhang and Sennrich, 2019) have been proposed, the vanilla PreNorm and PostNorm are still the most widely adopted settings in current multilingual NMT literature. Therefore, we only focus on PreNorm and PostNorm in this work.
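As a minimal PyTorch illustration of the two placements (not the Fairseq implementation; the FFN sub-module and tensor shapes below are arbitrary):

```python
# Contrast of the two LayerNorm placements around a Transformer sub-module.
import torch
import torch.nn as nn

d_model = 512
ffn = nn.Sequential(nn.Linear(d_model, 2048), nn.ReLU(), nn.Linear(2048, d_model))
norm = nn.LayerNorm(d_model)
x = torch.randn(8, 16, d_model)  # (batch, length, hidden)

post_norm_out = norm(x + ffn(x))  # PostNorm: LayerNorm after the residual connection
pre_norm_out = x + ffn(norm(x))   # PreNorm: LayerNorm before the sub-module, inside the residual branch
```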
1 https://github.com/facebookresearch/fairseq/tree/main/examples/multilingual
| Datasets | Languages | Nzero | Strain | Arch. |
|----------------|--------------------|---------|----------|---------|
| OPUS | ar, de, en, fr, nl, ru, zh | 30 | 12.00M | base |
| IWSLT | en, it, nl, ro | 6 | 1.38M | base |
| Europarl | de, en, es, fr, nl | 12 | 15.78M | big |
Nguyen and Salazar (2019) have explored the impacts of normalization and initialization choices on supervised low-resource NMT settings; however, we delve deeper and focus on the significance of the positioning of LayerNorm for zero-shot NMT.
We expect this to complete the understanding of LayerNorm's role in multilingualism, particularly in the context of zero-shot translation.
## 3 Experiments And Results
We evaluate the performance of PreNorm and PostNorm for ZST on various datasets and language pairs. We then analyze the off-target rates and structural discrepancies between PreNorm and PostNorm to understand performance differences.
## 3.1 Experimental Settings
Datasets We perform ZST experiments on three datasets: OPUS (Zhang et al., 2020), IWSLT (Cettolo et al., 2017), and Europarl (Koehn, 2005). The statistics of the datasets are summarized in Table 1.
We include 7, 4, and 5 languages for each dataset.
The training data consists of only English-centric sentence pairs, resulting in 30, 6, and 12 ZST directions for each dataset. The total number of parallel sentences for each dataset is 12.00M, 1.38M, and 15.78M, respectively. We apply BPE (Sennrich et al., 2016) with merge operations of 50k, 40k, and 50k to create a joint vocabulary for each dataset.
Training We employ Transformer-base model for OPUS and IWSLT, and Transformer-big for Europarl, in accordance with the distinct sizes of training data. We consider the following settings:
(1) PreNorm or PostNorm: PreNorm involves LayerNorm directly before each sub-module (i.e.,
self-attention or feed-forward network), while PostNorm applies LayerNorm after each sub-module and residual connections, as shown in Fig. 1.
2 We also experiment with the setting of LayerNorm without trainable parameters (Xu et al., 2019) in Appendix E.
(2) S-ENC-T-DEC or T-ENC: Source language tag on the encoder-side and target language tag on the decoder-side; or only target language tag on the encoder-side. Wu et al. (2021) showed that this setting impacts ZST for Transformer with PreNorm.
(3) w/ or w/o Res.: With the residual connection for self-attention in the middle (4 th) encoder layer or not. Liu et al. (2021) revealed that "w/o Res."
improves ZST for the model trained with PreNorm.
We experiment this with different LayerNorm settings as this may reduce the potential of overfitting on supervised directions, then further impacts ZST,
which aligns with our hypothesis.
The settings above lead to eight different combinations, shown in Table 2 (\#1 - \#8). Additional training details are in Appendix A.
| # | LayerNorm | Language Tag | Res. | Zero-shot OPUS | Zero-shot IWSLT | Zero-shot Europarl | Supervised OPUS | Supervised IWSLT | Supervised Europarl |
|------|----------|-------------|--------|---------------|---------------|---------------|----------|------|------|
| 0 | Pivot | - | - | 21.8 | 20.0 | 29.5 | - | - | - |
| 1 | PreNorm | S-ENC-T-DEC | w/ | 10.1 (42.19%) | 4.9 (64.84%) | 24.9 (07.73%) | 33.7 | 31.5 | 34.3 |
| 2 | PostNorm | S-ENC-T-DEC | w/ | 16.8 (08.59%) | 12.4 (10.61%) | 29.2 (00.34%) | 33.9 | 31.5 | 34.5 |
| 3 | PreNorm | T-ENC | w/ | 13.3 (22.99%) | 13.7 (03.98%) | 29.5 (00.23%) | 33.7 | 31.6 | 34.4 |
| 4 | PostNorm | T-ENC | w/ | 14.0 (22.86%) | 15.5 (04.59%) | 30.8 (00.11%) | 34.1 | 31.5 | 34.5 |
| 5 | PreNorm | S-ENC-T-DEC | w/o | 14.3 (20.67%) | 8.0 (50.16%) | 16.7 (41.87%) | 33.6 | 30.9 | 34.3 |
| 6 | PostNorm | S-ENC-T-DEC | w/o | 16.0 (15.27%) | 17.4 (01.83%) | 29.0 (00.41%) | 33.8 | 30.7 | 34.4 |
| 7 | PreNorm | T-ENC | w/o | 13.4 (27.15%) | 16.2 (01.54%) | 29.9 (02.15%) | 33.5 | 30.9 | 34.3 |
| 8 | PostNorm | T-ENC | w/o | 13.9 (26.68%) | 17.8 (01.50%) | 30.8 (00.13%) | 33.9 | 30.6 | 34.4 |
## 3.2 Main Results
We evaluate ZST systems using SacreBLEU (Post, 2018) and off-target rates. We report in Table 2 BLEU scores for both zero-shot and supervised directions. For ZST, we also present pivot-based translation results as a reference. Implementation details of evaluation can be found in Appendix B.
Our findings are as follows:
PreNorm vs. PostNorm: We find that PostNorm consistently yields better BLEU scores than PreNorm for ZST across various language tag and residual connection settings, while their performance is comparable for supervised directions.
Impact of Language Tag and Residual Connection: We observe that using the "T-ENC" language tag and "w/ Res." improves ZST performance for IWSLT, which aligns with the findings of Wu et al.
(2021) and Liu et al. (2021). Nevertheless, the best performance is achieved using "w/ Res." for PostNorm with "S-ENC-T-DEC" and "T-ENC" tags for OPUS and Europarl, respectively (\#2 and \#4).
Given that Wu et al. (2021) and Liu et al. (2021)
used PreNorm as the default setting (\#2, \#4, \#6 and \#8 are unreported results in their work), our results emphasize the need to consider PostNorm as the default setting for ZST, while the language tag and residual connection settings have less impact.
Off-target Rates: Off-target rates help explain the different BLEU score gaps between PreNorm and PostNorm, which range from 0.5 to 12.3 BLEU points. PreNorm and PostNorm with the
"T-ENC" language tag (\#3, \#4, \#7, and \#8) have similar off-target rates, with a discrepancy ranging from −0.61% to 2.02%, which results in narrow BLEU score gaps of 0.5 to 1.8 points. However, for PreNorm and PostNorm with the "S-ENC-T-DEC" language tag (\#1, \#2, \#5, and
\#6), the off-target rates show a more considerable discrepancy, ranging from 5.40% to 54.23%, resulting in BLEU score gaps from 1.7 to 12.3 points.
Further analysis of the nature of Transformer hidden states in the next section explores the reason for these different off-target rates in translations.
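Concretely, an off-target rate is the fraction of translations whose detected language differs from the intended target language; a minimal sketch using FastText language identification (the tool described in Appendix B) is given below. The model file name and helper function are illustrative assumptions.

```python
# Off-target rate via FastText LID; lid.176.bin is the public FastText language
# identification model (path assumed here).
import fasttext

lid = fasttext.load_model("lid.176.bin")

def off_target_rate(hypotheses, target_lang):
    """Fraction of hypotheses whose detected language is not the intended target language."""
    off = 0
    for sent in hypotheses:
        labels, _ = lid.predict(sent.replace("\n", " "))
        if labels[0] != f"__label__{target_lang}":
            off += 1
    return off / len(hypotheses)
```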
## 3.3 Tracking Off-Targets Within Transformer
We probe the language independence of hidden states to track off-targets within Transformer and reveal the differences between PreNorm and PostNorm. In previous work, language independence was primarily analyzed using either SVCCA (Raghu et al., 2017) or language classification accuracy (LCA) (Liu et al., 2021). However, we provide evidence in Appendix C that SVCCA,
which measures the cosine similarity between hidden states, is not suitable for ZST systems. In-
![3_image_0.png](3_image_0.png)
stead, LCA trains a classifier to inspect the hidden states on top of the encoder, but it does not simulate the training of a ZST system, which may introduce bias in the analysis for ZST.3In this work, we propose a novel approach for ZST based on LCA:
LLR tailors classifiers for each layer to recognize the source or target language. We train a tokenlevel linear classifier for each layer to utilize hidden states in each layer as features to identify the source or target language. We use hidden states obtained by feeding sentence pairs in supervised directions to simulate the training of ZST. We then test each layer's classifer's ability to recognize the source or target language for supervised or zeroshot directions. This approach enables the trained classifier to best represent the language recognition ability of hidden states in a ZST system.
We train two types of linear classifiers for each encoder and decoder layer. One is for recognizing the source language, and the other is for the target language. Each linear classifier is a linear transformation from the dimension of the hidden states
(512 or 1, 024) to the number of source or target languages (e.g., 7 for OPUS). We use the validation set of all supervised directions to obtain the hidden state of each token in each layer and set their source language tag or target language tag as the gold labels. Note that the decoder hidden state of each token in each layer is obtained auto-regressively without teacher-forcing. We train each classifier for 3 epochs4 with a learning rate of 1e-3 and a batch size of 64 sentences. For inference, we utilize the test sets of all supervised or zero-shot directions for computing the LLR results for corresponding directions, respectively.
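A minimal sketch of a single probe is shown below (the hidden size and number of languages follow the OPUS Transformer-base setting; the variable names and training-loop details are illustrative assumptions rather than the exact implementation).

```python
# One LLR probe: a per-layer linear classifier mapping a frozen hidden state to a
# source- or target-language label; a separate probe is trained for each layer.
import torch
import torch.nn as nn

hidden_dim, num_languages = 512, 7  # Transformer-base hidden size, 7 OPUS languages
probe = nn.Linear(hidden_dim, num_languages)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(hidden_states: torch.Tensor, language_ids: torch.Tensor) -> float:
    """hidden_states: (num_tokens, hidden_dim) frozen layer outputs; language_ids: (num_tokens,)."""
    optimizer.zero_grad()
    loss = loss_fn(probe(hidden_states), language_ids)
    loss.backward()
    optimizer.step()
    return loss.item()
```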
The LLR results for \#1 and \#2 in Table 2 are presented in Fig. 2. First, we find that the encoder and decoder hidden states are highly correlated with the target and source languages, respectively, for supervised directions (L1 to L6 of Pre/Post-Tgt and L7 to L12 of Pre/Post-Src of 3 upper sub-figures),
which may impact the generalizability for ZST. Second, we see that the encoder hidden states of PostNorm are less dependent on the source language than PreNorm (L6 of Pre/Post-Src of 3 lower subfigures). Third, we observe that the hidden states in all the decoder layers of PostNorm are more dependent on the target language and less on the source language than PreNorm (L7 to L12 of 3 lower subfigures). The latter two points contribute to the observed gaps in off-target rates between PreNorm and PostNorm. Conclusions for \#5 and \#6 with the
"S-ENC-T-DEC" tag are identical (Appendix G).
![4_image_0.png](4_image_0.png)
For systems using "T-ENC," we find that the LLR are similar between PreNorm and PostNorm
(Appendix G) and attribute the BLEU score gaps to translation quality (i.e., adequacy and fluency).
## 3.4 Unraveling Structural Flaws Of Prenorm
We investigate the structural differences between PreNorm and PostNorm to explain the observed differences in hidden states for models trained with the "S-ENC-T-DEC" tag. Inspired by Veit et al. (2016), we present an "unraveled view" for PreNorm, which decomposes the residual connections by the summation of several sub-networks, as shown in Fig. 1 (paths with different colors indicate sub-networks). However, this is not applicable to PostNorm, as LayerNorm is located after residual connections. Based on this analysis, the structural characteristic of PreNorm is:
(1) Shallow Sub-network Nature: PreNorm includes shallow sub-networks, such as the embedding layer output fed through encoder layers without any operation except for the final LayerNorm
(red path in Fig. 1), but PostNorm does not.
(2) LayerNorm Before SA/FFN: In PreNorm, LayerNorm is placed directly before the self-attention
(SA) or feed-forward module (FFN) within the residual connection module.
To analyze the impact of these structural characteristics on the generalizability of PreNorm in ZST, we swap the order of LayerNorm and SA/FFN
within the residual connection module (**SwapPreNorm**), while keeping the shallow sub-network nature of PreNorm. Refer to Appendix D for specific illustrations of Swap-PreNorm. The results, presented in Fig 3, show that PreNorm can be significantly improved through Swap-PreNorm, with Swap-PreNorm approaching the performance of PostNorm. This demonstrates that ZST is more sensitive to the position of LayerNorm in PreNorm than its shallow sub-network nature.
## 4 Conclusion
In this paper, we comprehensively explored the effects of LayerNorm on ZST performance. Our results demonstrate that PostNorm consistently outperforms PreNorm for ZST, regardless of the language tag and residual connection settings used.
Through in-depth analysis of off-target rates and structural flaws in the PreNorm model, we were able to identify the underlying factors that contribute to the performance discrepancy. Our study suggests that care should be taken when selecting the LayerNorm setting for ZST in future research.
## Limitations
We identify three limitations of our work, which will be addressed in future work:
- The impact of LayerNorm, language tags, and residual connection settings on ZST was analyzed in this study. However, other factors, such as the number of layers of the Transformer model, may also have an effect and should be further investigated.
- Our conclusions were based on overall scores across all ZST directions. Further examination of how LayerNorm impacts specific language pairs is necessary.
- We explored the setting of LayerNorm for ZST systems trained from scratch. Exploration of how the LayerNorm setting of multilingual pre-trained models such as mBART (Liu et al., 2020) impacts the finetuning for ZST will be needed.
## Ethical Considerations
In this study, we utilized only publicly accessible datasets for model training. Though our experiments focused on neural machine translation models, it is worth noting that these models may produce biased translations. Although this can be mitigated through a debiasing filtering process, it is beyond the scope of this work. Regarding the composition of this paper, only Grammarly5 was utilized for grammar correction, and there is no originally machine-generated text in the paper.
5https://app.grammarly.com/
## Acknowledgements
This work was supported by JSPS KAKENHI
Grant Number 22KJ1843.
## References
Maruan Al-Shedivat and Ankur Parikh. 2019. Consistency by agreement in zero-shot neural machine translation. In *Proceedings of the 2019 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1184–1197, Minneapolis, Minnesota. Association for Computational Linguistics.
Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Roee Aharoni, Melvin Johnson, and Wolfgang Macherey.
2019. The missing ingredient in zero-shot neural machine translation. *CoRR*, abs/1903.07091.
Lei Jimmy Ba, Jamie Ryan Kiros, and Geoffrey E.
Hinton. 2016. Layer normalization. *CoRR*,
abs/1607.06450.
Alexei Baevski and Michael Auli. 2019. Adaptive input representations for neural language modeling. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
Mauro Cettolo, Marcello Federico, Luisa Bentivogli, Jan Niehues, Sebastian Stüker, Katsuhito Sudoh, Koichiro Yoshino, and Christian Federmann. 2017. Overview of the IWSLT 2017 evaluation campaign.
In Proceedings of the 14th International Conference on Spoken Language Translation, pages 2–14, Tokyo, Japan. International Workshop on Spoken Language Translation.
Orhan Firat, Kyunghyun Cho, Baskaran Sankaran, Fatos T. Yarman-Vural, and Yoshua Bengio. 2017.
Multi-way, multilingual neural machine translation.
Comput. Speech Lang., 45:236–252.
Jiatao Gu, Yong Wang, Kyunghyun Cho, and Victor O.K. Li. 2019. Improved zero-shot neural machine translation via ignoring spurious correlations.
In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 1258–
1268, Florence, Italy. Association for Computational Linguistics.
Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. *Transactions of the Association for Computational Linguistics*, 5:339–351.
Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hervé Jégou, and Tomás Mikolov. 2016. Fasttext.zip: Compressing text classification models. *CoRR*, abs/1612.03651.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In *3rd International Conference on Learning Representations,*
ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388–395, Barcelona, Spain. Association for Computational Linguistics.
Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In *Proceedings of* Machine Translation Summit X: Papers, MTSummit 2005, Phuket, Thailand, September 13-15, 2005, pages 79–86.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180, Prague, Czech Republic. Association for Computational Linguistics.
Danni Liu, Jan Niehues, James Cross, Francisco Guzmán, and Xian Li. 2021. Improving zero-shot translation by disentangling positional information.
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1259–1273, Online. Association for Computational Linguistics.
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. *Transactions of the Association for Computational Linguistics*, 8:726–742.
Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory F. Diamos, Erich Elsen, David García, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, and Hao Wu. 2018. Mixed precision training. In *6th International Conference on* Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.
Toan Q. Nguyen and Julian Salazar. 2019. Transformers without tears: Improving the normalization of selfattention. In *Proceedings of the 16th International* Conference on Spoken Language Translation, Hong Kong. Association for Computational Linguistics.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for
sequence modeling. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)*,
pages 48–53, Minneapolis, Minnesota. Association for Computational Linguistics.
Ngoc-Quan Pham, Jan Niehues, Thanh-Le Ha, and Alexander Waibel. 2019. Improving zero-shot translation with language-independent constraints. In *Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers)*, pages 13–23, Florence, Italy. Association for Computational Linguistics.
Matt Post. 2018. A call for clarity in reporting BLEU
scores. In *Proceedings of the Third Conference on* Machine Translation: Research Papers, pages 186–
191, Brussels, Belgium. Association for Computational Linguistics.
Maithra Raghu, Justin Gilmer, Jason Yosinski, and Jascha Sohl-Dickstein. 2017. SVCCA: singular vector canonical correlation analysis for deep learning dynamics and interpretability. In *Advances in Neural* Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 6076–6085.
Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020.
BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881–7892, Online. Association for Computational Linguistics.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany. Association for Computational Linguistics.
Ashish Vaswani, Samy Bengio, Eugene Brevdo, François Chollet, Aidan N. Gomez, Stephan Gouws, Llion Jones, Lukasz Kaiser, Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam Shazeer, and Jakob Uszkoreit. 2018. Tensor2tensor for neural machine translation. In *Proceedings of the 13th Conference of* the Association for Machine Translation in the Americas, AMTA 2018, Boston, MA, USA, March 17-21, 2018 - Volume 1: Research Papers, pages 193–199.
Association for Machine Translation in the Americas.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008.
Andreas Veit, Michael J. Wilber, and Serge J. Belongie.
2016. Residual networks behave like ensembles of relatively shallow networks. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 550–558.
Weizhi Wang, Zhirui Zhang, Yichao Du, Boxing Chen, Jun Xie, and Weihua Luo. 2021. Rethinking zeroshot neural machine translation: From a perspective of latent variables. In *Findings of the Association* for Computational Linguistics: EMNLP 2021, pages 4321–4327, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Liwei Wu, Shanbo Cheng, Mingxuan Wang, and Lei Li. 2021. Language tags matter for zero-shot neural machine translation. In *Findings of the Association* for Computational Linguistics: ACL-IJCNLP 2021, pages 3001–3007, Online. Association for Computational Linguistics.
Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, and Tie-Yan Liu. 2020. On layer normalization in the transformer architecture. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 10524–10533. PMLR.
Jingjing Xu, Xu Sun, Zhiyuan Zhang, Guangxiang Zhao, and Junyang Lin. 2019. Understanding and improving layer normalization. In *Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019,*
NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 4383–4393.
Yilin Yang, Akiko Eriguchi, Alexandre Muzio, Prasad Tadepalli, Stefan Lee, and Hany Hassan. 2021. Improving multilingual translation by representation and gradient regularization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7266–7279, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Biao Zhang and Rico Sennrich. 2019. Root mean square layer normalization. In *Advances in Neural* Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 12360–12371.
Biao Zhang, Philip Williams, Ivan Titov, and Rico Sennrich. 2020. Improving massively multilingual neural machine translation and zero-shot translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1628–
1639, Online. Association for Computational Linguistics.
Changfeng Zhu, Heng Yu, Shanbo Cheng, and Weihua Luo. 2020. Language-aware interlingua for multilingual neural machine translation. In *Proceedings*
of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1650–1655, Online. Association for Computational Linguistics.
## A Training Details
For data preprocessing, we utilize jieba for Chinese segmentation and Moses (Koehn et al., 2007)
for tokenization of the other languages. After applying BPE, we obtain vocabularies of size 66,158, 40,100, and 50,363 for OPUS, IWSLT, and Europarl, respectively. For multilingual training, we do not apply oversampling, as the data size for each language pair is comparable. The maximum sentence length is set to 256. We train Transformer models using Fairseq and set the dropout rate to 0.1, 0.4, and 0.3 for each dataset. Adam (Kingma and Ba, 2015) is used as the optimizer with a learning rate of 5e-4, 1e-3, and 5e-4 for each dataset, and 4,000 warm-up steps are employed. We train the Transformer-base model using 4 32GB V100 GPUs and the Transformer-big model using 8 32GB V100 GPUs with a batch size of 4,096 tokens. Additionally, we employ mixed precision training (Micikevicius et al., 2018) to accelerate training. We train each dataset for 200, 100, and 400 epochs, respectively.
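As a concrete illustration of this preprocessing pipeline, the sketch below segments Chinese with jieba and tokenizes the other languages with the sacremoses port of the Moses tokenizer; the use of sacremoses (rather than the original Moses scripts) and the example sentences are assumptions made for illustration only.

```python
import jieba
from sacremoses import MosesTokenizer

# Chinese side: word segmentation with jieba, joined back into a space-separated string.
zh_line = " ".join(jieba.cut("这是一个测试句子。"))

# Other languages: Moses-style tokenization (shown here via the sacremoses port).
mt = MosesTokenizer(lang="en")
en_line = mt.tokenize("This is a test sentence.", return_str=True)

# BPE is applied afterwards to produce the subword vocabularies reported above
# (e.g., with subword-nmt or fairseq's preprocessing tools); omitted here.
```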
## B Evaluation Details
For OPUS, we use the test sets following Zhang et al. (2020), while for IWSLT and Europarl, we choose the test sets following Wu et al. (2021). We select the checkpoint with the lowest validation loss for evaluation. Inference is performed on the trained models with a beam size of 5. For calculating SacreBLEU, we utilize the "zh" tokenization mode for Chinese and the "13a" tokenization mode for the other languages. We use the model of setting #4 (Table 2) for pivot-based translation. To calculate the off-target rates, we utilize the language identification tool provided by FastText (Joulin et al., 2016). Our experiments revealed that this tool is slightly more accurate than the "langdetect" tool: it achieves an accuracy of 98% when identifying the reference English sentences in the test set, whereas "langdetect" only achieves an accuracy of around 92%.

|                      | Zero-shot | Supervised |
|----------------------|-----------|------------|
| PreNorm              | 9.8       | 33.8       |
| PostNorm             | 17.5      | 33.8       |
| PreNorm w/o Enc-Last | 11.2      | 33.7       |

Table 3: Average BLEU scores of PreNorm, PostNorm, and PreNorm w/o Enc-Last on OPUS for zero-shot and supervised directions.

![7_image_0.png](7_image_0.png)
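The following minimal sketch illustrates this evaluation recipe: SacreBLEU with language-dependent tokenization and fastText-based language identification for the off-target rate. The function names and the language-ID model file path are assumptions for illustration, not the authors' code.

```python
import sacrebleu
import fasttext

def bleu(hypotheses, references, target_lang):
    # "zh" tokenization for Chinese targets, "13a" for all other languages.
    tok = "zh" if target_lang == "zh" else "13a"
    return sacrebleu.corpus_bleu(hypotheses, [references], tokenize=tok).score

def off_target_rate(hypotheses, target_lang, lid_path="lid.176.bin"):
    # Fraction of outputs whose detected language differs from the target language.
    lid = fasttext.load_model(lid_path)  # fastText language-identification model
    off = 0
    for hyp in hypotheses:
        labels, _ = lid.predict(hyp.replace("\n", " "))  # predict() rejects newlines
        off += int(labels[0] != f"__label__{target_lang}")
    return off / max(len(hypotheses), 1)
```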
## C Discussion About SVCCA Score
In previous work (Wu et al., 2021; Liu et al., 2021),
the SVCCA score (Raghu et al., 2017), a cosine similarity measure between the hidden states of neural models, was used to compare two ZST models. However, we demonstrate that this method is unsuitable for comparing different ZST systems through an experiment. We removed the final LayerNorm from the PreNorm encoder, denoting it as "PreNorm w/o Enc-Last." We then evaluated the BLEU scores of PreNorm, PostNorm, and
"PreNorm w/o Enc-Last" on the OPUS dataset, as reported in Table 3. We subsequently calculated the encoder layer-wise SVCCA score for each LayerNorm setting using the mean-pooled hidden states of each encoder layer. The average SVCCA score between all the "en-xx" and "xx-en" directions is reported in Fig. 4. When comparing Fig. 4 with Table 3, we observe that PostNorm has a higher SVCCA score on top of the encoder (L6) than PreNorm, which suggests that the encoder of Post-
![8_image_1.png](8_image_1.png)
Norm is more language-agnostic and thus has a higher ZST BLEU score in Table 3, aligning with the results found in Wu et al. (2021) and Liu et al.
(2021). However, "PreNorm w/o Enc-Last" shows an extremely high SVCCA score on top of the encoder, whereas its ZST BLEU performance is significantly lower than PostNorm by 6.3 BLEU
points. This reveals the significant inconsistency between the SVCCA score and the performance of ZST models. Therefore, it is crucial to carefully consider how to leverage SVCCA for ZST analysis in the future.
On the other hand, our proposed LLR score is consistent with the ZST BLEU score, as shown in Fig. 5. Specifically, we observe the lowest LLR
score on top of the encoder of PostNorm for the source language and the highest LLR scores in all the decoder layers, which aligns with its best ZST
performance among the three systems.
## D Swap-PreNorm

Fig. 6 illustrates the implementation of Swap-PreNorm, which incorporates LayerNorm following the SA/FFN layers within the residual connection block. Compared with PostNorm, Swap-PreNorm alters the order of LayerNorm and the residual connection. As depicted in the unraveled view of Swap-PreNorm in Fig. 6, it preserves the shallow sub-network characteristics of PreNorm, which is the main difference compared with PostNorm.
![8_image_0.png](8_image_0.png)
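For concreteness, the sketch below contrasts the three LayerNorm placements for a single Transformer sub-layer (self-attention or FFN). This is a minimal PyTorch illustration of the idea described above, not the authors' implementation.

```python
import torch
import torch.nn as nn

class NormResidualBlock(nn.Module):
    """One sub-layer (self-attention or FFN) wrapped with a residual connection,
    under three LayerNorm placements: "pre", "post", and "swap" (Swap-PreNorm)."""

    def __init__(self, d_model: int, sublayer: nn.Module, variant: str = "post"):
        super().__init__()
        self.sublayer = sublayer
        self.norm = nn.LayerNorm(d_model)
        self.variant = variant

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.variant == "pre":
            # PreNorm: normalize the sub-layer input; the identity shortcut
            # is untouched (shallow sub-network view).
            return x + self.sublayer(self.norm(x))
        if self.variant == "swap":
            # Swap-PreNorm: LayerNorm follows the sub-layer but stays inside
            # the residual branch, so the identity shortcut is preserved.
            return x + self.norm(self.sublayer(x))
        # PostNorm: LayerNorm is applied after the residual addition, i.e.,
        # the shortcut itself passes through the normalization.
        return self.norm(x + self.sublayer(x))
```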
## E LayerNorm Without Trainable Parameters

Xu et al. (2019) demonstrated that the overfitting issue of PreNorm can be alleviated by removing the trainable parameters of LayerNorm (LayerNorm-simple). We apply this technique to our ZST experimental settings to investigate the overfitting state of PreNorm and PostNorm; the resulting models are denoted as PreNorm-simple and PostNorm-simple. As reported in Table 4, the results indicate that PreNorm-simple and PostNorm-simple outperform their respective original versions in supervised directions, which aligns with the findings of Xu et al. (2019). Additionally, we observe comparable or better BLEU scores for PreNorm-simple than for PreNorm (except for #7 on Europarl), indicating that the original PreNorm had low generalizability for ZST. For PostNorm-simple, we observe a significant improvement only for #4 on OPUS, which suggests the superior generalizability of the original PostNorm for ZST. Despite this improvement, PreNorm-simple still underperforms PostNorm, highlighting the severe overfitting problem of the original PreNorm.
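In PyTorch, LayerNorm-simple corresponds to disabling the elementwise affine transformation, as sketched below; the hidden size is illustrative, not taken from the paper.

```python
import torch.nn as nn

d_model = 512  # illustrative hidden size

# Standard LayerNorm with trainable gain and bias, as in PreNorm/PostNorm.
layer_norm = nn.LayerNorm(d_model)

# LayerNorm-simple: the same standardization but without trainable parameters.
layer_norm_simple = nn.LayerNorm(d_model, elementwise_affine=False)
```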
## F Details Of The LLR Results

We show the LLR results of #3 - #8 (Table 2) for the ZST and supervised directions in Fig. 7.
## G Details Of The Main Results
We report the specific BLEU score for each translation direction and each random seed in Tables 5, 6, 7, 8, 9, and 10.
| # | LayerNorm-simple | Language Tag | Res. | Zero-shot OPUS | Zero-shot IWSLT | Zero-shot Europarl | Supervised OPUS | Supervised IWSLT | Supervised Europarl |
|---|---|---|---|---|---|---|---|---|---|
| 1 | PreNorm-simple | S-ENC-T-DEC | w/ | 10.1 (+0.0) | 5.9 (+1.0) | 25.0 (+0.1) | 33.9 (+0.2) | 31.9 (+0.4) | 34.4 (+0.1) |
| 2 | PostNorm-simple | S-ENC-T-DEC | w/ | 15.8 (-1.0) | 11.5 (-0.9) | 28.7 (-0.5) | 34.1 (+0.2) | 32.1 (+0.6) | 34.5 (+0.0) |
| 3 | PreNorm-simple | T-ENC | w/ | 13.7 (+0.4) | 14.5 (+0.8) | 29.4 (-0.1) | 33.9 (+0.2) | 31.9 (+0.3) | 34.4 (+0.0) |
| 4 | PostNorm-simple | T-ENC | w/ | 14.9 (+0.9) | 15.4 (-0.1) | 30.8 (+0.0) | 34.0 (-0.1) | 31.9 (+0.4) | 34.6 (+0.1) |
| 5 | PreNorm-simple | S-ENC-T-DEC | w/o | 15.4 (+1.1) | 7.8 (-0.2) | 19.4 (+2.7) | 33.7 (+0.1) | 31.3 (+0.4) | 34.1 (-0.2) |
| 6 | PostNorm-simple | S-ENC-T-DEC | w/o | 16.4 (+0.4) | 16.0 (-1.4) | 29.2 (+0.2) | 33.9 (+0.1) | 31.3 (+0.6) | 34.4 (+0.0) |
| 7 | PreNorm-simple | T-ENC | w/o | 13.1 (-0.3) | 16.8 (+0.6) | 28.7 (-1.2) | 33.7 (+0.2) | 31.4 (+0.5) | 34.3 (+0.0) |
| 8 | PostNorm-simple | T-ENC | w/o | 14.0 (+0.1) | 17.9 (+0.1) | 31.0 (+0.2) | 33.7 (-0.2) | 31.1 (+0.5) | 34.4 (+0.0) |
In addition to BLEU scores, we present model-based evaluation results obtained using BLEURT (Sellam et al., 2020) in Table 11. The trend of these results is consistent with that of the BLEU scores.
![10_image_0.png](10_image_0.png)
LayerNorm / Direction: S-ENC-T-DEC w/ Res. | T-ENC w/ Res. | S-ENC-T-DEC w/o Res. | T-ENC w/o Res. (each setting lists the scores for seeds 1, 10, 20 and their average)
ar-de 5.3 5.9 5.2 5.5 10.0 11.0 10.0 10.3 9.8 8.4 9.5 9.2 11.4 8.3 10.8 10.2 ar-fr 17.5 16.1 17.2 16.9 16.3 19.9 18.3 18.2 19.9 20.3 20.8 20.3 20.9 18.6 20.4 20.0
ar-nl 8.6 6.3 7.9 7.6 13.2 13.1 12.6 13.0 13.3 14.0 12.2 13.2 13.5 12.8 13.6 13.3 ar-ru 8.1 9.1 9.5 8.9 14.8 16.2 15.9 15.6 13.0 10.9 13.0 12.3 19.6 17.8 19.6 19.0 ar-zh 12.7 13.4 13.8 13.3 28.1 28.1 27.3 27.8 25.1 19.8 24.4 23.1 31.2 31.0 31.3 31.2 de-ar 3.6 3.3 2.5 3.1 5.6 5.0 3.9 4.8 6.9 6.4 5.6 6.3 6.4 5.1 3.3 4.9
de-fr 15.5 16.0 16.2 15.9 5.1 5.7 3.8 4.9 18.8 17.6 18.9 18.4 5.9 4.7 5.2 5.3
de-nl 19.4 15.9 18.8 18.0 12.4 8.9 8.6 10.0 21.4 20.4 20.7 20.8 9.1 7.1 7.7 8.0 de-ru 6.0 6.1 5.8 6.0 5.0 5.6 3.7 4.8 9.4 9.2 9.0 9.2 6.4 4.5 3.8 4.9 de-zh 7.6 9.5 8.8 8.6 15.6 12.4 11.9 13.3 14.4 12.8 13.2 13.5 16.4 4.1 6.0 8.8 fr-ar 9.5 7.5 8.5 8.5 15.5 16.2 13.2 15.0 15.4 13.1 14.4 14.3 18.5 16.5 15.8 16.9 fr-de 10.4 10.6 11.6 10.9 6.3 7.2 4.9 6.1 14.1 11.0 15.2 13.4 4.5 4.6 4.0 4.4 fr-nl 17.5 13.7 18.0 16.4 16.0 12.5 13.2 13.9 20.5 19.9 20.2 20.2 11.1 9.1 8.6 9.6 fr-ru 8.8 8.4 9.3 8.8 12.1 12.8 10.9 11.9 13.3 9.2 12.0 11.5 16.5 7.4 8.4 10.8 fr-zh 14.3 13.1 15.3 14.2 31.2 30.0 28.0 29.7 27.9 21.0 25.8 24.9 34.1 16.0 27.9 26.0
nl-ar 2.6 2.0 1.7 2.1 5.2 5.6 5.3 5.4 4.3 5.8 4.2 4.8 5.5 5.0 5.0 5.2 nl-de 14.3 14.4 13.9 14.2 12.8 13.9 11.3 12.7 16.9 14.8 18.3 16.7 13.8 6.9 10.9 10.5 nl-fr 18.3 17.4 18.5 18.1 13.1 16.1 12.4 13.9 21.5 19.9 22.3 21.2 15.0 7.1 13.8 12.0 nl-ru 4.2 4.4 3.4 4.0 9.5 9.8 8.6 9.3 7.2 6.5 7.3 7.0 10.3 6.6 7.3 8.1
nl-zh 2.2 3.2 3.0 2.8 10.8 10.0 10.4 10.4 7.0 8.0 6.3 7.1 11.1 7.5 10.0 9.5
ru-ar 9.7 7.6 7.6 8.3 15.6 16.1 14.6 15.4 15.9 13.3 14.0 14.4 18.6 19.1 18.0 18.6 ru-de 7.7 9.1 7.2 8.0 8.5 10.0 6.0 8.2 10.5 10.0 10.9 10.5 8.4 5.6 6.8 6.9 ru-fr 18.1 17.5 17.4 17.7 18.1 20.5 17.6 18.7 19.9 19.5 20.7 20.0 22.4 17.4 21.1 20.3 ru-nl 10.2 8.6 9.9 9.6 11.5 11.7 9.5 10.9 13.0 13.1 12.4 12.8 12.7 8.2 10.1 10.3
ru-zh 11.3 11.6 12.5 11.8 28.4 28.3 27.6 28.1 25.3 17.7 21.6 21.5 31.9 20.0 30.7 27.5 zh-ar 9.1 7.6 7.2 8.0 15.2 16.6 14.5 15.4 15.6 12.7 15.1 14.5 18.4 18.8 18.7 18.6 zh-fr 16.7 15.6 16.4 16.2 20.1 21.4 18.4 20.0 20.9 19.3 20.6 20.3 23.5 23.3 23.7 23.5 zh-de 4.7 5.8 5.4 5.3 7.8 8.1 7.0 7.6 7.5 6.9 7.1 7.2 8.6 8.6 8.8 8.7 zh-nl 6.9 5.4 6.0 6.1 8.6 8.6 8.2 8.5 8.5 8.0 8.0 8.2 9.1 9.2 8.8 9.0 zh-ru 6.9 8.2 7.8 7.6 13.7 15.7 12.9 14.1 12.8 10.0 11.8 11.5 18.7 19.8 19.7 19.4 avg. 10.3 9.8 10.2 **10.1** 13.5 13.9 12.4 **13.3** 15.0 13.3 14.5 **14.3** 15.1 11.7 13.3 **13.4**
(The rows above are PreNorm; the rows below are PostNorm.)
ar-de 11.4 11.0 10.3 10.9 10.1 10.4 9.9 10.1 10.1 11.9 9.9 10.6 11.0 11.0 10.0 10.7 ar-fr 20.7 23.2 20.3 21.4 16.2 18.7 19.3 18.1 20.7 24.0 19.2 21.3 20.4 21.8 15.9 19.4
ar-nl 13.3 13.7 12.5 13.2 12.8 13.5 13.3 13.2 13.4 14.4 12.5 13.4 13.2 13.9 13.0 13.4 ar-ru 16.9 18.7 16.1 17.2 17.4 17.2 18.6 17.7 13.5 19.1 14.7 15.8 20.4 20.7 18.7 19.9 ar-zh 28.6 29.4 29.2 29.1 29.2 30.4 30.3 30.0 26.1 30.7 27.4 28.1 32.9 32.9 31.9 32.6 de-ar 7.2 7.2 6.6 7.0 5.7 5.6 5.8 5.7 6.9 7.6 7.6 7.4 4.4 4.1 3.1 3.9 de-fr 17.6 19.3 18.2 18.4 5.1 6.6 5.8 5.8 17.3 20.3 17.3 18.3 5.4 7.9 4.1 5.8 de-nl 21.4 21.8 20.4 21.2 9.1 9.5 7.9 8.8 20.0 22.3 20.5 20.9 9.7 11.9 7.1 9.6 de-ru 12.3 13.8 12.8 13.0 6.0 6.3 7.2 6.5 10.1 13.3 10.5 11.3 5.2 4.0 3.7 4.3 de-zh 16.1 16.9 16.5 16.5 8.9 15.3 15.0 13.1 11.2 16.9 13.5 13.9 14.1 11.1 3.1 9.4
fr-ar 17.9 17.8 18.9 18.2 16.4 17.1 16.4 16.6 14.6 19.5 16.3 16.8 16.4 16.6 14.8 15.9
fr-de 15.0 17.3 17.0 16.4 5.4 6.7 6.5 6.2 13.1 17.0 13.5 14.5 4.9 7.0 4.8 5.6 fr-nl 21.4 21.8 20.3 21.2 11.3 13.3 11.6 12.1 20.6 22.7 20.5 21.3 11.6 14.1 10.1 11.9 fr-ru 17.7 19.5 15.9 17.7 16.7 13.3 18.5 16.2 12.9 20.7 13.3 15.6 10.9 15.5 13.3 13.2 fr-zh 30.5 32.0 31.8 31.4 29.8 32.0 31.4 31.1 25.9 32.5 28.4 28.9 31.7 32.0 30.3 31.3
nl-ar 5.3 5.9 5.6 5.6 6.0 5.3 5.8 5.7 5.2 6.1 6.4 5.9 5.0 5.2 4.5 4.9 nl-de 17.9 19.7 19.1 18.9 10.9 12.8 10.5 11.4 16.5 19.8 17.1 17.8 9.0 10.4 10.4 9.9 nl-fr 21.1 22.5 21.2 21.6 13.8 13.4 13.0 13.4 21.2 22.9 19.6 21.2 10.1 12.6 9.5 10.7 nl-ru 10.0 11.2 10.2 10.5 9.7 9.1 8.8 9.2 8.4 10.9 8.6 9.3 8.6 7.6 8.2 8.1
nl-zh 9.6 11.1 9.6 10.1 10.2 10.4 10.0 10.2 5.4 11.1 7.3 7.9 9.9 9.9 7.5 9.1 ru-ar 18.7 18.7 18.2 18.5 16.9 17.9 17.5 17.4 14.8 19.7 16.2 16.9 17.9 18.9 17.0 17.9 ru-de 12.9 12.9 12.9 12.9 8.7 8.1 9.0 8.6 10.8 13.3 10.5 11.5 8.6 9.2 7.9 8.6 ru-fr 21.5 24.0 21.2 22.2 19.4 17.9 19.0 18.8 20.1 24.8 19.0 21.3 16.8 22.0 13.8 17.5
ru-nl 13.0 13.6 12.7 13.1 10.9 11.8 12.4 11.7 13.3 14.2 13.0 13.5 11.0 12.0 9.7 10.9
ru-zh 27.6 29.8 28.6 28.7 30.1 30.4 30.6 30.4 23.6 30.2 24.6 26.1 32.5 32.2 29.0 31.2 zh-ar 18.0 17.4 17.3 17.6 16.9 17.5 17.1 17.2 16.3 19.3 17.0 17.5 19.1 19.8 19.4 19.4 zh-fr 20.2 21.3 20.2 20.6 21.4 22.3 21.5 21.7 20.5 24.1 18.3 21.0 23.1 24.4 24.5 24.0 zh-de 8.6 9.1 8.8 8.8 7.3 7.4 7.1 7.3 8.3 9.9 7.5 8.6 8.7 8.5 8.0 8.4 zh-nl 8.7 8.5 8.1 8.4 8.9 8.7 8.4 8.7 8.9 9.0 8.1 8.7 8.9 9.3 9.0 9.1 zh-ru 15.3 15.8 14.1 15.1 16.7 17.3 17.6 17.2 13.3 17.8 12.8 14.6 20.2 20.5 20.2 20.3 avg. 16.5 17.5 16.5 **16.8** 13.6 14.2 14.2 **14.0** 14.8 18.2 15.0 **16.0** 14.1 14.9 12.8 **13.9**
LayerNorm / Direction: S-ENC-T-DEC w/ Res. | T-ENC w/ Res. | S-ENC-T-DEC w/o Res. | T-ENC w/o Res. (each setting lists the scores for seeds 1, 10, 20 and their average)
en-ar 23.6 24.1 23.2 23.6 23.7 23.9 24.1 23.9 24.0 23.2 23.1 23.4 22.8 23.8 23.8 23.5
ar-en 37.6 37.1 37.3 37.3 37.5 37.1 37.5 37.4 37.4 37.2 36.9 37.2 36.4 36.7 37.0 36.7 en-de 29.7 30.1 30.4 30.1 30.4 29.6 30.4 30.1 30.1 30.1 30.1 30.1 30.3 30.5 30.7 30.5
de-en 34.3 34.5 34.2 34.3 34.5 34.1 34.3 34.3 35.0 34.7 34.3 34.7 33.8 34.1 34.4 34.1
en-fr 33.5 33.7 33.6 33.6 33.4 33.8 33.6 33.6 33.7 33.1 33.8 33.5 33.0 33.6 33.1 33.2 fr-en 35.6 35.4 35.3 35.4 35.0 35.0 35.5 35.2 35.6 35.2 35.1 35.3 34.4 35.2 35.0 34.9
en-nl 27.7 28.4 28.2 28.1 28.4 27.9 28.3 28.2 27.6 28.0 27.9 27.8 28.1 28.1 28.0 28.1
nl-en 31.3 30.8 31.2 31.1 30.9 30.7 30.8 30.8 31.0 30.8 31.0 30.9 30.4 30.9 30.5 30.6 en-ru 29.2 29.7 29.6 29.5 29.4 29.8 29.8 29.7 29.5 29.1 29.6 29.4 29.4 29.9 29.2 29.5
ru-en 35.2 34.6 35.0 34.9 34.7 34.6 35.0 34.8 35.2 34.8 35.1 35.0 34.3 34.8 34.7 34.6
en-zh 40.7 40.8 40.9 40.8 40.6 40.3 40.7 40.5 40.7 40.4 40.6 40.6 39.6 40.7 40.6 40.3 zh-en 46.2 46.1 45.9 46.1 46.1 46.1 46.2 46.1 46.2 45.9 45.8 46.0 45.6 46.4 46.3 46.1
avg. 33.7 33.8 33.7 **33.7** 33.7 33.6 33.9 **33.7** 33.8 33.5 33.6 **33.7** 33.2 33.7 33.6 **33.5**
(The rows above are PreNorm; the rows below are PostNorm.)
en-ar 23.9 23.4 23.7 23.7 24.6 24.4 24.3 24.4 23.7 23.8 23.8 23.8 24.0 23.8 24.0 23.9 ar-en 37.8 37.3 37.5 37.5 37.8 37.5 37.2 37.5 37.7 37.2 37.6 37.5 37.8 37.3 37.7 37.6 en-de 30.8 31.0 29.3 30.4 31.2 29.9 31.2 30.8 31.1 30.5 31.2 30.9 31.1 30.5 31.5 31.0 de-en 34.6 34.6 34.8 34.7 34.9 34.6 34.7 34.7 34.8 34.6 34.7 34.7 34.4 34.6 34.4 34.5 en-fr 33.9 33.4 34.1 33.8 34.1 33.8 33.9 33.9 33.5 33.5 33.2 33.4 33.7 33.8 33.6 33.7 fr-en 35.5 35.6 35.4 35.5 35.6 35.7 35.4 35.6 35.0 35.5 35.2 35.2 35.3 35.3 35.5 35.4 en-nl 27.8 28.4 28.2 28.1 27.9 28.8 28.3 28.3 28.0 27.9 28.3 28.1 27.7 27.9 28.4 28.0 nl-en 31.5 30.9 31.2 31.2 31.3 30.9 31.4 31.2 30.8 30.8 30.7 30.8 31.1 31.1 30.9 31.0 en-ru 29.4 29.6 29.9 29.6 30.1 29.8 30.0 30.0 29.9 30.0 29.2 29.7 30.0 29.5 29.5 29.7 ru-en 35.1 34.6 35.1 34.9 34.9 34.9 35.2 35.0 34.8 34.9 35.2 35.0 34.8 34.8 35.0 34.9 en-zh 41.2 40.9 40.9 41.0 41.2 40.9 40.8 41.0 40.8 40.5 40.7 40.7 40.7 40.7 41.0 40.8 zh-en 46.4 46.0 46.1 46.2 46.7 46.3 46.2 46.4 46.1 46.3 46.1 46.2 46.7 46.6 46.0 46.4 avg. 34.0 33.8 33.9 **33.9** 34.2 34.0 34.1 **34.1** 33.9 33.8 33.8 **33.8** 33.9 33.8 34.0 **33.9**
| Layer | Direction | S-ENC-T-DEC w/ Res. | T-ENC w/ Res. | S-ENC-T-DEC w/o Res. | T-ENC w/o Res. | | | | | | | | | | | | |
|---------|-------------|-----------------------|-----------------|------------------------|------------------|------|------|------|------|------|------|------|------|------|------|------|------|
| Norm | 1 | 10 | 20 | avg. | 1 | 10 | 20 | avg. | 1 | 10 | 20 | avg. | 1 | 10 | 20 | avg. | |
| it-nl | 5.2 | 3.7 | 4.3 | 4.4 | 13.4 | 14.4 | 14.0 | 13.9 | 6.4 | 3.6 | 13.8 | 7.9 | 16.3 | 17.7 | 17.2 | 17.1 | |
| nl-it | 5.5 | 4.3 | 4.3 | 4.7 | 13.9 | 14.7 | 14.4 | 14.3 | 6.1 | 4.6 | 10.8 | 7.2 | 15.5 | 17.0 | 17.1 | 16.5 | |
| it-ro | 5.5 | 5.7 | 5.1 | 5.4 | 13.4 | 13.5 | 14.4 | 13.8 | 7.8 | 7.4 | 14.2 | 9.8 | 16.0 | 16.6 | 16.9 | 16.5 | |
| ro-it | 7.2 | 5.5 | 5.3 | 6.0 | 14.9 | 15.1 | 15.4 | 15.1 | 7.1 | 4.3 | 11.4 | 7.6 | 17.8 | 18.1 | 18.4 | 18.1 | |
| nl-ro | 4.5 | 4.9 | 4.2 | 4.5 | 12.1 | 12.5 | 12.4 | 12.3 | 6.1 | 7.1 | 11.8 | 8.3 | 12.8 | 14.1 | 14.1 | 13.7 | |
| ro-nl | 4.4 | 4.3 | 3.9 | 4.2 | 12.1 | 13.4 | 12.5 | 12.7 | 5.6 | 3.1 | 12.4 | 7.0 | 15.1 | 16.1 | 15.6 | 15.6 | |
| avg. | 5.4 | 4.7 | 4.5 | 4.9 | 13.3 | 13.9 | 13.9 | 13.7 | 6.5 | 5.0 | 12.4 | 8.0 | 15.6 | 16.6 | 16.6 | 16.2 | |
| Pre. | it-nl | 13.7 | 11.8 | 13.1 | 12.9 | 15.9 | 16.3 | 17.0 | 16.4 | 17.7 | 18.3 | 17.4 | 17.8 | 18.4 | 18.0 | 18.6 | 18.3 |
| nl-it | 14.5 | 12.8 | 12.2 | 13.2 | 15.7 | 17.0 | 16.1 | 16.3 | 18.0 | 18.5 | 18.4 | 18.3 | 17.9 | 18.3 | 18.3 | 18.2 | |
| it-ro | 12.3 | 11.2 | 12.4 | 12.0 | 14.8 | 14.3 | 15.8 | 15.0 | 17.0 | 17.3 | 17.0 | 17.1 | 17.9 | 17.8 | 18.2 | 18.0 | |
| ro-it | 14.6 | 13.7 | 13.0 | 13.8 | 17.2 | 16.8 | 17.5 | 17.2 | 19.5 | 20.0 | 20.0 | 19.8 | 19.2 | 19.8 | 20.8 | 19.9 | |
| nl-ro | 11.1 | 10.4 | 10.2 | 10.6 | 13.5 | 13.4 | 13.6 | 13.5 | 14.9 | 14.9 | 14.7 | 14.8 | 15.4 | 15.2 | 15.5 | 15.4 | |
| ro-nl | 12.3 | 10.9 | 12.2 | 11.8 | 14.5 | 15.0 | 15.2 | 14.9 | 16.5 | 16.6 | 16.0 | 16.4 | 16.9 | 16.2 | 17.1 | 16.7 | |
| avg. | 13.1 | 11.8 | 12.2 | 12.4 | 15.3 | 15.5 | 15.9 | 15.5 | 17.3 | 17.6 | 17.3 | 17.4 | 17.6 | 17.6 | 18.1 | 17.8 | |
| Post. | | | | | | | | | | | | | | | | | |
| Layer | Direction | S-ENC-T-DEC w/ Res. | T-ENC w/ Res. | S-ENC-T-DEC w/o Res. | T-ENC w/o Res. | | | | | | | | | | | | |
|---------|-------------|-----------------------|-----------------|------------------------|------------------|------|------|------|------|------|------|------|------|------|------|------|------|
| Norm | 1 | 10 | 20 | avg. | 1 | 10 | 20 | avg. | 1 | 10 | 20 | avg. | 1 | 10 | 20 | avg. | |
| en-it | 33.9 | 33.8 | 33.6 | 33.8 | 33.7 | 33.4 | 33.7 | 33.6 | 33.6 | 32.9 | 33.3 | 33.3 | 32.4 | 33.3 | 33.4 | 33.0 | |
| it-en | 37.5 | 37.1 | 37.1 | 37.2 | 37.4 | 37.2 | 37.0 | 37.2 | 35.8 | 36.3 | 36.5 | 36.2 | 35.8 | 36.7 | 36.5 | 36.3 | |
| en-nl | 29.6 | 29.5 | 29.4 | 29.5 | 29.6 | 29.5 | 29.6 | 29.6 | 29.2 | 29.7 | 29.5 | 29.5 | 29.0 | 29.2 | 29.2 | 29.1 | |
| nl-en | 31.9 | 32.4 | 32.0 | 32.1 | 32.0 | 32.1 | 31.9 | 32.0 | 30.9 | 31.3 | 31.7 | 31.3 | 31.2 | 31.5 | 31.5 | 31.4 | |
| en-ro | 24.4 | 25.1 | 25.1 | 24.9 | 25.2 | 25.1 | 25.4 | 25.2 | 24.4 | 24.6 | 24.4 | 24.5 | 24.6 | 24.7 | 24.6 | 24.6 | |
| ro-en | 31.3 | 31.6 | 31.3 | 31.4 | 32.1 | 31.6 | 31.4 | 31.7 | 30.3 | 30.7 | 30.9 | 30.6 | 30.3 | 31.2 | 31.2 | 30.9 | |
| avg. | 31.4 | 31.6 | 31.4 | 31.5 | 31.7 | 31.5 | 31.5 | 31.6 | 30.7 | 30.9 | 31.1 | 30.9 | 30.6 | 31.1 | 31.1 | 30.9 | |
| Pre. | en-it | 33.9 | 33.3 | 33.5 | 33.6 | 33.8 | 34.0 | 33.5 | 33.8 | 33.1 | 33.2 | 32.6 | 33.0 | 32.4 | 32.6 | 33.4 | 32.8 |
| it-en | 37.1 | 36.9 | 37.0 | 37.0 | 37.1 | 37.1 | 36.9 | 37.0 | 35.7 | 35.4 | 36.1 | 35.7 | 36.4 | 35.7 | 35.8 | 36.0 | |
| en-nl | 29.6 | 30.1 | 30.1 | 29.9 | 30.4 | 30.4 | 30.0 | 30.3 | 29.2 | 29.0 | 29.0 | 29.1 | 29.2 | 29.0 | 29.5 | 29.2 | |
| nl-en | 31.9 | 32.0 | 31.6 | 31.8 | 31.3 | 31.9 | 31.8 | 31.7 | 31.0 | 31.1 | 31.7 | 31.3 | 30.9 | 30.7 | 31.3 | 31.0 | |
| en-ro | 25.4 | 25.2 | 24.6 | 25.1 | 25.3 | 25.2 | 25.5 | 25.3 | 24.7 | 25.0 | 24.6 | 24.8 | 24.4 | 24.4 | 25.0 | 24.6 | |
| ro-en | 31.5 | 31.6 | 31.6 | 31.6 | 30.8 | 31.4 | 31.1 | 31.1 | 30.4 | 29.6 | 30.8 | 30.3 | 30.4 | 30.1 | 30.4 | 30.3 | |
| avg. | 31.6 | 31.5 | 31.4 | 31.5 | 31.5 | 31.7 | 31.5 | 31.5 | 30.7 | 30.6 | 30.8 | 30.7 | 30.6 | 30.4 | 30.9 | 30.6 | |
| Post. | | | | | | | | | | | | | | | | | |
(In the following table, the upper block of rows is PreNorm and the lower block is PostNorm.)
| Layer | Direction | S-ENC-T-DEC w/ Res. | T-ENC w/ Res. | S-ENC-T-DEC w/o Res. | T-ENC w/o Res. | | | | | | | | | | | |
|---------|-------------|-----------------------|-----------------|------------------------|------------------|------|------|------|------|------|------|------|------|------|------|------|
| Norm | 1 | 10 | 20 | avg. | 1 | 10 | 20 | avg. | 1 | 10 | 20 | avg. | 1 | 10 | 20 | avg. |
| es-de | 23.2 | 22.0 | 16.1 | 20.4 | 26.7 | 26.9 | 27.3 | 27.0 | 6.2 | 14.1 | 11.2 | 10.5 | 24.9 | 28.5 | 28.3 | 27.2 |
| de-es | 30.3 | 30.0 | 27.6 | 29.3 | 32.4 | 32.0 | 32.3 | 32.2 | 15.5 | 25.7 | 18.7 | 20.0 | 32.9 | 33.1 | 33.4 | 33.1 |
| es-fr | 35.0 | 35.6 | 34.0 | 34.9 | 38.8 | 38.8 | 39.3 | 39.0 | 27.8 | 29.8 | 28.2 | 28.6 | 39.9 | 39.8 | 39.9 | 39.9 |
| fr-es | 36.0 | 35.5 | 32.8 | 34.8 | 38.6 | 38.7 | 38.7 | 38.7 | 18.7 | 30.7 | 22.3 | 23.9 | 39.7 | 39.7 | 40.0 | 39.8 |
| es-nl | 22.7 | 23.0 | 14.2 | 20.0 | 26.4 | 26.3 | 26.3 | 26.3 | 7.0 | 12.8 | 15.0 | 11.6 | 23.2 | 27.7 | 27.5 | 26.1 |
| nl-es | 27.2 | 27.1 | 24.9 | 26.4 | 29.1 | 29.1 | 29.1 | 29.1 | 13.9 | 23.0 | 16.9 | 17.9 | 29.6 | 29.7 | 29.8 | 29.7 |
| de-fr | 28.6 | 28.1 | 26.9 | 27.9 | 31.4 | 31.3 | 31.7 | 31.5 | 21.9 | 23.0 | 22.5 | 22.5 | 31.9 | 32.3 | 32.2 | 32.1 |
| fr-de | 23.5 | 22.0 | 15.9 | 20.5 | 26.3 | 26.5 | 26.8 | 26.5 | 6.3 | 14.3 | 11.5 | 10.7 | 25.0 | 28.1 | 28.2 | 27.1 |
| de-nl | 23.2 | 23.4 | 15.0 | 20.5 | 26.3 | 26.2 | 26.0 | 26.2 | 7.0 | 12.8 | 16.2 | 12.0 | 22.5 | 27.5 | 27.2 | 25.7 |
| nl-de | 21.4 | 20.3 | 14.3 | 18.7 | 23.2 | 23.8 | 23.5 | 23.5 | 6.4 | 13.3 | 11.9 | 10.5 | 21.6 | 24.6 | 24.6 | 23.6 |
| fr-nl | 22.9 | 23.3 | 14.1 | 20.1 | 26.0 | 25.9 | 25.8 | 25.9 | 6.8 | 12.2 | 15.3 | 11.4 | 21.6 | 27.4 | 27.1 | 25.4 |
| nl-fr | 26.0 | 25.9 | 25.0 | 25.6 | 28.1 | 28.3 | 28.2 | 28.2 | 19.9 | 20.9 | 19.9 | 20.2 | 28.9 | 28.8 | 28.7 | 28.8 |
| avg. | 26.7 | 26.4 | 21.7 | 24.9 | 29.4 | 29.5 | 29.6 | 29.5 | 13.1 | 19.4 | 17.5 | 16.7 | 28.5 | 30.6 | 30.6 | 29.9 |
| es-de | 26.0 | 26.9 | 26.8 | 26.6 | 28.2 | 28.4 | 28.7 | 28.4 | 26.1 | 26.3 | 26.1 | 26.2 | 28.7 | 28.7 | 28.7 | 28.7 |
| de-es | 32.3 | 32.6 | 32.1 | 32.3 | 33.2 | 33.7 | 33.5 | 33.5 | 32.7 | 31.9 | 32.1 | 32.2 | 33.5 | 33.3 | 33.5 | 33.4 |
| es-fr | 37.7 | 38.8 | 37.5 | 38.0 | 40.2 | 40.0 | 40.1 | 40.1 | 37.9 | 37.8 | 37.7 | 37.8 | 40.1 | 39.9 | 40.5 | 40.2 |
| fr-es | 37.8 | 38.5 | 38.2 | 38.2 | 40.0 | 39.9 | 40.1 | 40.0 | 38.4 | 37.7 | 38.0 | 38.0 | 39.7 | 39.7 | 40.1 | 39.8 |
| es-nl | 25.6 | 26.0 | 26.2 | 25.9 | 27.9 | 27.7 | 27.8 | 27.8 | 26.0 | 25.7 | 25.5 | 25.7 | 27.8 | 28.0 | 27.9 | 27.9 |
| nl-es | 29.3 | 29.3 | 29.1 | 29.2 | 29.8 | 30.0 | 29.6 | 29.8 | 29.4 | 29.0 | 29.2 | 29.2 | 29.7 | 29.8 | 29.8 | 29.8 |
| de-fr | 30.6 | 31.7 | 30.8 | 31.0 | 32.8 | 32.8 | 33.1 | 32.9 | 31.0 | 30.7 | 30.8 | 30.8 | 32.9 | 32.4 | 33.3 | 32.9 |
| fr-de | 25.9 | 26.4 | 26.6 | 26.3 | 27.8 | 28.6 | 28.8 | 28.4 | 26.3 | 26.0 | 25.1 | 25.8 | 28.2 | 28.5 | 28.3 | 28.3 |
| de-nl | 25.8 | 26.0 | 25.9 | 25.9 | 27.5 | 27.7 | 27.5 | 27.6 | 25.7 | 25.6 | 25.5 | 25.6 | 27.8 | 27.6 | 27.5 | 27.6 |
| nl-de | 23.5 | 23.4 | 23.9 | 23.6 | 24.2 | 24.6 | 24.4 | 24.4 | 23.6 | 23.5 | 23.2 | 23.4 | 24.4 | 24.5 | 24.5 | 24.5 |
| fr-nl | 25.3 | 25.8 | 25.6 | 25.6 | 27.4 | 27.4 | 27.3 | 27.4 | 25.5 | 25.5 | 25.3 | 25.4 | 27.8 | 27.6 | 27.5 | 27.6 |
| nl-fr | 28.1 | 28.4 | 27.9 | 28.1 | 29.3 | 29.0 | 29.3 | 29.2 | 28.3 | 28.0 | 27.9 | 28.1 | 29.2 | 29.1 | 29.3 | 29.2 |
| avg. | 29.0 | 29.5 | 29.2 | 29.2 | 30.7 | 30.8 | 30.9 | 30.8 | 29.2 | 29.0 | 28.9 | 29.0 | 30.8 | 30.8 | 30.9 | 30.8 |
LayerNorm / Direction: S-ENC-T-DEC w/ Res. | T-ENC w/ Res. | S-ENC-T-DEC w/o Res. | T-ENC w/o Res. (each setting lists the scores for seeds 1, 10, 20 and their average)
en-de 28.0 28.0 28.3 28.1 28.2 28.2 28.4 28.3 28.0 28.1 28.4 28.2 28.5 28.5 28.3 28.4 de-en 35.2 35.1 35.3 35.2 35.1 35.0 35.1 35.1 34.9 35.0 35.0 35.0 34.8 35.1 35.0 35.0 en-es 37.6 37.4 37.4 37.5 37.5 37.4 37.7 37.5 37.5 37.5 37.4 37.5 37.5 37.5 37.3 37.4 es-en 39.3 38.9 39.0 39.1 39.0 39.0 38.9 39.0 38.8 39.0 39.1 39.0 38.6 39.0 38.9 38.8 en-fr 36.2 36.6 36.5 36.4 36.5 36.4 36.8 36.6 36.3 36.4 36.5 36.4 36.7 36.7 36.2 36.5 fr-en 38.2 38.2 38.0 38.1 38.0 38.2 38.0 38.1 38.0 37.9 38.2 38.0 37.8 38.2 38.0 38.0 en-nl 28.5 28.8 28.7 28.7 28.8 28.7 28.6 28.7 28.5 28.6 28.6 28.6 28.3 28.6 28.3 28.4 nl-en 31.7 31.6 31.5 31.6 31.5 31.7 31.9 31.7 31.6 31.3 31.6 31.5 31.3 31.7 31.6 31.5 avg. 34.3 34.3 34.3 **34.3** 34.3 34.3 34.4 **34.4** 34.2 34.2 34.4 **34.3** 34.2 34.4 34.2 **34.3**
(The rows above are PreNorm; the rows below are PostNorm.)
en-de 28.4 28.4 28.7 28.5 28.6 28.7 29.0 28.8 28.5 28.2 28.4 28.4 28.7 28.5 28.3 28.5
de-en 35.2 35.0 35.5 35.2 34.8 35.1 34.9 34.9 35.2 35.2 35.0 35.1 35.1 35.1 34.7 35.0 en-es 37.6 37.8 37.5 37.6 37.6 37.7 37.6 37.6 37.6 37.5 37.6 37.6 37.3 37.4 37.5 37.4
es-en 39.4 39.0 39.0 39.1 39.0 39.3 38.8 39.0 39.2 38.9 39.1 39.1 39.0 39.1 39.1 39.1
en-fr 36.8 36.8 36.4 36.7 36.8 36.7 37.0 36.8 36.6 36.5 37.1 36.7 36.9 36.8 36.7 36.8 fr-en 38.3 38.2 38.4 38.3 38.2 38.2 38.4 38.3 38.2 38.1 38.2 38.2 38.1 38.3 37.9 38.1
en-nl 28.8 28.8 28.6 28.7 28.7 28.7 28.9 28.8 28.6 28.6 28.9 28.7 28.7 28.7 28.5 28.6
nl-en 31.5 31.6 31.7 31.6 32.1 31.7 31.7 31.8 31.7 31.9 31.5 31.7 31.7 31.4 31.4 31.5 avg. 34.5 34.5 34.5 **34.5** 34.5 34.5 34.5 **34.5** 34.5 34.4 34.5 **34.4** 34.4 34.4 34.3 **34.4**
| # | LayerNorm | Language Tag | Res. | Zero-shot OPUS | Zero-shot IWSLT | Zero-shot Europarl | Supervised OPUS | Supervised IWSLT | Supervised Europarl |
|---|---|---|---|---|---|---|---|---|---|
| 0 | Pivot | - | - | 55.8 | 64.6 | 73.8 | - | - | - |
| 1 | PreNorm | S-ENC-T-DEC | w/ | 35.9 | 34.6 | 66.5 | 63.8 | 70.6 | 74.9 |
| 2 | PostNorm | S-ENC-T-DEC | w/ | 49.1 | 51.2 | 73.0 | 64.1 | 70.6 | 75.0 |
| 3 | PreNorm | T-ENC | w/ | 42.5 | 53.0 | 73.0 | 63.7 | 70.6 | 74.9 |
| 4 | PostNorm | T-ENC | w/ | 43.8 | 56.0 | 73.8 | 64.0 | 70.7 | 75.0 |
| 5 | PreNorm | S-ENC-T-DEC | w/o | 44.5 | 41.7 | 50.3 | 63.7 | 70.0 | 74.8 |
| 6 | PostNorm | S-ENC-T-DEC | w/o | 47.6 | 60.8 | 72.9 | 64.0 | 69.7 | 74.9 |
| 7 | PreNorm | T-ENC | w/o | 42.5 | 57.1 | 72.5 | 63.6 | 69.9 | 74.8 |
| 8 | PostNorm | T-ENC | w/o | 43.1 | 60.2 | 73.8 | 64.0 | 69.7 | 74.9 |
## ACL 2023 Responsible NLP Checklist

A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section "Limitations"
✓ A2. Did you discuss any potential risks of your work?
Section "Ethical Considerations"
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✓ A4. Have you used AI writing assistants when working on this paper?
Only Grammarly was utilized for grammar correction, and there is no originally machine-generated text in the paper.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2.1 And 2.2 Appendix A And B
✓ B1. Did you cite the creators of artifacts you used?
Section 2.1 and 2.2 Appendix A and B
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Appendix A and B
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Following previous work of neural machine translation, we directly used OPUS, IWSLT, and Europarl datasets without further checking filtering in order to conduct fair comparisons.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 2.1
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 2.1 and Appendix A and B
## C ✓ **Did You Run Computational Experiments?** Section 2.1, 2.2, 2.3, And 2.4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 2.1 and Appendix A
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 2.1 Appendix A and B
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 2.2, 2.3, and 2.4 Appendix C, E, F, and G
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 2.1 Appendix A and B
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
kung-peng-2023-models | Do Models Really Learn to Follow Instructions? An Empirical Study of Instruction Tuning | https://aclanthology.org/2023.acl-short.113 | Recent works on instruction tuning (IT) have achieved great performance with zero-shot generalizability to unseen tasks. With additional context (e.g., task definition, examples) provided to models for fine-tuning, they achieved much higher performance than untuned models. Despite impressive performance gains, what models learn from IT remains understudied. In this work, we analyze how models utilize instructions during IT by comparing model training with altered vs. original instructions. Specifically, we create simplified task definitions by removing all semantic components and only leaving the output space information, and delusive examples that contain incorrect input-output mapping. Our experiments show that models trained on simplified task definition or delusive examples can achieve comparable performance to the ones trained on the original instructions and examples. Furthermore, we introduce a random baseline to perform zeroshot classification tasks, and find it achieves similar performance (42.6{\%} exact-match) as IT does (43{\%} exact-match) in low resource setting, while both methods outperform naive T5 significantly (30{\%} per exact-match). Our analysis provides evidence that the impressive performance gain of current IT models can come from picking up superficial patterns, such as learning the output format and guessing. Our study highlights the urgent need for more reliable IT methods and evaluation. | # Do Models Really Learn To Follow Instructions? An Empirical Study Of Instruction Tuning
Po-Nien Kung, Nanyun Peng University of California, Los Angeles
{ponienkung,violetpeng}@cs.ucla.edu
## Abstract
Recent works on instruction tuning (IT) have achieved great performance with zero-shot generalizability to unseen tasks. With additional context (e.g., task definition, examples) provided to models for fine-tuning, they achieved much higher performance than untuned models.
Despite impressive performance gains, what models learn from IT remains understudied. In this work, we analyze how models utilize instructions during IT by comparing model training with altered vs. original instructions.
Specifically, we create *simplified task definitions* by removing all semantic components and only leaving the output space information, and *delusive examples* that contain incorrect input-output mapping. Our experiments show that models trained on *simplified task definitions* or *delusive examples* can achieve comparable performance to the ones trained on the original instructions and examples. Furthermore, we introduce a random baseline to perform zero-shot classification tasks, and find it achieves similar performance (42.6% exact-match) as IT does (43% exact-match) in the low-resource setting, while both methods outperform naive T5 significantly (by roughly 30% exact-match). Our analysis provides evidence that the impressive performance gains of current IT models can come from picking up superficial patterns, such as learning the output format and guessing. Our study highlights the urgent need for more reliable IT methods and evaluation.
## 1 Introduction
Recently, instruction tuning (IT) has drawn much attention in the NLP community, with the rapid growth of new models (Sanh et al., 2021; Wei et al.,
2021; Ouyang et al., 2022) and datasets (Wang et al., 2022; Gupta et al., 2022; Finlayson et al., 2022; Mishra et al., 2021; Ye et al., 2021; Bach et al., 2022). Models trained with task instructions demonstrate impressive zero-shot cross-task generalization ability. Despite the remarkable results,
| IT Models | TK-Inst | T0 | FLAN | Alpaca | Vicuna |
|---|---|---|---|---|---|
| Generalization target | Unseen tasks | Unseen tasks | Unseen tasks | Unseen instruct. | Unseen instruct. |
| Training: # of tasks | 756 | 39 | 38 | - | - |
| Training: # of instructions | 756 | 390* | 380 | 52K | 70K |
| Testing: # of tasks | 119 | 11 | 24 | - | - |
| Testing: # of instructions | 119 | 110* | 240 | 252 | 252 |
| Testing on unseen tasks? | ✔ | ✔ | ✔ | ✗ | ✗ |

Table 1: Comparison between the two types of instruction tuning models. Note that we report an estimated number of instructions (marked with *) for T0 during training and testing, since it has 5 to 10 instructions for each task. Our analysis focuses on the "generalize to unseen tasks" type.
how models utilize the instructions during training and inference time remains an open question.
Prior works have raised the question of whether models really learn to follow the instructions or just capture spurious correlations. Jang et al. (2022) and Webson and Pavlick (2021) showed that current large language models (LLMs) can achieve similar performance with misleading instructions (prompts) in in-context learning (ICL) and few-shot learning scenarios. Min et al. (2022) analyze how models utilize examples in ICL. They observed that (1) the input-output mapping in examples is not important and (2) output space information is crucial.

Besides ICL and few-shot prompt tuning, some works raise concerns about instruction following in the instruction tuning field (Finlayson et al., 2022; Gupta et al., 2022; Gu et al., 2022), with a focus on test-time analysis. In contrast, we focus on analyzing how models utilize instructions during the training process. We compare our analysis methods and observations with prior works in Appendix A.1.

In this work, we conduct controlled experiments on NatInst-V2 (Wang et al., 2022), the largest open-source instruction learning dataset, which includes 800+ English tasks with diverse task types, to study how models utilize instructions during IT. Note that existing research on IT can be categorized into two
major camps, **generalize to unseen tasks** and **generalize to unseen instructions**, based on their objectives. Table 1 shows the comparison. Our analysis focuses on the former, with more background and justifications provided in Section 2. We strategically alter the instructions and compare them with the original instructions for IT. Specifically, for task definitions, we create *simplified versions* by removing all semantic components in the instructions and only leaving the output space information. For task examples, we create *delusive examples* with incorrect input-output mapping, where the examples' input and output spaces are correct, but the input-output mappings are wrong. Figure 1 demonstrates specific examples of these altered instructions.

Our experiments show that models trained with simplified task definitions achieve performances on par with the original IT models with different numbers of training examples ranging from 10 to 800 per task. We also observe that instruction-tuned models are sensitive to input-output mapping during the testing ICL stage, but not during the instruction-tuning (training) stage, especially in low-resource settings (i.e., ≤ 50 training instances per task). To further understand why instruction tuning improves performance for zero-shot test tasks, we establish a random baseline that only knows the correct output format (label space) for classification and multi-choice tasks. We discover that the random baseline can get a 30% absolute exact-match score improvement over an untuned model, almost comparable to some IT models in low-resource settings.

Our results suggest that the impressive performance gains of IT may just come from models learning superficial patterns, such as the output space and format. We suggest future research on IT more carefully analyze their performance gains and benchmark against trivial baselines.

![1_image_0.png](1_image_0.png)

Figure 1: The left sub-figure demonstrates a two-stage pipeline where the model first trains on a set of tasks and then evaluates other unseen tasks. The model inputs *task definition*, *examples*, and *instance input* together to make a prediction. The two right sub-figures show how we create a *Simplified task definition* and a *Delusive task example* for ablation studies. We also demonstrate the results at the bottom, with *T5 w/o IT* (untuned model) results. Models can still achieve significant performance gains compared to *T5 w/o IT* while training on *Simplified task definitions* and *Delusive examples*.

## 2 Background

Recently, many instruction tuning works train and test models with instructions to achieve better zero-shot generalizability toward unseen tasks/instructions. We categorize these works by their objectives, **generalize to unseen tasks** and **generalize to unseen instructions**, and show the comparison in Table 1.
Instruction tuning to generalize to unseen tasks. Figure 1 illustrates a two-stage instruction tuning pipeline used in many IT models, such as T0 (Sanh et al., 2021), FLAN (Wei et al., 2021), and TK-Instruct (Wang et al., 2022). In the first stage, the models are trained on a set of training tasks with instructions (task definitions and task examples). After training, the models are evaluated on a set of unseen testing tasks for zero-shot generalizability. By incorporating instructions during training, the models are shown to significantly improve performance over untuned models. The impressive performance gains led people to believe that models learned to follow instructions via instruction tuning. The goal of our analysis is to verify this belief.

Instruction tuning to generalize to unseen instructions. Unlike T0, FLAN, and TK-Instruct, which train and test the model with clear task boundaries and focus on cross-task generalizability, Instruct-GPT (Ouyang et al., 2022), Alpaca (Taori et al., 2023), and Vicuna (Chiang et al., 2023) focus more on instruction generalizability: they train their models without clear task boundaries but with diverse instructions, and further test on user-oriented instructions. These models show very different behavior compared with instruction tuning models that aim to generalize to unseen tasks.

Since Instruct-GPT is not open-sourced and distilled IT models such as Alpaca and Vicuna came out after our submission, we focus our analysis on the first category using the TK-Instruct model and the NatInst-V2 dataset. However, we also conduct additional experiments and discuss the Alpaca model's instruction-following ability in Table 2.
## 3 Analysis Method
Task definition manipulation. To analyze whether models really "understand" and utilize the semantic meaning of task definitions, we conduct controlled experiments to remove semantic information in task definitions. Specifically, we conduct instruction tuning with task definitions at three levels of granularity: **Original**, **Simplified**, and **Empty**. The **Original** version uses human-crafted, human-readable task definitions provided in NatInst-V2 (Wang et al., 2022). The **Simplified** task definitions remove all semantic components in the original task definition and only leave the output space information. Specifically, we only provide possible output labels as task definitions for classification tasks, and completely remove task definitions for other tasks (mostly generative tasks) during IT. Figure 1 shows an example of a Simplified task definition. More details can be found in Appendix A.2. For **Empty**, we do not provide any task definition during instruction tuning.
Task example manipulation. Finlayson et al.
(2022) show that by providing a few task examples, both humans and models can guess and perform a task. We thus design a controlled experiment to study whether models learn the input-output mapping from task examples. Specifically, we compare models trained with three types of task examples: **Original**, **Delusive**, and **Empty**. For the **Original** setup, we provide one positive example from NatInst-V2 (Wang et al., 2022). For **Delusive** examples, we sample negative examples from NatInst-V2, which have correct input and output formats but incorrect input-output mappings. For **Empty**, we do not provide task examples during training.
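The sketch below shows one way such altered instructions can be constructed from a NatInst-V2-style task file. It is an illustration of the manipulation described above, not the authors' code; the field names ("Instances", "Negative Examples", per-instance "output" lists), the file name, and the heuristic for detecting classification tasks are assumptions.

```python
import json
import random

def simplified_definition(task, max_labels=10):
    """Keep only output-space information: list the labels for classification
    tasks, and return an empty definition for (mostly generative) other tasks."""
    labels = sorted({o for inst in task["Instances"] for o in inst["output"]})
    if len(labels) <= max_labels:  # heuristic: small output space -> classification
        return "The output should be one of: " + ", ".join(labels) + "."
    return ""

def delusive_example(task, seed=0):
    """Correct input/output format but wrong input-output mapping: reuse one of
    the task's negative examples as the demonstration."""
    rng = random.Random(seed)
    return rng.choice(task["Negative Examples"])

with open("task_example.json") as f:  # hypothetical task file name
    task = json.load(f)

print(simplified_definition(task))
print(delusive_example(task))
```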
![2_image_0.png](2_image_0.png)
Dataset. We conduct experiments on NatInst-V2 (Wang et al., 2022), the largest open-source instruction learning dataset, including 800+ English tasks with diverse task types. The instructions include human-crafted, human-readable *Task Definition*, *Positive Task Examples*, *Negative Task Examples*, and *Explanation*. We focus on studying task definitions and task examples, which were shown to be the most useful in the original paper.
Model. We conduct experiments on TK-Instruct, the current SOTA model provided in the NatInst-V2 paper. The model significantly outperformed previous SOTA models such as T0 (62.0 vs. 32.3 ROUGE-L for the 11B model). We follow the seq-to-seq instruction-tuning method used in TK-Instruct and train a T5-large-lm-adapt (770M parameters) model (Raffel et al., 2020) with performance comparable to the larger model (3B parameters) reported in Wang et al. (2022).1

1For the task definition experiments, we follow the best-performance setting from Wang et al. (2022) and use the task definition and two examples as instructions. For the task example experiments, due to the lack of negative examples, we conduct ablation studies using the task definition and one example.

![3_image_0.png](3_image_0.png)

Figure 3: Controlled experiments for task examples. The left sub-figure shows the main results, where **Original** task examples are used for testing (in-context learning). **Original**, **Delusive**, and **Empty** represent what type of task examples are used for training, and **T5 w/o IT** is the baseline T5-large model. The right sub-figure shows supplementary results using **Delusive** examples for testing. The faint dashed lines are copied from the left sub-figure for comparison purposes.

Evaluation Metrics. For task definitions, we separately evaluate *Classification* and *Generative* tasks using exact match and ROUGE-L, respectively. For task examples, we follow Wang et al. (2022) and report the overall ROUGE-L score for both classification and generative tasks. To understand the impact of training examples, we report model performances with varying numbers of training instances per task (i.e., 10, 20, 50, 200, 800).
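A minimal sketch of these two metrics is shown below; the normalization choices are assumptions for illustration and may differ from the official NatInst-V2 evaluation script.

```python
import re
import string
from rouge_score import rouge_scorer

def normalize(text):
    text = text.lower().strip()
    text = text.translate(str.maketrans("", "", string.punctuation))
    return re.sub(r"\s+", " ", text)

def exact_match(prediction, gold_answers):
    # 1.0 if the prediction matches any acceptable gold answer, else 0.0.
    return float(any(normalize(prediction) == normalize(g) for g in gold_answers))

_scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def rouge_l(prediction, gold):
    return _scorer.score(gold, prediction)["rougeL"].fmeasure
```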
## 5 Results
Task Definition Experiments. Figure 2 shows the experimental results for task definitions. In the top sub-figures, we can see that the models trained with **Simplified** instructions achieve almost the same results as models trained with **Original** definitions on both Classification and Generative tasks.
Note that **Simplified** task definitions remove all semantic components in task definitions and only retain output space information for Classification tasks and remove task definitions altogether for Generative tasks. This indicates that models may only utilize output space information during instruction tuning. The bottom-left sub-figure in Figure 2 shows the overall rouge-L score for classification tasks, where models trained on the **Original** task definition slightly outperform the **Simplified** ones. A closer examination reveals that models trained on the **Original** task definitions are more likely to predict partially correct answers that help with the ROUGE-L score in some tasks. We provide further details in Appendix A.5. In addition, we also observe that training with **Simplified**
prompts can yield performance comparable to that of the T0 model trained with **Original** prompts on the T0 dataset. Please refer to Appendix A.6 for details.
Task Examples Experiments. Figure 3 shows the experimental results for task examples. The left sub-figure shows overall ROUGE-L scores. It shows that models trained with **Delusive** task examples can achieve almost the same performance as **Original** task examples when the number of training instances per task is small (≤ 50). When the data per task goes to 200, the **Original** models start to outperform the **Delusive** ones slightly.

![3_image_1.png](3_image_1.png)

Figure 4: Results for the **Random Guessing** baseline, which randomly guesses an answer from the output space (labels). The left figure shows the format correctness, which calculates the fraction of model predictions lying in the label space for classification (CLS) tasks. The right figure shows the average exact-match score of CLS tasks.
Combined with the previous results for task definitions, we observe that, compared to the untuned models (*T5 w/o IT*), the IT models can achieve significant performance gains (ROUGE-L from 22 to 46) with (1) *Simplified* task definitions and (2) *Delusive* task examples, indicating that the current impressive improvement of IT models can come from the models learning superficial patterns without utilizing (following) the instructions like humans do.

For the right sub-figure, we show the results using **Delusive** task examples at test time via in-context learning. We see performance drops for all three models, indicating that the input-output mapping matters for in-context learning on instruction-tuned models. This observation seems to conflict with previous work (Min et al., 2022), which found that the input-output mapping is unimportant for in-context learning on classification tasks.
However, a closer investigation found that most of the tasks that suffer a significant performance drop are analogical tasks rather than classification tasks as studied in Min et al. (2022).2

2See examples of analogical tasks in Appendix A.4.

## 6 Additional Analysis

Random baseline. While our experiments suggest that models do not utilize most information in the instructions, we still observe huge performance gains via instruction tuning. To understand where the gains come from, we introduce a **Random** baseline that simply guesses within the correct output space. Figure 4 shows the results. First, IT improves format correctness from 27% to 97% by training with only one instance per task, and the exact-match score improves from 12.78% to 43%. Providing more training instances per task (200) further improves the exact-match score to 52%. However, while these gains seem impressive, the **Random Guessing** baseline also achieves a 42.6% exact-match score, on par with TK-Instruct trained in the low-resource setting (fewer than five instances per task). This suggests that the majority of the score improvement from IT may come from the model learning the output format, especially in low-resource settings.

| Model / Metric | CLS (EM) | ∆ | GEN (Rouge-L) | ∆ |
|---|---|---|---|---|
| LLaMA: Test w/ Original | 4.40 | | 14.31 | |
| LLaMA: Train w/ Original | 59.19 | | 48.80 | |
| LLaMA: Train w/ Simplified | 56.61 | -2.58 | 45.75 | -3.05 |
| Alpaca: Test w/ Original | 45.08 | | 44.40 | |
| Alpaca: Test w/ Simplified | 41.66 | -3.42 | 34.80 | -9.6 |
| Alpaca: Train w/ Original | 59.33 | | 48.69 | |
| Alpaca: Train w/ Simplified | 56.17 | -3.16 | 45.69 | -3 |

Table 2: Altered task definition results for LLaMA-7B and Alpaca-7B on the NatInst-V2 test set; ∆ denotes the change of a Simplified setting relative to the corresponding Original setting.
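A minimal sketch of this baseline is given below; the instance format (a list of acceptable gold answers per example) is an assumption, and the code illustrates the idea rather than the exact implementation.

```python
import random

def random_guess_exact_match(test_instances, label_space, seed=0):
    """Guess uniformly within the correct output space and report exact match."""
    rng = random.Random(seed)
    labels = sorted(label_space)
    correct = 0
    for inst in test_instances:
        prediction = rng.choice(labels)
        correct += int(prediction in inst["gold_answers"])  # any acceptable answer counts
    return correct / max(len(test_instances), 1)

# For a binary task the expected exact match is about 50%; averaged over the CLS
# test tasks this yields the ~42.6% reported for the Random Guessing baseline above.
```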
Fair comparison for IT models. Existing studies on instruction tuning often introduce changes to both models and datasets simultaneously, which can obscure fair comparisons. To address this issue, we conduct experiments comparing different models (T0, TK-Instruct) on the same dataset
(NatInst-V2) and emphasize the importance of careful evaluation. In Table 3, when evaluating using the NatInst-V2 evaluation method and considering only the overall Rouge-L score, the TK-Instruct model appears to outperform T0 significantly. However, upon closer examination of the classification (CLS) and generative (GEN) tasks separately, we observe that T0's classification score is even lower than the Random baseline, primarily due to its format correctness being only 64%. To ensure a fairer comparison between these models, we employ constrained decoding techniques to align the model's predictions with the label space.
By adopting this approach, we observe a substantial performance improvement for T0 in CLS tasks
(34.03 to 51.31). T0 then surpasses both the TK-Instruct model and the random baseline, indicating that its apparent underperformance on CLS tasks stems largely from output-format errors rather than from weaker task ability.
| Model ↓ / Metric → | Format (Acc) | CLS (EM) | GEN (Rouge-L) | Overall (Rouge-L) |
|---|---|---|---|---|
| Random | 100 | 42.65 | - | - |
| T0 | 64.61 | 34.03 | 27.36 | 32.28 |
| w/ CD | 100 | 51.31 | 27.36 | 40.7 |
| TK | 96.23 | 44.29 | 42.16 | 45.34 |
| w/ CD | 100 | 47.12 | 42.16 | 45.93 |
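One simple way to realize such label-space-constrained decoding is to score every candidate label with the seq2seq model and pick the highest-scoring one, as sketched below. This is an illustrative alternative implementation, not necessarily the exact constrained-decoding procedure used here, and the checkpoint name is only an example.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/t5-large-lm-adapt")  # example checkpoint
model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-large-lm-adapt")

def constrained_predict(prompt, label_space):
    """Return the label whose token sequence the model assigns the highest
    (total) log-likelihood given the instruction-formatted prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    best_label, best_score = None, float("-inf")
    for label in label_space:
        label_ids = tokenizer(label, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(**inputs, labels=label_ids).loss  # mean token cross-entropy
        score = -loss.item() * label_ids.shape[1]  # approximate total log-likelihood
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```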
## 7 Discussion

Does Alpaca better follow the instructions on the NatInst-V2 dataset? After our submission, new instruction tuning models such as Alpaca and Vicuna appeared; they are trained on data distilled from ChatGPT and exhibit behavior closer to it. To investigate their instruction utilization, we conduct the "Altered Task Definition" experiment on LLaMA-7B (Touvron et al., 2023) and Alpaca-7B
models using the NatInst-V2 test set. In Table 2, training the LLaMA model on the NatInst-V2 dataset with the **Original** task definition leads to substantial performance improvements over the zero-shot setting. However, the **Simplified** task definition also achieves comparable performance, with a minimal decrease of about 3 points (EM/Rouge-L). This finding is consistent with our previous observations on the TK-Instruct and T0 models. Even without tuning on NatInst-V2, the Alpaca model demonstrates strong performance on the NatInst-V2 test set.
However, when the model is tested using a **simplified** task definition, there is a significant decrease in performance for generative tasks (but not for classification tasks). This highlights the importance of a well-written task definition for the Alpaca model to effectively perform generative tasks.
## 8 Conclusion
We constructed controlled experiments on NatInstV2 to compare model training with altered vs. original instructions (task definitions and examples).
Our findings indicate that some current IT models do not fully utilize instructions, and the impressive performance gains of IT may come from models learning superficial patterns, such as the output space and format. We suggest future research on instruction tuning to analyze their performance gains with more comprehensive evaluation and benchmark against trivial baselines.
## 9 Limitations
While our analysis suggests that IT models do not fully utilize instructions but instead learn superficial patterns from instructions, there are some limitations to our experiments. First, we only analyze a SOTA IT method on the NatInst-V2 dataset and T0 dataset. Though Wang et al. (2022) showed that their model can outperform other large models such as Instruct-GPT (Ouyang et al., 2022) and T0 (Sanh et al., 2021), we did not analyze other IT methods, such as RLHF (Reinforcement Learning from Human Feedback) in Instruct-GPT. Secondly, since our analysis is conducted in the training stage, we cannot analyze private models such as Chat-GPT.
Also, we did not explore models larger than 7B
parameters due to our computation resource limitation. This may miss some emergent abilities of large language models (LLMs) (Wei et al., 2022).
Lastly, while we observe the models do not utilize the majority of the instructions by IT, a certain degree of instruction understanding may already exist in pre-trained LLMs, which we did not study in this work. In conclusion, our work is a concentrated analysis to illuminate the potential vulnerability of the current IT models and evaluation metrics. We encourage future works to conduct more comprehensive studies on larger models and propose more reliable IT methods and evaluation frameworks.
## 10 Ethical Considerations
We will go through the computation resources and models used to conduct our experiments. All of our models run on 4 48GB NVIDIA A6000 GPUs, along with 48 TB of disk storage and an AMD EPYC 7413 24-core processor. The experiments take around 1,200 GPU hours on a single 48GB NVIDIA A6000 GPU. Our experiments do not need to leverage model or data parallelism. For the model, we use Huggingface T5-large-lm-adapt models for our experiments, and we will release our code once the paper is accepted.
## Acknowledgements
Many thanks to Zefan Cai for implementing the altered task definition experiment on the T0 dataset and model. We would also like to thank Te-Lin Wu and Da Yin for their valuable insights during discussion, paper reviews, and constructive comments. We thank the anonymous reviewers for their feedback. This work was partially supported by AFOSR MURI
via Grant \#FA9550- 22-1-0380, Defense Advanced Research Project Agency (DARPA) grant
\#HR00112290103/HR0011260656.
## References
Stephen H Bach, Victor Sanh, Zheng-Xin Yong, Albert Webson, Colin Raffel, Nihal V Nayak, Abheesht Sharma, Taewoon Kim, M Saiful Bari, Thibault Fevry, et al. 2022. Promptsource: An integrated development environment and repository for natural language prompts. *arXiv preprint arXiv:2202.01279*.
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An opensource chatbot impressing gpt-4 with 90%* chatgpt quality.
Matthew Finlayson, Kyle Richardson, Ashish Sabharwal, and Peter Clark. 2022. What makes instruction learning hard? an investigation and a new challenge in a synthetic environment. *arXiv preprint* arXiv:2204.09148.
Yuxian Gu, Pei Ke, Xiaoyan Zhu, and Minlie Huang.
2022. Learning instructions with unlabeled data for zero-shot cross-task generalization. *arXiv preprint* arXiv:2210.09175.
Prakhar Gupta, Cathy Jiao, Yi-Ting Yeh, Shikib Mehri, Maxine Eskenazi, and Jeffrey P Bigham. 2022. Improving zero and few-shot generalization in dialogue through instruction tuning. *arXiv preprint* arXiv:2205.12673.
Joel Jang, Seonghyeon Ye, and Minjoon Seo. 2022. Can large language models truly understand prompts? a case study with negated prompts. *arXiv preprint* arXiv:2209.12711.
Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the role of demonstrations: What makes in-context learning work? arXiv preprint arXiv:2202.12837.
Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. 2021. Cross-task generalization via natural language crowdsourcing instructions.
arXiv preprint arXiv:2104.08773.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al.
2022. Training language models to follow instructions with human feedback. *arXiv preprint* arXiv:2203.02155.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits
of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67.
Victor Sanh, Albert Webson, Colin Raffel, Stephen H
Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. 2021. Multitask prompted training enables zero-shot task generalization. arXiv preprint arXiv:2110.08207.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford alpaca:
An instruction-following llama model. https://
github.com/tatsu-lab/stanford_alpaca.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. *arXiv preprint* arXiv:2302.13971.
Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, A. Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Gary Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Maitreya Patel, Kuntal Kumar Pal, M. Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Shailaja Keyur Sampat, Savan Doshi, Siddharth Deepak Mishra, Sujan Reddy, Sumanta Patro, Tanay Dixit, Xudong Shen, Chitta Baral, Yejin Choi, Noah A. Smith, Hanna Hajishirzi, and Daniel Khashabi. 2022. Supernaturalinstructions: Generalization via declarative instructions on 1600+ nlp tasks.
Albert Webson and Ellie Pavlick. 2021. Do promptbased models really understand the meaning of their prompts? *arXiv preprint arXiv:2109.01247*.
Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652.
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al.
2022. Emergent abilities of large language models.
arXiv preprint arXiv:2206.07682.
Qinyuan Ye, Bill Yuchen Lin, and Xiang Ren.
2021. Crossfit: A few-shot learning challenge for cross-task generalization in nlp. *arXiv preprint* arXiv:2104.08835.
Fan Yin, Jesse Vig, Philippe Laban, Shafiq Joty, Caiming Xiong, and Chien-Sheng Wu. 2023. Did you read the instructions? rethinking the effectiveness of task definitions in instruction learning. In *Proceedings*
of the 61st Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.
## A Appendix

## A.1 Related Analysis
Min et al. (2022) found that the input-output mapping in examples is irrelevant for in-context learning (ICL) on *classification tasks*. However, we observe that it matters for ICL but is irrelevant to IT training on analogical generative tasks. Webson and Pavlick (2021) analyzed prompt-based models in few-shot learning scenarios and observed that models learn as fast using irrelevant or misleading prompts, which aligns with our findings. For instruction tuning, prior works raised concerns about models not following instructions. Gu et al. (2022) and Gupta et al. (2022) analyze how models utilize instructions by removing them during the inference stage. However, they did not address how models use instructions during training. Wei et al. (2021) and Wang et al. (2022) observe a performance drop when removing the task definition during IT and conclude that the task definition is helpful, which we found to be true but only in terms of providing output space information. Additionally, a concurrent study (Yin et al., 2023) has undertaken a comprehensive analysis of how models employ the task definition in the process of instruction tuning on the NatInst-V2 dataset. They observed that by removing a majority of components from the task definition and retaining only the user intent, the model can attain comparable or even superior performance compared to utilizing the complete task definition.
## A.2 Simplified Task Definition
To remove all semantic components and leave only the output space information within the task definition, we first manually look through all tasks to verify how each task definition describes its output space, and further categorize all task definitions into four types: **(1) Exact Mentioned**, **(2) Combined Mentioned**, **(3) Keyword Mentioned**, and **(4) No Mentioned**. For **Exact Mentioned**, **Combined Mentioned** and **Keyword Mentioned**, there is a description of the output space in the original task definition. For **No Mentioned**, the original task definition does not directly describe the labels or keywords in the output space. This includes all the generative tasks and some classification tasks (we observe a few classification tasks in which task definitions do not describe output space information). Further details and examples are shown in Table 4.
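To make the simplification concrete, the following is a minimal sketch of how a *Simplified* definition can be assembled from a task's label set; the function name and type identifiers are our own and are not part of the released code.

```python
def simplify_definition(labels, definition_type):
    """Build a Simplified task definition that keeps only output-space
    information, following the templates shown in Table 4."""
    prefixes = {
        "exact": "Label",              # Exact Mentioned: finite label set
        "combined": "Combined Label",  # Combined Mentioned: labels combine into longer outputs
        "keyword": "Keyword Label",    # Keyword Mentioned: labels combine with the input text
    }
    if definition_type == "none":
        # No Mentioned: generative tasks (and a few classification tasks) get an empty definition.
        return ""
    prefix = prefixes[definition_type]
    return " ".join(f"{prefix}: {label}." for label in labels)

# Matches the Exact Mentioned example in Table 4.
print(simplify_definition(["1", "2"], "exact"))  # -> Label: 1. Label: 2.
```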
## A.3 Hyper-Parameter Tuning Results
Before we conduct our analysis, we follow the model settings in Wang et al. (2022) to perform a hyper-parameter search. Prior work trained the TK-Instruct (770M) models from T5-Large-lm-adapt (770M) with a learning rate of 1e-5, batch size 16, and 100 training instances per task for two epochs. We found that (1) a learning rate of 1e-4 converges faster while performance remains comparable; (2) a higher batch size (≥ 128) leads to much lower loss and better performance; (3) more training instances per task (≥ 200) lead to better performance; and (4) the loss converges within 4 to 6 epochs. Following the hyper-parameter search results, we conducted our experiments with the following settings: learning rate 1e-4, batch size 128, [10, 20, 50, 200*, 800] training instances per task, and six training epochs. Our best results (200 instances) achieve a 52.8 Rouge-L score, which is better than TK-Instruct-770M (48 Rouge-L) from Wang et al. (2022) and comparable to their TK-Instruct-3B (54 Rouge-L) model.
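For reference, these settings could be expressed with Hugging Face `TrainingArguments` as sketched below; the output directory and the batch-size split across devices are placeholders, and the actual experiments use the Tk-Instruct training scripts from Wang et al. (2022) rather than this exact code.

```python
from transformers import TrainingArguments

# Final configuration from the search: lr 1e-4, effective batch size 128, 6 epochs.
training_args = TrainingArguments(
    output_dir="tk_instruct_770m",      # placeholder output path
    learning_rate=1e-4,
    per_device_train_batch_size=16,     # 16 per device x 8 accumulation steps = 128
    gradient_accumulation_steps=8,
    num_train_epochs=6,
    logging_steps=100,
)
```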
## A.4 Analogical Tasks
We look into a set of models trained with *Original* task examples and identify the tasks with the largest performance drop (more than a 20% score drop) when using *Delusive* examples during testing (in-context learning). We show the list of tasks in Table 6 and some of their details in Table 5. These types of tasks have short input and output lengths, where input and output have direct word-level relations.
## A.5 Performance Gap Between Rouge-L And Exact Match
In the Results section, we observed that there is a slight performance gap on *Classification* tasks between models trained with the *Original* and *Simplified* task definitions. By further examining the data, we observed that this can happen for some **Keyword Mentioned** tasks described in Appendix A.2. Table 4 shows an example task in **Keyword Mentioned**. This task is a 7-class classification task with a special label "REFERENCE". The ground truth with "REFERENCE" is combined with other text from the input, and both the *Original* and *Simplified* models struggle (0% exact match) to predict the correct answer for this class. However, while both models fail to predict exactly correct answers, we observed that the *Original* model achieves better partially correct answers by simply predicting "REFERENCE" more often. When we look into the test set, we observe that 94 percent of the ground truth is in the "REFERENCE" class. Also, when we look into the predictions, we observe that the *Original* model predicts "REFERENCE" 55 percent of the time while the *Simplified* model only predicts it 4 percent of the time, achieving a 33.8 higher Rouge-L score. We hypothesize that this happens because the word "reference" is explicitly mentioned numerous times (8) in the *Original* task definition while other labels are mentioned less than twice, leading to the *Original* model's tendency to predict "REFERENCE".
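The metric gap can be inspected with a few lines of code; the snippet below is a sketch with placeholder predictions and references, using the `rouge_score` package for Rouge-L. A partially correct "REFERENCE" prediction gets Rouge-L credit but zero exact match, which is the behaviour described above.

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def exact_match(pred, ref):
    return float(pred.strip().lower() == ref.strip().lower())

def rouge_l(pred, ref):
    return scorer.score(ref, pred)["rougeL"].fmeasure

# Placeholder predictions/references for the 7-class task discussed above.
preds = ["REFERENCE", "REFERENCE phone number", "YEAR"]
refs = ["REFERENCE crooler", "REFERENCE phone number", "REFERENCE 1997"]

em = sum(exact_match(p, r) for p, r in zip(preds, refs)) / len(preds)
rl = sum(rouge_l(p, r) for p, r in zip(preds, refs)) / len(preds)
ref_rate = sum(p.startswith("REFERENCE") for p in preds) / len(preds)
print(f"exact match: {em:.2f}  Rouge-L: {rl:.2f}  REFERENCE rate: {ref_rate:.2f}")
```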
## A.6 Simplified Task Definition For T0.
Besides our analysis on the NatInst-V2 dataset, we also conduct the simplified task definition experiment in the T0 training stage. We follow the T0 training settings and change the prompts to **Simplified** prompts, leaving only the labels in the prompt for classification tasks and removing the entire prompt for generative tasks. We further train the T0-3B model using a learning rate of 1e-4 and batch size 1024 for 10000 steps. The T0 model trained and tested with **Simplified** prompts achieves a 60.69 overall score, which is comparable to training with the **Original** prompts (61.93) and aligns with our observation on the NatInst-V2 dataset.
**Exact Mentioned**

- *Description*: For tasks labeled as Exact Mentioned, the task definition describes the finite output space, which means all the labels within the output space are directly written in the definition.
- *Original Definition*: "Definition: In this task, you will be shown a short story with a beginning, two potential middles, and an ending. Your job is to choose the middle statement that makes the story incoherent / implausible by indicating 1 or 2 in the output. If both sentences are plausible, pick the one that makes less sense."
- *Output Space*: Finite set: ["1", "2"]
- *Simplified Definition*: "Label: 1. Label: 2."

**Combined Mentioned**

- *Description*: For tasks labeled as Combined Mentioned, the task definition describes a set of keyword labels that construct an infinite output space with all possible combinations of these keyword labels.
- *Original Definition*: "Given a command in a limited form of natural language, provide the correct sequence of actions that executes the command to thus navigate an agent in its environment. [...] There are only six actions: 'I_LOOK', 'I_WALK', 'I_RUN', 'I_JUMP', 'I_TURN_LEFT', and 'I_TURN_RIGHT'. [...]"
- *Output Space*: Infinite set: ["I_LOOK", "I_LOOK I_WALK", "I_JUMP I_RUN", ... ∞]
- *Simplified Definition*: "Combined Label: I_LOOK. Combined Label: I_WALK. Combined Label: I_RUN. Combined Label: I_JUMP. Combined Label: I_TURN_LEFT. Combined Label: I_TURN_RIGHT."

**Keyword Mentioned**

- *Description*: For tasks labeled as Keyword Mentioned, the task definition describes a set of keyword labels that construct an infinite output space combined with the input text.
- *Original Definition*: "In this task, you will use your knowledge about language (and common sense) to determine what element the marked number refers to. [...] Options to choose from are: REFERENCE: Some object which is being mentioned in the text before or after the target number. The reference answer has a higher priority than any other. If both Reference and another answer are possible, prioritize the Reference. YEAR: Describing a calendric year AGE: Describing someone's age CURRENCY: Reference to some monetary value e.g dollar, euro etc. PEOPLE: Describing a single/plural persons TIME: Describing a time of the day. Usually you can add the word o'clock after those numbers. OTHER: Some other option, which isn't listed here."
- *Output Space*: Infinite set: ["YEAR", "AGE", "CURRENCY", "PEOPLE", "TIME", "OTHER", "REFERENCE phone number", "REFERENCE crooler", ... ∞]
- *Simplified Definition*: "Keyword Label: YEAR. Keyword Label: AGE. Keyword Label: CURRENCY. Keyword Label: PEOPLE. Keyword Label: TIME. Keyword Label: OTHER. Keyword Label: REFERENCE."

**No Mentioned**

- *Description*: For tasks labeled as No Mentioned, the task definition does not describe the output space by providing keyword labels.
- *Original Definition*: "In this task, you're expected to write answers to questions involving multiple references to the same entity. The answer to the question should be unambiguous and a phrase in the paragraph. Most questions can have only one correct answer."
- *Output Space*: Infinite set: [∞]
- *Simplified Definition*: "" (empty)
Table 4: We describe how we created *Simplified* task definition from *Original* task definition for four task definition types: Exact Mentioned, Combined Mentioned, **Keyword Mentioned**, and **No Mentioned**. For each task definition type, *Description* describes how the task definition provides the output space information; Original Definition shows an example of a task definition within this definition type, which are all retrieved from real tasks in NatInst-V2 dataset; *Output Space* describes the set of the output space; *Simplified Definition* shows an example of how we simplified the Original Task Definition into the simplified version.
**task036_qasc_topic_word_to_generate_related_fact**

- *Task Definition*: In this task, you need to write a topic word from the given fact. The topic word must have at least one word overlap with the given fact. The topic word often involves adding a new word from a related concept. In your topic word, use at least one word from the given fact. Topic words with two or more words work best.
- *Task Example*: Input: Fact: pesticides cause pollution. Output: pollution harms.

**task1152_bard_analogical_reasoning_causation**

- *Task Definition*: Two analogies that relate actions with their consequences are given in the form "A : B. C : ?". The phrase "A : B" relates action A to consequence B. Your task is to replace the question mark (?) with the appropriate consquence of the given action C, following the "A : B" relation. Your answer should be a single verb, without further explanation.
- *Task Example*: Input: throw : fly. aspire : ? Output: attain

**task1159_bard_analogical_reasoning_containers**

- *Task Definition*: Two analogies that relate items to the associated containers is given in the form "A : B. C : ?". "A : B" relates item A to its associated container B. Your task is to replace the question mark (?) with the appropriate container for the given item C, following the "A : B" relation.
- *Task Example*: Input: soda : can. water : ? Output: bottle
Table 5: We provide several examples of these analogical tasks.
- task036_qasc_topic_word_to_generate_related_fact
- task1152_bard_analogical_reasoning_causation
- task1154_bard_analogical_reasoning_travel
- task1157_bard_analogical_reasoning_rooms_for_containers
- task1158_bard_analogical_reasoning_manipulating_items
- task1159_bard_analogical_reasoning_containers

Table 6: List of tasks with the most performance drop when using *Delusive* examples for the *Original* model.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
7
✓ A2. Did you discuss any potential risks of your work?
7
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
Left blank.
✓ B1. Did you cite the creators of artifacts you used?
1
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
We are using NatInst-V2, which is an open-source dataset open to everyone. Also, our code base is based on their published repository.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
3
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We're using a well-known open-source dataset. We have looked into the dataset and do not see these issues.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
3
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C ✓ **Did You Run Computational Experiments?**
3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
7
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
3
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
3
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
3
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
oneill-dutta-2023-self | Self-Distilled Quantization: Achieving High Compression Rates in Transformer-Based Language Models | https://aclanthology.org/2023.acl-short.114 | We investigate the effects of post-training quantization and quantization-aware training on the generalization of Transformer language models. We present a new method called self-distilled quantization (SDQ) that minimizes accumulative quantization errors and outperforms baselines. We apply SDQ to multilingual models XLM-R$_{\text{Base}}$ and InfoXLM$_{\text{Base}}$ and demonstrate that both models can be reduced from 32-bit floating point weights to 8-bit integer weights while maintaining a high level of performance on the XGLUE benchmark. Our results also highlight the challenges of quantizing multilingual models, which must generalize to languages they were not fine-tuned on. | # Self-Distilled Quantization: Achieving High Compression Rates In Transformer-Based Language Models
James O' Neill and **Sourav Dutta**
Huawei Ireland Research Center Georges Court, Townsend St, Dublin 2, Ireland [email protected], [email protected]
## Abstract
We investigate the effects of post-training quantization and quantization-aware training on the generalization of Transformer language models.
We present a new method called self-distilled quantization (SDQ) that minimizes accumulative quantization errors and outperforms baselines. We apply SDQ to multilingual models XLM-RBase and InfoXLMBase and demonstrate that both models can be reduced from 32-bit floating point weights to 8-bit integer weights while maintaining a high level of performance on the XGLUE benchmark. Our results also highlight the challenges of quantizing multilingual models, which must generalize to languages they were not fine-tuned on.
## 1 Introduction
A main aim of neural network quantization is to reduce the size and computational demands of a model while maintaining its performance. There are two main approaches: quantization-aware training (QAT) (Banner et al., 2018; Chin et al., 2020; Faghri et al., 2020; Kim et al., 2020; Wang et al., 2018) and post-training quantization (PTQ) (Neill, 2020; Bondarenko et al., 2021; Kim et al., 2021; Dettmers et al., 2022). Both of these approaches have limitations in terms of dealing with accumulative quantization errors that are propagated through the layers of a neural network during the forward pass (Zhao et al., 2019; Fan et al., 2020). To address this issue, we propose a method called Self-Distilled Quantization (SDQ) that combines self-attention and output distillation with quantization to compress large language models. SDQ involves injecting quantization noise into the student network during training and distilling knowledge from a fine-tuned teacher network, both from its final output and from the outputs of intermediate self-attention layers. By distilling knowledge of the self-attention layers, as depicted in Figure 1, we further reduce the compounding effect of quantization errors in the network. We use SDQ for self-attention models and demonstrate its effectiveness in compressing the multilingual models XLM-RBase and InfoXLMBase, achieving high compression rates while maintaining performance on the XGLUE benchmark. Lastly, we identify that the quantization error is largest at the output of self-attention modules.

![0_image_0.png](0_image_0.png)
## 2 Related Work
Combining quantization and distillation has been previously explored by Mishra and Marr (2017), who used three different schemes to combine low-bit precision and knowledge distillation (KD) using a 4-bit ResNet network. Polino et al. (2018) used a distillation loss with respect to a quantized teacher network to train a student network, and also proposed differentiable quantization, which optimizes the location of quantization points through SGD. Zhou et al. (2017) used iterative quantization, supervised by a teacher network, to retrain an FP32 model with low-precision convolution weights (binary, ternary, and 4 bits). Kim et al. (2019) used QAT and fine-tuning to mitigate the regularization effect of KD on quantized models. Unlike previous work, I-BERT (Kim et al., 2021) also approximates nonlinear operations (GELU, LayerNorm and Softmax) in integer format for pure and faster INT-8 inference, i.e., no mixed precision (MP). Q8BERT (Zafrir et al., 2019) and the fully Quantized Transformer (Prato et al., 2019) applied QAT with the Straight-Through Estimator to approximate non-differentiable quantization in INT-8 format.

TernaryBERT (Zhang et al., 2020) uses intermediate-layer distillation with layerwise and row-wise weight ternarization. At the extremum of compression rates, BinaryBERT (Bai et al., 2020) binarizes the weights by using ternary weight splitting to avoid the difficulties of training a binary neural network directly. BinaryBERT too uses knowledge distillation to improve quantization. Unlike TernaryBERT and BinaryBERT, our work quantitatively measures accumulative quantization errors in the network and combines distillation to address this with 1) iterative Product Quantization (iPQ) (Stock et al., 2019), which iteratively quantizes the network layer by layer throughout training, and 2) Quant-Noise (Fan et al., 2020), which injects sub-block quantization noise during training. We now move to describing the methodology of SDQ.
## 3 Methodology
We begin by defining a dataset $\mathcal{D} := \{(\mathbf{X}_i, \mathbf{y}_i)\}_{i=1}^{D}$ with samples $s_i = (\mathbf{X}_i, \mathbf{y}_i)$, where each $\mathbf{X}_i := (\mathbf{x}_1, \ldots, \mathbf{x}_N)$ and $\mathbf{x}_i \in \mathbb{R}^{d}$ is the $i$-th vector. For structured prediction $\mathbf{y}_i \in \{0, 1\}^{N \times d_y}$ and for single and pairwise sentence classification $\mathbf{y}_i \in \{0, 1\}^{d_y}$, where $d_y$ is the number of classes. Let $\mathbf{y}^S = f_\theta(\mathbf{X}_i)$ be the output prediction ($\mathbf{y}^S \in \mathbb{R}^{d_y}$) from the student $f_\theta(\cdot)$ with pretrained parameters $\theta := \{\mathbf{W}_l, \mathbf{b}_l\}_{l=1}^{L}$ for $L$ layers, and the outputs of the self-attention blocks are denoted as $\mathbf{A}_l$. The loss function for standard classification fine-tuning is defined as the cross-entropy loss $\ell_{\text{CE}}(\mathbf{y}^S, \mathbf{y})$.
Self-Distilled Quantization For self-distilled quantization, we also require a fine-tuned teacher network $f_\Theta$ that has been tuned from the pretrained state $f_\theta$, to retrieve the soft teacher labels $\mathbf{y}^T := f_\Theta(\mathbf{x})$, where $\mathbf{y}^T \in \mathbb{R}^{C}$ and $\sum_{c}^{C} y^T_c = 1$. The soft label $\mathbf{y}^T$ can be more informative than the one-hot targets $\mathbf{y}$ used for standard classification, as it implicitly approximates pairwise class similarities through logit probabilities. The Kullback-Leibler divergence (KLD) $\ell_{\text{KLD}}$ is then used together with the main-task cross-entropy loss $\ell_{\text{CE}}$ to express $\ell_{\text{SDQ}_{\text{KLD}}}$ as shown in Equation 1,
$$\ell_{\rm SDQ_{\rm KLD}}=\ell_{\rm CE}(\mathbf{y}^{S},\mathbf{y})+\alpha\tau^{2}D_{\rm KLD}\big{(}\mathbf{y}^{S},\mathbf{y}^{T}\big{)}\tag{1}$$ where $D_{\rm KLD}(\mathbf{y}^{S},\mathbf{y}^{T})=\mathbb{H}(\mathbf{y}^{T})-\mathbf{y}^{T}\log(\mathbf{y}^{S})$, $\mathbb{H}(\mathbf{y}^{T})=\mathbf{y}^{T}\log(\mathbf{y}^{T})$ is the entropy of the teacher
distribution and $\tau$ is the softmax temperature. Following Hinton et al. (2015), the weighted sum of the cross-entropy loss and the KLD loss, $\ell_{\text{SDQ}_{\text{KLD}}} = \ell_{\text{CE}}(\mathbf{y}^S, \mathbf{y}) + \alpha\tau^2 D_{\text{KLD}}(\mathbf{y}^S, \mathbf{y}^T)$, is used as our main SDQ-based KD loss baseline, where $\alpha \in [0, 1]$. However, $D_{\text{KLD}}$ only distils the knowledge from the soft targets of the teacher and does not directly reduce accumulative quantization errors in the outputs of successive self-attention layers. This brings us to our proposed attention-based SDQ loss $\ell_{\text{SDQ}_{\text{Att-KLD}}}$ shown in Equation 2,
$$\begin{array}{c}{{\ell_{\mathrm{SDQ_{\mathrm{An-KLD}}}}=\ell_{\mathrm{CE}}(\mathbf{y}^{S},\mathbf{y})+\alpha\tau^{2}D_{\mathrm{KLD}}\big(\mathbf{y}^{S},\mathbf{y}^{T}\big)}}\\ {{+\beta\frac{1}{L H}\sum_{l=1}^{L}\sum_{h=1}^{H}\ell_{\mathrm{Attention}}\big(\mathbf{A}_{l h}^{S},\mathbf{A}_{l h}^{T}\big)}}\end{array}\tag{2}$$
where $\alpha$ and $\beta$ are regularization terms and $\ell_{\text{Attention}}$ computes the loss between the student and teacher outputs of each self-attention block in $L$ layers and $H$ attention heads per layer. We also consider two baselines: $\ell_{\text{SDQ}_{\text{Att}}}$, which is the same as Equation 2 without $\alpha\tau^2 D_{\text{KLD}}(\mathbf{y}^S, \mathbf{y}^T)$, and $\ell_{\text{SDQ}_{\text{Hid}}}$, which applies the Mean Squared Error (MSE) loss between the hidden state outputs instead of the attention outputs. The gradient of $D_{\text{KLD}}(\cdot, \cdot)$ is expressed as $\frac{\partial D_{\text{KLD}}(\mathbf{y}^S_i, \mathbf{y}^T_i)}{\partial y^S_i} = \tau\,(y^S_i/\tau - y^T_i/\tau)$ and, as $\tau \to \infty$, the gradient is approximately $\frac{1}{d_y}(y^S_i - y^T_i)$. Similarly, the gradient of the MSE loss on a single self-attention output in layer $l$ and head $h$ is $\frac{1}{n_{lh}}(a^S_j - a^T_j)$ for a single sample input $\mathbf{x}$. Hence, we see the connection between the derivatives of the KLD loss and the MSE loss when combining them in a single objective. We now move to describing how SDQ is used in two QAT methods.
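For concreteness, a minimal PyTorch sketch of the combined objective in Equation 2 is given below; tensor shapes, variable names and default weights are our own assumptions rather than the released implementation.

```python
import torch
import torch.nn.functional as F

def sdq_att_kld_loss(student_logits, teacher_logits, labels,
                     student_attn, teacher_attn, alpha=0.5, beta=100.0, tau=2.0):
    """Cross-entropy + temperature-scaled KLD on the logits + MSE over
    self-attention outputs, following Equation 2.
    student_attn / teacher_attn: lists of per-layer self-attention outputs with matching shapes."""
    ce = F.cross_entropy(student_logits, labels)
    kld = F.kl_div(
        F.log_softmax(student_logits / tau, dim=-1),
        F.softmax(teacher_logits / tau, dim=-1),
        reduction="batchmean",
    ) * (tau ** 2)
    # Average the MSE over all layers (and all heads/elements within each output tensor).
    att = torch.stack(
        [F.mse_loss(a_s, a_t.detach()) for a_s, a_t in zip(student_attn, teacher_attn)]
    ).mean()
    return ce + alpha * kld + beta * att
```

Detaching the teacher outputs ensures that gradients only flow through the student network.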
Iterative Product Distilled Quantization We first consider using SDQ with iPQ (Stock et al., 2019). This is achieved by quantizing $m$ subvectors for each of the columns of $\mathbf{W}$, where a codebook is learned to map each subvector to its nearest neighbour in the learned codebook $\mathbf{C} \in \mathbb{R}^{k \times d}$, where $k$ is the number of codewords. The codebook is updated by minimizing $\|\mathbf{W} - \tilde{\mathbf{W}}\|_2^2 = \sum_i^d \|\mathbf{W}_{[:,i]} - \phi(\mathbf{w}_{[:,i]})\|_2^2$, where $\phi(\cdot)$ is the quantization function. This objective can be efficiently minimized with the k-means algorithm, and the codewords of each layer are updated with SGD by averaging the gradients of each assigned block of weights. This is done iteratively from the bottom layers to the top layers throughout training, where the upper layers are fine-tuned while the lower layers are progressively being quantized (Stock et al., 2019). When using iPQ with SDQ, omitting the KLD loss and cross-entropy loss, the objective is $\ell_{\text{SDQ}_{\text{iPQ}}} = \sum_{l=1}^{L-F} \Big[ \|\mathbf{W}_l - \tilde{\mathbf{W}}_l\|_2^2 + \frac{\beta}{L-F} \sum_i^d (\mathbf{A}^S_{l,i} - \mathbf{A}^T_{l,i})^2 \Big]$, where $F$ is the number of fine-tuned (non-quantized) layers at that point in training. Hence, SDQ progressively quantizes the layers throughout training when used with iPQ.

![2_image_0.png](2_image_0.png)
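For illustration, the following is a simplified sketch of the product-quantization step for a single weight matrix (k-means over column sub-vectors); the iterative layer-by-layer schedule, the codeword updates with SGD and the EM variant used in iPQ are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def product_quantize(W, block_size=4, n_codewords=256):
    """Quantize a weight matrix by splitting each column into sub-vectors of
    length `block_size` and mapping every sub-vector to its nearest codeword."""
    d_out, d_in = W.shape
    assert d_out % block_size == 0
    # Each column of W (each row of W.T) is split into consecutive blocks of `block_size`.
    blocks = W.T.reshape(d_in * (d_out // block_size), block_size)
    kmeans = KMeans(n_clusters=n_codewords, n_init=4, random_state=0).fit(blocks)
    codebook = kmeans.cluster_centers_          # C in R^{k x block_size}
    assignments = kmeans.labels_
    W_hat = codebook[assignments].reshape(d_in, d_out).T
    return W_hat, codebook, assignments

W = np.random.randn(768, 64).astype(np.float32)  # toy weight matrix
W_hat, codebook, idx = product_quantize(W)
print("reconstruction error:", float(((W - W_hat) ** 2).sum()))
```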
Block-Wise Distilled Quantization Noise For the majority of our QAT-based experiments we use Quant-Noise (Fan et al., 2020). Quant-Noise is a SoTA QAT method that applies (fake) block-wise quantization noise at random to each weight matrix. Concretely, blocks of weights $\mathbf{b}_{kl}$ in $\mathbf{W}_l$ are chosen at random at a rate $p$ and quantization noise is added to the chosen blocks. We can define the student attention output computed with the quantized parameters as $\mathbf{A}^{S} = \mathrm{Softmax}\big(\tilde{\mathbf{W}}_{Q}\tilde{\mathbf{W}}_{K}^{\top}/\sqrt{d_{k}}\big)\tilde{\mathbf{W}}_{V}^{\top}\tilde{\mathbf{W}}_{O}$, where $\tilde{\mathbf{W}}$ represents (fake) quantized weights and is given as $\tilde{\mathbf{W}} = \phi_{\text{INT-8}}(\mathbf{W}) = s(\mathrm{round}(\mathbf{W}/s + b) - b)$, where $s$ and $b$ are scalars learned throughout training that represent the scaling factor and offset respectively. We then pass $\mathbf{A}^{S}$ and $\mathbf{A}^{T}$ to Equation 2 to compute the loss.
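A sketch of the fake INT-8 quantization function and of injecting it into randomly chosen weight blocks is shown below; the learned scale/offset parameters and the straight-through gradient handling of the actual Quant-Noise implementation are simplified away.

```python
import torch

def fake_quant_int8(W, s, b):
    """phi_INT-8(W) = s * (round(W / s + b) - b), the fake quantization function above."""
    return s * (torch.round(W / s + b) - b)

def quant_noise(W, s, b, block_size=8, p=0.1):
    """Replace a random fraction p of (block_size x block_size) weight blocks
    with their fake-quantized values during the forward pass."""
    W_noisy = W.clone()
    for i in range(0, W.shape[0], block_size):
        for j in range(0, W.shape[1], block_size):
            if torch.rand(1).item() < p:
                block = W[i:i + block_size, j:j + block_size]
                W_noisy[i:i + block_size, j:j + block_size] = fake_quant_int8(block, s, b)
    return W_noisy

W = torch.randn(64, 64)
print(quant_noise(W, s=W.abs().max() / 127, b=torch.tensor(0.0)).shape)
```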
## 4 Empirical Results
We begin by referring the reader to the supplementary material for the experimental setup in subsection A.2 and subsection A.3. Before discussing the main results on XGLUE, we first analyse the mean absolute quantization error and the Frobenius norm of the element-wise difference in self-attention blocks between an INT-8 dynamically quantized InfoXLMBase and an unquantized FP-32 InfoXLMBase in Figure 2. We see in Figure 2a that the output layer contains the largest mean absolute error across each layer and the highest error variance. In contrast, the query, key and value (QKV) parameters have a much smaller error. However, since most of the parameters are found in the QKV layers, the sum of the quantization error is larger there, as seen in Figure 2b. This motivates us to focus on the output of the self-attention block when minimizing quantization errors with our proposed loss in Equation 2, as the mean error is higher near the output, where it accumulates errors from the previous layers in the block. This is also reflected in the parameter distribution of each layer type across all layers in Figure 3, where the x-axis is the mean absolute quantization error and the y-axis is the layer indices. We see that the quantization noise is more apparent on the output layer, as the Gaussian distributions are non-smooth and show a clear jitter effect.
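The per-layer analysis in Figure 2 can be approximated with a simple symmetric INT-8 round trip over each self-attention weight matrix; the sketch below uses the public `xlm-roberta-base` checkpoint as a stand-in for the fine-tuned InfoXLMBase model, and per-tensor symmetric quantization as a simplification of dynamic quantization.

```python
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("xlm-roberta-base")  # stand-in for the fine-tuned InfoXLMBase

def int8_roundtrip_error(w):
    """Mean absolute error of a symmetric per-tensor INT-8 quantize/dequantize."""
    scale = w.abs().max() / 127.0
    w_deq = torch.round(w / scale).clamp(-128, 127) * scale
    return (w - w_deq).abs().mean().item()

# Report the error for every self-attention weight matrix, layer by layer.
for name, param in model.named_parameters():
    if "attention" in name and name.endswith("weight"):
        print(f"{name}: {int8_roundtrip_error(param.data):.2e}")
```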
![2_image_1.png](2_image_1.png)
| Student | Teacher | Mem | XNLI | NC | NER | PAWSX | POS | QAM | QADSM | WPR | Avg. |
|---------------|-----------|-------|--------|------|-------|---------|-------|-------|---------|-------|--------|
| X | - | 1.22 | 73.9 | 83.2 | 83.8 | 89.3 | 79.7 | 68.4 | 68.3 | 73.6 | 77.5 |
| I | - | 1.22 | 74.6 | 83.6 | 85.9 | 89.6 | 79.8 | 68.6 | 68.9 | 73.8 | 78.1 |
| X-PTQDynamic | - | 0.52 | 71.4 | 81.5 | 82.9 | 87.1 | 76.1 | 66.3 | 65.8 | 68.2 | 74.9 |
| I-PTQDynamic | - | 0.52 | 72.5 | 81.8 | 83.0 | 87.8 | 75.8 | 66.6 | 66.1 | 68.7 | 75.3 |
| X-QNAT | - | 0.52 | 70.5 | 81.8 | 83.0 | 87.4 | 78.4 | 66.8 | 66.9 | 70.4 | 75.7 |
| I-QNAT | - | 0.52 | 73.0 | 82.1 | 83.1 | 87.8 | 78.0 | 67.2 | 67.2 | 70.8 | 76.2 |
| X-QNATKLD | X | 0.52 | 72.5 | 82.0 | 83.2 | 88.1 | 78.8 | 67.1 | 67.2 | 70.7 | 75.8 |
| X-QNATKLD | I | 0.52 | 73.3 | 82.1 | 82.8 | 88.2 | 78.3 | 67.3 | 67.5 | 70.5 | 75.9 |
| I-QNATKLD | I | 0.52 | 73.6 | 82.6 | 83.1 | 88.4 | 79.5 | 67.6 | 67.9 | 71.8 | 76.8 |
| I-QNATAtt | I | 0.52 | 73.2 | 82.4 | 83.0 | 88.3 | 78.3 | 67.8 | 67.7 | 71.7 | 76.6 |
| I-QNATAtt-KLD | I | 0.52 | 73.8 | 82.8 | 83.4 | 88.8 | 79.5 | 67.9 | 68.0 | 72.4 | 77.1 |
| I-QNATAtt | IQNAT-PTQ | 0.52 | 72.1 | 82.1 | 83.1 | 89.2 | 78.8 | 68.0 | 67.8 | 71.9 | 76.6 |
| I-QNATHid | IQNAT-PTQ | 0.52 | 70.7 | 81.9 | 82.4 | 88.8 | 78.4 | 67.3 | 68.0 | 71.4 | 76.1 |
| I-QNATKLD | IQNAT-PTQ | 0.52 | 73.1 | 82.3 | 83.0 | 88.4 | 79.2 | 67.6 | 67.9 | 72.1 | 76.7 |
| I-QNATAtt-KLD | IQNAT-PTQ | 0.52 | 73.4 | 82.5 | 83.3 | 88.9 | 79.6 | 67.9 | 68.2 | 72.6 | 77.1 |
## 4.1 Quantization Results on XGLUE

We show the per-task test performance and the *understanding score* (i.e. average score) on XGLUE for quantization baselines and our proposed SDQ approaches in Table 1 (for brevity we denote InfoXLMBase as I and XLM-RBase as X). Our proposed QNATAtt-KLD achieves the best average (Avg.) score and per-task performance for all tasks, using a fine-tuned InfoXLMBase teacher (XNLI, NC, NER and QAM) and a fine-tuned InfoXLMBase teacher trained with Quant-Noise and dynamically quantized post-training (PAWSX, POS, QAM, QADSM and WPR). We also find that QNATAtt-KLD improves over QNATKLD, highlighting that the attention loss improves quantized model performance. In preliminary experiments we found it is better to distil from a fine-tuned teacher that has the same pretrained model type. Lastly, we note that both of our proposed methods that achieve a 77.1 understanding score are within 1.0 understanding score points of the original "I" fine-tuned FP-32 model.
XNLI Per Language Results Table 2 shows the baselines and our SDQ methods applied to XLM-RBase and InfoXLMBase. Here, both models are only trained on the English language, and hence the remaining languages in the evaluation set test the zero-shot performance after INT-8 quantization (apart from the first 3 rows, which show FP-32 fine-tuned results). The first row shows the fine-tuned zero-shot results from the original paper (Conneau et al., 2019).
| Student | Quant Method | Teacher | Quant Method | en | Avg. |
|---|---|---|---|---|---|
| XLM-RBase Conneau et al. | - | - | - | 84.6 | 74.5 |
| XLM-RBase | - | - | - | 83.9 | 73.9 |
| InfoXLMBase | - | - | - | 84.1 | 74.6 |
| InfoXLMBase | PTQDynamic | - | - | 81.7 | 71.4 |
| XLM-RBase | PTQDynamic | - | - | 80.1 | 72.5 |
| XLM-RBase | QNAT | - | - | 82.1 | 70.5 |
| InfoXLMBase | QNAT | - | - | 83.7 | 73.0 |
| XLM-RBase | QNATKLD | XLM-RBase | - | 83.4 | 72.5 |
| XLM-RBase | QNATKLD | InfoXLMBase | - | 84.4 | 73.3 |
| InfoXLMBase | QNATKLD | InfoXLMBase | - | 83.9 | 73.6 |
| InfoXLMBase | QNATAtt | InfoXLMBase | - | 84.1 | 73.2 |
| InfoXLMBase | QNATAtt-KLD | InfoXLMBase | - | 84.1 | 73.8 |
| InfoXLMBase | QNATAtt | InfoXLMBase | QNAT-PTQ | 83.3 | 72.1 |
| InfoXLMBase | QNATHid | InfoXLMBase | QNAT-PTQ | 81.1 | 70.7 |
| InfoXLMBase | QNATKLD | InfoXLMBase | QNAT-PTQ | 83.7 | 73.1 |
| InfoXLMBase | QNATAtt-KLD | InfoXLMBase | QNAT-PTQ | 83.9 | 73.4 |

The best performance obtained is marked in bold.
On average, we find that the best student network results are obtained when distilling using QNATAtt-KLD SDQ with the outputs of an FP-32 teacher for InfoXLMBase, at 73.8% test accuracy, where the original FP-32 InfoXLMBase achieves 74.6%. Additionally, we see that QNATAtt-KLD improves over QNATKLD distillation, indicating that attention output distillation improves the generalization of the INT-8 student model. We also found that the largest performance drops correspond to languages that have less pretraining data and are morphologically rich
(Swahili, Urdu, Arabic), while performance in English for the best INT-8 XLM-RBase (84.4%) is within 0.2% of the original network (84.6%) and the best InfoXLMBase that uses QNATAtt-KLD is on par with the FP-32 results.
![4_image_0.png](4_image_0.png)
## 4.2 Performance Versus Compression Rate
Figure 4 shows how the performance changes for four approaches, including two of our proposed objectives (QNATKLD and QNATAtt-KLD), when training InfoXLMBase. As before, PTQdynamic is a dynamically quantized fine-tuned InfoXLMBase and QNAT-PTQdynamic is the same as PTQdynamic except that it is also fine-tuned using Quant-Noise. Unlike our previous results, here we apply fake quantization at inference to achieve compression lower than INT-8 and to be comparable to previous work (Fan et al., 2019). We see that performance is generally well maintained down to 8 bits. However, performance significantly degrades for all quantization methods for 4- and 2-bit weights. We find that QNATAtt-KLD maintains higher performance compared to the baselines, and directly quantizing with no QAT (PTQdynamic) leads to the poorest results, which is also reflected in the Table 1 results with real dynamic quantization at inference time.
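The fake quantization used for these lower bit widths can be sketched as follows; this is a simplified symmetric per-tensor version (clipping thresholds, learned offsets and the exact layer selection used in our experiments are omitted), and `model` / `evaluate` in the usage comment are placeholders.

```python
import torch

def fake_quantize(w, bits):
    """Symmetric per-tensor fake quantization of a weight tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    return torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale

def fake_quantize_model(model, bits):
    """Replace every weight matrix with its fake-quantized version in place."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if param.dim() >= 2:  # weight matrices only; skip biases and LayerNorm vectors
                param.copy_(fake_quantize(param, bits))
    return model

# e.g. evaluate the same fine-tuned checkpoint at 8-, 4- and 2-bit weights:
# for bits in (8, 4, 2):
#     quantized = fake_quantize_model(copy.deepcopy(model), bits)
#     evaluate(quantized)  # placeholder evaluation loop
```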
| Student | Teacher | XNLI | NC | NER | POS | **Avg.** |
|---|---|---|---|---|---|---|
| - | - | 74.6 | 83.6 | 85.9 | 79.7 | 81.0 |
| iPQScalar | - | 69.1 | 79.4 | 81.9 | 76.3 | 76.7 |
| iPQScalar-KLD | Standard | 70.4 | 80.1 | 82.3 | 76.9 | 77.4 |
| iPQScalar-KLD | iPQScalar | 70.8 | 80.7 | 82.6 | 79.4 | 78.4 |
| iPQScalar-Att-KLD | Standard | 72.2 | 80.4 | 82.5 | 77.4 | 78.1 |
| iPQScalar-Att-KLD | iPQScalar | 71.3 | 80.4 | 82.9 | 79.6 | **78.6** |
| iPQEM | - | 69.1 | 79.4 | 81.9 | 76.3 | 76.7 |
| iPQEM-KLD | Standard | 70.4 | 80.1 | 82.3 | 76.9 | 77.4 |
| iPQEM-KLD | iPQEM | 72.8 | 81.6 | 82.8 | 79.8 | 79.3 |
| iPQEM-Att-KLD | Standard | 73.2 | 82.3 | 82.7 | 79.1 | 79.3 |
| iPQEM-Att-KLD | iPQEM | 73.1 | 82.5 | 83.0 | 79.2 | **79.5** |
| QNAT | - | 70.5 | 81.8 | 83.3 | 78.4 | 78.5 |
| QNATKLD | Standard | 73.2 | 82.6 | 83.1 | 79.5 | 79.6 |
| QNATKLD | QNAT | 73.1 | 82.3 | 83.0 | 79.2 | 79.4 |
| QNATAtt-KLD | Standard | 73.8 | 82.8 | 83.4 | 79.5 | **79.9** |
| QNATAtt-KLD | QNAT | 73.4 | 82.5 | 83.3 | 79.6 | 79.7 |
## 4.3 Ablation with Current QAT Methods

Table 3 shows the results for a subset of the XGLUE tasks, where the first two columns describe how the student and teacher networks are trained and "Standard" refers to standard FP-32 fine-tuning. This includes iPQ (Stock et al., 2019) with scalar quantization (iPQScalar), iPQ that uses expectation maximization to create the codebook during training (iPQEM) and the previous Quant-Noise results (QNAT) as a reference point. In this setup, we only apply the attention loss, $\ell_{\text{Attention}}$, to the layers that are quantized during iPQ. When using SDQ, the average score increases by 1.9 points for iPQScalar, 2.8 points for iPQEM and 1.4 points for QNAT. Moreover, adding SDQ distillation of both the logits and the self-attention outputs improves performance when compared to logit distillation only.
## 5 Conclusion
In this paper we proposed an attention-based distillation that minimizes accumulative quantization errors in fine-tuned masked language models. We identified that most of the quantization errors accumulate at the output of self-attention blocks and that the parameter distribution of the output layer is affected more by quantization noise. The proposed distillation loss outperforms baseline distillation without the attention loss, and the resulting INT-8 models are within 1 understanding score point on the XGLUE benchmark with *real* quantization post-training. Moreover, fine-tuning the teacher network with quantization-aware training can further improve student network performance on some of the tasks. Further compression can be achieved down to 4-bit and 2-bit weights, but performance steeply degrades as the network capacity is drastically reduced, coupled with the models having to generalize to multiple languages they were not trained on.
## 6 Limitations
Dataset and Experimental Limitations. The datasets and tasks we focus on are from the XGLUE benchmark (Liang et al., 2020). The structured prediction tasks, namely Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging, have a limited number of training samples, at 15k and 25.4k samples respectively. This is due to the difficulty of annotating at the token level; however, it can still be viewed as a limitation when compared to the remaining sentence-level tasks, the majority of which have at least 100k samples.
Methodological Limitations. Below is a list of the main methodological limitations we perceive in our work:
- Our method requires a teacher model that is already trained on the downstream task, which can then be used to perform knowledge distillation. This is limiting when there are constraints on the computing resources required to produce the quantized model.
- We have focused on the problem of reducing accumulative quantization errors, which become more apparent the deeper a network is. However, this problem is intuitively lessened when the model is shallow (e.g. 3-4 layers) but perhaps wider. Hence the results may be less significant if the model is shallower than what we have experimented with in this work.
- By introducing the distillation loss we require an additional regularization term β to be optimally set, relative to the main distillation loss weight α. This can be viewed as a potential limitation as it introduces an additional hyperparameter to be searched to obtain the best results on a given task.
- Lastly, since intermediate layer outputs of the teacher network are required for self-attention distillation, we have to perform two forward passes during training. Since standard KLD distillation only requires the output logits, it is common to store the teacher logits for the training data, eliminating the need to perform two forward passes at training time. However, this is not an option with self-attention outputs, as the storage required offline scales with the number of self-attention heads, the number of layers and the size of the training data.
## 7 Ethics Statement
Here we briefly discuss some ethical concerns of using such compressed models in the real world, specifically for the two techniques used in this work, quantization and knowledge distillation. Hooker et al. (2020) have found that compressed models can amplify existing algorithmic bias and perform very poorly on a subset of samples while the average out-of-sample accuracy is maintained close to that of the uncompressed model. This general finding for pruning and quantization may also be extrapolated to our work (including distillation); hence it is important to recognize that our work, much like the remaining literature on compression, may have ethical concerns with regard to algorithmic bias and how that affects downstream tasks. However, smaller models are more cost-efficient and thus become more widely available to the general public. To summarize, it is important to analyse any aforementioned bias amplification for subsets of samples in the downstream tasks that compressed models are used for.
## References
Haoli Bai, Wei Zhang, Lu Hou, Lifeng Shang, Jing Jin, Xin Jiang, Qun Liu, Michael Lyu, and Irwin King. 2020. Binarybert: Pushing the limit of bert quantization. *arXiv preprint arXiv:2012.15701*.
Ron Banner, Itay Hubara, Elad Hoffer, and Daniel Soudry. 2018. Scalable methods for 8-bit training of neural networks. *Advances in neural information* processing systems, 31.
Yelysei Bondarenko, Markus Nagel, and Tijmen Blankevoort. 2021. Understanding and overcoming the challenges of efficient transformer quantization.
arXiv preprint arXiv:2109.12948.
Zewen Chi, Li Dong, Furu Wei, Nan Yang, Saksham Singhal, Wenhui Wang, Xia Song, Xian-Ling Mao, Heyan Huang, and Ming Zhou. 2020. Infoxlm: An information-theoretic framework for crosslingual language model pre-training. *arXiv preprint* arXiv:2007.07834.
Ting-Wu Chin, Pierce I-Jen Chuang, Vikas Chandra, and Diana Marculescu. 2020. One weight bitwidth to rule them all. In European Conference on Computer Vision, pages 85–103. Springer.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.
Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. 2022. Gpt3. int8 (): 8-bit matrix multiplication for transformers at scale. Advances in Neural Information Processing Systems, 35:30318–
30332.
Asit Mishra and Debbie Marr. 2017. Apprentice: Using knowledge distillation techniques to improve low-precision network accuracy. *arXiv preprint* arXiv:1711.05852.
Fartash Faghri, Iman Tabrizian, Ilia Markov, Dan Alistarh, Daniel M Roy, and Ali Ramezani-Kebrya. 2020.
Adaptive gradient quantization for data-parallel sgd.
Advances in neural information processing systems, 33:3174–3185.
Angela Fan, Edouard Grave, and Armand Joulin. 2019.
Reducing transformer depth on demand with structured dropout. *arXiv preprint arXiv:1909.11556*.
Angela Fan, Pierre Stock, Benjamin Graham, Edouard Grave, Rémi Gribonval, Herve Jegou, and Armand Joulin. 2020. Training with quantization noise for extreme model compression. *arXiv preprint* arXiv:2004.07320.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015.
Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
Sara Hooker, Nyalleng Moorosi, Gregory Clark, Samy Bengio, and Emily Denton. 2020. Characterising bias in compressed models. arXiv preprint arXiv:2010.03058.
Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, and Dmitry Kalenichenko. 2018. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In *Proceedings of* the IEEE conference on computer vision and pattern recognition, pages 2704–2713.
Jangho Kim, Yash Bhalgat, Jinwon Lee, Chirag Patel, and Nojun Kwak. 2019. Qkd: Quantizationaware knowledge distillation. arXiv preprint arXiv:1911.12491.
Jangho Kim, KiYoon Yoo, and Nojun Kwak. 2020.
Position-based scaled gradient for model quantization and sparse training. *arXiv preprint* arXiv:2005.11035.
Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W
Mahoney, and Kurt Keutzer. 2021. I-bert: Integeronly bert quantization. In International conference on machine learning, pages 5506–5518. PMLR.
Raghuraman Krishnamoorthi. 2018. Quantizing deep convolutional networks for efficient inference: A
whitepaper. *arXiv preprint arXiv:1806.08342*.
Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fenfei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, et al. 2020. Xglue: A new benchmark datasetfor cross-lingual pre-training, understanding and generation. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6008–6018.
James O' Neill. 2020. An overview of neural network compression. *arXiv preprint arXiv:2006.03669*.
Tesla NVIDIA. 2017. Nvidia tesla v100 gpu architecture. *Tesla NVIDIA*.
Antonio Polino, Razvan Pascanu, and Dan Alistarh.
2018. Model compression via distillation and quantization. *arXiv preprint arXiv:1802.05668*.
Gabriele Prato, Ella Charlaix, and Mehdi Rezagholizadeh. 2019. Fully quantized transformer for machine translation. arXiv preprint arXiv:1910.10485.
Pierre Stock, Armand Joulin, Rémi Gribonval, Benjamin Graham, and Hervé Jégou. 2019. And the bit goes down: Revisiting the quantization of neural networks.
arXiv preprint arXiv:1907.05686.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *arXiv preprint arXiv:1706.03762*.
Naigang Wang, Jungwook Choi, Daniel Brand, Chia-Yu Chen, and Kailash Gopalakrishnan. 2018. Training deep neural networks with 8-bit floating point numbers. Advances in neural information processing systems, 31.
Ofir Zafrir, Guy Boudoukh, Peter Izsak, and Moshe Wasserblat. 2019. Q8bert: Quantized 8bit bert. In 2019 Fifth Workshop on Energy Efficient Machine Learning and Cognitive Computing-NeurIPS Edition
(EMC2-NIPS), pages 36–39. IEEE.
Wei Zhang, Lu Hou, Yichun Yin, Lifeng Shang, Xiao Chen, Xin Jiang, and Qun Liu. 2020. Ternarybert: Distillation-aware ultra-low bit bert. arXiv preprint arXiv:2009.12812.
Ritchie Zhao, Yuwei Hu, Jordan Dotzel, Chris De Sa, and Zhiru Zhang. 2019. Improving neural network quantization without retraining using outlier channel splitting. In International conference on machine learning, pages 7543–7552. PMLR.
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, and Yurong Chen. 2017. Incremental network quantization: Towards lossless cnns with low-precision weights. *arXiv preprint arXiv:1702.03044*.
## A Supplementary Material

## A.1 Self-Attention in Transformers
Consider a dataset $\mathcal{D} = \{(\mathbf{X}_i, y_i)\}_{i=1}^{m}$ for $D \in \mathcal{D}$ and a sample $s := (X, y)$, where the sentence $X := (x_1, \ldots, x_n)$ with $n$ being the number of words $x \in X$. We can represent a word as an input embedding $\mathbf{x}_w \in \mathbb{R}^{d}$, which has a corresponding target vector $y$. In the pre-trained transformer models we use, $\mathbf{X}_i$ is represented by 3 types of embeddings: word embeddings ($\mathbf{X}_w \in \mathbb{R}^{n \times d}$), segment embeddings ($\mathbf{X}_s \in \mathbb{R}^{n \times d}$) and position embeddings ($\mathbf{X}_p \in \mathbb{R}^{n \times d}$), where $d$ is the dimensionality of each embedding matrix. The self-attention block in a transformer mainly consists of three sets of parameters: the query parameters $\mathbf{Q} \in \mathbb{R}^{d \times l}$, the key parameters $\mathbf{K} \in \mathbb{R}^{d \times l}$ and the value parameters $\mathbf{V} \in \mathbb{R}^{d \times o}$. For 12 attention heads (as in XLM-RBase and InfoXLMBase), we express the forward pass as follows:
$$\overrightarrow{\mathbf{X}}=\mathbf{X}_{w}+\mathbf{X}_{s}+\mathbf{X}_{p}\tag{3}$$ $$\overrightarrow{\mathbf{Z}}:=\bigoplus_{i=1}^{12}\text{softmax}(\overrightarrow{\mathbf{X}}\mathbf{Q}_{(i)}\mathbf{K}_{(i)}^{T}\overrightarrow{\mathbf{X}}^{T})\overrightarrow{\mathbf{X}}\mathbf{V}_{(i)}\tag{4}$$ $$\overrightarrow{\mathbf{Z}}=\text{Feedforward}(\text{LayerNorm}(\overrightarrow{\mathbf{Z}}+\overrightarrow{\mathbf{X}}))\tag{5}$$ $$\overrightarrow{\mathbf{Z}}=\text{Feedforward}(\text{LayerNorm}(\overrightarrow{\mathbf{Z}}+\overrightarrow{\mathbf{X}}))\tag{6}$$
The last hidden representations of both directions are then concatenated, $\mathbf{Z}' := \overleftarrow{\mathbf{Z}} \,\|\, \overrightarrow{\mathbf{Z}}$, and projected using a final linear layer $\mathbf{W} \in \mathbb{R}^{d}$ followed by a sigmoid function $\sigma(\cdot)$ to produce a probability estimate $\hat{y}$, as shown in (7). Words from (step-3) that are used for filtering the sentences are masked using a [PAD] token to ensure the model does not simply learn to correctly classify some samples based on the association of these tokens with counterfacts. A linear layer is then fine-tuned on top of the hidden state $\mathbf{h}_{X,[\text{CLS}]}$ emitted corresponding to the [CLS] token. This fine-tunable linear layer is then used to predict whether the sentence is counterfactual or not, as shown in Equation 7, where $\mathcal{B} \subset \mathcal{D}$ is a mini-batch and $\mathcal{L}_{ce}$ is the cross-entropy loss.
$${\mathcal{L}}_{c e}:={\frac{1}{|{\mathcal{B}}|}}\sum_{(X,y)\in{\mathcal{B}}}y\log\left(\sigma({\boldsymbol{h}}_{X,\,[\,\mathrm{CLS}]}\cdot\mathbf{W})\right)\ (7)$$
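For illustration, a minimal sketch of the fine-tuned classification head and the batch loss of Equation 7 is given below; layer sizes and names are our own assumptions.

```python
import torch
import torch.nn as nn

class CLSHead(nn.Module):
    """Linear layer over the [CLS] hidden state followed by a sigmoid,
    as used for the sentence classification head around Equation 7."""
    def __init__(self, hidden_size=768):
        super().__init__()
        self.linear = nn.Linear(hidden_size, 1)

    def forward(self, h_cls):  # h_cls: [batch, hidden_size]
        return torch.sigmoid(self.linear(h_cls)).squeeze(-1)

head = CLSHead()
h_cls = torch.randn(32, 768)                  # [CLS] states from the encoder
y = torch.randint(0, 2, (32,)).float()
y_hat = head(h_cls)
# Batch loss following Equation 7 (negated so that it can be minimized).
loss = -(y * torch.log(y_hat + 1e-8)).mean()
```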
Configurations We use XLM-RBase and InfoXLMBase, which use 12 Transformer blocks and 12 self-attention heads with a hidden size of 768.
The default size of 512 is used for the sentence length and the sentence representation is taken as the final hidden state of the first [CLS] token.
## A.2 Experimental Setup And Hardware Details
Below we describe the experimental details, including model, hyperparameter and quantization details. We choose modestly sized cross-lingual language models as the basis of our experiments, namely XLM-RBase (Conneau et al., 2019) and InfoXLMBase (Chi et al., 2020), both approximately 1.1GB in memory; these pretrained models are retrieved from the Hugging Face model hub. We choose both XLM-RBase and InfoXLMBase because they are relatively small Transformers and are required to generalize to languages other than the language used for fine-tuning. Hence, we begin from a point where the models are already relatively difficult to compress, and we are further motivated by the findings that larger overparameterized networks suffer less from PTQ to 8-bit integer format and lower (Jacob et al., 2018; Krishnamoorthi, 2018). For both XLM-RBase and InfoXLMBase the hyper-parameters are set as follows: 768 hidden units, 12 heads, GELU activation, a dropout rate of 0.1, 512 max input length, and 12 encoder layers. We use the Adam optimizer with a linear warm-up (Vaswani et al., 2017) and set the learning rate to 2e-5 for most tasks. For all sentence classification tasks the batch size is set to 32 and we fine-tune for 10 epochs. For POS tagging and NER, we fine-tune for 20 epochs and set the learning rate to 2e-5. We select the model with the best average results on the development sets of all languages. For SDQ-based models, we report the best performing model for α ∈ [0.1, 0.2, 0.5, 0.8] and β ∈ [10, 100, 200, 500].
All experiments are carried out on Tesla V100-
SXM2 32 Gigabyte GPUs (NVIDIA, 2017) with no constraint on GPU hours used on these machines.
In all reported results, we report the best (max) result from 8-16 different runs when searching for α and β depending on each particular task.
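A minimal sketch of the fine-tuning setup described above is given below; the InfoXLM hub identifier and the number of training steps are assumptions, and the real experiments add the quantization-aware training and distillation losses on top of this.

```python
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          get_linear_schedule_with_warmup)

model_name = "xlm-roberta-base"  # "microsoft/infoxlm-base" for InfoXLM (assumed hub id)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)  # e.g. XNLI

# Adam-style optimizer with linear warm-up and lr 2e-5, as described above.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
num_training_steps = 10 * (100_000 // 32)  # epochs x steps per epoch (placeholder sizes)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=num_training_steps
)
```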
## A.3 Model Configuration And Hyperparameter Settings
XLM-RBase and InfoXLMBase use 12 Transformer blocks and 12 self-attention heads with a hidden size of 768. The default size of 512 is used for the sentence length and the sentence representation is taken as the final hidden state of the first [CLS] token. A fine-tuned linear layer $\mathbf{W}$ is used on top of both models, which is fed through a softmax function $\sigma$ as $p(c|\mathbf{h}) = \sigma(\mathbf{W}\mathbf{h})$, where $c$ is used to calibrate the class probability estimate, and we maximize the log-probability of correctly predicting the ground-truth label.
Table 4 shows the pretrained model configurations that were already predefined before our experiments. The number of (Num.) hidden groups here is the number of groups for the hidden layers, where parameters in the same group are shared. The intermediate size is the dimensionality of the feed-forward layers of the Transformer encoder.
The 'Max Position Embeddings' is the maximum sequence length that the model can deal with.
| Hyperparameters | XLM-RBase | InfoXLMBase |
|-------------------------------|-------------|---------------|
| Vocab Size | 250002 | 250002 |
| Max Pos. Embeddings | 514 | 514 |
| Hidden Size | 3072 | 3072 |
| Encoder Size | 768 | 768 |
| Num. Hidden Layers | 12 | 12 |
| Num. Hidden Groups | 1 | 1 |
| Num. Attention Heads | 12 | 12 |
| Hidden Activations | GeLU | GeLU |
| Layer Norm. Epsilon | 10−12 | 10−12 |
| Fully-Connected Dropout Prob. | 0.1 | 0.1 |
| Attention Dropout Prob. | 0 | 0 |
Table 4: Model hyperparameter settings.

We now detail the hyperparameter settings for the transformer models and the baselines. We note that all hyperparameter settings were found using a manual search over development data.
## A.3.1 Transformer Model Hyperparameters
We did not change the hyperparameter settings that were used for the original pre-training of each transformer model. The hyperparameter settings for these pretrained models can be found in the class-argument documentation of each configuration python file under https://github.com/huggingface/transformers/blob/master/src/transformers/ (e.g. configuration_.py). For fine-tuning the transformer models, we manually tested different combinations of a subset of hyperparameters, including the learning rates {5 × 10−4, 10−5, 5 × 10−5}, batch sizes {16, 32, 128}, warmup proportion {0, 0.1}, and ϵ, a hyperparameter of the Adam optimizer. Please refer to the Hugging Face documentation at https://github.com/huggingface/transformers for further details on each specific model, e.g. at https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_roberta.py, and also for the details of the architecture that is used for sentence classification and token classification.
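To make the reported settings concrete, the snippet below sketches how such a fine-tuning run could be configured with the Hugging Face Trainer; only the quoted values (learning rate 2e-5, batch size 32, 10 epochs, warmup proportion 0.1, the Adam ϵ) come from the text above, while `num_labels`, `output_dir`, and the dataset wiring are placeholders we assume for illustration.

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "xlm-roberta-base"  # InfoXLM-Base would be plugged in the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

args = TrainingArguments(
    output_dir="out",                # placeholder
    learning_rate=2e-5,              # learning rate used for most tasks
    per_device_train_batch_size=32,  # batch size for sentence classification
    num_train_epochs=10,             # 20 epochs for POS tagging and NER
    warmup_ratio=0.1,                # warmup proportion searched over {0, 0.1}
    adam_epsilon=1e-8,               # the Adam epsilon hyperparameter
)

# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_set, eval_dataset=dev_set)
# trainer.train()
```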
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 5
✓ A2. Did you discuss any potential risks of your work?
We discuss some risks related to compression effects on algorithmic bias in Section 7.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Yes, We Discuss The Experiments In Section 4.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Yes, in Section A.2 and A.3.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Yes, we also detail this in Section A.2 and A.3.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Yes, we mention this at the end of Section A.2.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Yes, we heavily rely on the huggingface platform and software which we discuss in the supplementary material in section A.
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left Blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
han-etal-2023-modality | Modality Adaption or Regularization? A Case Study on End-to-End Speech Translation | https://aclanthology.org/2023.acl-short.115 | Pre-training and fine-tuning is a paradigm for alleviating the data scarcity problem in end-to-end speech translation (E2E ST). The commonplace {''}modality gap{''} between speech and text data often leads to inconsistent inputs between pre-training and fine-tuning. However, we observe that this gap occurs in the early stages of fine-tuning, but does not have a major impact on the final performance. On the other hand, we find that there has another gap, which we call the {''}capacity gap{''}: high resource tasks (such as ASR and MT) always require a large model to fit, when the model is reused for a low resource task (E2E ST), it will get a sub-optimal performance due to the over-fitting. In a case study, we find that the regularization plays a more important role than the well-designed modality adaption method, which achieves 29.0 for en-de and 40.3 for en-fr on the MuST-C dataset. | # Modality Adaption Or Regularization? A Case Study On End-To-End Speech Translation
Yuchen Han1, Chen Xu1, Tong Xiao1,2∗
, Jingbo Zhu1,2 1School of Computer Science and Engineering, Northeastern University, Shenyang, China 2NiuTrans Research, Shenyang, China [email protected],[email protected]
{xiaotong,zhujingbo}@mail.neu.edu.cn
## Abstract
Pre-training and fine-tuning is a paradigm for alleviating the data scarcity problem in end-to-end speech translation (E2E ST). The commonplace "modality gap" between speech and text data often leads to inconsistent inputs between pre-training and fine-tuning. However, we observe that this gap occurs in the early stages of fine-tuning, but does not have a major impact on the final performance. On the other hand, we find that there is another gap, which we call the "capacity gap": high-resource tasks (such as ASR and MT) always require a large model to fit; when the model is reused for a low-resource task (E2E ST), it yields sub-optimal performance due to over-fitting. In a case study, we find that regularization plays a more important role than the well-designed modality adaption method, which achieves 29.0 for en-de and 40.3 for en-fr on the MuST-C dataset. Code and models are available at https://github.com/hannlp/TAB.
## 1 Introduction
End-to-end speech translation (E2E ST) employs a direct model to translate source language speech into target language text, which has low latency and can avoid the "error propagation" problem in traditional cascade methods (Weiss et al., 2017).
However, compared to automatic speech recognition (ASR) and machine translation (MT) models used in cascade methods, E2E ST models typically have limited training data (Cattoni et al., 2021),
which can result in sub-optimal performance.
Transferring the knowledge from the related tasks (e.g. ASR and MT) is a widely-used approach for E2E ST to achieve optimal performance (Tang et al., 2021; Zhang et al., 2022a). However, the difference between tasks and data makes the transfer process more challenging (Wang et al., 2020).
The inconsistency of length and representation between speech and text leads to the "modality gap"
∗Corresponding author.
1340
![0_image_0.png](0_image_0.png)
(Liu et al., 2020), which exists in scenarios where the inputs of the model change, such as in the pre-training and fine-tuning (PT-FT) paradigm (Xu et al., 2021) or in the multi-task learning (MTL)
methods (Ye et al., 2021). Thus, the connectionist temporal classification (CTC) (Graves et al.,
2006) based adapters (Liu et al., 2020; Xu et al.,
2021) have been proposed to transform the original speech output into a text-like sequence. Recently, consistency training methods have achieved promising results by using a better branch, such as the mix-up branch (Fang et al., 2022) or the text branch (Cheng et al., 2022), to promote crossmodal learning and support the original speech output. However, we find that the "modality gap" does not exist throughout the training process in Figure 1.
Meanwhile, consistency training methods have a regularization effect (Liang et al., 2021; Guo et al., 2022) that helps the model overcome over-fitting and become fully trained. A natural question arises: are modality adaption methods still effective when an E2E ST model is fully trained?
In this work, we aim to investigate how much of the improvement is due to modality adaption and how much is due to regularization. To achieve this, we adopt the PT-FT and encoder-decoupling paradigm and establish a framework that incorporates adjustable modality adaption and consistency training methods. Through extensive experiments on the MuST-C en-de and en-fr benchmarks, we observe that:
- The modality adaption method in PT-FT only accelerates the early phase of fine-tuning, but does not provide a significant improvement for a fully trained model.
- We obtained 29.0 and 40.3 BLEU on the MuST-C en-de and en-fr datasets, respectively, but regularization played the major role, confirming that the "capacity gap" is more severe than the "modality gap" in E2E ST.
## 2 Our Case Study: TAB

## 2.1 Architecture
The E2E ST corpus DST usually consists of three parts: the source language speech s, the corresponding transcripts x, and the target translation y.
The overall framework, as shown in Figure 2, consists of a speech encoder and a shared transformer.
Speech Encoder. The speech encoder encodes the source language speech s into speech output h:
$$h = \mathrm{ENC_{speech}}(s;\theta_{s}) \qquad (1)$$
We employ the CTC loss at the top of the speech encoder (Wang et al., 2020; Liu et al., 2020; Xu et al., 2021) to predict an alignment path π from the speech output h based on a conditional independence assumption $p(\pi|h)=\prod_{t}^{|h|}p(\pi_{t}|h_{t})$:

$$p(\pi_{t}|h_{t}) = \mathrm{Softmax}(\mathrm{Linear}(h_{t};\theta_{ctc})) \qquad (2)$$

where πt ∈ V+, an extended source-language vocabulary obtained by introducing a "blank" token. The path π can be mapped to the transcript x by removing all repeated labels and blanks; such an operation is called β. The CTC loss is defined as the negative log probability of all possible alignment paths β−1(x) between h and x:

$${\mathcal{L}}_{CTC} = -\sum_{\pi\in\beta^{-1}(x)}\log p(\pi|h) \qquad (3)$$
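As a minimal sketch of this objective, Eq. (3) can be computed with `torch.nn.CTCLoss` on top of the projected speech output; the tensor shapes, toy transcripts, and the blank index 0 are assumptions for illustration and do not reproduce the fairseq implementation used in this work.

```python
import torch
import torch.nn as nn

T, batch, d, vocab = 50, 2, 512, 10000      # toy sizes; vocab is V+ incl. the blank
h = torch.randn(T, batch, d)                # speech-encoder output

ctc_proj = nn.Linear(d, vocab)              # Linear + softmax of Eq. (2)
log_probs = ctc_proj(h).log_softmax(dim=-1) # (T, batch, vocab)

# CTCLoss sums -log p(pi|h) over all alignment paths in beta^{-1}(x), as in Eq. (3)
ctc = nn.CTCLoss(blank=0, zero_infinity=True)
transcripts = torch.randint(1, vocab, (batch, 20))          # toy transcript ids
input_lengths = torch.full((batch,), T, dtype=torch.long)
target_lengths = torch.full((batch,), 20, dtype=torch.long)
loss_ctc = ctc(log_probs, transcripts, input_lengths, target_lengths)
```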
Shared Transformer. The shared transformer accepts two inputs: the original branch o and the auxiliary branch a, both of equal length. We leave the details of these to the Section 2.2. The encoder and decoder of the shared transformer are utilized to obtain the final predictions Pj = p(yj |y<j , o; θt) and Qj = p(yj |y<j , a; θt), respectively. The cross-entropy losses are then calculated as follows:
$$\mathcal{L}_{CE_{o}} = -\sum_{j=1}^{|y|}\log\mathcal{P}_{j} \qquad (4)$$

$$\mathcal{L}_{CE_{a}} = -\sum_{j=1}^{|y|}\log\mathcal{Q}_{j} \qquad (5)$$
## 2.2 Tuning With Auxiliary Branch (TAB)
The speech encoder and shared transformer are initialized with pre-trained ASR and MT models.
During fine-tuning, we aim to build a text-like auxiliary branch which includes some textual representations like Fang et al. (2022); Zhang et al. (2022b)
for modality adaption, and provide an adjustable probability to control its degree. To obtain the textual embedding, we utilize the CTC alignment, where πt = argmax(p(πt|ht)) is an id in V+ that denotes the CTC-predicted token corresponding to the speech feature ht.
Shrink. To eliminate the effect of too many "blank" positions in the sequence, we first average the consecutively repeated features (e.g. πi = ... = πj = ci→j) in the speech output h to obtain the original branch o, where $o_{k} = (h_{i} + ... + h_{j})\cdot\frac{1}{j-i}$.
Copy & Replace. We copy o to a new sequence a to ensure that the auxiliary branch has the same length as the original branch. Each position ak in the new sequence is then replaced with its CTC
predicted embedding Embedding(ci→j) with a probability p∗ if ci→j is not a "blank". Here, Embedding is an embedding matrix initialized by the pre-trained MT model. The replacement probability p∗ can be adjusted to control the degree of modality adaption, and can be a fixed or a dynamic value, as discussed in Section 4.3. It is important to note that the auxiliary branch provides a regularization function due to the replacement operation or dropout (Liang et al., 2021). This effect will be further discussed in Section 4.4.
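The sketch below spells out the Shrink and Copy & Replace steps for a single sequence; the function name, the blank index, and the absence of batching are simplifying assumptions, so it should be read as an illustration rather than the exact implementation.

```python
import torch

def shrink_and_replace(h, ctc_pred, embedding, p_star, blank_id=0):
    """h: (T, d) speech output; ctc_pred: (T,) argmax CTC predictions;
    embedding: nn.Embedding initialized from the pre-trained MT model.
    Returns the original branch o and the auxiliary branch a (equal length)."""
    o, a = [], []
    i = 0
    while i < len(ctc_pred):
        j = i
        while j + 1 < len(ctc_pred) and ctc_pred[j + 1] == ctc_pred[i]:
            j += 1
        feat = h[i:j + 1].mean(dim=0)          # Shrink: average repeated features
        o.append(feat)
        if ctc_pred[i].item() != blank_id and torch.rand(1).item() < p_star:
            a.append(embedding(ctc_pred[i]))   # Replace with the textual embedding
        else:
            a.append(feat)                     # Copy the acoustic feature
        i = j + 1
    return torch.stack(o), torch.stack(a)
```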
Fine-tuning strategy. To utilize the auxiliary branch, a consistency loss is introduced to enforce consistency between two output distributions:
$$\mathcal{L}_{Cons} = \sum_{j=1}^{|y|}\mathcal{D}(\mathcal{P}_{j},\,\mathcal{Q}_{j}) \qquad (6)$$

where D denotes the loss term. The final loss used in TAB is formulated as follows:

$$\mathcal{L} = \mathcal{L}_{CE_{o}} + \mathcal{L}_{CE_{a}} + \lambda\mathcal{L}_{CTC} + \alpha\mathcal{L}_{Cons} \qquad (7)$$
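As a concrete instance of the loss term D, the sketch below implements the bidirectional KL variant that is selected later in Section 4.1, together with the weighted sum of Eq. (7); averaging the two KL directions with a factor of 0.5 and the function names are assumptions of this sketch.

```python
import torch.nn.functional as F

def bi_kl(p_log, q_log):
    """Bidirectional KL between the original- and auxiliary-branch output
    distributions; p_log and q_log are log-probabilities of shape (|y|, V)."""
    kl_pq = F.kl_div(q_log, p_log, log_target=True, reduction="batchmean")
    kl_qp = F.kl_div(p_log, q_log, log_target=True, reduction="batchmean")
    return 0.5 * (kl_pq + kl_qp)

def tab_loss(ce_orig, ce_aux, ctc, cons, lambda_=0.3, alpha=1.0):
    """Eq. (7); lambda = 0.3 and alpha in {1, 5} follow the experimental setup."""
    return ce_orig + ce_aux + lambda_ * ctc + alpha * cons
```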
![2_image_0.png](2_image_0.png)
## 3 Experimental Setup
Datasets and Pre-processing. We conducted our experiments on the MuST-C (Cattoni et al., 2021) dataset for two language directions: En-De and En-Fr. The dev set was used for validation, and the tst-COMMON set was used for reporting our results.
For training the English ASR model, we used the LibriSpeech (Panayotov et al., 2015) dataset. The WMT16 En-De and WMT14 En-Fr datasets were used to train the MT models. Table 1 presents the statistics of all the datasets used in our pre-training and fine-tuning processes.
| Dataset | ASR (hours) | MT (sentences) | ST (hours / sentences) |
|-----------|-------------|----------------|------------------------|
| En-De | 960 | 4.5M | 408 / 234K |
| En-Fr | 960 | 36M | 492 / 280K |
We preprocessed the speech input by extracting 80-dimensional log-mel filterbank features and removing utterances with more than 3,000 frames. The vocabulary, which has a size of 10k, is shared between the source and target languages and was trained using the SentencePiece (Kudo and Richardson, 2018) model from the MuST-C
dataset.
Model Configuration. All experiments were implemented using the fairseq toolkit (Ott et al., 2019).
Two convolutional layers with a stride of 2 were introduced to downsample the input speech features.
We used the Conformer (Gulati et al., 2020) as our speech encoder, which consists of 12 layers. Both the text encoder and decoder in the shared transformer have 6 layers. Each layer in our model has 512 hidden units, 2048 feed-forward size, and 8 attention heads. The ASR and MT models were pre-trained with external data and fine-tuned with the MuST-C dataset.
Training and Inference. We used the Adam optimizer with β1 = 0.9 and β2 = 0.997 in MT,
while β2 = 0.98 in ASR and ST, respectively.
During ASR pre-training, each batch has up to 800k frames, and the learning rate and warmup were 1.4e-3 and 10000. During MT pre-training, each batch has up to 33k tokens, and the learning rate and warmup were 1e-3 and 8000. During ST
fine-tuning, each batch has up to 320k frames, and the learning rate and warmup were 7e-4 and 4000.
The hyper-parameter λ was set to 0.3 for both pretraining and fine-tuning. We used dropout with a ratio of 0.1 during pre-training and 0.15 during fine-tuning, and label smoothing with a value of 0.1. All training was stopped early if the loss (ASR
and MT) or BLEU (E2E ST) on the dev set did not improve for twenty epochs. During inference, we averaged the model parameters of the best 10 checkpoints for evaluation. We used a beam search with a beam size of 5 for all models. We reported the case-sensitive SacreBLEU (Post, 2018). All models were trained on 4 Titan RTX GPUs.
## 4 Results And Discussion

## 4.1 Which Type Of Consistency Loss Is Best?
The choice of an appropriate consistency loss is crucial in leveraging the knowledge from the auxiliary branch, whether it is due to modality adaption or regularization. We conducted experiments with different loss terms with α = 1 and p∗ = 0.2.
| Loss term | BLEU (dev) | BLEU (tst-COMMON) | BLEU (avg.) |
|--------------|------------|-------------------|-------------|
| None (α = 0) | 27.69 | 27.99 | 27.84 |
| JSD | 27.76 | 28.49 | 28.13 |
| KL (orig→aux) | 28.47 | 28.49 | 28.48 |
| KL (aux→orig) | 28.26 | 28.78 | 28.52 |
| bi-KL | 28.43 | 28.78 | 28.61 |
![3_image_0.png](3_image_0.png)
As shown in Table 2, the results indicate that a consistency loss is necessary to improve performance. The Jensen-Shannon Divergence (JSD)
loss and the unidirectional-KL loss were found to be slightly worse than the bidirectional-KL (bi-KL) loss. Therefore, we selected the bi-KL loss for the subsequent experiments.
## 4.2 Whether & When The Modality Gap Exist?
In Figure 3, we present the ℓaux/ℓorig curve during fine-tuning, which represents the ratio of the auxiliary branch loss ℓaux to the original branch loss ℓorig.
The only difference between the two branches is the replaced textual embeddings for modality adaption in the auxiliary branch. We investigate the effect of this operation during fine-tuning under different replacement probabilities.
Our results show that, in the early fine-tuning phase, ℓaux is always lower than ℓorig when p∗ > 0, indicating that the model expects some textual representations, like the input in pre-training.
However, in the later fine-tuning process, ℓaux is slightly higher than ℓorig, suggesting that the model starts to get confused by the replacement operation due to the noise introduced by destroying the original sequence.
Moreover, we find that the maximum p∗ = 1.0 always has the lowest ratio at the beginning and the highest ratio at other times. This confirms that the modality gap exists but not throughout the entire fine-tuning process.
## 4.3 Does The Modality Adaption Help?
We experimented with different fixed replacement ratios p∗ over the range [0.0, 0.2, 0.6, 1.0] under α = 1.0 in Figure 3. Our results showed that the method with modality adaption (p∗ > 0) consistently outperformed p∗ = 0. Furthermore, as observed in Section 4.2, there exists a "tipping point" in the fine-tuning process where the modality gap disappears.
Before this point we should use a higher p∗, while a lower p∗ should be more effective after this point, rather than using a fixed value. We discovered that the uncertainty of the original branch υorig, which is defined by the normalized entropy of Pj¹, is strongly related to this point, as shown in Figure 3:

$$\upsilon_{orig} = -\frac{1}{\log V}\cdot\frac{1}{|y|}\sum_{j=1}^{|y|}\mathcal{P}_{j}\log\mathcal{P}_{j} \qquad (8)$$
where V is the vocabulary size. We then proposed a dynamic replacement probability derived from υorig at each step: p∗ = γ·υorig, where γ is a hyperparameter set to 0.5 in all experiments. When we use a dynamic replacement ratio of p∗ = 0.5·υorig, we denote it as p∗ = υ. By adopting this dynamic replacement ratio, we achieved a BLEU score of 28.87 on the MuST-C en-de dataset.
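A small sketch of Eq. (8) and the resulting dynamic ratio is given below; `p_orig` is assumed to hold the per-token output distributions Pj of the original branch, and the clamping constant is only for numerical safety.

```python
import torch

def dynamic_replace_prob(p_orig: torch.Tensor, gamma: float = 0.5) -> torch.Tensor:
    """p_orig: (|y|, V) softmax outputs of the original branch.
    Returns p* = gamma * v_orig, with v_orig the normalized entropy of Eq. (8)."""
    V = p_orig.size(-1)
    token_entropy = -(p_orig * p_orig.clamp_min(1e-9).log()).sum(dim=-1)  # (|y|,)
    v_orig = token_entropy.mean() / torch.log(torch.tensor(float(V)))
    return gamma * v_orig
```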
## 4.4 Is Modality Adaption Always Effective?
The consistency loss has been demonstrated to have a regularization effect on two branches with different outputs (Liang et al., 2021; Guo et al., 2022). When p∗ = 0, there is no modality adaption through the replacement operation, but dropout still causes different outputs for the two branches, so TAB degenerates into a pure regularization method. The hyper-parameter α can be used to control the degree of regularization, where a higher α indicates stronger regularization. By increasing α to 5, we observe that the gap between the modality adaption method (p∗ = 0.2 or p∗ = υ) and the pure regularization method (p∗ = 0) decreases (29.05 vs 29.01), as shown in Figure 4.

¹More precise definitions are Pj(t) and υorig(t), with the symbol of the training step "t" omitted for brevity.
![4_image_0.png](4_image_0.png)
The pure regularization method, however, always required more epochs to converge, which can be attributed to more complete training. These findings confirm that the modality adaption method on PT-FT can accelerate the early phase of fine-tuning, but does not significantly enhance the final performance when a sufficient level of regularization has been applied to the model. This also highlights that over-fitting is a more serious issue in E2E ST than the "modality gap", and that better regularization and a longer time for fine-tuning can help to eliminate the "modality gap" problem in the PT-FT paradigm.
## 4.5 Final Results And Comparison Of Methods
The results for the MuST-C dataset are shown in Table 3. The modality adaption method shows an improvement of 1.0 and 0.7 BLEU points over our CTC baseline at a lower consistency level
(α = 1). However, the pure regularization method with α = 5 slightly outperforms it (+0.1 BLEU),
and outperforms all other methods designed for modality adaption (+0.1 ∼ 1.2 BLEU), except those using HuBERT as the feature extractor. When we combined our modality adaption method with a higher level of consistency (α = 5), further improvement can still be achieved, but not consistently across languages. Our hypothesis is that the replacement operation in TAB not only alleviates the modality gap in the early fine-tuning phase but also introduces noise in the later stages. This noise can bring better regularization performance (Guo et al., 2022; Gao et al., 2022) when a higher consistency level is given.
| Methods | BLEU (En-De) | BLEU (En-Fr) |
|-------------------------------|--------------|--------------|
| SATE (Xu et al., 2021) | 28.1 | - |
| STEMM† (Fang et al., 2022) | 28.7 | 37.4 |
| ConST† (Ye et al., 2022) | 28.3 | 38.3 |
| M3ST†† (Cheng et al., 2022) | 29.3 | 38.5 |
| WACO† (Ouyang et al., 2022) | 28.1 | 38.1 |
| AdaTrans (Zeng et al., 2022) | 28.7 | 38.7 |
| CRESS† (Fang and Feng, 2023) | 28.9 | - |
| CRESS†† (Fang and Feng, 2023) | 29.4 | 40.1 |
| baseline (CTC) | 27.9 | 39.1 |
| TAB (p∗ = υ, α = 1) | 28.9 | 39.8 |
| TAB (p∗ = 0, α = 5) | 29.0 | 39.9 |
| TAB (p∗ = υ, α = 5) | 29.0 | 40.3 |
In general, regularization brings more improvement in TAB than modality adaption, and better regularization helps E2E ST models to be fully trained.
## 5 Conclusion
Through a case study, we have demonstrated that the "modality gap" in the PT-FT paradigm for E2E ST is only present in the early stages of fine-tuning.
Although a modality adaption method can accelerate the convergence speed, it does not significantly improve the final performance of a fully trained model. However, the over-fitting and "capacity gap" are more critical issues in E2E ST, and a good regularization technique can help in fully training the model.
## Acknowledgement
The authors would like to thank anonymous reviewers for their insightful comments. This work was supported in part by the National Science Foundation of China (No. 62276056), the National Key R&D Program of China, the China HTRD Center Project (No. 2020AAA0107904), the Natural Science Foundation of Liaoning Province of China
(2022-KF-16-01), the Yunnan Provincial Major Science and Technology Special Plan Projects (No.
202103AA080015), the Fundamental Research Funds for the Central Universities (Nos. N2216016, N2216001, and N2216002), and the Program of Introducing Talents of Discipline to Universities, Plan 111 (No. B16009).
## Limitations
Although our work has achieved high performance, there are still some limitations that need to be addressed in future work:
- Our work was only carried out under the assumption that there is sufficient ASR and MT
data, and relied on transcriptions for CTC loss to perform alignment and predict. This assumption may not hold in real-world scenarios, and we plan to investigate the performance of our approach under more diverse data conditions in the future.
- We only attempted to feed two branches into the shared transformer in order to ensure a fair comparison between the pure regularization method and the modality adaption method.
However, this approach may have resulted in sub-optimal regularization performance compared to methods that feed all branches into the whole model, as demonstrated by Liang et al. (2021); Guo et al. (2022); Gao et al.
(2022).
## References
Roldano Cattoni, Mattia Antonino Di Gangi, Luisa Bentivogli, Matteo Negri, and Marco Turchi. 2021. Mustc: A multilingual corpus for end-to-end speech translation. *Comput. Speech Lang.*, 66:101155.
Xuxin Cheng, Qianqian Dong, Fengpeng Yue, Tom Ko, Mingxuan Wang, and Yuexian Zou. 2022. M3ST: mix at three levels for speech translation. *CoRR*,
abs/2212.03657.
Qingkai Fang and Yang Feng. 2023. Understanding and bridging the modality gap for speech translation.
CoRR, abs/2305.08706.
Qingkai Fang, Rong Ye, Lei Li, Yang Feng, and Mingxuan Wang. 2022. STEMM: self-learning with speechtext manifold mixup for speech translation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 7050–7062. Association for Computational Linguistics.
Pengzhi Gao, Zhongjun He, Hua Wu, and Haifeng Wang. 2022. Bi-simcut: A simple strategy for boosting neural machine translation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 3938–
3948. Association for Computational Linguistics.
Alex Graves, Santiago Fernández, Faustino J. Gomez, and Jürgen Schmidhuber. 2006. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In *Machine* Learning, Proceedings of the Twenty-Third International Conference (ICML 2006), Pittsburgh, Pennsylvania, USA, June 25-29, 2006, volume 148 of ACM
International Conference Proceeding Series, pages 369–376. ACM.
Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, and Ruoming Pang.
2020. Conformer: Convolution-augmented transformer for speech recognition. In Interspeech 2020, 21st Annual Conference of the International Speech Communication Association, Virtual Event, Shanghai, China, 25-29 October 2020, pages 5036–5040.
ISCA.
Dengji Guo, Zhengrui Ma, Min Zhang, and Yang Feng.
2022. Prediction difference regularization against perturbation for neural machine translation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 7665–7675. Association for Computational Linguistics.
Taku Kudo and John Richardson. 2018. Sentencepiece:
A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP
2018: System Demonstrations, Brussels, Belgium, October 31 - November 4, 2018, pages 66–71. Association for Computational Linguistics.
Xiaobo Liang, Lijun Wu, Juntao Li, Yue Wang, Qi Meng, Tao Qin, Wei Chen, Min Zhang, and TieYan Liu. 2021. R-drop: Regularized dropout for neural networks. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS
2021, December 6-14, 2021, virtual, pages 10890–
10905.
Yuchen Liu, Junnan Zhu, Jiajun Zhang, and Chengqing Zong. 2020. Bridging the modality gap for speechto-text translation. *CoRR*, abs/2010.14920.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Demonstrations*, pages 48–53. Association for Computational Linguistics.
Siqi Ouyang, Rong Ye, and Lei Li. 2022. WACO: wordaligned contrastive learning for speech translation.
CoRR, abs/2212.09359.
Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: An ASR
corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2015, South Brisbane, Queensland, Australia, April 19-24, 2015, pages 5206–5210. IEEE.
Matt Post. 2018. A call for clarity in reporting BLEU
scores. In *Proceedings of the Third Conference on* Machine Translation: Research Papers, WMT 2018, Belgium, Brussels, October 31 - November 1, 2018, pages 186–191. Association for Computational Linguistics.
Yun Tang, Juan Miguel Pino, Xian Li, Changhan Wang, and Dmitriy Genzel. 2021. Improving speech translation by understanding and learning from the auxiliary text translation task. In *Proceedings of the 59th Annual Meeting of the Association for Computational* Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP
2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 4252–4261. Association for Computational Linguistics.
Chengyi Wang, Yu Wu, Shujie Liu, Zhenglu Yang, and Ming Zhou. 2020. Bridging the gap between pretraining and fine-tuning for end-to-end speech translation. In *The Thirty-Fourth AAAI Conference on* Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI
2020, New York, NY, USA, February 7-12, 2020, pages 9161–9168. AAAI Press.
Ron J. Weiss, Jan Chorowski, Navdeep Jaitly, Yonghui Wu, and Zhifeng Chen. 2017. Sequence-to-sequence models can directly translate foreign speech. In *Interspeech 2017, 18th Annual Conference of the International Speech Communication Association, Stockholm, Sweden, August 20-24, 2017*, pages 2625–2629.
ISCA.
Chen Xu, Bojie Hu, Yanyang Li, Yuhao Zhang, Shen Huang, Qi Ju, Tong Xiao, and Jingbo Zhu. 2021.
Stacked acoustic-and-textual encoding: Integrating the pre-trained models into speech translation encoders. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 2619–2630. Association for Computational Linguistics.
Rong Ye, Mingxuan Wang, and Lei Li. 2021. End-toend speech translation via cross-modal progressive training. In Interspeech 2021, 22nd Annual Conference of the International Speech Communication Association, Brno, Czechia, 30 August - 3 September 2021, pages 2267–2271. ISCA.
Rong Ye, Mingxuan Wang, and Lei Li. 2022. Crossmodal contrastive learning for speech translation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 5099–5113. Association for Computational Linguistics.
Xingshan Zeng, Liangyou Li, and Qun Liu. 2022.
Adatrans: Adapting with boundary-based shrinking for end-to-end speech translation. *CoRR*,
abs/2212.08911.
Yuhao Zhang, Chen Xu, Bojie Hu, Chunliang Zhang, Tong Xiao, and Jingbo Zhu. 2022a. Improving endto-end speech translation by leveraging auxiliary speech and text data. *CoRR*, abs/2212.01778.
Ziqiang Zhang, Long Zhou, Junyi Ao, Shujie Liu, Lirong Dai, Jinyu Li, and Furu Wei. 2022b. Speechut:
Bridging speech and text with hidden-unit for encoder-decoder based speech-text pre-training. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP
2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 1663–1676. Association for Computational Linguistics.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Left blank.
A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Left blank. |
li-etal-2023-uncertainty | Uncertainty-Aware Bootstrap Learning for Joint Extraction on Distantly-Supervised Data | https://aclanthology.org/2023.acl-short.116 | Jointly extracting entity pairs and their relations is challenging when working on distantly-supervised data with ambiguous or noisy labels. To mitigate such impact, we propose uncertainty-aware bootstrap learning, which is motivated by the intuition that the higher uncertainty of an instance, the more likely the model confidence is inconsistent with the ground truths. Specifically, we first explore instance-level data uncertainty to create an initial high-confident examples. Such subset serves as filtering noisy instances and facilitating the model to converge fast at the early stage. During bootstrap learning, we propose self-ensembling as a regularizer to alleviate inter-model uncertainty produced by noisy labels. We further define probability variance of joint tagging probabilities to estimate inner-model parametric uncertainty, which is used to select and build up new reliable training instances for the next iteration. Experimental results on two large datasets reveal that our approach outperforms existing strong baselines and related methods. | # Uncertainty-Aware Bootstrap Learning For Joint Extraction On Distantly-Supervised Data
Yufei Li1, Xiao Yu2, Yanchi Liu3, Haifeng Chen3**, Cong Liu**1 1University of California, Riverside 2Stellar Cyber 3NEC Labs America 1{yli927,congl}@ucr.edu, [email protected], 3{yanchi,haifeng}@nec-labs.com
## Abstract
Jointly extracting entity pairs and their relations is challenging when working on distantly-supervised data with ambiguous or noisy labels. To mitigate such impact, we propose uncertainty-aware bootstrap learning, which is motivated by the intuition that the higher the uncertainty of an instance, the more likely the model confidence is inconsistent with the ground truths. Specifically, we first explore instance-level data uncertainty to create an initial set of high-confidence examples. This subset serves to filter noisy instances and facilitates fast model convergence at the early stage. During bootstrap learning, we propose self-ensembling as a regularizer to alleviate inter-model uncertainty produced by noisy labels. We further define the probability variance of joint tagging probabilities to estimate inner-model parametric uncertainty, which is used to select and build up new reliable training instances for the next iteration. Experimental results on two large datasets reveal that our approach outperforms existing strong baselines and related methods.
## 1 Introduction

Joint extraction involves extracting multiple types of entities and relations between them using a single model, which is necessary in automatic knowledge base construction (Yu et al., 2020). One way to cheaply acquire a large amount of labeled data for training joint extraction models is through distant supervision (DS) (Mintz et al., 2009). DS
involves aligning a knowledge base (KB) with an unlabeled corpus using hand-crafted rules or logic constraints. Due to the lack of human annotators, DS brings a large proportion of noisy labels, e.g., over 30% noisy instances in some cases (Mintz et al., 2009), making it impossible to learn useful features. The noise can be either false relations due to the aforementioned rule-based matching assumption or wrong entity tags due to limited coverage over entities in open-domain KBs.
Existing distantly-supervised approaches model noise relying either on heuristics such as reinforcement learning (RL) (Nooralahzadeh et al., 2019; Hu et al., 2021) and adversarial learning (Chen et al., 2021), or pattern-based methods (Jia et al.,
2019; Shang et al., 2022) to select trustable instances. Nevertheless, these methods require designing heuristics or hand-crafted patterns which may encourage a model to leverage spurious features without considering the confidence or uncertainty of its predictions.
In response to these problems, we propose UnBED—Uncertainty-aware Bootstrap learning for joint Extraction on Distantly-supervised data.
UnBED assumes that 1) low data uncertainty indicates reliable instances when using a pre-trained language model (PLM) in the initial stage, and 2) the model should be aware of trustable entity and relation labels, according to its uncertainty, after training. Our bootstrap learning uses uncertainty as a guiding principle to mitigate the impact of noisy labels on model learning and to validate input sequences, controlling the number of training examples in each step. Particularly, we quantify the data uncertainty of an instance according to its *winning score* (Hendrycks and Gimpel, 2017) and *entropy* (Shannon, 1948). We define an averaged maximum probability estimated by a joint PLM over each token in a sequence to adapt previous techniques to the joint extraction scheme. Instances with low data uncertainty are collected to form an initial subset, which is used to tune the joint PLM tagger and facilitate fast convergence. Then, we define parametric uncertainty from two perspectives: inter-model and inner-model uncertainty. The former is quantified by self-ensembling (Wang and Wang, 2022) and serves as a regularizer to improve model robustness against noisy labels during training. The latter is captured by the probability variance in MC Dropout (Gal and Ghahramani, 2016) for selecting new confident instances for the next training iteration. These two-fold model uncertainties reinforce each other to guide the model to iteratively improve its robustness and learn from reliable knowledge.
## 2 Related Work
Joint Extraction Methods Joint extraction detects entities and their relations using a single model, which effectively integrates the information from both sources and therefore achieves better results in both subtasks compared to pipelined methods (Zheng et al., 2017). For example, unified methods tag entities and relations simultaneously, e.g., (Zheng et al., 2017) proposes a novel tagging scheme which converts joint extraction to a sequence labeling problem; (Dai et al., 2019)
introduces query position and sequential tagging to extract overlapping relations. Such methods avoid producing redundant information compared to parameter-sharing neural models (Miwa and Bansal, 2016; Gupta et al., 2016), and require no hand-crafted features that are used in structured systems (Yu et al., 2020).
To address the challenge of learning from DS,
pre-trained transformers (e.g., BERT, GPT-2) have gain much attention. They model strong expressive context-aware representations for text sequence through multiple attention layers, and achieve stateof-the-art performance on various NLP tasks (Radford et al., 2019; Devlin et al., 2019; Li et al., 2022).
They can be cheaply fine-tuned to solve different downstream tasks including NER and RC. Specifically, BERT is trained on large English corpus using masked language modeling. The multi-head attention weights indicate interactions between each pair of words and its hidden states integrate semantic information of the whole sentence, which are used to decode different tagging results.
Uncertainty Methods Uncertainty generally comes from two sources—aleatoric uncertainty and epistemic uncertainty. The former is also referred to as data uncertainty, describing noise inherent in the data generation. Methods mitigating such uncertainty include data interpolation (Dong et al., 2018), winning score, and temperature scale (Guo et al., 2017). The latter is also called model uncertainty, describing whether the structure choice and model parameters best describe the data distribution. One main solution to mitigate model uncertainty is Bayesian Neural Network (BNN) (Klein et al., 2017) that puts a prior distribution on its weights. To save computational cost, Monte Carlo
![1_image_0.png](1_image_0.png)
dropout is proposed as an approximation of variational Bayesian inference (Gal and Ghahramani, 2016), realized by training models with dropout layers and testing with stochastic inference to quantify the probability variance. Besides BNN, self-ensembling (Wang and Wang, 2022), which measures the output variance between models with the same architecture, has been shown to be effective in reducing parametric uncertainty across models.
## 3 Joint Extraction Model
Tagging Scheme For an input sequence X =
{x1*, ..., x*n}, we tag n sequences according to different query position p following (Dai et al., 2019).
If p is the start of an entity (query entity e1), the sequence is an instance. The entity type is labeled at p and other entities e2 which have relationship with the query entity are labeled with relation types re. The rest of tokens are labeled "O" (Outside),
meaning they do not correspond to the query entity.
Accordingly, we convert joint extraction into a token classification task and extract relation triplets
{e1*, re, e*2} in each instance, as shown in Figure 1.
Position-Attentive Encoder We use BERT (Devlin et al., 2019) to encode a sentence X into token-level representations h = {h1, .., hn}, where hi ∈ R^d is a d-dimensional vector corresponding to the i-th token in X. For each query p, self-matching is applied to calculate the position-attention at ∈ R^T between the token at p and each token at target position t, which compares the sentence representations against itself to collect context information (Tan et al., 2018). The produced position-aware representation ct ∈ R^{T×d} is an attention-weighted sentence vector ct = a⊤_t h. Finally, we concatenate ht and ct to generate position-aware and context-aware representations ut = [ht|ct].
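The following PyTorch sketch spells out this position-attentive step; the additive scoring network is a stand-in of ours (the exact self-matching score follows Tan et al. (2018) and is not reproduced here), and the O(T²) broadcasting is purely illustrative.

```python
import torch
import torch.nn as nn

class PositionAttention(nn.Module):
    """For a query position p, compute a_t over all positions j for every target
    position t, form c_t = a_t^T h, and return u_t = [h_t | c_t]."""

    def __init__(self, d: int):
        super().__init__()
        # stand-in additive scoring over (attended token, target token, query token)
        self.score = nn.Sequential(nn.Linear(3 * d, d), nn.Tanh(), nn.Linear(d, 1))

    def forward(self, h: torch.Tensor, p: int) -> torch.Tensor:
        T, d = h.shape                                  # h: (T, d) BERT outputs
        h_p = h[p].view(1, 1, d).expand(T, T, d)        # query token h_p
        h_t = h.view(T, 1, d).expand(T, T, d)           # target token h_t
        h_j = h.view(1, T, d).expand(T, T, d)           # attended token h_j
        s = self.score(torch.cat([h_j, h_t, h_p], dim=-1)).squeeze(-1)  # (T, T)
        a = torch.softmax(s, dim=-1)                    # a_t in R^T for each t
        c = a @ h                                       # c_t = a_t^T h, shape (T, d)
        return torch.cat([h, c], dim=-1)                # u_t = [h_t | c_t]
```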
CRF Decoder (Lafferty et al., 2001) For each position-aware representation ut, we first learn a linear transformation zt = W ut ∈ R^C to represent the tag scores for the t-th token, where C is the number of distinct tags. For an instance with labels y = {y1, ..., yn}, the decoding score s(z, y) is the sum of the transition score from tag yt to tag yt+1 plus the input score z_t^{yt}. The conditional probability p(y|z) is the softmax over s(z, y) for all possible label sequences y′. We maximize the log-likelihood of correct tag sequences during training: Lc = Σ log p(y|z).
## 4 Uncertainty-Aware Bootstrap Learning
Motivation One of the main challenges in bootstrap learning is to evaluate the "correctness" of a labeled instance. We consider this problem from an uncertainty perspective and assume instances with lower uncertainty are more likely to be correctly labeled. In this section, we first propose instancelevel data uncertainty which is used to filter noisy examples and build an initial subset. Then, we introduce our two-fold model uncertainties which helps iteratively mitigate DS effect and build up trustable examples during bootstrap learning.
## 4.1 Data Uncertainty
Since presenting examples in an easy-to-hard order at different training stages can benefit models (Platanios et al., 2019; Zhou et al., 2020), we propose data uncertainty as a way to quantify the "hardness" of an instance. To better estimate the data uncertainty, we use pre-trained language models (PLMs) to generate the tag probability for each token in a sequence.
Our intuition is that inputs with higher uncertainty are "harder" for a PLM to generate, as the PLM already encodes the regularities of the language. Accordingly, we propose two data uncertainties, which can be used individually or combined:
Winning Score (WS) The maximum softmax probability reflects data uncertainty of an input
(Hendrycks and Gimpel, 2017). Given an input instance I = {x1, ..., xn}, we define the data uncertainty u^d(I) as the negative averaged token classification winning score:
$$u^{d}({\mathcal{I}})=-{\frac{1}{n}}\sum_{t=1}^{n}\operatorname*{max}_{c\in[1,C]}P(y_{t}=c|x_{t})\quad(1)$$
Entropy Shannon entropy (Shannon, 1948) is widely used to reflect information uncertainty. We propose the data uncertainty u^d(I) as the averaged token classification entropy:

$$u^{d}({\mathcal{I}})=-{\frac{1}{n}}\sum_{t=1}^{n}\sum_{c=1}^{C}P(y_{t}=c|x_{t})\log P(y_{t}=c|x_{t}) \qquad (2)$$
We filter out examples with high uncertainty scores and build an initial subset with "simple" examples. At the early training stage, a model is not aware of what a decent distribution P(y|x) should be, thus data uncertainty facilitates it to converge fast by tuning on a fairly "simple" subset.
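A small sketch of the two instance-level scores is shown below; `token_probs` is assumed to hold the PLM's softmax outputs for the n tokens of one instance, with the winning-score variant following Eq. (1) and the entropy variant following Eq. (2).

```python
import torch

def data_uncertainty(token_probs: torch.Tensor, mode: str = "ws") -> float:
    """token_probs: (n, C) tag probabilities for one instance.
    mode="ws": negative averaged winning score (Eq. 1);
    mode="entropy": averaged token-level Shannon entropy (Eq. 2)."""
    if mode == "ws":
        return -token_probs.max(dim=-1).values.mean().item()
    probs = token_probs.clamp_min(1e-9)
    return -(probs * probs.log()).sum(dim=-1).mean().item()
```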
## 4.2 Model Uncertainty
In our bootstrap learning, we define model uncertainty, i.e., epistemic uncertainty (Kendall and Gal, 2017), to measure whether model parameters can best describe the data distribution following (Zhou et al., 2020). A small model uncertainty indicates the model is confident that the current training data has been well learned (Wang et al., 2019).
We adopt Monte Carlo Dropout (Gal and Ghahramani, 2016) to approximate Bayesian inference which captures inner-model parametric uncertainty.
Specifically, we perform K forward passes through our joint model. In each pass, part of the network neurons θ are randomly deactivated. Finally, we yield K samples of model parameters $\{\hat{\theta}_1, ..., \hat{\theta}_K\}$. We use the averaged token classification **Probability Variance** (PV) (Shelmanov et al., 2021) over all tags for instance I:

$$u^{m}(\theta)={\frac{1}{n}}\sum_{t=1}^{n}\sum_{c=1}^{C}\operatorname{Var}\left[P(y_{t}=c|x_{t},{\hat{\theta}}_{k})\right]_{k=1}^{K} \qquad (3)$$
where Var[.] is the variance of distribution over the K passes following the common settings in
(Dong et al., 2018; Xiao and Wang, 2019). Accordingly, the model is aware of its confidence over each instance and of how likely its label is noisy.
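A sketch of Eq. (3) with MC Dropout is given below: the joint model is kept in train mode so that dropout stays active, K stochastic passes are collected, and the class-wise variance is summed over tags and averaged over tokens; the assumption that `model(inputs)` returns token-level logits of shape (n, C) is ours.

```python
import torch

def probability_variance(model, inputs, K: int = 5) -> float:
    """Eq. (3): averaged token-classification probability variance over K passes."""
    model.train()                          # keep dropout active (MC Dropout)
    with torch.no_grad():
        samples = torch.stack(
            [model(inputs).softmax(dim=-1) for _ in range(K)]   # (K, n, C)
        )
    # variance over the K passes, summed over tags, averaged over tokens
    return samples.var(dim=0).sum(dim=-1).mean().item()
```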
## 4.3 Training Strategy
Uncertainty-Aware Loss Besides MC Dropout, which measures parametric uncertainty within a model, we also consider mitigating parametric uncertainty between models to stabilize the weights during training. Specifically, we use self-ensembling (He et al., 2020; Wang and Wang, 2022) to calculate a loss between two models with the same architecture to improve model robustness and reduce the effect of label noise on model performance.
## Algorithm 1 Bootstrap Learning
Input: Original dataset D = {(I^n, y^n)}_{n=1}^{N}, two joint models f1, f2 with parameters θ1, θ2;
1: Compute data uncertainty u^d(I) for each instance I in D;
2: Initial dataset C ← select data pairs (I^n, y^n) such that u^d(I) < τ^d from D;
3: for epoch e = 1, ... do
4:     Train f1, f2 on C using Eq. (5);
5:     Calculate model uncertainty u^m(θ1) on D;
6:     C ← select data pairs (I^n, y^n) such that u^m(I; θ1) < τ^m from D;
7: end for
We create another joint model with identical framework, e.g., architecture, loss functions, hyperparameters, and compute a self-ensemble loss Le to minimize the difference between two outputs from the two models regarding the same inputs:
$$\mathcal{L}_{e}=\sum \mathrm{KL}(f(\mathcal{I};\theta_{1}),f(\mathcal{I};\theta_{2})) \qquad (4)$$
where KL(.) is the Kullback-Leibler divergence between two probabilistic distributions, θ1, θ2 denote the parameters of first and second models. We formulate our final uncertainty-aware objective L
as the sum of CRF and self-ensemble loss:
$${\mathcal{L}}={\mathcal{L}}_{c}+\alpha{\mathcal{L}}_{e}$$
$$({\mathfrak{H}})$$
L = Lc + αLe (5)
where α denotes the weight of self-ensembling, and Lc means the token classification loss.
Bootstrap Learning Procedure To mitigate the DS effect on model performance, we propose a twofold bootstrap learning strategy (see Algorithm 1).
Specifically, we first apply data uncertainty to filter out "harder" examples and build a reliable initial training set M. Then, we iteratively feed examples to the model in an easy-to-hard order. In each training iteration, we regularize the joint model with the self-ensembling loss to reduce the impact of noisy labels on the model parameters. We then use the probability variance to select new confident training instances D′ that can be explained by the model as the next training inputs. The more certain the selected examples are, the more likely the model will learn beneficial information and converge faster. We repeat the above procedure until the F1 score on the validation set converges.
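Putting Algorithm 1 and the procedure above together, a high-level sketch of the loop could look as follows; `data_uncertainty`, `probability_variance`, and `train_one_epoch` are hypothetical helpers (the first two in the spirit of the earlier snippets), the instance objects with `token_probs` / `inputs` attributes are assumed, the thresholds 0.5 and 0.6 are the values reported in Section 5.1, and a fixed epoch budget stands in for the validation-F1 stopping criterion.

```python
def bootstrap_train(model1, model2, dataset, tau_d=0.5, tau_m=0.6, max_epochs=20):
    """Sketch of the two-fold bootstrap learning procedure (Algorithm 1)."""
    # Initial subset from instance-level data uncertainty (Eq. 1 or Eq. 2)
    subset = [ex for ex in dataset if data_uncertainty(ex.token_probs) < tau_d]
    for epoch in range(max_epochs):
        # CRF loss + alpha * self-ensembling KL between the two joint models, Eq. (5)
        train_one_epoch(model1, model2, subset)
        # Re-select trustable instances via probability variance, Eq. (3)
        subset = [ex for ex in dataset
                  if probability_variance(model1, ex.inputs) < tau_m]
    return model1
```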
## 5 Experiments

## 5.1 Setup
We evaluate the performance of UnBED on two datasets, NYT and Wiki-KBP. The NYT (Riedel et al., 2010) dataset collects news from New York Times and its training data is automatically labeled by DS. We use the revised test dataset (Jia et al.,
2019) that is manually annotated to ensure quality.
The Wiki-KBP (Ling and Weld, 2012) dataset collects articles from Wikipedia. Its training data is labeled by DS (Liu et al., 2017), and the test set is manually annotated (Ellis et al., 2013).
We compare UnBED with the following baselines: **ARNOR** (Jia et al., 2019), a pattern-based method to reduce noise for distantly-supervised triplet extraction. **PURE** (Zhong and Chen, 2021),
a pipeline approach that uses pre-trained BERT
entity model to first recognize entities and then employs a relation model to detect underlying relations. FAN (Hao et al., 2021), an adversarial method including a transformers encoder to reduce noise for distantly-supervised triplet extraction.
Evaluation We evaluate the extracted triplets for each sentence based on Precision (Prec.), Recall
(Rec.), and F1. A triplet {e1*, re, e*2} is marked correct if the relation type re, two entities e1, e2 are all correct. We build a validation set by randomly sampling 10% sentences from the test set.
Implementation Details We use Hugging Face bert-large-uncased (Devlin et al., 2019) pre-trained model as backbone. For ARNOR, the hidden vector size is set to 300. In regularization training, we find optimal parameters α as 1 for both datasets. We implement UnBED and all baselines in PyTorch, with Adam optimizer, initial learning rate 10−5, dropout rate 0.1, and batch size 8. For initial subset configuration, we choose data uncertainty threshold 0.5. For bootstrap learning, an empirical model uncertainty threshold is set to 0.6 with the best validation F1.
## 5.2 Overall Results
As shown in Table 1, UnBED significantly outperforms all baselines in precision and F1 metric. Specifically, UnBED achieves 8% F1 improvement on NYT (3% on Wiki-KBP) over denoising approaches—ARNOR and FAN. Our approach also outperforms baselines using pretrained transformers (PURE and FAN), showing that uncertainty-aware bootstrap learning effectively reduces the impact of noisy labels.
| Method | NYT Prec. | NYT Rec. | NYT F1 | Wiki-KBP Prec. | Wiki-KBP Rec. | Wiki-KBP F1 |
|--------|-----------|----------|--------|----------------|----------------|--------------|
| ARNOR (Jia et al., 2019) | 0.588 | 0.614 | 0.600 | 0.402 | 0.471 | 0.434 |
| PURE (Zhong and Chen, 2021) | 0.536 | 0.664 | 0.593 | 0.395 | 0.433 | 0.413 |
| FAN (Hao et al., 2021) | 0.579 | 0.646 | 0.611 | 0.391 | 0.467 | 0.426 |
| UnBED-WS | **0.662** | 0.730 | 0.694 | **0.429** | 0.501 | **0.462** |
| UnBED-Entropy | 0.651 | **0.741** | 0.693 | 0.422 | **0.509** | 0.461 |
![4_image_0.png](4_image_0.png)
## 5.3 Further Analysis
We analyze the functionality of different components in Figure 2. We observe that both entropy-PV and vanilla-PV outperform the baseline (a joint model directly trained on the original DS dataset) in terms of F1 (5∼7% increase), demonstrating the effect of filtering noisy labels and selecting trustable instances using probability variance. Besides, self-ensembling further enhances the performance in the later training stage (2∼4 F1 increase),
proving that mitigating the inter-model uncertainty benefits model robustness against noisy labels.
## 6 Conclusions
In this paper, we propose a novel uncertainty-aware bootstrap learning framework for distantly-supervised joint extraction. Specifically, we define data uncertainty in general token classification to filter out highly error-prone instances and build an initial high-confidence subset, which is used to tune the joint extraction model for fast convergence. We then propose a two-fold bootstrap learning procedure which iteratively mitigates the DS impact on model robustness and selects new trustable training instances. Experimental results on two benchmark datasets show that UnBED significantly outperforms existing strong baselines.
## Limitations
In this work we propose an uncertainty-aware bootstrap learning framework for joint extraction.
Though it achieves state-of-the-art performance compared to other denoising techniques, UnBED requires large training resources considering the ensemble loss calculated between two large PLMs and the probability variance calculated on the PLM joint extraction model. In our future work, we hope to incorporate pruning techniques during training to improve the efficiency. We will also consider more complex relations between entities, e.g., relations beyond the sentence boundary, to fit in real-world information extraction scenarios.
## Acknowledgements
This work was supported by NSF CNS 2135625, CPS 2038727, CNS Career 1750263, and a Darpa Shell grant.
## References
Tao Chen, Haochen Shi, Liyuan Liu, Siliang Tang, Jian Shao, Zhigang Chen, and Yueting Zhuang. 2021. Empower distantly supervised relation extraction with collaborative adversarial training. In *Thirty-Fifth* AAAI Conference on Artificial Intelligence, AAAI
2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 12675–12682. AAAI Press.
Dai Dai, Xinyan Xiao, Yajuan Lyu, Shan Dou, Qiaoqiao She, and Haifeng Wang. 2019. Joint extraction of entities and overlapping relations using positionattentive sequence labeling. In *The Thirty-Third* AAAI Conference on Artificial Intelligence, AAAI
2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii,
USA, January 27 - February 1, 2019, pages 6300–
6308. AAAI Press.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Li Dong, Chris Quirk, and Mirella Lapata. 2018. Confidence modeling for neural semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 743–753, Melbourne, Australia.
Association for Computational Linguistics.
Joe Ellis, Jeremy Getman, Justin Mott, Xuansong Li, Kira Griffitt, Stephanie M. Strassel, and Jonathan Wright. 2013. Linguistic resources for 2013 knowledge base population evaluations. In Proceedings of the Sixth Text Analysis Conference, TAC 2013, Gaithersburg, Maryland, USA, November 18-19, 2013. NIST.
Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In Proceedings of the 33nd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, volume 48 of JMLR Workshop and Conference Proceedings, pages 1050–1059. JMLR.org.
Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 1321–1330. PMLR.
Pankaj Gupta, Hinrich Schütze, and Bernt Andrassy.
2016. Table filling multi-task recurrent neural network for joint entity and relation extraction. In *Proceedings of COLING 2016, the 26th International* Conference on Computational Linguistics: Technical Papers, pages 2537–2547, Osaka, Japan. The COLING 2016 Organizing Committee.
Kailong Hao, Botao Yu, and Wei Hu. 2021. Knowing false negatives: An adversarial training method for distantly supervised relation extraction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9661–9672, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jianfeng He, Xuchao Zhang, Shuo Lei, Zhiqian Chen, Fanglan Chen, Abdulaziz Alhamadani, Bei Xiao, and ChangTien Lu. 2020. Towards more accurate uncertainty estimation in text classification. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8362–8372, Online. Association for Computational Linguistics.
Dan Hendrycks and Kevin Gimpel. 2017. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
Xuming Hu, Chenwei Zhang, Yawen Yang, Xiaohe Li, Li Lin, Lijie Wen, and Philip S. Yu. 2021. Gradient imitation reinforcement learning for low resource relation extraction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 2737–2746. Association for Computational Linguistics.
Wei Jia, Dai Dai, Xinyan Xiao, and Hua Wu. 2019.
ARNOR: Attention regularization based noise reduction for distant supervision relation classification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1399–
1408, Florence, Italy. Association for Computational Linguistics.
Alex Kendall and Yarin Gal. 2017. What uncertainties do we need in bayesian deep learning for computer vision? In *Advances in Neural Information Processing Systems 30: Annual Conference on Neural* Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5574–5584.
Aaron Klein, Stefan Falkner, Jost Tobias Springenberg, and Frank Hutter. 2017. Learning curve prediction with bayesian neural networks. In *5th International* Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields:
Probabilistic models for segmenting and labeling sequence data. In *Proceedings of the Eighteenth International Conference on Machine Learning*, ICML
'01, page 282–289, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
Shuyang Li, Yufei Li, Jianmo Ni, and Julian McAuley.
2022. SHARE: a system for hierarchical assistive recipe editing. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11077–11090, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Xiao Ling and Daniel S. Weld. 2012. Fine-grained entity recognition. In *Proceedings of the TwentySixth AAAI Conference on Artificial Intelligence, July* 22-26, 2012, Toronto, Ontario, Canada. AAAI Press.
Liyuan Liu, Xiang Ren, Qi Zhu, Shi Zhi, Huan Gui, Heng Ji, and Jiawei Han. 2017. Heterogeneous supervision for relation extraction: A representation learning approach. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 46–56, Copenhagen, Denmark. Association for Computational Linguistics.
Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In *Proceedings of the* Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1003–1011, Suntec, Singapore. Association for Computational Linguistics.
Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using lstms on sequences and tree structures. In *Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics,*
ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Linguistics.
Farhad Nooralahzadeh, Jan Tore Lønning, and Lilja Øvrelid. 2019. Reinforcement-based denoising of distantly supervised NER with partial annotation. In *Proceedings of the 2nd Workshop on* Deep Learning Approaches for Low-Resource NLP,
DeepLo@EMNLP-IJCNLP 2019, Hong Kong, China, November 3, 2019, pages 225–233. Association for Computational Linguistics.
Emmanouil Antonios Platanios, Otilia Stretcu, Graham Neubig, Barnabás Póczos, and Tom M. Mitchell.
2019. Competence-based curriculum learning for neural machine translation. In *Proceedings of the* 2019 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1
(Long and Short Papers), pages 1162–1172. Association for Computational Linguistics.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Sebastian Riedel, Limin Yao, and Andrew McCallum.
2010. Modeling relations and their mentions without labeled text. In Machine Learning and Knowledge Discovery in Databases, European Conference, ECML PKDD 2010, Barcelona, Spain, September 20-24, 2010, Proceedings, Part III, volume 6323 of Lecture Notes in Computer Science, pages 148–163.
Springer.
Yuming Shang, Heyan Huang, Xin Sun, Wei Wei, and Xian-Ling Mao. 2022. A pattern-aware self-attention network for distant supervised relation extraction. Inf.
Sci., 584:269–279.
Claude E. Shannon. 1948. A mathematical theory of communication. *Bell Syst. Tech. J.*, 27(3):379–423.
Artem Shelmanov, Evgenii Tsymbalov, Dmitri Puzyrev, Kirill Fedyanin, Alexander Panchenko, and Maxim Panov. 2021. How certain is your Transformer? In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1833–1840, Online.
Association for Computational Linguistics.
Zhixing Tan, Mingxuan Wang, Jun Xie, Yidong Chen, and Xiaodong Shi. 2018. Deep semantic role labeling with self-attention. In *Proceedings of the ThirtySecond AAAI Conference on Artificial Intelligence* and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI'18/IAAI'18/EAAI'18. AAAI Press.
Hongjun Wang and Yisen Wang. 2022. Self-ensemble adversarial training for improved robustness. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net.
Shuo Wang, Yang Liu, Chao Wang, Huanbo Luan, and Maosong Sun. 2019. Improving back-translation with uncertainty-based confidence estimation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 791–802.
Association for Computational Linguistics.
Yijun Xiao and William Yang Wang. 2019. Quantifying uncertainties in natural language processing tasks. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI
2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 7322–7329. AAAI Press.
Bowen Yu, Zhenyu Zhang, Xiaobo Shu, Tingwen Liu, Yubin Wang, Bin Wang, and Sujian Li. 2020. Joint extraction of entities and relations based on a novel decomposition strategy. In ECAI 2020 - 24th European Conference on Artificial Intelligence, 29 August8 September 2020, Santiago de Compostela, Spain, August 29 - September 8, 2020 - Including 10th Conference on Prestigious Applications of Artificial Intelligence (PAIS 2020), volume 325 of Frontiers in Artificial Intelligence and Applications, pages 2282–
2289. IOS Press.
Suncong Zheng, Feng Wang, Hongyun Bao, Yuexing Hao, Peng Zhou, and Bo Xu. 2017. Joint extraction of entities and relations based on a novel tagging scheme. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1227–1236, Vancouver, Canada. Association for Computational Linguistics.
Zexuan Zhong and Danqi Chen. 2021. A frustratingly easy approach for entity and relation extraction. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 50–61, Online. Association for Computational Linguistics.
Yikai Zhou, Baosong Yang, Derek F. Wong, Yu Wan, and Lidia S. Chao. 2020. Uncertainty-aware curriculum learning for neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6934–
6944, Online. Association for Computational Linguistics.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 7
✗ A2. Did you discuss any potential risks of your work?
We study open-domain information extraction for research in this area.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?**
Section 5
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
No response.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 5

C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
No response.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 5

## D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
chen-etal-2023-text | Text-to-{SQL} Error Correction with Language Models of Code | https://aclanthology.org/2023.acl-short.117 | Despite recent progress in text-to-SQL parsing, current semantic parsers are still not accurate enough for practical use. In this paper, we investigate how to build automatic text-to-SQL error correction models. Noticing that token-level edits are out of context and sometimes ambiguous, we propose building clause-level edit models instead. Besides, while most language models of code are not specifically pre-trained for SQL, they know common data structures and their operations in programming languages such as Python. Thus, we propose a novel representation for SQL queries and their edits that adheres more closely to the pre-training corpora of language models of code. Our error correction model improves the exact set match accuracy of different parsers by 2.4-6.5 and obtains up to 4.3 point absolute improvement over two strong baselines. |

# Text-To-Sql Error Correction With Language Models Of Code
Ziru Chen1, Shijie Chen1, Michael White1**, Raymond Mooney**2 Ali Payani3, Jayanth Srinivasa3, Yu Su1**, Huan Sun**1 1The Ohio State University 2The University of Texas at Austin 3Cisco Research
{chen.8336, chen.10216, white.1240, su.809, sun.397}@osu.edu [email protected] {apayani, jasriniv}@cisco
## Abstract
Despite recent progress in text-to-SQL parsing, current semantic parsers are still not accurate enough for practical use. In this paper, we investigate how to build automatic text-to-SQL
error correction models. Noticing that token-level edits are out of context and sometimes ambiguous, we propose building clause-level edit models instead. Besides, while most language models of code are not specifically pre-trained for SQL, they know common data structures and their operations in programming languages such as Python. Thus, we propose a novel representation for SQL queries and their edits that adheres more closely to the pre-training corpora of language models of code. Our error correction model improves the exact set match accuracy of different parsers by 2.4–6.5 and obtains up to 4.3 point absolute improvement over two strong baselines.1
## 1 Introduction
Text-to-SQL parsing is a classic semantic parsing task that finds wide applications (Zelle and Mooney, 1996; Tang and Mooney, 2000). Since the release of Spider (Yu et al., 2018), a cross-database text-to-SQL benchmark, many semantic parsers with decent performance have been developed (Lin et al., 2020; Wang et al., 2020; Deng et al., 2021; Rubin and Berant, 2021; Scholak et al., 2021).
Nonetheless, state-of-the-art semantic parsers are still not accurate enough. As a result, their users need to constantly correct wrongly predicted SQL
queries, which can be as time-consuming and error-prone as writing a SQL query from scratch (Jorgensen and Shepperd, 2007; Weiss et al., 2007).
Therefore, in this paper, we study the problem of automatic text-to-SQL error correction to better assist users in querying complex databases.
1 Our code and data are available at https://github.com/OSU-NLP-Group/Auto-SQL-Correction.

We first highlight that it is essential to factor in the *compositional substructures* within SQL queries, such as abstract syntax trees (Yin and Neubig, 2017; Guo et al., 2022) and data-flow graphs
(Guo et al., 2021), instead of treating code snippets as string sequences. Compared to individual tokens, substructures (e.g. SQL clauses) include more context of the entire program and are more semantically meaningful. Consequently, edit patterns of such substructures are more intuitive for humans to understand and easier for language models to learn. Moreover, while the pre-training corpora for language models of code, such as CodeT5 (Wang et al., 2021), do not include many SQL queries based on their documentation, they naturally contain *abundant examples of common data structures* like dictionaries. Therefore, we hypothesize that transforming unfamiliar SQL queries into familiar data structures can help language models of code better perform structural editing of SQL queries.
Based on these observations, we develop our error correction model and make two contributions.
First, we propose considering SQL clauses instead of tokens as basic semantic units for editing. Using a context-free grammar, we can decompose a SQL query and identify its clauses by traversing its abstract syntax tree. Second, we propose a new representation of SQL queries and their edits that adheres more closely to common code pre-training corpora, including CodeSearchNet (Husain et al.,
2020), and makes the structures of a SQL query more explicit. With a decomposed SQL query, we pair each clause with its SQL keyword and represent the entire query as a Python dictionary. Then, we format edits on a wrong SQL query as a program that modifies data of the query's corresponding dictionary. Unlike token-level edits in existing work (Zhang et al., 2023), such dictionary operations define all edits unambiguously and can be directly executed with a Python interpreter.
Through comprehensive experiments with different representations, we show that: (1) our proposed representation has the lowest zero-shot perplexity
| Query | Query Representation | Edit | Edit Representation |
|---|---|---|---|
| SQL | select tweets.text from tweets order by tweets.text | Token-Level | <ReplaceOld> tweets.text <ReplaceNew> tweets.createdate <ReplaceEnd> |
| | | Clause-Level | <ReplaceOld> order by tweets.text <ReplaceNew> order by tweets.createdate <ReplaceEnd> |
| PyDict | sql = { "select": "select tweets.text", "from": "from tweets", "orderBy": "order by tweets.text" } | Clause-Level | <ReplaceOld> "orderBy": "order by tweets.text" <ReplaceNew> "orderBy": "order by tweets.createdate" <ReplaceEnd> |
| | | Program | sql["orderBy"] = "order by tweets.createdate" |
with CodeT5; (2) simply changing token-level edits to clause-level edits can effectively improve the performance of our models; and (3) our method improves the exact set match accuracy of different parsers by 2.4–6.5 and obtains up to 4.3 point absolute improvement over two strong baselines.
## 2 Text-To-Sql Error Correction
Given a natural language utterance u, a database schema s, and a wrong SQL query q− produced by an existing parser, our goal is to develop an error correction model that predicts a sequence of edit actions e and the correct query q+. Following previous work (Zhang et al., 2023), we formulate our task as sequence-to-sequence generation:
$$p(y \mid x) = \prod_{t=1}^{T} p(y_t \mid x, y_{1:t-1}) \qquad (1)$$
where x = [u; s; q−] is the concatenation of the given inputs and y = [e; q+] is the concatenation of all edit actions and the resulting correct query. In this section, we study different representations of SQL queries (Section 2.1) and edits (Section 2.2)
to better leverage language models of code.
## 2.1 Query Representation
We consider two representations for a predicted query: (1) the original SQL format and (2) our proposed PyDict (Python Dictionary) representation.
To prepare for editing, we disambiguate each SQL
query following Rubin and Berant (2021), including lower-casing non-value tokens, resolving table references, and formatting punctuation. This preprocessing normalizes SQL queries predicted by different base parsers and the gold annotations into the same format. To build our PyDict representation, we parse a SQL query into its abstract syntax tree (AST) with Spider's context-free grammar. We use depth-first search to traverse through the AST,
find any nested substructures, and construct the dictionary representation bottom-up. Table 1 shows the "SQL" and "PyDict" representations of a SQL
query (more details in Appendix A).
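A simplified sketch of this construction is shown below. The actual implementation parses queries with Spider's context-free grammar and walks the AST bottom-up; here clause boundaries are approximated with keyword matching and nested subqueries are ignored, so everything beyond the clause keys themselves is an assumption.

```python
import re

# Hedged sketch: build the PyDict representation of a *flat* SQL query by keyword
# matching instead of a full CFG/AST parse.
CLAUSE_KEYWORDS = [
    ("select", r"\bselect\b"), ("from", r"\bfrom\b"), ("where", r"\bwhere\b"),
    ("groupBy", r"\bgroup by\b"), ("having", r"\bhaving\b"),
    ("orderBy", r"\border by\b"), ("limit", r"\blimit\b"),
]

def sql_to_pydict(query):
    query = query.strip().lower()
    starts = []
    for key, pattern in CLAUSE_KEYWORDS:
        match = re.search(pattern, query)
        if match:
            starts.append((match.start(), key))
    starts.sort()
    result = {}
    for (start, key), nxt in zip(starts, starts[1:] + [(len(query), None)]):
        result[key] = query[start:nxt[0]].strip()
    return result

print(sql_to_pydict("select tweets.text from tweets order by tweets.text"))
# {'select': 'select tweets.text', 'from': 'from tweets', 'orderBy': 'order by tweets.text'}
```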
## 2.2 Edit Representation
We first follow Zhang et al. (2023) to use token-level edit representation with special tokens (Table 1), which have unique entries in the tokenizer and the model's embedding layer to describe *Replace, Insert*, and *Delete* edit actions (more examples in Appendix F). However, we realize this representation can sometimes be ambiguous. As shown in Table 1, the span "tweets.text" appears twice in the SQL query. This repetition would confuse the error correction model about which span to replace when generating the corrected query. Also, the ambiguity makes it difficult to implement rules and directly carry out the edit actions on the wrong query. Hence, we change the token-level edit representation to clause-level, which includes more context of the query to make different edits more distinguishable. In our experiments (Section 4.1),
we demonstrate that this simple modification is already effective. Our program representation further improves the performance because it is more similar to the code pre-training corpora and eliminates the need to learn special tokens' representations.
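To make the executability of program edits concrete, the sketch below applies a predicted edit program to the dictionary with Python's exec and serializes the result back into a SQL string; the canonical clause order and the helper names are assumptions for illustration, not the paper's exact code.

```python
# Hedged sketch: execute a program-style edit on the PyDict representation and
# rebuild the corrected SQL string.
CLAUSE_ORDER = ["select", "from", "where", "groupBy", "having", "orderBy", "limit"]

def apply_program_edit(sql, edit_program):
    namespace = {"sql": dict(sql)}   # copy so the wrong parse is not mutated in place
    exec(edit_program, namespace)    # the edit is plain Python, e.g. sql["orderBy"] = "..."
    return namespace["sql"]

def pydict_to_sql(sql):
    return " ".join(sql[key] for key in CLAUSE_ORDER if key in sql)

wrong = {"select": "select tweets.text", "from": "from tweets",
         "orderBy": "order by tweets.text"}
fixed = apply_program_edit(wrong, 'sql["orderBy"] = "order by tweets.createdate"')
print(pydict_to_sql(fixed))
# select tweets.text from tweets order by tweets.createdate
```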
## 3 Experimental Setup

## 3.1 Data Synthesis For Sql Error Correction
To train a text-to-SQL error correction model, we need to collect a set of wrong SQL parses that reflects a realistic distribution of errors (Section 4.2)
as our training data. We synthesize this dataset by
| | CodeT5 | BRIDGEv2 | SmBoP |
|------------------|------------|---------|--------|
| # of Train | 47,020 | 24,776 | 20,083 |
| # of Dev | 448 | 448 | 448 |
| # of Test | 430 | 392 | 310 |
| Avg. Train Edits | 2.34 | 3.11 | 2.72 |
| Avg. Dev Edits | 2.70 | 3.29 | 3.31 |
| Avg. Test Edits | 1.84 | 1.51 | 1.47 |
Table 2: Summary of data statistics.
performing 5-fold cross-validation on each parser, which approximates the actual evaluation setting.
Following the evaluation setup in Yu et al.
(2018), we split Spider's training set into five roughly equal subsets by different databases. For each cross-validation fold, we train a text-to-SQL
parser (Section 3.2) on four subsets and evaluate it on the remaining one. At inference time, we perform beam search with size 20 for each example and collect grammatical and executable parses in the beam.2If a SQL parse is not an exact set match or execution match to the gold annotation, we label it wrong and include it in our training set for error correction. Having synthesized our training dataset, we randomly sample 8 databases and their associated questions to construct a held-out development set. For development set examples, we only keep incorrect SQL parses with the highest beam confidence. For our error correction test set, we train each parser on the *full* Spider training set and evaluate it on the original Spider's development set without modifications. We similarly keep SQL parses with exact match or execution match errors.
Table 2 summarizes the statistics of our data.
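A structural sketch of this collection loop is given below; `parser.beam_search`, `is_valid_sql`, `exact_set_match`, and `execution_match` are hypothetical placeholders standing in for the base parser's decoder and Spider's official evaluation scripts.

```python
# Hedged sketch of harvesting wrong parses from beam search as error-correction
# training data; all helpers are hypothetical placeholders, not real APIs.
def collect_wrong_parses(parser, examples, beam_size=20):
    pairs = []
    for ex in examples:  # each example carries a question, a database schema, and a gold query
        for hyp in parser.beam_search(ex.question, ex.schema, beam_size=beam_size):
            if not is_valid_sql(hyp, ex.schema):
                continue  # keep only grammatical and executable parses
            if exact_set_match(hyp, ex.gold_sql) or execution_match(hyp, ex.gold_sql):
                continue  # correct parses are not errors
            pairs.append({"question": ex.question, "schema": ex.schema,
                          "wrong_sql": hyp, "gold_sql": ex.gold_sql})
    return pairs
```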
## 3.2 Models
Text-to-SQL base parsers. We choose three text-to-SQL parsers with different decoding strategies and levels of performance (Table 3). We elaborate on our selection criteria in Appendix B.
- **CodeT5** (Wang et al., 2021): We fine-tune CodeT5-base following Xie et al. (2022).
This parser represents those using beam search decoding and having a lower accuracy.
- **BRIDGEv2** (Lin et al., 2020): A representative parser with constrained decoding and achieving a medium-level accuracy.
- **SmBoP** (Rubin and Berant, 2021): A representative parser with bottom-up decoding and achieving higher accuracy.
Error correction models. We use two language models of code in all our experiments:
- **CoditT5** (Zhang et al., 2023): A language model pre-trained for code editing tasks by injecting noises to code snippets in CodeSearchNet (Husain et al., 2020) and then denoising with token-level edit representations.
- **CodeT5** (Wang et al., 2021): A language model pre-trained for general code understanding and generation with four different pre-training objectives.
We compare the existing SQL+Token-Level representation with our proposed ones: SQL+Clause-Level, PyDict+Clause-Level, and PyDict+Program on CodeT5, and the first three on CoditT5.3 Implementation details are in Appendix C.
## 3.3 Evaluation
We use the increase in Exact Set Match (EM) and Execution Match (EX) accuracy on our error correction test set to measure each model's performance. Because CoditT5's experiments assume the input program has at least one error, we keep this assumption for fair comparisons. To construct a test set satisfying this assumption, we have to compare parser-generated SQL queries with gold annotations (Section 3.1). Thus, we use the Spider development set as our test set and split the Spider training set to build a held-out development set (Table 2) to select model checkpoints during training. We also include results on our held-out development set in the appendix (Table E.1).
## 4 Results And Analysis

## 4.1 Main Results
We summarize our main results in this section. To ensure robustness, we repeat all experiments with 3 different random seeds and report the average performances with standard deviations. Our model can also be used in an interactive framework that allows users to select edit actions from the top-k beam candidates. We include more experiments with simulated user interactions in Appendix E.
Our representation's perplexity is the smallest.
We validate that our PyDict+Program representation adheres more closely to the code pre-training corpora by measuring its zero-shot perplexity on CodeT5 using our development set (Section 3.1).
3We did not use CoditT5 for PyDict+Program because it was pre-trained on token-level edit representations. Its decoder may be specialized in generating edits instead of programs.
| Models | Query | Edit | CodeT5 EM | CodeT5 EX | BRIDGEv2 EM | BRIDGEv2 EX | SmBoP EM | SmBoP EX |
|---|---|---|---|---|---|---|---|---|
| No Edit | N/A | N/A | 62.7 (-) | 63.6 (-) | 70.1 (-) | 68.2 (-) | 74.6 (-) | 75.3 (-) |
| CoditT5 | SQL | Token-Level | 64.3 (0.1) | 64.4 (0.2) | 65.4 (0.5) | 66.6 (0.3) | 74.2 (0.4) | 75.3 (0.1) |
| CoditT5 | SQL | Clause-Level | 67.0 (0.4) | 65.4 (0.5) | 71.3 (0.5) | 70.9 (0.2) | 76.3 (0.0) | 77.2 (0.3) |
| CoditT5 | PyDict | Clause-Level | 67.1 (0.2) | 66.5 (0.4) | 70.6 (0.8) | 70.8 (0.6) | 76.3 (0.3) | 77.0 (0.3) |
| CodeT5 | SQL | Token-Level | 66.7 (0.9) | 65.9 (0.5) | 68.2 (0.4) | 69.4 (0.8) | 75.6 (0.4) | 76.5 (0.6) |
| CodeT5 | SQL | Clause-Level | 68.3 (0.3) | 68.2+ (0.6) | 71.8+ (0.4) | 72.5+ (0.2) | 76.7 (0.6) | 77.4 (0.3) |
| CodeT5 | PyDict | Clause-Level | 66.6 (0.8) | 67.1 (0.8) | 72.0+ (0.3) | 72.4+ (0.2) | 77.3 (0.6) | 77.8 (0.2) |
| CodeT5∗ | PyDict | Program | 69.2+ (0.4) | 68.4+ (0.2) | 72.5+ (0.4) | 73.1+ (0.2) | 77.3 (0.4) | 77.6 (0.6) |
| CodeT5 | PyDict | Program | 69.0+ (0.2) | 68.2+ (0.1) | 72.5+ (0.3) | 73.0+ (0.6) | 78.0+ (0.3) | 78.5+ (0.3) |
![3_image_0.png](3_image_0.png)
As shown in Figure 1, by representing data in PyDict, we can reduce the perplexity of CodeT5 by 2 orders of magnitude. After augmenting it with our program representation, we further reduce the zero-shot perplexity of CodeT5 to only 5.96 × 10², 3 orders of magnitude less than that of the SQL+Token-Level representation (1.26 × 10⁵).
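One way to compute such a zero-shot perplexity is sketched below, treating perplexity as the exponentiated mean token-level cross-entropy of the target representation given the model input; how this quantity is aggregated over the development set is an assumption.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hedged sketch: zero-shot perplexity of a target representation under CodeT5.
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("Salesforce/codet5-base")
model.eval()

def zero_shot_perplexity(source, target):
    inputs = tokenizer(source, return_tensors="pt", truncation=True)
    labels = tokenizer(target, return_tensors="pt", truncation=True).input_ids
    with torch.no_grad():
        loss = model(**inputs, labels=labels).loss  # mean cross-entropy per target token
    return torch.exp(loss).item()
```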
Clause-level editing is more effective, especially when represented in PyDict+Program. Since CodeT5 consistently outperforms CoditT5 with the same representations, we focus on comparisons among CodeT5 variations. As shown in Table 3, compared to CodeT5-SQL+Token-Level, only CodeT5-PyDict+Program achieves statistically significant improvement on all three parsers, while clause-level models fail McNemar's significance test for some parsers. More concretely, it achieves up to 4.3 point more absolute improvement on EM accuracy (68.2 → 72.5; BRIDGEv2) and 3.7 point more absolute improvement on EX accuracy
(69.4 → 73.1; BRIDGEv2). Overall, CodeT5-
PyDict+Program can boost the parsers' EM accuracy by 2.4–6.5. Thus, both clause-level editing and PyDict+Program representation can better take advantage of language models of code.
## 4.2 Error Analysis
Additionally, we conduct an error analysis (Table 4) by sampling 100 wrong parses from all three parsers and classifying them into five categories:
- *Database Grounding*: A generated SQL
query has the correct structure, but some table/column names or entity values are wrong.
- *Incorrect Structure*: A generated SQL query has missing, wrong, or redundant structures.
- *Syntax & Grammar*: A generated SQL query violates the programming language's syntax.
- *False Negative*: A generated SQL query is semantically correct but not captured by evaluation metrics, or the gold annotation is wrong.
- *Other*: All other errors, such as wrong aggregation functions, besides the above categories.
Since the error distributions for each parser are similar, as an example, we discuss our findings based on the strongest parser, SmBoP:
Database grounding is the major type of error.
Among the 100 samples from SmBoP, we find that 54 of them have database grounding errors.
Particularly, SmBoP predicts wrong table/column names in 34 parses, inaccurate entity values in 9 parses, and incorrect JOIN relations in 11 parses.
Our CodeT5-PyDict+Program model can successfully fix 16 of the 54 erroneous parses, including 10 parses with wrong table/column names, 4 parses with inaccurate entity values, and 2 parses with incorrect JOIN relations. We hypothesize that
| Error Category | CodeT5 Resolved | CodeT5 Unresolved | CodeT5 All | BRIDGEv2 Resolved | BRIDGEv2 Unresolved | BRIDGEv2 All | SmBoP Resolved | SmBoP Unresolved | SmBoP All |
|---|---|---|---|---|---|---|---|---|---|
| Database Grounding | 15 | 51 | 66 | 14 | 48 | 62 | 16 | 38 | 54 |
| Incorrect Structure | 2 | 15 | 17 | 2 | 12 | 14 | 3 | 23 | 26 |
| Syntax & Grammar | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 4 | 5 |
| False Negative | 0 | 9 | 9 | 0 | 6 | 6 | 0 | 8 | 8 |
| Other | 1 | 7 | 8 | 2 | 16 | 18 | 1 | 6 | 7 |
database grounding is also a major category of errors in our synthesized training set, so our model has learned to resolve similar errors. Nevertheless, it still cannot correct the remaining 38 SQL
parses. We notice that our current representation for database schema is missing critical information, such as column data types and foreign key relations, for our error correction model to fix database grounding errors. Following our PyDict representation for SQL, we suggest designing a code representation for database schema that includes such information to tackle this issue in future work.
Structural errors are hard to edit automatically.
Besides database grounding, 26 of SmBoP's errors belong to another category, incorrect structure. These 26 samples contain 7 parses with incorrect SQL clauses and 19 parses with incorrect subqueries, but our CodeT5-PyDict+Program model only resolves 1 and 2 of them, respectively. We find that correcting such errors usually involves multiple edit steps, which motivates us to incorporate our model into an interactive framework in future work. As our experiments with simulated user interaction (Appendix E.2) show, when our model interacts with the simulated user to correct one clause at a time, it is able to fully correct more SQL parses. Thus, we deem interactive correction would maximize our model's utility in practice.
## 5 Related Work
Since the release of CodeBERT (Feng et al., 2020),
many language models of code have emerged for program understanding and generation (Ahmad et al., 2021; Chen et al., 2021; Guo et al., 2021; Wang et al., 2021; Guo et al., 2022; Fried et al.,
2023; Nijkamp et al., 2023). In addition to programrelated tasks, recent work shows they also excel at processing natural language structures. Using code as meaning representations (MRs), we can leverage language models of code in various tasks, such as commonsense reasoning (Madaan et al., 2022),
action planning (Singh et al., 2022), and event extraction (Wang et al., 2022). In fact, how to design MRs to reduce model learning difficulty is a salient research question in semantic parsing (Guo et al.,
2019; Gan et al., 2021b; Nie et al., 2022).
Our work demonstrates that program-related tasks themselves can also benefit from code-based MRs. Specifically, we apply such MRs to SQL
error correction, a variant of automatic program repair tasks (Tufano et al., 2019; Panthaplackel et al., 2022; Zhang et al., 2023). Although SQL is a code-based MR, it is much harder for models to learn compared to other MRs, such as FunQL and lambda calculus (Li et al., 2022). Consequently, without many SQL queries in their pre-training corpora, language models of code can underperform state-of-the-art text-to-SQL parsers. By converting SQL queries into Python dictionaries, we can explicitly represent their compositional substructures and define edit actions as programs, which reduces the learning difficulty for language models of code and yields better performance.
## 6 Conclusion And Future Work
This paper presents a study on developing a text-to-SQL error correction model with clause-level edits and different representations. Our comprehensive experiments demonstrate that *clauses are better semantic units than tokens* for editing SQL queries and *mimicking patterns in code pre-training corpora* helps better leverage language models of code. As a future direction, we plan to incorporate our model into interactive semantic parsing frameworks (Li et al., 2020; Yao et al., 2019, 2020; Zeng et al., 2020) by suggesting possible edits to users once a wrong parse is identified. In this way, users would more efficiently correct parse errors and get better assistance. We also plan to experiment with other language models of code (Fried et al., 2023; Nijkamp et al., 2023) and text-to-SQL datasets
(Zelle and Mooney, 1996; Gan et al., 2021a) to verify the generalizability of our method.
## Limitations
Actual applications of our model. Our work assumes that input SQL queries to our model are always wrong. This assumption is more feasible in an interactive semantic parsing framework, where the users are expected to decide whether a SQL
parse, accompanied by its natural language explanations (Elgohary et al., 2020, 2021; Narechania et al., 2021; Mo et al., 2022), has errors or not. Alternatively, to remove this assumption, it would be interesting for future work to study the performance of our error correction model in combination with an automatic error detection model (Chen et al.,
2023).
Experiments with more language models of code. We have only experimented with two language models of code, CoditT5 and CodeT5, both using T5-base (Raffel et al., 2020) as their underlying model architecture. It would be interesting to test how our conclusions generalize to other language models of code in the future. Based on the strong capabilities of large language models of code, such as Codex (Chen et al., 2021), InCoder (Fried et al.,
2023), and CodeGen (Nijkamp et al., 2023), we believe that these models can better exploit their knowledge about data structures and their operations in Python. These models may perform even better on Text-to-SQL error correction with our proposed representations.
## Acknowledgements
We would like to thank the anonymous reviewers and colleagues from the OSU NLP group for their thoughtful comments. This research was supported in part by a sponsored award from Cisco Research, NSF IIS-1815674, NSF CAREER \#1942980, NSF
OAC-2112606, and Ohio Supercomputer Center
(Center, 1987). The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. government.
The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notice herein. Ziru is also supported by The Ohio State University Graduate School through University Fellowship.
## References
Wasi Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2021. Unified pre-training for program understanding and generation. In *Proceedings* of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 2655–2668, Online. Association for Computational Linguistics.
Ohio Supercomputer Center. 1987. Ohio supercomputer center.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N.
Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating large language models trained on code.
Shijie Chen, Ziru Chen, Huan Sun, and Yu Su. 2023.
Error detection for text-to-sql semantic parsing.
Xiang Deng, Ahmed Hassan Awadallah, Christopher Meek, Oleksandr Polozov, Huan Sun, and Matthew Richardson. 2021. Structure-grounded pretraining for text-to-SQL. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 1337–1350, Online. Association for Computational Linguistics.
Ahmed Elgohary, Saghar Hosseini, and Ahmed Hassan Awadallah. 2020. Speak to your parser: Interactive text-to-SQL with natural language feedback. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2065–
2077, Online. Association for Computational Linguistics.
Ahmed Elgohary, Christopher Meek, Matthew Richardson, Adam Fourney, Gonzalo Ramos, and Ahmed Hassan Awadallah. 2021. NL-EDIT:
Correcting semantic parse errors through natural language interaction. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5599–5610, Online.
Association for Computational Linguistics.
Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. 2020. CodeBERT: A pre-trained model for programming and natural languages. In *Findings of the Association*
for Computational Linguistics: EMNLP 2020, pages 1536–1547, Online. Association for Computational Linguistics.
Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Scott Yih, Luke Zettlemoyer, and Mike Lewis. 2023. Incoder:
A generative model for code infilling and synthesis.
In *The Eleventh International Conference on Learning Representations*.
Yujian Gan, Xinyun Chen, Qiuping Huang, Matthew Purver, John R. Woodward, Jinxia Xie, and Pengsheng Huang. 2021a. Towards robustness of textto-SQL models against synonym substitution. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2505–
2515, Online. Association for Computational Linguistics.
Yujian Gan, Xinyun Chen, Jinxia Xie, Matthew Purver, John R. Woodward, John Drake, and Qiaofu Zhang.
2021b. Natural SQL: Making SQL easier to infer from natural language specifications. In Findings of the Association for Computational Linguistics:
EMNLP 2021, pages 2030–2042, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Daya Guo, Shuai Lu, Nan Duan, Yanlin Wang, Ming Zhou, and Jian Yin. 2022. UniXcoder: Unified crossmodal pre-training for code representation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7212–7225, Dublin, Ireland. Association for Computational Linguistics.
Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, Michele Tufano, Shao Kun Deng, Colin Clement, Dawn Drain, Neel Sundaresan, Jian Yin, Daxin Jiang, and Ming Zhou. 2021. GraphCodeBERT: Pre-training code representations with data flow. In International Conference on Learning Representations.
Jiaqi Guo, Zecheng Zhan, Yan Gao, Yan Xiao, JianGuang Lou, Ting Liu, and Dongmei Zhang. 2019. Towards complex text-to-SQL in cross-domain database with intermediate representation. In *Proceedings of* the 57th Annual Meeting of the Association for Computational Linguistics, pages 4524–4535, Florence, Italy. Association for Computational Linguistics.
Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. 2020. Codesearchnet challenge: Evaluating the state of semantic code search.
Magne Jorgensen and Martin Shepperd. 2007. A systematic review of software development cost estimation studies. *IEEE Transactions on Software Engineering*, 33(1):33–53.
Yuntao Li, Bei Chen, Qian Liu, Yan Gao, Jian-Guang Lou, Yan Zhang, and Dongmei Zhang. 2020. "what do you mean by that?" a parser-independent interactive approach for enhancing text-to-SQL. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6913–6922, Online. Association for Computational Linguistics.
Zhenwen Li, Jiaqi Guo, Qian Liu, Jian-Guang Lou, and Tao Xie. 2022. Exploring the secrets behind the learning difficulty of meaning representations for semantic parsing. In *Proceedings of the 2022 Conference on* Empirical Methods in Natural Language Processing, pages 3616–3625, Abu Dhabi, United Arab Emirates.
Association for Computational Linguistics.
Xi Victoria Lin, Richard Socher, and Caiming Xiong.
2020. Bridging textual and tabular data for crossdomain text-to-SQL semantic parsing. In *Findings* of the Association for Computational Linguistics:
EMNLP 2020, pages 4870–4888, Online. Association for Computational Linguistics.
Aman Madaan, Shuyan Zhou, Uri Alon, Yiming Yang, and Graham Neubig. 2022. Language models of code are few-shot commonsense learners. In *Proceedings* of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 1384–1403, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Quinn McNemar. 1947. Note on the sampling error of the difference between correlated proportions or percentages. In *Psychometrika*, volume 12, page 153–157.
Lingbo Mo, Ashley Lewis, Huan Sun, and Michael White. 2022. Towards transparent interactive semantic parsing via step-by-step correction. In *Findings of* the Association for Computational Linguistics: ACL
2022, pages 322–342, Dublin, Ireland. Association for Computational Linguistics.
Marius Mosbach, Maksym Andriushchenko, and Dietrich Klakow. 2021. On the stability of fine-tuning BERT: Misconceptions, explanations, and strong baselines. In *International Conference on Learning* Representations.
Arpit Narechania, Adam Fourney, Bongshin Lee, and Gonzalo Ramos. 2021. Diy: Assessing the correctness of natural language to sql systems. In 26th International Conference on Intelligent User Interfaces, IUI '21, page 597–607, New York, NY, USA.
Association for Computing Machinery.
Lunyiu Nie, Shulin Cao, Jiaxin Shi, Jiuding Sun, Qi Tian, Lei Hou, Juanzi Li, and Jidong Zhai. 2022. GraphQ IR: Unifying the semantic parsing of graph query languages with one intermediate representation.
In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*, pages 5848–5865, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. 2023. Codegen: An open large language model for code with multi-turn program synthesis. In The Eleventh International Conference on Learning Representations.
Sheena Panthaplackel, Milos Gligoric, Junyi Jessy Li, and Raymond Mooney. 2022. Using developer discussions to guide fixing bugs in software. In *Findings of the Association for Computational Linguistics:*
EMNLP 2022, pages 2292–2301, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67.
Ohad Rubin and Jonathan Berant. 2021. SmBoP: Semiautoregressive bottom-up semantic parsing. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 311–324, Online. Association for Computational Linguistics.
Torsten Scholak, Nathan Schucher, and Dzmitry Bahdanau. 2021. PICARD: Parsing incrementally for constrained auto-regressive decoding from language models. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 9895–9901, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Noam Shazeer and Mitchell Stern. 2018. Adafactor:
Adaptive learning rates with sublinear memory cost.
In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 4596–4604.
PMLR.
Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, and Animesh Garg. 2022. Progprompt: Generating situated robot task plans using large language models. In Workshop on Language and Robotics at CoRL 2022.
Lappoon R. Tang and Raymond J. Mooney. 2000. Automated construction of database interfaces: Integrating statistical and relational learning for semantic parsing. In *Proceedings of the 2000 Joint SIGDAT*
Conference on Empirical Methods in Natural Language Processing and Very Large Corpora: Held in Conjunction with the 38th Annual Meeting of the Association for Computational Linguistics - Volume 13, EMNLP '00, page 133–141, USA. Association for Computational Linguistics.
Michele Tufano, Jevgenija Pantiuchina, Cody Watson, Gabriele Bavota, and Denys Poshyvanyk. 2019. On learning meaningful code changes via neural machine translation. In Proceedings of the 41st International Conference on Software Engineering, ICSE '19, page 25–36. IEEE Press.
Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and Matthew Richardson. 2020. RAT-SQL:
Relation-aware schema encoding and linking for textto-SQL parsers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7567–7578, Online. Association for Computational Linguistics.
Xingyao Wang, Sha Li, and Heng Ji. 2022. Code4struct:
Code generation for few-shot structured prediction from natural language.
Yue Wang, Weishi Wang, Shafiq Joty, and Steven C.H.
Hoi. 2021. CodeT5: Identifier-aware unified pretrained encoder-decoder models for code understanding and generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8696–8708, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cathrin Weiss, Rahul Premraj, Thomas Zimmermann, and Andreas Zeller. 2007. How long will it take to fix this bug? In *Fourth International Workshop on Mining Software Repositories (MSR'07:ICSE Workshops* 2007), pages 1–1.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I. Wang, Victor Zhong, Bailin Wang, Chengzu Li, Connor Boyle, Ansong Ni, Ziyu Yao, Dragomir Radev, Caiming Xiong, Lingpeng Kong, Rui Zhang, Noah A. Smith, Luke Zettlemoyer, and Tao Yu. 2022. UnifiedSKG:
Unifying and multi-tasking structured knowledge grounding with text-to-text language models. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*, pages 602–631, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Ziyu Yao, Yu Su, Huan Sun, and Wen-tau Yih. 2019.
Model-based interactive semantic parsing: A unified framework and a text-to-SQL case study. In *Proceedings of the 2019 Conference on Empirical Methods* in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5447–5458, Hong Kong, China. Association for Computational Linguistics.
Ziyu Yao, Yiqi Tang, Wen-tau Yih, Huan Sun, and Yu Su. 2020. An imitation game for learning semantic parsers from user interaction. In *Proceedings of the 2020 Conference on Empirical Methods* in Natural Language Processing (EMNLP), pages 6883–6902, Online. Association for Computational Linguistics.
Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation.
In *Proceedings of the 55th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 440–450, Vancouver, Canada.
Association for Computational Linguistics.
Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. In *Proceedings of the 2018* Conference on Empirical Methods in Natural Language Processing, pages 3911–3921, Brussels, Belgium. Association for Computational Linguistics.
John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In *Proceedings of the Thirteenth National Conference on Artificial Intelligence - Volume* 2, AAAI'96, page 1050–1055. AAAI Press.
Jichuan Zeng, Xi Victoria Lin, Steven C.H. Hoi, Richard Socher, Caiming Xiong, Michael Lyu, and Irwin King. 2020. Photon: A robust cross-domain textto-SQL system. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 204–214, Online. Association for Computational Linguistics.
Jiyang Zhang, Sheena Panthaplackel, Pengyu Nie, Junyi Jessy Li, and Milos Gligoric. 2023. Coditt5:
Pretraining for source code and natural language editing. In *Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering*, ASE '22, New York, NY, USA. Association for Computing Machinery.
## Appendices
We provide more details omitted in the main text as follows:
- Appendix A: SQL PyDict Representation
- Appendix B: Text-to-SQL Parser Selection
- Appendix C: Implementation Details
- Appendix D: Statistical Significance Test
- Appendix E: Additional Results
- Appendix F: More Representation Examples
## A Sql Pydict Representation
We implement the transformation from any SQL
query to our PyDict representation in three steps
(Section 2.1). First, we use context-free grammar to parse a SQL query and obtain its abstract syntax tree (AST). The AST naturally contains a SQL
decomposition where each clause has its unique subtree. In addition, if a clause contains a nested query, it would be represented as another independent subtree, which is a child of the root node in the clause's AST subtree. With these substructures explicitly represented, we use depth-first search to traverse through the AST to build our PyDict representation bottom-up. In other words, if a clause contains a subquery, we process the subquery tree as an independent SQL AST and build a dictionary for it. Then, we combine it with other substructures of the clause with different dictionary keys. For example, in Table F.1, we first build the dictionary for
"subquery0" and assign this identifier as the key. In the main "clause," we replace the subquery's corresponding span with this identifier. Finally, we use another dictionary to wrap the main "clause" and
"subquery0" together as the final representation of the "where" clause. We repeat this procedure for each clause to incrementally add (key, value) pairs to the dictionary and "store" it to the variable sql, which we refer to in program edit representations.
## B Text-to-SQL Parser Selection
We choose existing text-to-SQL parsers in our experiments according to two principles: the parsers predict database entity values, and they cover different decoding strategies, including grammar-based
(BRIDGEv2), bottom-up (SmBoP), and token-based (CodeT5). We did not include parsers using top-down decoders because they usually cannot predict entity values in conditional statements, such as RAT-SQL (Wang et al., 2020). Instead, we include BRIDGEv2 because its decoding method mimics the left-to-right CFG derivation of a program, and it uses SQL syntax-based constraints to prevent grammatical errors. In recent work, such decoders, also used in PICARD (Scholak et al., 2021), are more popular than top-down decoders.
## C Implementation Details
Our models (Section 3.2) are implemented in PyTorch (Paszke et al., 2019) using Huggingface
(Wolf et al., 2020) and trained on a single NVIDIA
RTX A6000 GPU (48GB). We use Adafactor
(Shazeer and Stern, 2018) to train all our models with the same hyperparameters adapted from Mosbach et al. (2021):
- Learning rate: 3e-5
- Batch size: 16
- Epochs: 10
- Scheduler: Linear decay with 10% warmup
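A minimal sketch of this setup with Huggingface's `Adafactor` and a linear-warmup scheduler is shown below; the helper name and the `warmup_ratio` argument are illustrative, not the exact training script.

```python
from transformers import Adafactor, get_linear_schedule_with_warmup

def make_optimizer_and_scheduler(model, num_training_steps, lr=3e-5, warmup_ratio=0.1):
    # Adafactor with a fixed learning rate (relative-step updates disabled).
    optimizer = Adafactor(
        model.parameters(),
        lr=lr,
        relative_step=False,
        scale_parameter=False,
        warmup_init=False,
    )
    # Linear decay with 10% warmup.
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=int(warmup_ratio * num_training_steps),
        num_training_steps=num_training_steps,
    )
    return optimizer, scheduler
```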
## D Statistical Significance Test
To demonstrate the effectiveness of our three clause-level edit representations (Section 4.1), we perform McNemar's Test (McNemar, 1947) to measure the statistical significance of their results in comparison to CodeT5-SQL+Token-Level. For each significance test between two models, we use the median results among our three runs to calculate the comparison matrix. Then, we compute the p-values using statsmodels (https://www.statsmodels.org/dev/generated/statsmodels.stats.contingency_tables.mcnemar.html). When p < 0.05, we reject the null hypothesis. In other words, we consider the accuracy improvement statistically significant when p < 0.05.
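As an illustration, the test can be computed as follows, assuming per-example correctness indicators for the two systems being compared (the helper name is ours):

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def mcnemar_p_value(correct_a, correct_b):
    """correct_a, correct_b: binary arrays of per-example correctness."""
    a = np.asarray(correct_a, dtype=bool)
    b = np.asarray(correct_b, dtype=bool)
    # 2x2 comparison matrix; only the off-diagonal (disagreement) cells matter.
    table = [
        [int(np.sum(a & b)), int(np.sum(a & ~b))],
        [int(np.sum(~a & b)), int(np.sum(~a & ~b))],
    ]
    return mcnemar(table, exact=False, correction=True).pvalue
```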
## E Additional Results
Results on our development set. We report model performances on our held-out development set (Section 3.1) in Table E.1. During training, we select the best model by evaluating its EX and EM accuracy on the development set (Section 3.3) every 500 steps. Surprisingly, we find that CodeT5-SQL+Clause-Level sometimes achieves the best performance. For BRIDGEv2, it obtains 35.9 EM accuracy and 39.3 EX accuracy, while CodeT5-PyDict+Program only obtains 34.5 EM accuracy and 37.1 EX accuracy. A possible explanation is that in comparison to the test set, our development set has SQL structures and databases that are more similar to the training set, while the test set has unseen SQL structures and less similar databases. It may also indicate that CodeT5-SQL+Clause-Level overfits the synthetic training data and fails to generalize to realistic test data.
Results for simulated interaction experiments.
To show the potential of using our model in an interactive framework, we extend our main experiments
(Section 4.1) by adding simulated user interactions.
Since our model uses beam search to decode the edit actions $e = \{e_1, e_2, \ldots, e_n\}$ and the resulting correct SQL query $q^+$ (Equation 1), we simulate user interactions to select one edit action $e_i$ at a time from the beam results. At each time step $t$, we prompt the decoder with previously selected edit actions $e_1, \ldots, e_{t-1}$ to complete the sequence $e_t, \ldots, e_n, q^+$ using beam search with size 3. Then, we use gold SQL annotations to simulate the user interaction, which selects an edit action $e_t$ from the three candidates at step $t$ or chooses to skip the current step when all three candidates are wrong. If skipping, the user continues to check the subsequent edit actions $e_{t+j}$ ($j = 1, 2, \ldots, n-t$) until it selects the next edit action. When the interaction finishes, we append the selected edit action to the prompt and let the model regenerate a completion with the new prompt for the next step's interaction. Having simulated interactions for all edit actions, we do not use the generated $q^+$ directly because some edit actions are skipped. Instead, we execute the selected ones on the initial SQL query to derive the final query.
As shown in Table E.2, when collaborating with a simulated user, our error correction model can further improve the base parsers' accuracy. Compared to its performance without using any interactions, our model achieves up to 4.1 point more absolute improvement on EM accuracy (72.5 → 76.6; BRIDGEv2) and 5.0 point more absolute improvement on EX accuracy (73.1 → 78.1; BRIDGEv2).
With these results for simulated interaction experiments, we deem that incorporating our error correction model into an interactive framework is a promising future direction.
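The per-step selection rule can be sketched as follows (the decoding and re-prompting of the model are omitted; the function and variable names are illustrative):

```python
def simulated_user_select(step_candidates, gold_edits):
    """step_candidates: for each step, the (up to 3) beam candidates for the
    next edit action; gold_edits: edit actions derived from the gold SQL.
    Returns the edit actions the simulated user accepts (wrong steps are skipped)."""
    selected = []
    for candidates in step_candidates:
        # Accept the first candidate that matches a gold edit; otherwise skip.
        match = next((c for c in candidates if c in gold_edits), None)
        if match is not None:
            selected.append(match)
    return selected
```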
| Models | Query | Edit | CodeT5 EM | CodeT5 EX | BRIDGEv2 EM | BRIDGEv2 EX | SmBoP EM | SmBoP EX |
|---|---|---|---|---|---|---|---|---|
| CoditT5 | SQL | Token-Level | 26.1 (0.4) | 28.6 (1.0) | 25.8 (0.3) | 27.2 (0.6) | 28.1 (0.9) | 30.7 (0.7) |
| CoditT5 | SQL | Clause-Level | 28.6 (0.4) | 31.3 (0.5) | 28.4 (0.5) | 30.0 (0.2) | 30.2 (0.8) | 33.4 (0.8) |
| CoditT5 | PyDict | Clause-Level | 28.9 (0.6) | 32.3 (0.8) | 28.0 (0.1) | 30.1 (0.2) | 27.6 (0.1) | 30.9 (0.4) |
| CodeT5 | SQL | Token-Level | 32.1 (1.1) | 34.1 (1.2) | 31.8 (0.4) | 34.5 (0.8) | 34.2 (0.1) | 37.6 (0.1) |
| CodeT5 | SQL | Clause-Level | 36.5 (0.6) | 38.6 (0.5) | 35.9 (0.4) | 39.3 (1.3) | 36.1 (0.6) | 38.8 (0.5) |
| CodeT5 | PyDict | Clause-Level | 35.6 (0.9) | 37.9 (0.3) | 32.9 (1.0) | 34.8 (0.8) | 33.0 (0.2) | 36.3 (0.3) |
| CodeT5∗ | PyDict | Program | 35.7 (0.8) | 37.9 (0.3) | 34.8 (0.8) | 38.3 (0.7) | 36.0 (0.3) | 40.2 (0.5) |
| CodeT5 | PyDict | Program | 36.7 (0.2) | 38.5 (0.6) | 34.5 (0.1) | 37.1 (0.2) | 35.6 (0.8) | 39.0 (0.1) |
Table E.1: Exact Set Match (EM) and Execution Match (EX) accuracy on our held-out development set (Section 3.1). The **best performances** are in bold and the second bests are underlined. ∗We fine-tune the model to generate edit programs only (without resulting queries) and use Python interpreter to execute the edit actions.
| Models | Query | Edit | CodeT5 EM | CodeT5 EX | BRIDGEv2 EM | BRIDGEv2 EX | SmBoP EM | SmBoP EX |
|---|---|---|---|---|---|---|---|---|
| No Edit | N/A | N/A | 62.7 (-) | 63.6 (-) | 70.1 (-) | 68.2 (-) | 74.6 (-) | 75.3 (-) |
| CodeT5∗ | PyDict | Program | 69.2 (0.4) | 68.4 (0.2) | 72.5 (0.4) | 73.1 (0.2) | 77.3 (0.4) | 77.6 (0.6) |
| CodeT5 | PyDict | Program | 69.0 (0.2) | 68.2 (0.1) | 72.5 (0.3) | 73.0 (0.6) | 78.0 (0.3) | 78.5 (0.3) |
| CodeT5† | PyDict | Program | 73.0 (0.7) | 72.9 (0.8) | 76.6 (0.4) | 78.1 (0.2) | 80.0 (0.3) | 81.2 (0.6) |

Table E.2: Exact Set Match (EM) and Execution Match (EX) accuracy for the simulated interaction experiments; † marks results obtained with simulated user interactions.
## F More Representation Examples
We provide two more examples in Table F.1 and F.2 to demonstrate how we represent SQL with subqueries and their edits (Section 2.2). We also show different representations for *Insert* and *Delete* edit actions.
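As a small illustration of how the Program representation can be applied (the paper executes predicted edit actions with the Python interpreter), the following sketch runs the edit program from Table F.2 against its PyDict; flattening back to a SQL string here assumes no nested subqueries.

```python
# Illustrative: apply a predicted edit program to the PyDict representation.
sql = {
    "select": "select employee.name",
    "from": ("from employee join evaluation "
             "on employee.employee_id = evaluation.employee_id"),
    "groupBy": "group by evaluation.employee_id",
    "orderBy": "order by sum(evaluation.bonus) desc",
    "limit": "limit 1",
}

edit_program = "\n".join([
    'sql.pop("groupBy")',
    'sql["orderBy"] = "order by evaluation.bonus desc"',
])

exec(edit_program, {}, {"sql": sql})  # edits mutate the dictionary in place

repaired_query = " ".join(sql.values())  # assumes no nested subqueries
```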
Query representations:

- SQL: `select count(*) from cars_data where cars_data.accelerate > ( select max(cars_data.horsepower) from cars_data )`
- PyDict: `sql = { "select": "select count(*)", "from": "from cars_data", "where": { "clause": "where cars_data.accelerate > (subquery0)", "subquery0": { "select": "select max(cars_data.horsepower)", "from": "from cars_data" } } }`

Edit representations:

- Token-level: `<ReplaceOld> max(cars_data.horsepower) <ReplaceNew> cars_data.accelerate <ReplaceEnd> <Insert> order by cars_data.horsepower desc limit 1 <InsertEnd>`
- Clause-level (SQL): `<ReplaceOld> select max(cars_data.horsepower) <ReplaceNew> select cars_data.accelerate <ReplaceEnd> <Insert> order by cars_data.horsepower desc limit 1 <InsertEnd>`
- Clause-level (PyDict): `<ReplaceOld> "select": "select max(cars_data.horsepower)" <ReplaceNew> "select": "select cars_data.accelerate" <ReplaceEnd> <Insert> "orderBy": "order by cars_data.horsepower desc", "limit": "limit 1" <InsertEnd>`
- Program: `sql["where"]["subquery0"]["select"] = "select cars_data.accelerate"`; `sql["where"]["subquery0"]["orderBy"] = "order by cars_data.horsepower desc"`; `sql["where"]["subquery0"]["limit"] = "limit 1"`

Table F.1: Example representations for a wrong SQL query *that contains a nested subquery* and its edit actions (including *Insert* edits). The corresponding natural language utterance is "What is the number of cars with a greater accelerate than the one with the most horsepower?"
Query representations:

- SQL: `select employee.name from employee join evaluation on employee.employee_id = evaluation.employee_id group by evaluation.employee_id order by sum(evaluation.bonus) desc limit 1`
- PyDict: `sql = { "select": "select employee.name", "from": "from employee join evaluation on employee.employee_id = evaluation.employee_id", "groupBy": "group by evaluation.employee_id", "orderBy": "order by sum(evaluation.bonus) desc", "limit": "limit 1" }`

Edit representations:

- Token-level: `<Delete> group by evaluation.employee_id <DeleteEnd> <Delete> sum( <DeleteEnd> <Delete> ) <DeleteEnd>`
- Clause-level (SQL): `<Delete> group by evaluation.employee_id <DeleteEnd> <ReplaceOld> order by sum(evaluation.bonus) desc <ReplaceNew> order by evaluation.bonus desc <ReplaceEnd>`
- Clause-level (PyDict): `<Delete> "groupBy": "group by evaluation.employee_id" <DeleteEnd> <ReplaceOld> "orderBy": "order by sum(evaluation.bonus) desc" <ReplaceNew> "orderBy": "order by evaluation.bonus desc" <ReplaceEnd>`
- Program: `sql.pop("groupBy")`; `sql["orderBy"] = "order by evaluation.bonus desc"`

Table F.2: Example representations for a wrong SQL query and its edit actions (including *Delete* edits). The corresponding natural language utterance is "Find the name of the employee who got the highest one time bonus."
## ACL 2023 Responsible NLP Checklist

## A For Every Submission
✓ A1. Did you describe the limitations of your work?
Section 6.

A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.

✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did you use or create scientific artifacts?**
Section 3.

✓ B1. Did you cite the creators of artifacts you used?
Section 3.

B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
3
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
3
## C ✓ **Did you run computational experiments?**
Section 4.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix B
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix B
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix B
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
selvam-etal-2023-tail | The Tail Wagging the Dog: Dataset Construction Biases of Social Bias Benchmarks | https://aclanthology.org/2023.acl-short.118 | How reliably can we trust the scores obtained from social bias benchmarks as faithful indicators of problematic social biases in a given model? In this work, we study this question by contrasting social biases with non-social biases that stem from choices made during dataset construction (which might not even be discernible to the human eye). To do so, we empirically simulate various alternative constructions for a given benchmark based on seemingly innocuous modifications (such as paraphrasing or random-sampling) that maintain the essence of their social bias. On two well-known social bias benchmarks (Winogender and BiasNLI), we observe that these shallow modifications have a surprising effect on the resulting degree of bias across various models and consequently the relative ordering of these models when ranked by measured bias. We hope these troubling observations motivate more robust measures of social biases. | # The Tail Wagging The Dog: Dataset Construction Biases Of Social Bias Benchmarks
Nikil Roashan Selvam¹, Sunipa Dev², Daniel Khashabi³, Tushar Khot⁴, Kai-Wei Chang¹
¹University of California, Los Angeles ²Google Research ³Johns Hopkins University ⁴Allen Institute for AI
{nikilrselvam,kwchang}@ucla.edu, [email protected] [email protected], [email protected]
## Abstract
How reliably can we trust the scores obtained from social bias benchmarks as faithful indicators of problematic social biases in a given model? In this work, we study this question by contrasting social biases with non-social biases that stem from choices made during dataset construction (which might not even be discernible to the human eye). To do so, we empirically simulate various alternative constructions for a given benchmark based on seemingly innocuous modifications (such as paraphrasing or random-sampling) that maintain the essence of their social bias. On two wellknown social bias benchmarks (WINOGENDER
and BIASNLI), we observe that these shallow modifications have a surprising effect on the resulting degree of bias across various models and consequently the relative ordering of these models when ranked by measured bias.
We hope these troubling observations motivate more robust measures of social biases.
## 1 Introduction
The omnipresence of large pre-trained language models (Liu et al., 2019; Raffel et al., 2020; Brown et al., 2020) has fueled concerns regarding their systematic biases carried over from underlying data into the applications they are used in, resulting in disparate treatment of people with different identities (Sheng et al., 2021; Abid et al., 2021).
In response to such concerns, various benchmarks have been proposed to quantify the amount of social biases in models (Rudinger et al., 2018; Sheng et al., 2019; Li et al., 2020). These measures are composed of textual datasets built for a specific NLP task (such as question answering) and are accompanied by a metric such as accuracy of prediction which is used as an approximation of the amount of social biases.
These bias benchmarks are commonly used by machine learning practitioners to compare the degree of social biases (such as gender-occupation
bias) in different real-world models (Chowdhery et al., 2022; Thoppilan et al., 2022) before deploying them in a myriad of applications. However, they also inadvertently measure other non-social biases in their datasets. For example, consider the sentence from WINOGENDER in Figure 1. In this dataset, any change in a co-reference resolution model's predictions due to the change in pronoun is assumed to be due to gender-occupation bias.
However, this assumption only holds for a model with near-perfect language understanding with no other biases. This may not often be the case, e.g., a model's positional bias (Murray and Chiang, 2018; Ko et al., 2020) (bias to resolve "she" to a closeby entity) or spurious correlations (Schlegel et al.,
2020) (bias to resolve "he" to the object of the verb
"warned") would also be measured as a genderoccupation bias. As a result, a slightly different template (e.g., changing the verb to "cautioned")
1373 could result in completely different bias measurements.
The goal of this work is to illustrate the extent to which social bias measurements are affected by assumptions that are built into dataset constructions. To that end, we consider several alternate dataset constructions for two bias benchmarks, WINOGENDER and BIASNLI. We show that, just by the choice of certain target-bias-irrelevant elements in a dataset, it is possible to discover different degrees of bias for the same model as well as different model rankings. For instance, one experiment on BIASNLI demonstrated that merely negating verbs drastically reduced the measured bias
(41.64 → 13.40) on an ELMo-based Decomposable Attention model and even caused a switch in the comparative ranking with RoBERTa. Our findings demonstrate the unreliability of current benchmarks to truly measure social bias in models and suggest caution when considering these measures as the gold truth. We provide a detailed discussion
(§5) of the implications of our findings, relation to experienced harms, suggestions for improving bias benchmarks, and directions for future work.
## 2 Related Work
A large body of work investigates ways to evaluate biases carried inherently in language models (Bolukbasi et al., 2016; Caliskan et al., 2017; Nadeem et al., 2021) and expressed in specific tasks (Nangia et al., 2020; Kirk et al., 2021; Schramowski et al., 2022; Prabhumoye et al., 2021; Srinivasan and Bisk, 2021; Kirk et al., 2021; Parrish et al., 2021; Baldini et al., 2022; Czarnowska et al., 2021; Dev et al., 2021a; Zhao et al., 2021).
Alongside, there is also growing concern about the measures not relating to experienced harms (Blodgett et al., 2020), not inclusive in framing (Dev et al., 2021b), ambiguous about what bias is measured (Blodgett et al., 2021), not correlated in their findings of bias across intrinsic versus extrinsic techniques (Goldfarb-Tarrant et al., 2021; Cao et al., 2022), and susceptible to adversarial perturbations (Zhang et al., 2021) and seed word selection (Antoniak and Mimno, 2021).
The concurrent work by Seshadri et al. (2022)
discusses the unreliability of quantifying social biases using templates by varying templates in a semantic preserving manner. While their findings are consistent with ours, the two works provide complementary experimental observations. Seshadri et al. (2022) study a wider range of tasks, though we focus our experiments on a wider set of models and alternate dataset constructions (with a greater range of syntactic and semantic variability). As a result, we are able to illustrate the effect of the observed variability on ranking large language models according to measured bias for deployment in real world applications.
## 3 Social Bias Measurements And Alternate Constructions
Bias measures in NLP are often quantified through comparative prediction disparities on language datasets that follow existing tasks such as classification (De-Arteaga et al., 2019) or coreference resolution (Rudinger et al., 2018). As a result, these datasets are central to what eventually gets measured as "bias". Not only do they determine the
"amount" of bias measured but also the "type" of bias or stereotype measured. Datasets often vary combinations of gendered pronouns and occupations to evaluate stereotypical associations. It is important to note that these constructs of datasets and their templates, which determine what gets measured, are often arbitrary choices. The sentences could be differently structured, be generated from a different set of seed words, and more. However, we expect that for any faithful bias benchmark, such dataset alterations that are not relevant to social bias should not have a significant impact on the artifact (e.g. gender bias) being measured.
Thus, to evaluate the faithfulness of current benchmarks, we develop alternate dataset constructions through modifications that should not have any effect on the social bias being measured in a dataset. They are minor changes that should not influence models with true language understanding - the implicit assumption made by current bias benchmarks. Any notable observed changes in a model's bias measure due to these modifications would highlight the incorrectness of this assumption. Consequently, this would bring to light the unreliability of current benchmarks to faithfully measure the target bias and disentangle the measurement from measurement of other non-social biases. A non-exhaustive set of such alternate constructions considered in this work are listed below.
Figure 2: An instance ("The engineer informed the client that he would need to make all future payments on time") from WINOGENDER benchmark modified under various shallow modifications (§3): adjective before the occupation, adjective after the occupation, adjective before the participant, and adjective after the participant (figure omitted). To a human eye, such modifications do not necessarily affect the outcome of the given pronoun resolution problem.
Negations: A basic function in language understanding is to understand the negations of word groups such as action verbs, or adjectives. Altering verbs in particular, such as 'the doctor bought' to
'the doctor did not buy' should typically not affect the inferences made about occupation associations.
Synonym substitutions: Another fundamental function of language understanding is the ability to parse the usage of similar words or synonyms used in identical contexts, to derive the same overall meaning of a sentence. For bias measuring datasets, synonymizing non-pivotal words (such as non-identity words like verbs) should not change the outcome of how much bias is measured.
Varying length of the text: In typical evaluation datasets, the number of clauses that each sentence is composed of and overall the sentence length are arbitrary experimental choices. Fixing this length is common, especially when such datasets need to be created at scale. If language is understood, adding a neutral phrase without impacting the task-specific semantics should not alter the bias measured.
Adding descriptors: Sentences used in real life are structured in complex ways and can have descriptors, such as adjectives about an action, person, or object, without changing the net message expressed by the text. For example, the sentences, "The doctor bought an apple.", and "The doctor bought a red apple." do not change any assumptions made about the doctor, or the action of buying an apple.
Random samples: Since the sentence constructs of these datasets are not unique, a very simple alternate construction of a dataset is a different subsample of itself. This is because the dataset is scraped or generated with specific assumptions or parameters, such as seed word lists, templates of sentences, and word order. However, neither the sentence constructs or templates, nor the seed word lists typically used are exhaustive or representative of entire categories of words (such as gendered words, emotions, and occupations).
See Fig. 2 for example constructions on WINOGENDER (App. A, B for detailed descriptions).
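A minimal sketch of how such alternate constructions can be generated from templated sentences is shown below; the word pools and helper names are illustrative and are not the exact seed lists used by the benchmarks.

```python
import random

# Illustrative pools of clauses and adjectives (not the benchmarks' seed lists).
CLAUSES = ["who just returned from the beach", "who came in the afternoon"]
ADJECTIVES = ["clever", "professional", "neat"]

def negate_verb(sentence, verb, negated_verb):
    # e.g. negate_verb("The doctor bought a bagel.", "bought", "did not buy")
    return sentence.replace(verb, negated_verb, 1)

def add_clause(sentence, entity, clause):
    # "The customer left ..." -> "The customer, who ..., left ..."
    return sentence.replace(entity, f"{entity}, {clause},", 1)

def add_adjective(sentence, noun, adjective):
    # "the engineer informed ..." -> "the clever engineer informed ..."
    return sentence.replace(noun, f"{adjective} {noun}", 1)

def subsample_seed_words(words, proportion, seed=0):
    # Regenerate the dataset from a random subset of the seed word list.
    rng = random.Random(seed)
    k = max(1, int(proportion * len(words)))
    return rng.sample(words, k)
```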
## 4 Case Studies
We discuss here the impact of alternate constructions on two task-based measures of bias.2
## 4.1 Coreference Resolution
Several different bias measures (Rudinger et al.,
2018; Zhao et al., 2018; Cao and Daumé III, 2021)
for coreference resolution work similar to Winograd Schema (Winograd, 1972) where a sentence has two entities and the task is to resolve which entity a specific pronoun or noun refers to. We work here with WINOGENDER (Rudinger et al., 2018),
popularly used to measure biases. It is worth noting that WINOGENDER was originally intended by its authors to merely be a diagnostic tool that checks for bias in a model; the authors note that it may demonstrate the presence of model bias but not prove the absence of the same. Nonetheless, models developed today are indeed tested and compared for social bias on WinoGender, leading to its usage as a comparative standard or benchmark (Chowdhery et al., 2022; Thoppilan et al., 2022).
The metric used to evaluate bias is the percentage of sentence pairs where there is a mismatch in predictions for the male and female gendered pronouns. For instance, in Fig. 2, if the pronoun "he" is linked to "engineer" but switches to "client" for the pronoun "she", that would indicate a gender-occupation bias. The higher the number of mismatches, the higher the bias.

²We note that throughout this paper, we focus on gender-occupation bias as an illustrative example; however, our discussion can be extended to other aspects of biases too.
Figure 3: Bias measured under the baseline and alternate dataset constructions for (a) WINOGENDER and (b) BIASNLI (figure omitted; see Tables 1 and 2 in the appendix for the corresponding numbers).
In particular, note that the metric does not take into account the accuracy of the predictions, but rather only the mismatch between the two pronouns.
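As a concrete reading of this metric, a minimal sketch (with illustrative names) is:

```python
def winogender_mismatch_rate(paired_predictions):
    """paired_predictions: one (male_pronoun_resolution, female_pronoun_resolution)
    pair per template, e.g. ("engineer", "client"). Returns the percentage of
    pairs whose resolution changes when only the pronoun's gender changes."""
    mismatches = sum(male != female for male, female in paired_predictions)
    return 100.0 * mismatches / len(paired_predictions)
```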
We experiment with three alternate constructions of the dataset: addition of clauses, addition of adjectives, and *synonymizing words in templates*.
Each alternate construction is introduced so as to not affect the overall meaning of the sentence.
Experimental Results: We use an end-to-end coreference model with SpanBERT embeddings (Lee et al., 2018; Joshi et al., 2020), UnifiedQA (small, base, and large) (Khashabi et al., 2020) QA models, and a long-document coreference model with Longformer encodings (Toshniwal et al., 2021). Results of evaluating these models on various WINOGENDER constructions are summarized in Fig. 3a. Small changes to the formulation of dataset templates result in sizable changes to computed bias measures compared to the published baseline constructions. For example, a construction involving added adjectives after occupations would have found the UnifiedQA (large) model to have 10% less bias compared to the default constructions. The sensitivity to the dataset constructions can have a drastic effect on ranking models according to their social bias, as Fig. 3a shows. For example, the SpanBERT model is considered to have less bias than the UnifiedQA (small) model in the baseline dataset, but would be considered to be more biased if the templates had clauses after the participants or adjectives before the occupation.
## 4.2 Natural Language Inference
Natural Language Inference (NLI) is the task of determining directional relationships between two sentences (a premise (P) and a hypothesis (H)). Dev et al. (2020)'s measure based on NLI (BIASNLI)
evaluates if stereotypical inferences are made by language models. We use their dataset for gender-occupation stereotypes containing approximately 2 million sentence pairs such as P: "The doctor bought a bagel.", H: "The man bought a bagel.". The expected prediction for each sentence pair in the dataset is neutral, and therefore the bias metric used is the fraction of neutral inferences on the dataset: the higher the score, the lower the bias.
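A minimal sketch of this metric (names are illustrative):

```python
def biasnli_neutral_fraction(predicted_labels):
    """predicted_labels: NLI predictions ('entailment', 'neutral', 'contradiction')
    for each premise/hypothesis pair; a higher neutral fraction means lower measured bias."""
    return sum(label == "neutral" for label in predicted_labels) / len(predicted_labels)
```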
We experiment with three alternate constructions of the dataset: verb negation, *random sampling*,
and *addition of clauses*. Note that the alternate constructions do not impact the unbiased label (neutral). Any change in construction (say negating a verb) is applied to both the premise and hypothesis.
Refer to App. B for a detailed description.
Experimental Results: We use RoBERTa trained on SNLI (RoBERTa-base-SNLI) (Liu et al., 2019),
ELMo-based Decomposable Attention (ELMo-DA) (Parikh et al., 2016), ALBERT (Lan et al., 2019), a distilled version of the RoBERTa-base model (Sanh et al., 2019), and RoBERTa-large fine-tuned on WANLI (Liu et al., 2022). The bias measured with each model using BIASNLI is recorded in Fig. 3b. The results show how small modifications to the dataset again result in large changes to the bias measured, and also change the bias rankings. For example, adding a negation largely reduces the bias measured (Δ = 28.24) for ELMo-DA, and also results in a switch in the comparative ranking with RoBERTa-base-SNLI. Furthermore, as seen in Fig. 4, there is a significant overlap in the bias measures of ALBERT, DistilRoBERTa, and ELMo-DA under random sampling,⁴ which corresponds to high variability in relative model ordering across different sub-samples of the dataset.
## 5 Discussion And Conclusion
Social bias measurements are very sensitive to evaluation methodology. Our empirical evidence sheds light on how the model's non-social biases brought out or masked by alternate constructions can cause bias benchmarks to underestimate or overestimate the social bias in a model. More interestingly, it is important to note that different models respond differently to perturbations. In fact, the same perturbation can result in a higher or lower measured bias depending on the model (as seen in §4.1 and
§4.2), which points to how models might parse information (and thus bias) differently.
While current bias measures do play a role in exposing where model errors have a stereotypical connotation, a lack of sentence construction variability or even assumptions made when creating seed word lists can reduce the reliability of the benchmarks, as we see in this work (§4.2). Even with simple sentences, it is not apparent how to disentangle the biased association of the identity with the verb or the occupation amongst others. This is especially important to note as it highlights that measures can lack concrete definitions of what biased associations they measure. Consequently, the relation between measured bias and experienced harm becomes unclear.

⁴Also observed at 25% and 50% samples in Fig. 5 (App.).
We hope that our troubling observations motivates future work that thoroughly investigates how to construct robust benchmarks that faithfully measure the target bias without being affected by model errors and other non-social biases. As suggested by our subsampling experiments (Appendix F), it might be fruitful to encourage both syntactic and semantic diversity in these benchmarks. Bias benchmarks that provide uncertainty measures (instead of a single number) might enable practitioners to better compare models before deploying them. Furthermore, since the opaqueness of large language models makes it challenging to understand how and to what extent a linguistic change will affect the measured bias, explainable models might indeed facilitate better measurement of their social bias.
Assuming that we can generate faithful explanations for a model's predictions, an exciting future direction is to explore construction of bias benchmarks which operate on the explanations of the predictions rather than the predictions themselves.
Lastly, we also encourage discussions on the complexity of the sentences used in benchmarks and their implications on what gets measured in relation to un-templated, naturally-occurring text (Levy et al., 2021), as an attempt to ground our measurements in experienced harms.
## Limitations
We acknowledge the underlying assumptions of the social bias benchmarks used in our study. While the presented study aims to point out a key limitation of currently accepted methodologies, the presented investigation could benefit from more diversification. First, this study focuses on English. While we expect similar issues with similarly-constructed benchmarks in other languages, we leave it to future work to formally address the same. Also, the bias benchmarks themselves imbibe the notion of fairness with the Western value system (Bhatt et al., 2022), and future explorations of benchmarks should diversify culturally as well. Last but not least, we acknowledge the harm of binary treatment of genders in one of the target benchmarks.
The purpose of this work was to bring light to a broader problem regarding the reliability of social benchmark metrics, with the hypothesis that the main idea of this paper would hold for a wider range of datasets with other assumptions or notions of fairness. We also acknowledge that there are larger models that we were not able to train and evaluate due to the limitations on our computational budget. The current study was focused on benchmarks with templated instances. This is no coincidence: the dominant majority of the social bias benchmarking literature relies on sentences with some degree of known structure, even in those collected from the wild (Levy et al., 2021). Such structural assumptions in datasets are necessary for defining and extracting quantifiable measures of social bias, which as we argue, are the reason behind the brittleness of their decisions. Future work should focus on making our bias benchmarks more diverse and robust to small decisions that go into making them.
## Broader Impact
Bias evaluating benchmarks play a very significant role in helping identify potential risks of language technologies. While a large body of work evolves in this area of work, there is growing concern about the ability of the different benchmarks to accurately quantify and identify social biases. We emphasize these concerns by evaluating how robust the benchmarks are to alternate constructions based on simple linguistic properties. It is important to note how inaccurate measurements of social biases can be problematic by underestimating or misdiagnosing the potential harm from language models. We hope our work helps identify such pitfalls.
## Acknowledgements
We thank the students and colleagues at UCLA,
JHU and AI2 for their insightful feedback towards improving this paper. The authors would also like to thank the anonymous reviewers for their constructive feedback. This project is supported by generous gifts from Allen Institute for AI, CISCO,
Amazon, and a Sloan fellowship.
## References
Abubakar Abid, Maheen Farooqi, and James Zou. 2021.
Persistent anti-muslim bias in large language models.
In *AAAI/ACM Conference on AI, Ethics, and Society*
(AIES), pages 298–306.
Maria Antoniak and David Mimno. 2021. Bad seeds:
Evaluating lexical methods for bias measurement.
In *Proceedings of the 59th Annual Meeting of the* Association for Computational Linguistics and the
11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1889–1904, Online. Association for Computational Linguistics.
Ioana Baldini, Dennis Wei, Karthikeyan Natesan Ramamurthy, Moninder Singh, and Mikhail Yurochkin.
2022. Your fairness may vary: Pretrained language model fairness in toxic text classification. In *Annual Meeting of the Association for Computational* Linguistics (ACL) *- Findings*.
Shaily Bhatt, Sunipa Dev, Partha Talukdar, Shachi Dave, and Vinodkumar Prabhakaran. 2022. Recontextualizing fairness in NLP: The case of India. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 727–740, Online only. Association for Computational Linguistics.
Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of "bias" in nlp. In *Annual Meeting of the Association for Computational* Linguistics (ACL).
Su Lin Blodgett, Gilsinia Lopez, Alexandra Olteanu, Robert Sim, and Hanna Wallach. 2021. Stereotyping norwegian salmon: an inventory of pitfalls in fairness benchmark datasets. In Annual Meeting of the Association for Computational Linguistics (ACL).
Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In *Advances in* Neural Information Processing Systems, volume 29.
Curran Associates, Inc.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, and et al. 2020. Language models are few-shot learners. In *Advances in Neural* Information Processing Systems (NeurIPS).
Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases.
Science, 356(6334):183–186.
Yang Trista Cao and Hal Daumé III. 2021. Toward gender-inclusive coreference resolution: An analysis of gender and bias throughout the machine learning lifecycle. *Computational Linguistics* (CL).
Yang Trista Cao, Yada Pruksachatkun, Kai-Wei Chang, Rahul Gupta, Varun Kumar, Jwala Dhamala, and Aram Galstyan. 2022. On the intrinsic and extrinsic fairness evaluation metrics for contextualized language representations. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 561–570,
Dublin, Ireland. Association for Computational Linguistics.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam M. Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Benton C.
Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier García, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Díaz, Orhan Firat, Michele Catasta, Jason Wei, Kathleen S.
Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways. *ArXiv*, abs/2204.02311.
Paula Czarnowska, Yogarshi Vyas, and Kashif Shah.
2021. Quantifying social biases in nlp: A generalization and empirical comparison of extrinsic fairness metrics. *Transactions of the Association for Computational Linguistics* (TACL).
Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, and Adam Kalai. 2019. Bias in bios: A case study of semantic representation bias in a high-stakes setting.
In ACM Conference on Fairness, Accountability and Transparency (FAccT).
Sunipa Dev, Tao Li, Jeff M. Phillips, and Vivek Srikumar. 2020. On measuring and mitigating biased inferences of word embeddings. Conference on Artificial Intelligence (AAAI).
Sunipa Dev, Tao Li, Jeff M Phillips, and Vivek Srikumar.
2021a. Oscar: Orthogonal subspace correction and rectification of biases in word embeddings. In Conference on Empirical Methods in Natural Language Processing (EMNLP).
Sunipa Dev, Masoud Monajatipoor, Anaelia Ovalle, Arjun Subramonian, Jeff Phillips, and Kai-Wei Chang.
2021b. Harms of gender exclusivity and challenges in non-binary representation in language technologies. In *Conference on Empirical Methods in Natural* Language Processing (EMNLP).
Seraphina Goldfarb-Tarrant, Rebecca Marchant, Ricardo Muñoz Sánchez, Mugdha Pandya, and Adam Lopez. 2021. Intrinsic bias metrics do not correlate with application bias. In *Annual Meeting of the Association for Computational Linguistics* (ACL).
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics (TACL).
Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. UnifiedQA: Crossing Format Boundaries With a Single QA System. In Conference on Empirical Methods in Natural Language Processing (EMNLP) *- Findings*.
Hannah Rose Kirk, Filippo Volpin, Haider Iqbal, Elias Benussi, Frederic Dreyer, Aleksandar Shtedritski, Yuki Asano, et al. 2021. Bias out-of-the-box: An empirical analysis of intersectional occupational biases in popular generative language models. Advances in Neural Information Processing Systems (NeurIPS).
Miyoung Ko, Jinhyuk Lee, Hyunjae Kim, Gangwoo Kim, and Jaewoo Kang. 2020. Look at the first sentence: Position bias in question answering. In Conference on Empirical Methods in Natural Language Processing (EMNLP).
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut.
2019. Albert: A lite bert for self-supervised learning of language representations. In *International Conference on Learning Representations* (ICLR).
Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018.
Higher-order coreference resolution with coarse-tofine inference. In Conference of the North American Chapter of the Association for Computational Linguistics (NAACL).
Shahar Levy, Koren Lazar, and Gabriel Stanovsky. 2021.
Collecting a large-scale gender bias dataset for coreference resolution and machine translation. In *Conference on Empirical Methods in Natural Language* Processing (EMNLP) *- Findings*.
Tao Li, Daniel Khashabi, Tushar Khot, Ashish Sabharwal, and Vivek Srikumar. 2020. UnQovering Stereotypical Biases via Underspecified Questions. In *Conference on Empirical Methods in Natural Language* Processing (EMNLP) *- Findings*.
Alisa Liu, Swabha Swayamdipta, Noah A Smith, and Yejin Choi. 2022. WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation.
arXiv preprint arXiv:2201.05955.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Kenton Murray and David Chiang. 2018. Correcting length bias in neural machine translation. In *Conference on Machine Translation* (WMT).
Moin Nadeem, Anna Bethke, and Siva Reddy. 2021.
StereoSet: Measuring stereotypical bias in pretrained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5356–5371, Online. Association for Computational Linguistics.
Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R Bowman. 2020. Crows-pairs: A challenge dataset for measuring social biases in masked language models. In *Conference on Empirical Methods* in Natural Language Processing (EMNLP).
Ankur P. Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In *Conference* on Empirical Methods in Natural Language Processing (EMNLP).
Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, and Samuel R Bowman. 2021.
Bbq: A hand-built bias benchmark for question answering. In *Annual Meeting of the Association for* Computational Linguistics (ACL).
Shrimai Prabhumoye, Rafal Kocielnik, Mohammad Shoeybi, Anima Anandkumar, and Bryan Catanzaro. 2021. Few-shot instruction prompts for pretrained language models to detect social biases. arXiv preprint arXiv:2112.07868.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*
(JMLR).
Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In *Conference of the North* American Chapter of the Association for Computational Linguistics (NAACL).
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. *ArXiv*,
abs/1910.01108.
Viktor Schlegel, Goran Nenadic, and Riza BatistaNavarro. 2020. Beyond leaderboards: A survey of methods for revealing weaknesses in natural language inference data and models. arXiv preprint arXiv:2005.14709.
Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A Rothkopf, and Kristian Kersting. 2022.
Large pre-trained language models contain humanlike biases of what is right and wrong to do. *Nature* Machine Intelligence.
Preethi Seshadri, Pouya Pezeshkpour, and Sameer Singh. 2022. Quantifying social biases using templates is unreliable. arXiv preprint arXiv:2210.04337.
Emily Sheng, Kai-Wei Chang, Prem Natarajan, and Nanyun Peng. 2019. The Woman Worked as a Babysitter: On Biases in Language Generation. In Conference on Empirical Methods in Natural Language Processing (EMNLP).
Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2021. Societal biases in language generation: Progress and challenges. In Conference on Empirical Methods in Natural Language Processing (EMNLP).
Tejas Srinivasan and Yonatan Bisk. 2021. Worst of both worlds: Biases compound in pre-trained vision-andlanguage models. In *Workshop on Gender Bias in* Natural Language Processing.
Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al.
2022. LaMDA: Language Models for Dialog Applications. *arXiv preprint arXiv:2201.08239*.
Shubham Toshniwal, Patrick Xia, Sam Wiseman, Karen Livescu, and Kevin Gimpel. 2021. On generalization in coreference resolution. In Proceedings of the Workshop on Computational Models of Reference, Anaphora and Coreference.
T. Winograd. 1972. Understanding natural language.
Cognitive psychology, 3(1):1–191.
Chong Zhang, Jieyu Zhao, Huan Zhang, Kai-Wei Chang, and Cho-Jui Hsieh. 2021. Double perturbation: On the robustness of robustness and counterfactual bias evaluation. In *Proceedings of the 2021 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3899–3916, Online. Association for Computational Linguistics.
Jieyu Zhao, Daniel Khashabi, Tushar Khot, Ashish Sabharwal, and Kai-Wei Chang. 2021. Ethical-advice taker: Do language models understand natural language interventions? In *Annual Meeting of the Association for Computational Linguistics* (ACL) *- Findings*.
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In *Conference of the North American Chapter of the Association for Computational Linguistics*
(NAACL).
# Appendix: The Tail Wagging The Dog: Dataset Construction Biases Of Social Bias Benchmarks
## A Alternate Constructions of WINOGENDER
Addition of clauses: For WINOGENDER, we add clauses like "who just returned from the beach" to the different entities in the sentence. For instance, the sentence "The customer left the bartender a big tip because he was feeling generous." becomes
"The customer, who just returned from the beach, left the bartender a big tip because he was feeling generous."
Synonym substitution: We substitute with synonyms such that it does not change the meaning of the sentence. WINOGENDER has 720 sentences generated from 120 templates, making manual substitution of synonyms in the templates feasible. For example, the sentence "The supervisor gave the employee feedback on his stellar performance." is replaced by "The supervisor gave the employee feedback on his amazing performance."
Adding adjectives: As discussed in §3, we add descriptors in the form of adjectives that do not add information about which entity the pronoun or noun would refer to. We do it in four distinct ways, (i)
adding the descriptor to the occupation mentioned, e.g. doctor (e.g., "doctor" to "good doctor"), (ii)
adding it to the occupation as a separate clause
(e.g., "doctor" to "the doctor who was good"), (iii)
adding the descriptor to the participant mentioned, e.g., "client" (similar to (i)), and (iv) adding it to the participant as a separate clause (similar to (ii)).
## B Alternate Constructions of BIASNLI
Negation: We negate the verb in each sentence of the dataset. For example, P: "The doctor bought a bagel.", H: "The man bought a bagel.", and P:
"The doctor did not buy a bagel.", H: "The man did not buy a bagel.", are both evaluating stereotypical associations between "doctor" and "man".
Random sampling: The BIASNLI dataset is generated from templates by populating empty slots (e.g. verbs, objects) with words from corresponding lists. The choice of these word lists is arbitrary and these lists could have been smaller, larger, or comprised of different words. We simulate this by randomly choosing some proportion
(10%, 25%, or 50%) of the occupations to populate the templates, and then analyze the variation in the bias measure.
Addition of clauses: We add a clause after the subject in each sentence. For example, for the sentence pair, P: "The doctor bought a coat." H:
"The man bought a coat." is modified to P: "The doctor, who came in the afternoon, bought a coat."
H: "The man, who came in the afternoon, bought a coat."
## C Descriptors
For WINOGENDER, here is the set of adjectives used to modify either the occupation word or the participant word: aggressive, arrogant, beautiful, brilliant, clean, clever, cruel, deceitful, devious, dirty, dumb, evil, generous, gentle, greedy, hateful, honest, humorless, ignorant, intelligent, intolerant, neat, professional, rude, smart, strong, stupid, terrible, ugly, unclean, unprofessional, weak, wise.
## D Clauses
We use the following clauses in WINOGENDER and BIASNLI to increase the distance between relevant parts of the sentence: who just returned from the restaurant, who came in the afternoon, who just came back, who went to the restaurant, who just returned from the beach.
## E Synonymization Examples
For WINOGENDER, we manually perform synonymization for all 120 templates. Note that while the replacements might not be exact synonyms, they are replacements of non-identity words that do not change the overall meaning of the sentence and hence should not have any notable impact on the gender bias being measured. We report a few characteristic examples of such substitutions here:

- The taxpayer met with the accountant to get help filing his taxes → The taxpayer met with the accountant to get aid filing his taxes.
- The supervisor gave the employee feedback on his *stellar* performance → The supervisor gave the employee feedback on his *amazing* performance.
- The hygienist told the patient that he needed to floss every day to avoid *gum disease* → The hygienist told the patient that he needed to brush every day to avoid *cavities*.
- The broker called the client because he had requested a phone consultation → The broker called the client because he had *asked for* a phone consultation.
- The chef came out to apologize to the guest who was *unhappy* with his preparation style → The chef came out to apologize to the guest who was *dissatisfied* with his preparation style.

## F Subsampling

The gender-occupation subset of the original construction of BIASNLI consists of 164 occupation words such as accountant, firefighter, tutor, and model. In each trial, we subsample some proportion (10%, 25%, or 50%) of these occupation words used in the templates to regenerate the dataset and evaluate all models on this alternate construction. We empirically estimate the distribution of bias scores across samples of a fixed proportion by using 100 independent random trials for that proportion. See Figure 5 for results. Observe that overlap in the distributions serves as a proxy for possible inversions in model ordering (by bias) depending on the subsample of template occupation words used. It is also worth noting that as we use more diverse sets (that is, bigger proportions) of seed words, the variance in the measured bias reduces.

Figure 5: Distributions of the BIASNLI bias measure across 100 random subsamples of occupation words at each proportion (figure omitted).

## G Tables of Experimental Results

See Table 1 and Table 2 for detailed experimental results on alternate constructions for WINOGENDER and BIASNLI respectively.

## H Computing Resources

For our experiments, we used a 40-core Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz, with access to NVIDIA RTX A6000 for selected experiments. In terms of runtime, compute time for inference on a single test set varied by model, but was limited to 12 hours for WINOGENDER and 72 hours for BIASNLI.

## I Links to Datasets and Code

Code and data for the experiments are available at https://github.com/uclanlp/socialbias-dataset-construction-biases. We provide complete preprocessed datasets that correspond to the various proposed alternate constructions. They can be readily used with the publicly listed models for evaluation, thereby easily reproducing the results of the paper. We provide scripts to help with the same. The alternate dataset constructions can also be independently and flexibly used for new experiments.

All datasets (original constructions) used are publicly available.

- WINOGENDER: https://github.com/rudinger/winogender-schemas
- BIASNLI: https://github.com/sunipa/On-Measuring-and-Mitigating-Biased-Inferences-of-Word-Embeddings

All models used are also publicly available.

- ai2spanbert: https://demo.allennlp.org/coreference-resolution
- UnifiedQA: https://github.com/allenai/unifiedqa
- Longformer: https://github.com/shtoshni/fast-coref
- Albert: https://huggingface.co/docs/transformers/model_doc/albert
- Elmo-DA: https://demo.allennlp.org/textual-entailment/elmo-snli
- Roberta-base-SNLI: https://github.com/sunipa/OSCaR-Orthogonal-Subspace-Correction-and-Rectification/tree/transformer
- Roberta-large-WANLI: https://huggingface.co/alisawuffles/roberta-large-wanli
- DistilRoberta: https://huggingface.co/cross-encoder/nli-distilroberta-base
Perturbation ai2spanbert qa-small qa-base qa-large longformer Baseline (no perturbations) 5.83 5.83 16.66 15.41 9.16 Clause after occupation 4.50 5.50 14.75 23.50 10.08 Clause after participant 10.33 8.00 15.00 15.75 8.83
Adjective before occupation 8.22 5.34 16.12 17.31 6.87 Adjective after occupation 4.92 5.37 15.57 25.45 9.75
Adjective before participant 5.97 5.69 13.84 18.52 10.77
Adjective after participant 8.48 7.49 15.91 18.17 11.69 Synonyms 7.92 7.50 17.92 15.83 12.08
Table 1: Percentage M-F Mismatch on WINOGENDER.
Table 2: Percentage neutral for different alternate constructions of BIASNLI
| Albert | Elmo-DA | Roberta-base-SNLI | Roberta-large-WANLI | DistilRoberta | |
|-----------------------------|-----------|---------------------|-----------------------|-----------------|-------|
| Baseline (no perturbations) | 44.81 | 41.64 | 15.25 | 16.81 | 51.32 |
| Clauses | 60.85 | 40.43 | 30.26 | 15.69 | 60.84 |
| Negation | 45.76 | 13.40 | 20.04 | 10.45 | 62.63 |
## ACL 2023 Responsible NLP Checklist

### A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Page 5

A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
Section 3 and Appendix J (Bias Datasets and Models used)
✓ B1. Did you cite the creators of artifacts you used?
Section 3 and Appendix J (Datasets and Models used)
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Appendix J (Datasets and Models used are all publicly available)
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 3.2 and Appendix F
## C ✓ **Did You Run Computational Experiments?**
Section 3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix I
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3, Appendix B-G
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 3, Appendix H
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
## D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
shaib-etal-2023-summarizing | Summarizing, Simplifying, and Synthesizing Medical Evidence using {GPT}-3 (with Varying Success) | https://aclanthology.org/2023.acl-short.119 | Large language models, particularly GPT-3, are able to produce high quality summaries of general domain news articles in few- and zero-shot settings. However, it is unclear if such models are similarly capable in more specialized domains such as biomedicine. In this paper we enlist domain experts (individuals with medical training) to evaluate summaries of biomedical articles generated by GPT-3, given no supervision. We consider both single- and multi-document settings. In the former, GPT-3 is tasked with generating regular and plain-language summaries of articles describing randomized controlled trials; in the latter, we assess the degree to which GPT-3 is able to synthesize evidence reported across a collection of articles. We design an annotation scheme for evaluating model outputs, with an emphasis on assessing the factual accuracy of generated summaries. We find that while GPT-3 is able to summarize and simplify single biomedical articles faithfully, it struggles to provide accurate aggregations of findings over multiple documents. We release all data, code, and annotations used in this work. | # Summarizing, Simplifying, And Synthesizing Medical Evidence Using Gpt-3 (With Varying Success)
Chantal Shaib1 Millicent L. Li1 **Sebastian Joseph**2 Iain J. Marshall3 Junyi Jessy Li2 **Byron C. Wallace**1 1Northeastern University, 2The University of Texas at Austin, 3King's College London
{shaib.c, li.mil, b.wallace}@northeastern.edu [email protected]
{sebaj, jessy}@utexas.edu
## Abstract
Large language models, particularly GPT-3, are able to produce high quality summaries of general domain news articles in few- and zero-shot settings. However, it is unclear if such models are similarly capable in more specialized, high-stakes domains such as biomedicine. In this paper, we enlist domain experts (individuals with medical training) to evaluate summaries of biomedical articles generated by GPT-3, given zero supervision. We consider both single- and multi-document settings. In the former, GPT-3 is tasked with generating regular and plain-language summaries of articles describing randomized controlled trials; in the latter, we assess the degree to which GPT-3 is able to *synthesize* evidence reported across a collection of articles. We design an annotation scheme for evaluating model outputs, with an emphasis on assessing the factual accuracy of generated summaries.
We find that while GPT-3 is able to summarize and simplify single biomedical articles faithfully, it struggles to provide accurate aggregations of findings over multiple documents. We release all data and annotations used in this work.1
## 1 Introduction
Large language models have been shown to be capable of producing high-quality and reasonably accurate summaries in *zero-shot* settings (Goyal et al.,
2022; Liang et al., 2022), with GPT-3 besting fully supervised models in generic news summarization, according to human judgments (Goyal et al., 2022).
In this work we evaluate if such models are similarly able to summarize medical literature, a high-stakes domain that demands factual accuracy.

1 https://github.com/cshaib/summarizing-medical-evidence

![0_image_0.png](0_image_0.png)

Specifically, we use the newest iteration of GPT-3 (text-davinci-003; GPT3-D3 from here) to generate summaries of (a) individual articles describing randomized controlled trials (RCTs) evaluating the efficacy of interventions, and, (b) collections of such articles that describe several trials addressing the same underlying clinical question
(e.g., evaluating the same medication). These constitute single- and multi-document summarization tasks, respectively. In the single-document case, we also evaluate the ability of GPT3-D3 to summarize in *plain language*. We enlist domain experts
(with medical training) to annotate model outputs, and seek to address the following questions.
RQ1 Does GPT3-D3 produce *faithful* summaries of medical articles?
RQ2 Can GPT3-D3 accurately *simplify* while also summarizing such texts?
RQ3 Can GPT3-D3 *synthesize*—aggregate the findings presented in—multiple input articles in a way that accurately reflects the totality of the evidence?
RQ4 What sort of factual mistakes does GPT3-D3 make when performing these tasks (if any), and what are the risks implied by such errors?
Overall, we find that GPT3-D3 performs single-document summarization and simplification with reasonably good accuracy. However, it is less able to accurately synthesize evidence reported in *collections* of trials (in the multi-document case). We release all model outputs and accompanying annotations to facilitate additional work on this topic.
## 2 Single Document Summarization
Data We sample 100 articles describing randomized controlled trials (RCTs) indexed in the Trialstreamer database (Marshall et al., 2020), which also provides automatically extracted "key results"2 alongside titles and abstracts. We search for trials published after November 28, 2022, following the release date of GPT3-D3, to ensure the model has not seen any of the studies during pre-training.
Experimental Setup Using the RCT data described above, we evaluate the ability of GPT3-D3 to faithfully summarize and simplify biomedical texts in a zero-shot setting. We also compare GPT3-D3 summaries to summaries generated using Flan-T5 (Wei et al., 2021), but qualitatively find that GPT3-D3 summaries are much higher quality.
We provide results of this comparison in Appendix F.3. Specifically, we prompt GPT3-D3 to separately produce: (i) a technical summary, and, (ii) a plain language summary (August et al., 2022). See Appendix C for all prompts.
Study Design We designed an evaluation scheme that captures the sensitivity of medical information. To assess factuality, we collect annotations about omissions and errors with respect to main results, and key components of the trials including populations, interventions, and outcomes ("PICO"
elements; Richardson et al. 1995). Where appropriate, we ask annotators to highlight spans of generated text that are inconsistent with the input—these might be "new" concepts introduced or spans that directly contradict the input. To gauge overall linguistic quality, we solicit assessments regarding the fluency and usefulness of a summary on a Likert scale (Likert, 1932). We include additional questions about the simplification of technical terms for the plain language summaries. We provide a complete taxonomy of the survey in Appendix H.
Annotations We recruited 3 domain experts with medical training on the Upwork platform,3 and task them each with annotating 100 samples. In total, we collect 300 annotations (3 annotations per sample). We use Label Studio4 as our interface.
## 3 Multiple Document Summarization And Evidence Synthesis
Data For multi-document summarization, we download meta-analyses from the Cochrane Library (these are reviews of medical evidence, usually RCTs).5 Our final sample contains 50 multi-document studies comprising meta-review titles, reference abstracts (inputs), and target conclusions (target summaries) written by domain experts, 10 of which were published post-GPT3-D3 release.6

Experimental Setup Because inputs comprise multiple abstracts, these (together with generated tokens) often exceed the token capacity of GPT3-D3.
In our dataset, about 41% of the samples exceeded this upper-bound. We report information about our data, including average length, in Appendix B. To address the upper-bound problem, we adopt a simple two-phase strategy for multi-document summarization. First, we generate independent summaries for each abstract, using the single-document summarization prompt described in Section 2. Then, we include all the generated single-document summaries in our multi-document synthesis prompt7
(examples in Appendix C).
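A minimal sketch of this two-phase strategy is shown below. It is illustrative only, assuming a hypothetical `summarize` wrapper around a GPT3-D3 completion call (decoding parameters are listed in Appendix A); the prompt strings follow the templates given in Appendix C.

```python
from typing import Callable, List

def two_phase_synthesis(abstracts: List[str], title: str,
                        summarize: Callable[[str], str]) -> str:
    """Two-phase evidence synthesis over a collection of trial abstracts."""
    # Phase 1: summarize each abstract independently with the
    # single-document prompt from Appendix C.
    single_summaries = [summarize(f"{a}\n\nSummarize the above.") for a in abstracts]

    # Phase 2: feed all phase-1 summaries into the multi-document synthesis prompt.
    joined = "\n\n".join(single_summaries)
    prompt = f'""" {joined} """ What does the above evidence conclude about """ {title} """?'
    return summarize(prompt)
```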
Study Design Our evaluation rubric asks for assessments of generated outputs as compared to:
(a) inputs, and, (b) target summaries. Specifically, we ask if generated summaries are supported by the *summaries* provided as inputs in the multidocument case, and to what extent they agree with target (reference) summaries. We also ask annotators to highlight spans of text in generated outputs that disagree with paired target summaries. We reproduce the full rubric in Appendix H.
With respect to annotators, we use the same procedure described in Section 2; we recruited 3 new medical experts and tasked them each with annotating 50 samples, for a total of 150 annotations.
![2_image_1.png](2_image_1.png)
![2_image_2.png](2_image_2.png)
## 4 Results
RQ1: Does GPT3-D3 **produce faithful summaries of medical articles?** In the single-document setting, we find that GPT3-D3 generates summaries of biomedical abstracts that are fairly high quality. Figure 2 (a) shows that annotators rated a majority of the summaries as being coherent, useful, and capturing "key results".
When GPT3-D3 does err, it tends to make minor mistakes or omit details. The latter is more common than the former, as shown in Figure 3 (a).
RQ2: Can GPT3-D3 **accurately simplify while summarizing medical texts?** Shown in Figure 2 (b), GPT3-D3 produces simplified summaries that are similarly deemed to be coherent and useful, and which appear to contain key results. Simplified outputs are scored highly in terms of readability, indicating that these summaries would be understood by someone without medical training.
![2_image_0.png](2_image_0.png)

In comparison to the technical summaries, Figure 3 (b) shows that there are fewer omissions but a slightly higher number of errors. These may be problematic, but - importantly - some omissions are expected in a simplified summary, as certain details that are important for an accurate summary for a technical audience may not be necessary to convey key information to a more general audience.
RQ3: Can GPT3-D3 *synthesize* **findings presented in multiple input articles in a way that accurately reflects the totality of the evidence?** We now evaluate GPT3-D3's performance on multi-document summarization, i.e., its ability to synthesize evidence (Wang et al., 2022). Figure 4 shows that most summaries generated by GPT3-D3 in this setting are supported by the inputs. This is consistent with our findings in RQ1: GPT3-D3 is able to summarize faithfully with respect to the given input.
However, we find that generated summaries do not consistently agree with the target summaries. Indeed, Figure 4 shows that generated summaries disagree with the targets in over half of cases. This discrepancy suggests that human-written summaries in the biomedical domain require a level of synthesis that is not captured by GPT3-D3 .
RQ4: What sort of factual mistakes does GPT3-D3 **make and what are the risks?** In RQ1, we reported that GPT3-D3 sometimes omits key information. Figure 5 characterizes the types of omissions and errors made, with respect to PICO
elements. GPT3-D3 tends to underspecify elements in the summary more often than generating inaccuracies. Appendix F provides further details regarding underspecification. In the simplification task, GPT3-D3 capably simplifies most technical terms in the generated output (Figure 6).
Regarding RQ3, we showed that there are often discrepancies between generated and target summaries, despite the former being supported by the inputs. Human-written summaries of trials may be
![3_image_1.png](3_image_1.png)
![3_image_2.png](3_image_2.png)
Figure 6: In the simplification case, the model usually replaces complex terms with simpler ones.
more cautious in their conclusions. We measure the evidence strength and direction of both the target and generated summaries, and find that GPT3-D3 tends to recommend marginal or substantive beneficial effects regarding interventions in the majority of the summaries (Figure 7).
![3_image_3.png](3_image_3.png)

![3_image_0.png](3_image_0.png)

Overall, we find that GPT3-D3 copies frequently from inputs. This results in summaries that are often faithful to the input. It may also be one reason that summaries tend to have more omissions (rather than errors) in the single-document case, and it may also explain how summaries in the multi-document case often disagree with the reference synopsis while also being supported by (some subset of) the inputs. We calculate the degree of overlap and similarity between inputs and generated summaries from GPT3-D3 for both single-document and multi-document summarization at the sentence level (Figure 8). GPT3-D3 often copies sentences verbatim.
In other cases, it changes phrasings but only very slightly (see Appendix F for examples).
Further, Figure 8 shows how many sentences in each summary have a BLEU score of ≥ 30, which indicates the sentences are highly aligned. Over 70% of the summaries have at least a quarter of the sentences copied from the input. Appendix F
shows some examples of highly similar summaries and sentence pairs.
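One way to compute this kind of sentence-level alignment is sketched below; the paper does not name its exact tooling, so the use of sacrebleu and NLTK here is an assumption, and only the threshold of 30 is taken from the text.

```python
from sacrebleu.metrics import BLEU
import nltk  # assumes the 'punkt' tokenizer data has been downloaded

bleu = BLEU(effective_order=True)  # sentence-level BLEU

def copied_fraction(input_text: str, summary_text: str, threshold: float = 30.0) -> float:
    """Fraction of summary sentences whose best BLEU against any input sentence
    is at least `threshold` (i.e., highly aligned / near-verbatim)."""
    input_sents = nltk.sent_tokenize(input_text)
    summary_sents = nltk.sent_tokenize(summary_text)
    aligned = 0
    for sent in summary_sents:
        best = max(bleu.sentence_score(sent, [ref]).score for ref in input_sents)
        if best >= threshold:
            aligned += 1
    return aligned / max(1, len(summary_sents))
```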
## 5 Related Work

More broadly in summarization, several efforts have called for increased emphasis on human
(rather than automated) evaluation of generated texts, increased deployment of human-centered systems for text generation evaluation (Khashabi et al., 2021), and greater focus on building benchmarks that incorporate human preferences (Liang et al., 2022; Fabbri et al., 2021). And indeed, Goyal et al. (2022) find that summaries produced by GPT3-D3 are often preferred by humans over alternative model outputs even when automated metrics disagree. Such findings have motivated the manual analysis we conduct for this work. As far as we know, there has not been any work that assesses the degree to which GPT-3 is proficient at summarizing biomedical and clinical data in both single-document and multi-document cases.
Our analysis of summarization in the biomedical space complements recent work analyzing the question answering capabilities of such models in this domain (Singhal et al., 2022; Liévin et al., 2022) and the degree to which they encode medical knowledge implicitly (Sung et al., 2021). Other work has considered using summarization of biomedical texts as assistive tools for reading
(August et al., 2022).
## 6 Conclusions
We evaluate the ability of GPT3-D3 to faithfully summarize and simplify medical literature. The expert annotations we collect indicate that GPT3-D3 performs single-document tasks quite well, but struggles with multi-document summarization.
This highlights the ability to aggregate across documents as a direction for future work. We release all data and annotations to facilitate such work in the medical space going forward.
## Limitations
This evaluation focussed on expert manual assessments of model outputs and their factual accuracy.
Domain expertise (in medicine) was invaluable for this task, but is also expensive and therefore limited the scale of our evaluation. Consequently, all findings are derived over a modest sample (100s)
of triple-annotated instances.
Another limitation here is that we have considered only articles describing *randomized control* trials (RCTs). We focused on such articles because RCTs are the most reliable means of assessing medical interventions, and therefore inform the practice of evidence-based medicine; summarizing such articles is therefore critical to help physicians stay on top of the evidence. Moreover, RCTs provide a natural grounding with respect to factuality, given that all such trials will investigate the relative efficacy of an intervention for a particular condition
(i.e., on a specific population of patients) and with respect to an outcome of interest. That said, this is restrictive by design, and our analysis has therefore excluded large swaths of other types of medical texts.
## Ethical Considerations
In Appendix D, we note the costs of hiring domain experts for annotation.
Large language models (such as GPT3-D3) have been shown capable of generating concise and fluent summaries. But these often contain factual inaccuracies. This poses unique risks in the domain of medicine, where inaccurate summaries of published evidence have the potential to (mis-)inform patient care. This work has attempted to empirically assess the tendency of models to introduce inaccuracies into summaries of medical literature by enlisting domain experts to identify and characterize omissions and errors in model generated summaries. Understanding such issues is a first step toward designing methods to mitigate them.
While we found that GPT3-D3 appears to produce summaries of single biomedical article abstracts that are reasonably factual, relying on such outputs still poses risks, and even in this setting we would caution against trusting model outputs without further verification at present. Moreover, we found that in the multi-document case—i.e., on the task of synthesizing evidence reported across multiple clinical trials—GPT3-D3 struggles to provide synopses that agree with reference (expert written) summaries. In sum, despite their ability to produce consistently plausible outputs, our view is that summaries of medical literature produced by LLMs should not yet be used to directly inform care given the risks of factual inaccuracies. More research is needed to better characterize the kinds of mistakes such models make, and ultimately to mitigate them.
## Acknowledgements
This research was partially supported by National Science Foundation (NSF) grants IIS-2145479 and RI-2211954, and by the National Institutes of Health (NIH) under the National Library of Medicine (NLM) grant 2R01LM012086.
## References
Tal August, Lucy Lu Wang, Jonathan Bragg, Marti A.
Hearst, Andrew Head, and Kyle Lo. 2022. Paper plain: Making medical research papers approachable to healthcare consumers with natural language processing. *ACM Transactions on ComputerHuman Interaction*.
Alexander Richard Fabbri, Wojciech Kryściński, Bryan McCann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. SummEval: Re-evaluating summarization evaluation. *Transactions of the Association for Computational Linguistics*, 9:391–409.
Tanya Goyal, Junyi Jessy Li, and Greg Durrett. 2022.
News summarization and evaluation in the era of gpt3. *ArXiv*, abs/2209.12356.
Daniel Khashabi, Gabriel Stanovsky, Jonathan Bragg, Nicholas Lourie, Jungo Kasai, Yejin Choi, Noah A.
Smith, and Daniel S. Weld. 2021. Genie: Toward reproducible and standardized human evaluation for text generation. In *Conference on Empirical Methods in Natural Language Processing*.
Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. 2022. Holistic evaluation of language models. *arXiv preprint arXiv:2211.09110*.
Valentin Liévin, Christoffer Egeberg Hother, and Ole Winther. 2022. Can large language models reason about medical questions? *arXiv preprint* arXiv:2207.08143.
Rensis Likert. 1932. A technique for the measurement of attitudes. *Archives of psychology*.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain.
Association for Computational Linguistics.
Iain J Marshall, Benjamin Nye, Joël Kuiper, Anna Noel-Storr, Rachel Marshall, Rory Maclean, Frank Soboczenski, Ani Nenkova, James Thomas, and Byron C Wallace. 2020. Trialstreamer: A living, automatically updated database of clinical trial reports.
Journal of the American Medical Informatics Association, 27(12):1903–1912.
W Scott Richardson, Mark C Wilson, Jim Nishikawa, Robert S Hayward, et al. 1995. The well-built clinical question: a key to evidence-based decisions. Acp j club, 123(3):A12–A13.
Karan Singhal, Shekoofeh Azizi, Tao Tu, S Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen Pfohl, et al. 2022. Large language models encode clinical knowledge. *arXiv preprint arXiv:2212.13138*.
Mujeen Sung, Jinhyuk Lee, Sean Yi, Minji Jeon, Sungdong Kim, and Jaewoo Kang. 2021. Can language models be biomedical knowledge bases? In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 4723–
4734.
Lucy Lu Wang, Jay DeYoung, and Byron Wallace.
2022. Overview of MSLR2022: A shared task on multi-document summarization for literature reviews. In *Proceedings of the Third Workshop on* Scholarly Document Processing, pages 175–180, Gyeongju, Republic of Korea. Association for Computational Linguistics.
Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M
Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. In *International Conference on Learning Representations*.
## Appendix A Model Details
We use the following parameters to prompt GPT3-D3: temperature = 0.7, top-p = 1.0, frequency penalty = 0.0, presence penalty = 0.0. We set our maximum token length to 1000 to avoid artificially introducing any omission errors.
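For reference, a completion call with these parameters can be sketched as follows using the legacy `openai` Python client (v0.x); the paper does not specify its exact client code, and the API key and wrapper name here are placeholders.

```python
import openai  # legacy (v0.x) client interface

openai.api_key = "YOUR_API_KEY"  # placeholder

def gpt3_d3_complete(prompt: str) -> str:
    """Single text-davinci-003 completion with the decoding parameters listed above."""
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0.7,
        top_p=1.0,
        frequency_penalty=0.0,
        presence_penalty=0.0,
        max_tokens=1000,
    )
    return response["choices"][0]["text"].strip()
```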
## B Dataset Statistics
We provide some basic information about the dataset in Table 2. Because we used GPT3-D3, we do not have a clear idea about how the tokenization is done. To be as transparent as possible, however, we still provide the number of tokens when tokenized with SpaCy8. Since we use GPT3-D3, we opt to use a tokenization scheme that focuses mainly on general English (so we did not use a specialized tokenizer for biomedical texts to replicate as similar a tokenization as possible).
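The token counting for these statistics can be sketched as below, assuming a generic (blank) English spaCy pipeline, since the exact spaCy model is not specified in the paper.

```python
import spacy

# Blank English pipeline: tokenizer only, no specialized biomedical rules.
nlp = spacy.blank("en")

def count_tokens(text: str) -> int:
    """Number of spaCy tokens in a document, as reported in Table 2."""
    return len(nlp(text))
```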
## C Prompts
For single-document summarization, we follow prior work to select our prompts. From Goyal et al. (2022) and August et al. (2022), we use the following prompts for the technical summary and the plain language summary:
- Summarize the above.
- My fifth grader asked me what this passage means: """ [TEXT TO SIMPLIFY] """ I
rephrased it for him, in plain language a fifth grader can understand.
To our knowledge, there is no prior work investigating prompt constructions for multi-document summarization generally (or evidence synthesis specifically). Table 1 reproduces prompts we considered for this, but we ultimately used:
- """ [GENERATED INPUT SUMMARIES]
""" What does the above evidence conclude about """ [TITLE] """?
Figure 9 shows an example of the input structure and prompts we provide to GPT3-D3 in the multi-document setting. For the few-shot setting, we evaluate using up to 5 examples in context. Figure 10 shows the input structure for this setting in the second phase.
8https://spacy.io/
Prompts:

- Write a meta-analysis based on the above evidence.
- Summarize the above evidence.
- Synthesize the above.

Table 1: Examples of prompts tried for multi-document summarization.
![6_image_0.png](6_image_0.png)
![7_image_0.png](7_image_0.png)
## D Annotation Details
We calculate the inter-annotator agreement score
(Cohen's kappa), which averaged 0.59 amongst all annotators.
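The paper does not give its exact computation, but averaging pairwise Cohen's kappa over the three annotators can be sketched as follows for a single categorical question (the example labels are illustrative only).

```python
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

def average_pairwise_kappa(annotations):
    """`annotations` is a list of per-annotator label lists (same items, same order)."""
    pairs = combinations(range(len(annotations)), 2)
    scores = [cohen_kappa_score(annotations[i], annotations[j]) for i, j in pairs]
    return sum(scores) / len(scores)

# Example: three annotators labeling five items
print(average_pairwise_kappa([
    ["agree", "agree", "disagree", "agree", "agree"],
    ["agree", "disagree", "disagree", "agree", "agree"],
    ["agree", "agree", "disagree", "disagree", "agree"],
]))
```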
We also transparently reveal the cost of annotating on Upwork. The total cost of hiring 3 workers on Upwork was a little more than $3,700 USD. Because annotations on a more specialized platform cost significantly more, we hired fewer annotators than one would hire on generic crowdworking websites.
Since each Upworker requested different payment amounts (which is the nature of the platform),
we provide the averages per hour for the work. For the single-document case, each annotation took on average 15-20 minutes per sample, and with 100 samples, the upper-bound was 33.3 hours for the entire task per annotator. For the multi-document case, each annotation took on average 10-15 minutes per sample, and with 50 samples, the upperbound was 12.5 hours for the entire task per annotator. Both tasks had three annotators annotating each.
## E Survey Details

For each data point (and for each question in the interface), the annotator first evaluates the standard summary and then evaluates the plain language summary, before completing the survey in its entirety. We reproduce our survey questions and the corresponding answer options. These include the evaluation categories that we care about: for **standard (technical) summaries**, we focus on factuality, linguistic quality, and holistic evaluation; for plain language summaries, we include an additional section on readability because the purpose of these is to simplify technical language such that a layperson might understand the summary. We provide details regarding the structures of the surveys we used and our rationales behind their construction below.
## E.1 Single-Document Summarization
In the single-document summarization case, the inputs comprise study abstracts, **titles**, and we also show to the user **key results**, which were automatically extracted (Marshall et al., 2020). (We do not have reference summaries for these examples.) The goal of expert evaluation was to quantify the extent to which GPT3-D3 accurately summarizes these article inputs. We reiterate that we consider two different types of summarization strategies: standard (technical) summarization and plain-language summarization. We reproduce the questions asked for these summary types below, which vary only slightly in their focus.
Factuality Many of the questions in our taxonomy revolve around factuality, since factual accuracy is extremely important in domain-specific work.
1. The model summary accurately conveys the key results in the input. Given the model summary, we seek to evaluate whether the key results that are automatically extracted are reflected in the output. This is a matter of degree, so we solicit assessments rated on a Likert scale.
2. Highlight sentences in the model summary
(if any) that directly contradict the input (highlight model summary on the right). We collect additional annotations on which portions of the model summary contradict the input. We did not further analyze these highlights here, but do release them as part of the data collected.
3. Highlight any concepts that are new in the model summary that don't appear in the input
| Type of statistic | Single-document | Multi-document |
|-----------------------------------------------------------|-------------------|------------------|
| Average number of tokens per input (all) | 293.06 | 1451.68 |
| Average number of tokens per input (abstract(s) only) | 293.06 | 1353.04 |
| Average number of tokens per input (study title only) | N/A | 10.28 |
| Average number of tokens per input (abstract titles only) | N/A | 88.36 |
Table 2: General dataset statistics for reference. Note that in the single-document case, we only use abstracts in our zero-shot generation, so the remaining rows for anything other than abstracts only are labeled "N/A".
(highlight model summary on the right). Here the idea is to allow the annotator to mark "hallucinated" content in outputs (not supported by the input).
4. How are details about the population described in the summary, relative to the input text? The patient population is a critical component of clinical trials in medicine, and so it is important that summaries accurately describe this element. In particular we ask both whether the population is described (at all), and also the degree to which it is described *accurately*.
5. How are details about the intervention described in the summary, relative to the input text? Another key element of trials is the intervention (e.g., medicine or treatment) being evaluated. Therefore, as for study populations, we collect annotations regarding whether this is captured
(and if it is captured accurately).
6. How are details about the outcome (what was measured) described in the summary, relative to the input text? The outcome measured
(e.g., mortality) is the final foundational component of trials. As in the preceding two cases, we ask annotators to assess whether this is reported upon faithfully.
7. Are there any omission(s) unrelated to the population, intervention, or outcome? We evaluate whether the model omits any information regarding the key trial elements—population, intervention, and outcome—just described. For more details about types of omissions, refer to section F.2.
8. Are there any errors? We also ask whether there are any errors (in general) in the model summary.
## Linguistic Quality
9. The model summary is coherent, fluent, and without grammatical errors. This is intended to capture the readability or fluency of the generated output, independent of its veracity.
Holistic evaluation Finally, we ask for a holistic evaluation of the output.
10. The output is a concise, accurate, and potentially useful summary of the input. Continuing with more holistic questions, this is intended to capture the perceived (potential) utility of generated summaries, according to the domain experts we hired as annotators.
In the case of plain summarization, we ask the annotator to rate whether **10. The simplified text is accurate and would be understandable by a (lay) patient**. This effectively conveys the potential utility of automatically produced lay summaries, because the purpose of these outputs would be to make medical evidence more accessible to (inexpert) patients.
11. If there was anything not elaborated or covered, feel free to leave a comment in the box.
We conclude with an open-ended text box to collect notes or thoughts not otherwise captured.
Readability For **plain language summaries**,
we include a section on readability, given the focus on making evidence more digestible in this case.
12. The simplified model text is less technical and more approachable, thus making it easier to understand. This question measures the degree to which the annotator judges the model to have successfully simplified the text.
13. Technical terms in the input are being substituted with simpler language in the simplified model text. This is a more focussed question regarding simplification to quantify whether the model consistently swaps jargon terms for more accessible language.
## E.2 Multi-Document Summarization
The inputs in the multi-document case comprise collections of articles describing trials, and the targets are syntheses of these (which put together the findings they report). We sampled these meta-reviews from previously conducted evidence syntheses, and so in this case we have target summaries, which we provide to the annotator. We do not consider simplification in the multi-document setting.
Factuality We again focus on factuality of model outputs.
1. Highlight any spans in the generated summary that disagree with the target summary.
We ask for annotators to mark any explicit contradictions featured in the generated output.
2. The generated summary is supported by putting together the given summaries of the individual articles. The core of multi-document summarization is the piecing together of multiple documents into a coherent summary that accurately reflects the inputs in aggregate. This question is intended to measure the degree to which the model does so.
3. The generated summary agrees with the target summary. Because we have reference
(target) summaries in this case, we directly ask whether and to what degree the model generated synopsis seems to agree with this.
4. Rate the degree to which the *generated* summary shows the extent that there is evidence supporting the effectiveness of the intervention(s) of interest (as indicated in the studies). The *generated* **summary suggests...** Here we aim to assess whether the model output implies that the intervention studied in the constituent trials is supported by the findings reported within them.
5. Rate the degree to which the *target* **summary shows the extent that there is evidence**
supporting the effectiveness of the intervention(s) of interest (as indicated in the studies).
The *target* **summary suggests...** Similarly, we ask whether the reference summary implies that the intervention in question is effective.
Holistic evaluation As above we seek to elicit an overall impression of summary accuracy and quality.
6. If there was anything not elaborated or covered, feel free to leave a comment in the box. Much like for single-document summarization, the survey provides an additional box for annotators to give information about the specific data point that was asked.
## F Additional Evaluation
## F.1 Few-shot

We experimented briefly with few-shot prompting (Appendix G), but qualitatively this did not seem to outperform zero-shot summarization, hence our focus on evaluating the latter.
For few-shot generation, we insert in-context training examples after the first summarization phase by concatenating the summaries and the target conclusions of inputs (see Appendix C). We evaluate using up to 5 shots.
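A rough sketch of this in-context construction is given below; the function and variable names are hypothetical, and the exact concatenation format used in the paper (Figure 10) may differ.

```python
def build_few_shot_prompt(examples, test_summaries, test_title, k=5):
    """`examples` is a list of (summaries_text, title, target_conclusion) tuples
    from held-out reviews; up to k of them are prepended as in-context shots."""
    shots = []
    for summaries, title, conclusion in examples[:k]:
        shots.append(
            f'""" {summaries} """ What does the above evidence conclude about '
            f'""" {title} """?\n{conclusion}'
        )
    query = (
        f'""" {test_summaries} """ What does the above evidence conclude about '
        f'""" {test_title} """?'
    )
    return "\n\n".join(shots + [query])
```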
## F.2 Underspecified Elements
Table 3 and Table 4 show the additional options selected when an element (e.g., population) was marked as "underspecified" in the survey for the technical and simplified cases, respectively.
There can be many reasons why an element could be marked underspecified. To remove as much ambiguity as possible, we identify, under each category (*Population*, *Intervention*, *Outcome*), the specific reason.

The questions we ask in the regular and plain summarization cases differ because of the audience addressed in each case. In the regular summarization case, the reader is intended to be a domain expert; in the plain summarization case, the reader is intended to be a layperson, and so we alter the types of questions we ask accordingly.
We find that plain summaries (Table 4) have fewer errors than regular summaries (Table 3), whereas regular summaries have a higher number of specific omissions. However, plain summaries seem to have more omissions outside the scope of what we identify as salient omissions. One hypothesis is that, given more complex language, annotators can more easily identify salient information in the text.
On the other hand, there are nuances in regular summaries that cannot be extrapolated via plain summarization prompts, and instead we must use regular summaries to gather more critical information (in addition to the fact that the questions asked in the plain summarization case tend to be simpler). With regular summaries, however, summarizing on a deeper level may result in more convoluted language. Nonetheless, each type of prompt (regular and plain) seems to be well-suited for the task at hand; what matters is the context in which the prompt is used, and what information is needed for the user.
| Type of Error | Number of Articles |
|---------------------------------------------------------------------------|----------------------|
| **Population**: Omits demographic information | 0 |
| **Population**: Omits sample size | 41 |
| **Population**: Other | 1 |
| **Intervention**: Does not describe comparator intervention | 2 |
| **Intervention**: Omits dosage or other important detail about administration | 1 |
| **Intervention**: Other | 0 |
| **Outcome**: Omits description of specific measurements of high-level outcomes | 4 |
| **Outcome**: Omits one or more of multiple outcomes | 8 |
| **Outcome**: Other | 0 |
Table 3: Types of errors and the number of articles with the corresponding error, for regular summarized articles.
| Type of Error | Number of Articles |
|----------------------------------------------------------------------|----------------------|
| **Population**: Missing completely | 1 |
| **Population**: Missing key details (patients vs patients with depression) | 2 |
| **Population**: Inaccurate | 0 |
| **Population**: Other | 1 |
| **Intervention**: Missing completely | 1 |
| **Intervention**: Missing comparator | 2 |
| **Intervention**: Inaccurate | 0 |
| **Intervention**: Other | 2 |
| **Outcome**: Missing completely | 0 |
| **Outcome**: Missing part outcomes | 3 |
| **Outcome**: Missing key details that would be important for a lay person to know | 1 |
| **Outcome**: Inaccurate | 0 |
| **Outcome**: Other | 0 |
Table 4: Types of errors and the number of articles with the corresponding error, for plain language summaries.
## F.3 Flan-T5
We compared GPT-3 zero-shot results to Flan-T5
(Wei et al., 2021). We find that Flan-T5 produces substantially shorter summaries (2-3 sentences on average). We provide examples of generated summaries in Figure 11. Qualitatively, these seemed far worse than GPT-3 generated outputs, so we did not evaluate these further in this work.
## F.4 Rouge Scores
![10_image_0.png](10_image_0.png)
Table 5: ROUGE scores on **multi-document** biomedical summaries using GPT3-D3

We provide the standard automatic metric of ROUGE (Lin, 2004) to analyze multi-document summarization. We do not have ROUGE scores for single-document summarization since we lack ground truth data. However, the focus of this work is on the capability of GPT3-D3 to faithfully summarize biomedical literature (i.e., to generate accurate summaries); human experts remain the best judges of factuality. This, together with prior work by Goyal et al. (2022), makes ROUGE scores (and other automatic metrics) rather unreliable for judging the capabilities of these large language models on summarization.
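For completeness, scores like those in Table 5 can be computed with the `rouge-score` package as sketched below; the paper does not state which ROUGE implementation it used, so this is an assumption.

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

def rouge_f1(target: str, generated: str) -> dict:
    """Return ROUGE-1/2/L F1 between a target (reference) and a generated summary."""
    scores = scorer.score(target, generated)
    return {name: s.fmeasure for name, s in scores.items()}
```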
## F.5 Similarity
We provide additional examples of sentences and summaries with high similarity to the input abstract.
## G Examples Of Generated Summaries
We include examples of the generated summaries we annotated, both standard and plain language summaries, in the single- and multi-document cases (Table 14, 13).
We also provide examples of few-shot generations along with the zero-shot and target summaries for comparison (Figure 15). Note that the few-shot examples reflect the same evidence strength and recommendation as the zero-shot examples; thus we do not evaluate them at this point.
| Sentence from Abstracts | Sentence from Generated Summary | BLEU |
|-----------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------|--------|
| These findings suggest that access to care and differences in treatment may be responsible for racial disparities in colorectal cancer. | These findings suggest that access to care and differences in treatment may be responsible for racial disparities in colorectal cancer. | 100 |
| After corrections for multiple comparisons, only PFC effects on praise and emotion strategies at post-treatment, and praise and withdrawn/depressed behavior at follow-up, maintained. | After corrections for multiple comparisons, only PFC effects on praise and emotion strategies at post-treatment, and praise and withdrawn/depressed behavior at follow-up, were maintained. | 91.93 |
| AIM To assess the safety and efficacy of hybrid closed-loop (HCL) insulin delivery 24/7 versus only evening and night (E/N), and on extended 24/7 use, in free-living children with type 1 diabetes. | This study aimed to assess the safety and efficacy of hybrid closed-loop (HCL) insulin delivery 24/7 versus only evening and night (E/N), and on extended 24/7 use, in freeliving children with type 1 diabetes. | 91.20 |
| We find that protocol compliance, as measured by correlations between e-cigarette use measures and cotinine levels, was only achieved in the first week of the study and declined thereafter. | The findings showed that protocol compliance, as measured by correlations between e-cigarette use measures and cotinine levels, was only achieved in the first week of the study and declined thereafter. | 90.46 |
| CONCLUSIONS Our findings suggest that the SERT-enriched functional network is dynamically different in ASD during processing of socially relevant stimuli. | The findings suggest that the SERT-enriched functional network is dynamically different in ASD during processing of socially relevant stimuli. | 89.96 |
Table 6: Examples of highly extractive sentence pairs found from generated summaries for single-document summarization.
| Sentence from Abstracts | Sentence from Generated Summary | BLEU |
|--------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------|
| CONCLUSIONS: Drug-induced remission of JIA-U did not persist when adalimumab was withdrawn after 1-2 years of treatment. | However, remission of JIA-U did not persist when adalimumab was withdrawn after 1-2 years of treatment. | 84.80 |
| CONCLUSION: This study suggests that increasing the dose of inhaled steroids at the onset of an exacerbation of asthma is ineffective and should not be included in asthma self management plans. | The evidence suggests that increasing the dose of inhaled corticosteroids at the onset of an exacerbation of asthma is ineffective and should not be included in asthma self management plans. | 79.19 |
| RESULTS: Following maternal betamethasone administration (day 2), fetal heart rate variation was reduced by 19% and fetal body and breathing movements by 49% and 85%, respectively. | Dexamethasone had a greater beneficial effect, reducing fetal heart rate variation by 19% and fetal body and breathing movements by 49% and 85%, respectively. | 56.71 |
| OBJECTIVE: This study aimed to investigate the effect of endometrial injury using Pipelle catheter in the follicular phase (cycle day 5, 6, or 7) of the stimulation cycle on pregnancy rates in patients undergoing intrauterine insemination. | The evidence suggests that endometrial injury using a Pipelle catheter in the follicular phase (cycle day 5, 6, or 7) of the stimulation cycle may improve pregnancy rates in women undergoing intrauterine insemination (IUI). | 56.22 |
| CONCLUSION: Based on these results, it is suggested that VAC has advantages when compared to the Bogota bag as a temporary closure method in the management of abdominal compartment syndrome. | Furthermore, the VAC system has advantages compared to the Bogota bag as a temporary closure method in the management of abdominal compartment syndrome. | 54.32 |
Table 7: Examples of highly extractive sentence pairs found from generated summaries for multi-document summarization.
## H Additional Figures
| Evaluation Category | Question or Statement | Answer Choices | | |
|-------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------|--------------------|-------|
| Factuality | The model summary accurately conveys the | Strongly disagree; disagree; agree; strongly | | |
| key results in the input | agree | | | |
| Factuality | Highlight sentences in the model summary (if | Multiple tokens highlighted | | |
| any) that directly contradict the input (highlight model summary on the right) | | | | |
| Factuality | Highlight any concepts that are new in the model summary that don't appear in the input (highlight model summary on the right) | Multiple tokens highlighted | | |
| Factuality | How are details about the population described in the summary, relative to the input text? | The population is not mentioned (missing) in the model summary; The population is mentioned, but described completely inaccurately; The population is mentioned, but described somewhat inaccurately; The population is mentioned, and described accurately; The population is underspecified; Not applicable (N/A) | | |
| Factuality | How are details about the intervention described in the summary, relative to the input text? | The intervention is not mentioned (missing) in the model summary; The intervention is mentioned, but described completely inaccurately; The intervention is mentioned, but described somewhat inaccurately; The intervention is mentioned, and described accurately; The intervention is underspecified; Not applicable (N/A) | | |
| Factuality | How are details about the outcome (what was measured) described in the summary, relative to the input text? | The outcome is not mentioned (missing) in the model summary; The outcome is mentioned, but described completely inaccurately; The outcome is mentioned, but described somewhat inaccurately; The outcome is mentioned, and described accurately; The outcome is underspecified; Not applicable (N/A) | | |
| Factuality | Are there any omission(s) unrelated to the | No omission; | Minor omission(s); | Major |
| population, intervention, or outcome? | omission(s) | | | |
| Factuality | Are there any errors? | No errors; Minor error; Major error | | |
| Linguistic Quality | The model summary is coherent, fluent, and | Strongly disagree; disagree; agree; strongly | | |
| without grammatical errors | agree | | | |
| Holistic evaluation | The output is a concise, accurate, and potentially useful summary of the input | Strongly disagree; disagree; agree; strongly agree | | |
| Holistic evaluation | If there was anything not elaborated or covered, feel free to leave a comment in the box | Free text | | |
| Table 8: Questions used in our survey for annotators to evaluate standard summaries | | | | |
| Evaluation Category | Question or Statement | Answer Choices | | |
|---------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------|--------------------|-------|
| Factuality | The simplified model text accurately conveys | Strongly disagree; disagree; agree; strongly | | |
| the key results in the input | agree | | | |
| Factuality | Highlight sentences in the input (if any) that directly contradict the simplified model text (highlight input on the right) | Multiple tokens highlighted | | |
| Factuality | Highlight any concepts that are new in the simplified model text that don't appear in the input (highlight model summary on the right) | Multiple tokens highlighted | | |
| Factuality | How are details about the population described in the simplified model text, relative to the input text? | The population is not mentioned (missing) in the simplified model text; The population is mentioned, but described completely inaccurately; The population is mentioned, but described somewhat inaccurately; The population is mentioned, and described accurately; The population is underspecified; Not applicable (N/A) | | |
| Factuality | How are details about the intervention described in the simplified model text, relative to the input text? | The intervention is not mentioned (missing) in the simplified model text; The intervention is mentioned, but described completely inaccurately; The intervention is mentioned, but described somewhat inaccurately; The intervention is mentioned, and described accurately; The intervention is underspecified; Not applicable (N/A) | | |
| Factuality | How are details about the outcome (what was measured) described in the simplified model text, relative to the input text? | The outcome is not mentioned (missing) in the simplified model text; The outcome is mentioned, but described completely inaccurately; The outcome is mentioned, but described somewhat inaccurately; The outcome is mentioned, and described accurately; The outcome is underspecified; Not applicable (N/A) | | |
| Factuality | Are there any omission(s) unrelated to the | No omission; | Minor omission(s); | Major |
| population, intervention, or outcome? | omission(s) | | | |
| Factuality | Are there any errors? | No errors; Minor error; Major error | | |
| Linguistic Quality | The simplified text is coherent, fluent, and | Strongly disagree; disagree; agree; strongly | | |
| without grammatical errors | agree | | | |
| Holistic evaluation | The simplified text is accurate and would be | Strongly disagree; disagree; agree; strongly | | |
| understandable by a (lay) patient | agree | | | |
| Holistic evaluation | If there was anything not elaborated or covered, feel free to leave a comment in the box | Free text | | |
Table 9: Questions used in our survey for annotators to evaluate simplified model summaries
| Evaluation Category | Question or Statement | Answer Choices |
|---|---|---|
| Readability | The simplified model text is less technical and more approachable, thus making it easier to understand. | Strongly disagree; disagree; agree; strongly agree |
| Readability | Technical terms in the input are being substituted with simpler language in the simplified model text. | None at all; Only a few; Mostly all; All |

Table 10: Additional questions used in our survey for annotators to evaluate simplified model summaries
Example 1 (population-based mammography screening):
- This study examined the ability of a health information technology system to facilitate population-based breast cancer screening. The study was a two-year follow-up after a one-year cluster randomized trial of 32,688 women aged 42-69 receiving care within a 12-practice primary care network. The system identified women overdue for mammograms, connected them to primary care providers, generated outreach letters, monitored mammography scheduling and completion, and provided practice delegates with a list of overdue patients for reminder phone calls. Results showed that adjusted completion rates for mammography screening were significantly higher for intervention patients in the prevalent cohort after 3 years (51.7% vs. 45.8%) and for the incident cohort after 2 years (53.8% vs. 48.7%). These results suggest that population-based informatics systems can enable sustained increases in mammography screening rates.
- The use of population-based informatics systems increased mammography screening rates, with intervention patients having significantly higher completion rates than control patients after 3 years.
- Population-based informatics systems can enable sustained increases in mammography screening rates beyond rates seen with office-based visit reminders.

Example 2 (CBT vs. PST for social anxiety disorder):
- The study compared the effects of cognitive-behavioral therapy (CBT) and psychoeducational-supportive therapy (PST) on social anxiety disorder (SAD) before and during the COVID-19 pandemic. Results showed that the CBT group had significantly better outcomes than the PST group at post-intervention, and this effect was maintained during the lockdown period. In addition, the CBT group had significantly less fear of COVID-19 than the PST group. The study also found that social anxiety severity was positively correlated with fear of COVID-19, and that individuals with comorbidities had significantly more fear. The hierarchical multiple regression revealed that SPIN post-intervention, COVID-19 fear, and duration of SAD predicted social anxiety severity during the post-lockdown period.
- Cognitive-Behavioral Therapy (CBT) for Social Anxiety Disorder (SAD) was shown to be more effective at reducing social anxiety than Psychoeducational-Supportive Therapy (PST), and was associated with significantly less fear of COVID-19.
- The result of CBT for SAD was maintained through the lockdown period and was associated with significantly less COVID-19-related fear.

Example 3 (aerobic exercise with caloric restriction in older adults with obesity):
- In this study, 160 older adults with obesity were assigned to three groups: aerobic exercise only, aerobic exercise with moderate caloric restriction, or aerobic exercise with more intensive caloric restriction. Results showed that the greatest improvements in aortic stiffness were found in the aerobic exercise with moderate caloric restriction group, with increased distensibility and decreased pulse wave velocity. Increases in aortic distensibility were correlated with improvements in body weight and body fat distribution, but these associations were not statistically significant after adjusting for multiple comparisons. These results suggest that combining aerobic exercise with moderate caloric restriction may be more effective than exercise alone in improving aortic stiffness in older adults with obesity.
- A combination of aerobic exercise and moderate caloric restriction led to greater improvements in proximal aortic stiffness compared to aerobic exercise alone in older adults with obesity.
- Combining aerobic exercise training with moderate- to high-calorie restriction has greater effects on proximal aortic stiffness than exercise alone in older adults with obesity.

Figure 11: Sample generated summaries from Flan-T5 and GPT-3 (column headings in the figure: Flan-T5 Summary, GPT-3 Summary (One Sentence), GPT-3 Summary).
| Evaluation Category | Question or Statement | Answer Choices |
|-----------------------|-------------------------------------------------------------------------------------------------------|----------------------------------------------------|
| Factuality | Highlight any spans in the generated summary that disagree with the target summary | Multiple tokens highlighted |
| Factuality | The generated summary is supported by putting together the given summaries of the individual articles | Strongly disagree; disagree; agree; strongly agree |
| Factuality | The generated summary agrees with the target summary | Strongly disagree; disagree; agree; strongly agree |
| Factuality | Rate the degree to which the *generated* summary shows the extent that there is evidence supporting the effectiveness of the intervention(s) of interest (as indicated in the studies). The *generated* summary suggests. . . | There is not enough evidence to draw any meaningful conclusions; The intervention has a marginal or insignificant comparative benefits; The intervention may have a marginal beneficial effect; The intervention is substantively helpful |
| Factuality | Rate the degree to which the *target* summary shows the extent that there is evidence supporting the effectiveness of the intervention(s) of interest (as indicated in the studies). The *target* summary suggests. . . | There is not enough evidence to draw any meaningful conclusions; The intervention has a marginal or insignificant comparative benefits; The intervention may have a marginal beneficial effect; The intervention is substantively helpful |
| Holistic Evaluation | If there was anything not elaborated or covered, feel free to leave a comment in the box | Free text |
Table 11: Questions used in our survey for annotators to evaluate multi-document model summaries
![16_image_0.png](16_image_0.png)
![17_image_1.png](17_image_1.png)
underwent coronary revascularization) and 8127 participants without CAD. Participants were randomized into two groups (systolic BP target of 140 mm Hg vs. 120 mm Hg). The primary outcome was a composite of cardiovascular events. After a median follow-up of 3.9 years, the hazard ratios (HRs) for the primary outcome were 0.65 (95% confidence interval
(CI) 0.53-0.79) and 1.05 (95% CI 0.76-1.46) among those in the non-CAD
and CAD subgroups, respectively (P value for interaction 0.02). Intensive BP treatment was a protective factor for all-cause death (HR 0.60, 95% CI
0.37-0.96) in the CAD subgroup, compared with standard BP treatment.
The HRs (95% CI) for stroke were 3.57 (1.17-10.85) and 1.03 (0.29-3.62)
among those in the coronary revascularization and non-revascularization subgroups, respectively (P value for interaction 0.13). For safety events, intensive BP treatment increased the risk of hypotension (HR 2.00, 95% CI
1.06-3.79) and electrolyte abnormalities (HR 2.38, 95% CI 1.25-4.56) in the CAD subgroup, while the risk of serious adverse events did not increase
(HR 1.03, 95% CI 0.88-1.20). These results suggest that positive benefits from intensive BP treatment might be attenuated in patients with CAD who are under better secondary prevention. The risk of stroke might increase at the systolic BP target of 120 mm Hg in case of coronary revascularization, although the confidence interval was wide.
![17_image_0.png](17_image_0.png)
Figure 13: An example input and output (technical and simplified summaries) for the single-document summarization task.
![17_image_2.png](17_image_2.png)
![17_image_3.png](17_image_3.png)
Figure 14: An example input, output, and target for the multi-document summarization task.
![18_image_0.png](18_image_0.png)
Generated Technical Summary (0-shot) Generated Technical Summary (5-shot) **Target**
![18_image_1.png](18_image_1.png)
![18_image_2.png](18_image_2.png)
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 7 (after conclusion)
✓ A2. Did you discuss any potential risks of your work?
RQ4, section 7
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**

Section 2, 3
✓ B1. Did you cite the creators of artifacts you used?
Section 2, 3
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 2, 3

B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix
## C ✗ **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? No response.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
No response.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
No response.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
No response.
D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Appendix, and will be released with the data
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Appendix and Section 2, 3
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Section 2, 3
✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No. Annotation work like this does not require IRB approval, and I have discussed this with our colleagues here before.
✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
We only hired annotators based on their expertise, demographic/geographic characteristics were not part of this. |
li-etal-2023-prefix | Prefix Propagation: Parameter-Efficient Tuning for Long Sequences | https://aclanthology.org/2023.acl-short.120 | Parameter-efficient tuning aims to mitigate the large memory requirements of adapting pretrained language models for downstream tasks. For example, one popular method, prefix-tuning, prepends trainable tokens to sequences while freezing the rest of the model{'}s parameters. Although such models attain comparable performance with fine-tuning when applied to sequences with short to moderate lengths, we show their inferior performance when modelling long sequences. To bridge this gap, we propose prefix-propagation, a simple but effective approach that conditions prefixes on previous hidden states. We empirically demonstrate that prefix-propagation outperforms prefix-tuning across long-document tasks, while using 50{\%} fewer parameters. To further investigate the proposed architecture, we also show its advantage in calibration, and perform additional study on its relationship with kernel attention. To the best of our knowledge, this work is the first to focus on parameter-efficient learning for long-sequence language tasks. | # Prefix-Propagation: Parameter-Efficient Tuning For Long Sequences
**Jonathan Li**2*, **Will Aitken**1,2, **Rohan Bhambhoria**1,2, **Xiaodan Zhu**1,2†
1Department of Electrical and Computer Engineering, Queen's University 2Ingenuity Labs Research Institute, Queen's University
{jxl, will.aitken, r.bhambhoria, xiaodan.zhu}@queensu.ca
## Abstract
Parameter-efficient tuning aims to mitigate the large memory requirements of adapting pretrained language models for downstream tasks. For example, one popular method, prefix-tuning (Li and Liang, 2021; Liu et al.,
2022), prepends trainable tokens to sequences while freezing the rest of the model's parameters. Although such models attain comparable performance with fine-tuning when applied to sequences with short to moderate lengths, we show their inferior performance when modelling long sequences. To bridge this gap, we propose *prefix-propagation*, a simple but effective approach that conditions prefixes on previous hidden states. We empirically demonstrate that prefix-propagation outperforms prefix-tuning across long-document tasks, while using ∼50% fewer parameters.
To further investigate the proposed architecture, we also show its advantage in calibration, and perform additional study on its relationship with kernel attention. To the best of our knowledge, this work is the first to focus on parameter-efficient learning for long-sequence language tasks.1
## 1 Introduction
The Transformer architecture (Vaswani et al., 2017)
has changed the landscape of recent natural language processing approaches by enabling the pretraining of state-of-the-art large language models
(LLM) (Devlin et al., 2019; He et al., 2020; Brown et al., 2020). However, fine-tuning and storing full copies of LLMs can consume prohibitively large quantities of resources. Parameter-efficient finetuning (PEFT) methods such as prefix-tuning (Li and Liang, 2021; He et al., 2021a; Liu et al., 2022)
address these concerns by reducing the number
*Work done during a student internship at Ingenuity Labs. †Corresponding author.
1Our code is publicly available at https://github.
com/MonliH/prefix-propagation
| Method | 20-newsgroups | Hyperpartisan |
|---------------|-----------------|-----------------|
| Prefix-Tuning | 69.7 | 75.3 |
| Fine-Tuning | 72.3 | 81.5 |
Table 1: Mean F1-Scores of prefix-tuning and finetuning Longformer for common long-document classification tasks.
of trainable parameters. Prefix-tuning can tune 0.01% of parameters and still match the performance of regular fine-tuning (updating all model parameters).
PEFT has been investigated for tasks with inputs consisting of sentences, sentence-pair, or sequences that fit within the typical LLM maximum tokens. However, the performance of PEFT for tasks with longer textual sequences has been overlooked. In this work, we investigate this oversight and provide evidence suggesting that the gap between PEFT and regular fine-tuning is substantial when modelling long sequences. As shown in Table 1, prefix-tuning underperforms fine-tuning on long sequence classification tasks, Hyperpartisan
(Kiesel et al., 2019) and 20-newsgroups (Lang, 1995), when used with the popular long-document model Longformer (Beltagy et al., 2020).
In this paper, we propose a simple and effective method, *prefix-propagation*, which consistently improves the performance of PEFT for long-sequence models. Unlike prefix-tuning, prefix-propagation propagates the hidden states corresponding to prefixes through the attention computation. This allows the prefixes' hidden states to change dynamically as the input propagates through each layer.
To further understand prefix-propagation, we investigate the reliability of the model's predictions by performing analyses on calibration. Lastly, we conduct a study of prefix-based methods in terms of kernel attention to strengthen their theoretical grounding.
In summary, our contributions are as follows:
![1_image_0.png](1_image_0.png)
![1_image_1.png](1_image_1.png)
- We study PEFT for long documents and show that prefix-tuning is significantly inferior to fine-tuning in this scenario. To the best of our knowledge, this is the first work to focus on PEFT for long documents.
- We introduce prefix-propagation, which consistently improves performance over prefix-tuning on different long-document datasets, while using 50% fewer parameters.
- We study the reliability of the predictions by performing analyses on calibration and show that models tuned with prefix-propagation are better calibrated.
- We elucidate the relationship between prefixpropagation and kernel attention and perform an ablation study that utilizes this insight.
## 2 Related Works
Long Sequence Models Numerous methods have been proposed to reduce the complexity of attention from $O(n^2)$ to $O(n)$, such as kernel approximations (Choromanski et al., 2020; Katharopoulos et al., 2020; Peng et al., 2021) and fixed (Child et al., 2019; Beltagy et al., 2020; Zaheer et al.,
2020) or learned (Kitaev et al., 2020) sparse attention patterns. For a broader summary, please refer to Tay et al. (2022). In this work, we use Longformer (Beltagy et al., 2020). To linearize attention complexity, Longformer employs sliding window attention while globally attending to relatively few special tokens.
Parameter-Efficient Tuning Inspired by the success of manual prompting (Brown et al., 2020),
prefix-tuning (Li and Liang, 2021; Liu et al., 2022)
prepends trainable "soft" prompts to an input sequence. Although further PEFT methods have since been introduced (He et al., 2021a; Hu et al.,
2021; Ben Zaken et al., 2022), we focus on adapting prefix-tuning. We note that our adaptation does not violate orthogonality and thus prefix-propagation can still be compounded with other PEFT methods as proposed in the UnifiedPET
framework (He et al., 2021a), likely yielding similar performance gains. We leave the empirical validation of this hypothesis for future work.
Our work also adheres to the key motivation of the recent PEFT method, inducer-tuning (Chen et al., 2022), which is that optimal prefixes should be close to queries within their latent space. We derive queries, keys, and values from the same prefix token, limiting the distance that separates them.
## 3 Prefix Propagation

## 3.1 Methodology
In this section, we introduce prefix-propagation, which, unlike prefix-tuning, propagates the hidden states corresponding to prefixes through the attention computation. This allows the prefixes' hidden states to change dynamically as the input propagates through each layer. Prefix-propagation and its predecessor, prefix-tuning, are depicted in Figures 1a and 1b, respectively. For the first layer of the transformer, we prepend $j$ trainable prefixes (i.e.,
| Method | % Tuned | WikiHop Acc | ArXiv P | ArXiv R | ArXiv F1 | NG. P | NG. R | NG. F1 | HY. P | HY. R | HY. F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| RoBERTa PT | 0.1 | 11.7 | 79.4 | 79.6 | 79.8 | 67.9 | 67.0 | 68.2 | 70.4 | 59.2 | 64.1 |
| Prefix-Tuning | 0.1 | 38.9 | 81.5 | 81.7 | 82.7 | 68.9 | 68.4 | 69.7 | 78.3 | 73.8 | 75.3 |
| Prefix-Propagation | 0.05 | 42.2 | 83.1 | 83.1 | 83.3 | 70.1 | 69.7 | 71.0 | 86.4 | 77.7 | 81.8 |
| Fine-Tuning | 100 | 74.0 | 83.1 | 82.9 | 83.3 | 71.8 | 71.2 | 72.3 | 87.8 | 76.2 | 81.5 |

Table 2: Main results: accuracy on WikiHop and precision (P), recall (R), and F1 on ArXiv, 20-newsgroups (NG.), and Hyperpartisan (HY.), along with the percentage of tuned parameters.
embeddings) to the input sequence (blue blocks in top left of Figure 1a). Then, before every subsequent layer, we sum new trainable matrices onto the first j embeddings corresponding to the prefixes
(denoted by the sum operators in Figure 1a). By propagating instead of overwriting, we halve the number of parameters trained while simultaneously improving performance on long-document tasks.
We now formalize prefix-propagation. Multi-headed attention processes query, key, and value matrices derived from a sequence $C \in \mathbb{R}^{m \times d}$ with length $m$ and embeddings of size $d$. Our method modifies traditional attention by concatenating a prefix $P \in \mathbb{R}^{j \times d}$ of length $j$ to the sequence:

$$H_{l,i}=\mathrm{Attn}\big(D^{(l)}W_{q}^{(l,i)},\;D^{(l)}W_{k}^{(l,i)},\;D^{(l)}W_{v}^{(l,i)}\big)\tag{1}$$

$$D^{(l)}=\begin{cases}\mathrm{cat}(P^{(l)},C)&\text{if }l=1\\ \mathrm{cat}(P^{(l)}+C[:j,:],\,C[j:,:])&\text{if }l>1\end{cases}$$

where inputs $C$ are projected through pre-trained weight matrices $W_{q}^{(l,i)},W_{k}^{(l,i)},W_{v}^{(l,i)}\in\mathbb{R}^{d\times d_{h}}$ per layer $l$ and head $i$, yielding the output of the attention head, $H\in\mathbb{R}^{(j+m)\times d_{h}}$. The prefixes are concatenated for the first layer ($l=1$) and summed to their corresponding hidden states for the remaining layers ($l>1$). We do not continually concatenate new prefixes to the sequence to avoid increasing the sequence length after each layer.
For both prefix-tuning and prefix-propagation, prefixes (keys and values) are globally attended to by all queries. Unlike prefix-tuning, however, our method concatenates additional hidden states before the hidden states $C$ are projected by $W_{k}^{(i)}$ and $W_{v}^{(i)}$. By doing so, prefix-propagation modifies query matrices, allowing prefixes to attend to other hidden states globally, thereby increasing representation capability. This approach is somewhat analogous to the external global tokens inserted in the BigBird-ETC model (Zaheer et al., 2020). By attending to other tokens, the prefixes can act as special storage tokens, which is particularly useful in the restricted regime of long-document modelling where relatively few tokens have global context. Conversely, prefix-tuning only concatenates trained key and value matrices, $P_{k},P_{v}\in\mathbb{R}^{j\times d_{h}}$, statically to the sequence:

$$H_{l,i}=\mathrm{Attn}\big(CW_{q}^{(l,i)},\;\mathrm{cat}(P_{k}^{(l,i)},CW_{k}^{(l,i)}),\;\mathrm{cat}(P_{v}^{(l,i)},CW_{v}^{(l,i)})\big)\tag{2}$$

Since our method has a single prefix matrix $P$ instead of separate $P_{k}$ and $P_{v}$ matrices, we reduce the number of trained parameters by 50%.
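To make the mechanics above concrete, the following is a minimal PyTorch sketch of a single, unbatched prefix-propagation attention head following Equation 1. The function name, toy dimensions, and single-head setup are our own simplifications, not the authors' released implementation.

```python
import torch

def prefix_propagation_head(C, P_l, W_q, W_k, W_v, first_layer, j):
    """One attention head with prefix-propagation (Eq. 1).
    C is (m, d) at the first layer; afterwards it is the previous layer's
    (j + m, d) output, whose first j rows are the propagated prefix states."""
    if first_layer:
        D = torch.cat([P_l, C], dim=0)              # concatenate trainable prefixes once
    else:
        D = torch.cat([P_l + C[:j], C[j:]], dim=0)  # sum new prefix parameters onto old prefix states
    Q, K, V = D @ W_q, D @ W_k, D @ W_v
    scores = torch.softmax(Q @ K.T / K.shape[-1] ** 0.5, dim=-1)
    return scores @ V                               # shape (j + m, d_h)

# Toy usage with hypothetical sizes.
d, d_h, m, j = 16, 8, 10, 4
C, P = torch.randn(m, d), torch.randn(j, d, requires_grad=True)
W_q, W_k, W_v = (torch.randn(d, d_h) for _ in range(3))
H = prefix_propagation_head(C, P, W_q, W_k, W_v, first_layer=True, j=j)
```

Only the per-layer prefix matrix would be trained here; the pre-trained projections stay frozen, and using a single prefix matrix per layer (rather than separate key and value prefixes) is where the roughly 50% parameter savings over prefix-tuning comes from.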
## 3.2 Calibration
We further study the proposed prefix-propagation method to understand the reliability of the model's predictions through calibration. Well-calibrated models output confidence scores that closely match the models' accuracy. Either over-confident or under-confident models are undesirable. Calibration has been widely overlooked in PEFT methods. To quantify calibration in our work, we use expected calibration error (ECE), which bins predictions based on model confidence and compares them to accuracy (Pakdaman Naeini et al., 2015; Guo et al., 2017).
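For reference, a minimal ECE computation using the standard equal-width binning definition could look as follows; this is our own sketch, not the authors' evaluation code.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Weighted average of |accuracy - confidence| over equal-width confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(correct[in_bin].mean() - confidences[in_bin].mean())
    return ece

# Example with three predictions: confidence scores and 0/1 correctness flags.
print(expected_calibration_error([0.9, 0.6, 0.8], [1, 0, 1]))
```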
## 3.3 Kernel Decomposition
Traditional attention is analogous to applying a kernel smoother over inputs (Tsai et al., 2019).
Motivated by this insight, we reformulate prefix-propagation as a sum of kernelized attention modules. Separating the modules introduces flexibility in two ways: (1) their individual kernel forms can be mixed and matched, and (2) a hyperparameter scale factor $\alpha$ can be applied to the prefix component to increase or decrease its weighting. Equation 3 defines the kernel decomposition for prefix-propagation2:

$$H=\mathrm{Kern}(\mathrm{cat}(P,C)W_{q},\,CW_{k},\,CW_{v})+\alpha\,\mathrm{Kern}(\mathrm{cat}(P,C)W_{q},\,PW_{k},\,PW_{v})\tag{3}$$

where Kern refers to kernel attention as formulated in Tsai et al. (2019). The first term results from attending to the original sequence, $C$, and the second comes from attending to the prefixes, $P$. We provide the derivation of Equation 3 and the full definition of kernel attention in Appendix A.
Our main motivation for presenting prefix decomposition is to establish foundational knowledge and guide future research. Ergo, we restrict experiments in this initial presentation to using just the default exponential kernel (Appendix A).
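The sketch below illustrates Equation 3 with the default exponential kernel. It is our own illustrative, unbatched single-head code rather than the released implementation, and the variable names are assumptions.

```python
import torch

def kern(Q, K, V, d_k):
    # Kernelized attention (Eq. 8) with the exponential kernel (Eq. 9);
    # numerically, this is equivalent to softmax attention over K.
    scores = torch.exp(Q @ K.T / d_k ** 0.5)
    return (scores / scores.sum(dim=-1, keepdim=True)) @ V

def decomposed_prefix_attention(P, C, W_q, W_k, W_v, alpha, d_k):
    Q = torch.cat([P, C], dim=0) @ W_q            # queries from both prefixes and the sequence
    seq_term = kern(Q, C @ W_k, C @ W_v, d_k)     # attend to the original sequence
    prefix_term = kern(Q, P @ W_k, P @ W_v, d_k)  # attend to the prefixes
    return seq_term + alpha * prefix_term         # Eq. 3: alpha scales the prefix kernel
```

Swapping the kernel used inside either term (e.g., a linear kernel for the sequence term) is the kind of mix-and-match flexibility discussed above.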
## 4 Experiments And Results
Datasets We evaluate our approach on three long-document classification tasks: ArXiv (He et al.,
2019), an 11-class classification task composed of academic research papers, the 20-newsgroups
(Lang, 1995) classification task consisting of mailing lists that fall into one of 20 classes, and the Hyperpartisan dataset, a binary classification task for extremist news classification (Kiesel et al., 2019).
We also run experiments on WikiHop (Welbl et al.,
2018), a long-document reading comprehension task requiring multi-step reasoning.
Due to compute limitations inherent to working with long documents, with the exception of Hyperpartisan, we only report a single run for each task.
This mimics the original Longformer reporting scheme (Beltagy et al., 2020). For Hyperpartisan, the smallest of the datasets, we report mean metrics averaged over five seeds.
Baselines As a baseline, we fine-tune Longformer-base (approx. 149M parameters) as closely as possible to Beltagy et al.
(2020). For PEFT, we evaluate prefix-tuning on Longformer-base and RoBERTa-base
(approx. 125M parameters) (Liu et al., 2019).
2We omit layer, l and head, i for brevity.
| Method | ArXiv | HY. | NG. |
|---|---|---|---|
| RoBERTa PT | 0.056 | 0.228 | 0.123 |
| Prefix-Tuning | 0.075 | 0.153 | 0.117 |
| Prefix-Propagation | 0.042 | 0.093 | 0.122 |
| Fine-Tuning | 0.099 | 0.138 | 0.212 |

Table 3: Expected calibration error (ECE; lower is better) on ArXiv, Hyperpartisan (HY.), and 20-newsgroups (NG.).
More details on dataset sizes, pre-processing, and hyperparameters are in Appendix B.
## 4.1 Results And Discussion
Across all tasks, our results in Table 2 verify that prefix-tuning is inferior to fine-tuning on long sequences. Conversely, prefix-propagation consistently outperforms prefix-tuning and is comparable to fine-tuning on most tasks. Prefix-propagation also performs competitively on Hyperpartisan, a relatively small dataset with only 625 samples. This is in contrast to prefix-tuning, which is known to underperform in low-data settings (Gu et al., 2022). Because we ran multiple seeds on Hyperpartisan, we also found that prefix-propagation's better performance relative to prefix-tuning is statistically significant (p < 0.05, using a single-tailed t-test). We do not have multiple samples to run these tests for larger datasets, but we emphasize that Hyperpartisan likely has the most variance and yet the difference is still statistically significant. We suspect that prefix-propagation's performance exceeds prefix-tuning's because propagated prefixes can transmit global context across multiple layers, possibly modelling more expressive abstractions.
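A test of this kind can be run with SciPy as sketched below; the per-seed F1 values are illustrative placeholders rather than the paper's actual scores, and we assume an unpaired two-sample test.

```python
from scipy import stats

# Hypothetical F1 scores over five seeds on Hyperpartisan (placeholders, not the reported numbers).
prefix_propagation_f1 = [81.8, 80.9, 82.4, 81.1, 82.6]
prefix_tuning_f1 = [75.3, 74.1, 76.0, 75.8, 74.7]

t_stat, p_value = stats.ttest_ind(prefix_propagation_f1, prefix_tuning_f1,
                                  alternative="greater")   # one-tailed test
print(f"t = {t_stat:.2f}, one-tailed p = {p_value:.4f}")   # significant if p < 0.05
```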
We note one exception where prefix-based methods still leave room for improvement: multiple-choice question answering on WikiHop. We hypothesize that prefix methods have insufficient capacity to properly model complex long-document multi-step question answering.
We also observe that prefix-based methods, and especially prefix-propagation, achieve better calibration than fine-tuning, as shown in Table 3. Unlike prefix-tuning, however, prefix-propagation effectively balances calibration with accuracy metrics. The calibration of fine-tuning deteriorates as training progresses (Figure 4 in Appendix C), and we speculate that this may be due to catastrophic forgetting (Jagielski et al., 2022).

![4_image_0.png](4_image_0.png)
As an initial test for our ongoing prefix-propagation kernel study, we show results on Hyperpartisan in Figure 2. The kernelized version of prefix-propagation achieves the best single-run performance, but has higher variance than fine-tuning and prefix-propagation, which necessitates further research.
## 5 Conclusion
Our research focuses on parameter-efficient tuning for long-document tasks. We introduce prefix-propagation, which consistently improves performance over prefix-tuning on long-document datasets, while using 50% fewer parameters. We study the reliability of the predictions by performing analyses on calibration and show that models tuned with prefix-propagation are better calibrated. We lastly explicate prefix-propagation from a kernel perspective, uncovering insights for future PEFT research.
## Limitations

Scope
This short paper serves as an initial step toward PEFT for long-document models. As such, our evaluated scope of models, tasks, datasets, and kernel variations is limited. We acknowledge the need to experiment across broader settings and hope our work provides a foundation for others to build on.
Future experiments should analyze the validity and efficacy of using prefix-propagation with other long-sequence models to determine whether the prefix modality is suitable for non-sparse attention approximations. For example, would the projection of prefix vectors using a random feature map as in Choromanski et al. (2020) result in an excessive loss of information for these critical tokens?
Regarding tasks and datasets, the performance degradation in prefix methods for WikiHop deserves significant attention. Verifying whether this extends to other reading comprehension and question-answering tasks will assist in guiding future research efforts. We restricted our research to the encoder-only version of Longformer, but using the encoder-decoder version, LED would enable analysis of sequence-to-sequence tasks. The SCROLLS benchmark (Shaham et al., 2022) would be a good starting point for this analysis since it includes an LED baseline.
Combining prefix and kernel methods is an ongoing research effort and there are several questions we plan to address: (1) What are the effects of swapping the default exponential kernel with other variants such as linear, polynomial, and RBF? (2)
Does making the α scale parameter trainable improve performance? (3) Can we have a separate scale parameter for each query and should they be trainable? (4) Is this approach effective for modalities other than long-document? (5) Can we separate other components of attention into modular kernels
(e.g. local and global kernels for sparse attention)?
## Robustness
The size and nature of long-sequence tasks often resulted in long run times for the larger datasets ArXiv, 20-newsgroup and WikiHop. Consequently, we report results of one seed after doing a hyperparameter search for learning rate. This aligns with the reporting system of the original Longformer paper (Beltagy et al., 2020) but greater assurance in all long-sequence task performance could be achieved by accumulating results over several seeds. The size of datasets and iteration over several epochs somewhat mitigate this concern.
## Ethics Statement
Our work helps to address the environmental and equitable-distribution concerns of LLMs (Strubell et al., 2019). All PEFT variants attempt to reduce resource requirements, primarily GPU memory consumption and storage. By applying prefix-tuning and our variation, prefix-propagation, to long-document models, we limit carbon emissions and increase accessibility for low-resource groups. We note that prefix-propagation neither exacerbates nor alleviates other ethical risks such as biases regarding gender, race, religion, etc.
that are often embedded in pre-trained LLMs. If such biases exist in the pre-trained model, they will be propagated to downstream tasks regardless of tuning method.
## Acknowledgements
This research is supported by NSERC Discovery Grants. The second author is also supported by the Vector Scholarship in Artificial Intelligence.
## References
Iz Beltagy, Matthew E. Peters, and Arman Cohan.
2020. Longformer: The long-document transformer. arXiv:2004.05150.
Elad Ben Zaken, Yoav Goldberg, and Shauli Ravfogel.
2022. BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2:
Short Papers), pages 1–9, Dublin, Ireland. Association for Computational Linguistics.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In *Advances in Neural Information Processing Systems*,
volume 33, pages 1877–1901. Curran Associates, Inc.
Yifan Chen, Devamanyu Hazarika, Mahdi Namazifar, Yang Liu, Di Jin, and Dilek Hakkani-Tur.
2022. Inducer-tuning: Connecting prefix-tuning and adapter-tuning. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. 2019. Generating long sequences with sparse transformers.
Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin,
Lukasz Kaiser, David Belanger, Lucy Colwell, and Adrian Weller. 2020. Rethinking attention with performers.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Jacob R Gardner, Geoff Pleiss, David Bindel, Kilian Q
Weinberger, and Andrew Gordon Wilson. 2018. Gpytorch: Blackbox matrix-matrix gaussian process inference with gpu acceleration. In Advances in Neural Information Processing Systems.
Yuxian Gu, Xu Han, Zhiyuan Liu, and Minlie Huang.
2022. PPT: Pre-trained prompt tuning for few-shot learning. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics*
(Volume 1: Long Papers), pages 8410–8423, Dublin, Ireland. Association for Computational Linguistics.
Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On calibration of modern neural networks. In *Proceedings of the 34th International Conference on Machine Learning - Volume 70*, ICML'17, page 1321–1330. JMLR.org.
Jun He, Liqun Wang, Liu Liu, Jiao Feng, and Hao Wu.
2019. Long document classification from local word glimpses via recurrent attention learning. *IEEE Access*, 7:40707–40718.
Junxian He, Chunting Zhou, Xuezhe Ma, Taylor BergKirkpatrick, and Graham Neubig. 2021a. Towards a unified view of parameter-efficient transfer learning.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. arXiv preprint arXiv:2006.03654.
Xuehai He, Zhuo Cai, Wenlan Wei, Yichen Zhang, Luntian Mou, Eric Xing, and Pengtao Xie. 2021b. Towards visual question answering on pathology images. In *Proceedings of the 59th Annual Meeting of* the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers),
pages 708–718, Online. Association for Computational Linguistics.
Edward Hu, Yelong Shen, Phil Wallis, Zeyuan AllenZhu, Yuanzhi Li, Lu Wang, and Weizhu Chen. 2021.
Lora: Low-rank adaptation of large language models.
Matthew Jagielski, Om Thakkar, Florian Tramèr, Daphne Ippolito, Katherine Lee, Nicholas Carlini, Eric Wallace, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, and Chiyuan Zhang. 2022. Measuring forgetting of memorized training examples.
A. Katharopoulos, A. Vyas, N. Pappas, and F. Fleuret.
2020. Transformers are rnns: Fast autoregressive transformers with linear attention. In Proceedings of the International Conference on Machine Learning
(ICML).
Johannes Kiesel, Maria Mestre, Rishabh Shukla, Emmanuel Vincent, Payam Adineh, David Corney, Benno Stein, and Martin Potthast. 2019. SemEval2019 task 4: Hyperpartisan news detection. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 829–839, Minneapolis, Minnesota, USA. Association for Computational Linguistics.
Nikita Kitaev, Łukasz Kaiser, and Anselm Levskaya.
2020. Reformer: The efficient transformer.
Ken Lang. 1995. Newsweeder: Learning to filter netnews. In *Proceedings of the Twelfth International* Conference on Machine Learning, pages 331–339.
Quentin Lhoest, Albert Villanova del Moral, Patrick von Platen, Thomas Wolf, Mario Šaško, Yacine Jernite, Abhishek Thakur, Lewis Tunstall, Suraj Patil, Mariama Drame, Julien Chaumond, Julien Plu, Joe Davison, Simon Brandeis, Victor Sanh, Teven Le Scao, Kevin Canwen Xu, Nicolas Patry, Steven Liu, Angelina McMillan-Major, Philipp Schmid, Sylvain Gugger, Nathan Raw, Sylvain Lesage, Anton Lozhkov, Matthew Carrigan, Théo Matussière, Leandro von Werra, Lysandre Debut, Stas Bekman, and Clément Delangue. 2021. Datasets: A Community Library for Natural Language Processing. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations*, pages 175–184. Association for Computational Linguistics.
Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning:
Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582–
4597, Online. Association for Computational Linguistics.
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Tam, Zhengxiao Du, Zhilin Yang, and Jie Tang. 2022. P-tuning:
Prompt tuning can be comparable to fine-tuning across scales and tasks. In *Proceedings of the 60th* Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 61–68, Dublin, Ireland. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *ArXiv*, abs/1907.11692.
Mahdi Pakdaman Naeini, Gregory Cooper, and Milos Hauskrecht. 2015. Obtaining well calibrated proba-
bilities using bayesian binning. *Proceedings of the* AAAI Conference on Artificial Intelligence, 29(1).
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch:
An Imperative Style, High-Performance Deep Learning Library. In *Advances in Neural Information Processing Systems 32*, pages 8024–8035. Curran Associates, Inc.
Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah A. Smith, and Lingpeng Kong. 2021.
Random feature attention.
Uri Shaham, Elad Segal, Maor Ivgi, Avia Efrat, Ori Yoran, Adi Haviv, Ankit Gupta, Wenhan Xiong, Mor Geva, Jonathan Berant, and Omer Levy. 2022.
Scrolls: Standardized comparison over long language sequences.
Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. In *Proceedings of the 57th* Annual Meeting of the Association for Computational Linguistics, pages 3645–3650, Florence, Italy. Association for Computational Linguistics.
Yi Tay, Mostafa Dehghani, Dara Bahri, and Donald Metzler. 2022. Efficient transformers: A survey. ACM
Comput. Surv., 55(6).
Yao-Hung Hubert Tsai, Shaojie Bai, Makoto Yamada, Louis-Philippe Morency, and Ruslan Salakhutdinov.
2019. Transformer dissection: An unified understanding for transformer's attention via the lens of kernel. In *EMNLP*.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.
Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel.
2018. Constructing datasets for multi-hop reading comprehension across documents. *Transactions of* the Association for Computational Linguistics, 6:287– 302.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. 2020. Big bird: Transformers for longer sequences. *Advances in Neural Information* Processing Systems, 33.
## A Kernel Decomposition Derivation
In the unified framework of He et al. (2021a), we can write the first-layer ($l=1$) attention mechanism of prefix-propagation as:

$$H_{l,i}=\mathrm{Attn}\big(\mathrm{cat}(P^{(l)},C)W_{q}^{(l,i)},\;\mathrm{cat}(P^{(l)},C)W_{k}^{(l,i)},\;\mathrm{cat}(P^{(l)},C)W_{v}^{(l,i)}\big)\tag{4}$$

where $P$ is a trained prefix for each downstream task. Omitting layer and head indices and using $D=\mathrm{cat}(P,C)$ for brevity, we can rewrite Equation 4 as:

$$\begin{aligned}
H&=\mathrm{Attn}(DW_{q},\,\mathrm{cat}(P,C)W_{k},\,\mathrm{cat}(P,C)W_{v})\\
&=\mathrm{softmax}\big(DW_{q}\,\mathrm{cat}(PW_{k},CW_{k})^{\top}\big)\begin{bmatrix}PW_{v}\\ CW_{v}\end{bmatrix}\\
&=(1-\lambda(C))\,\mathrm{softmax}(DW_{q}W_{k}^{\top}C^{\top})\,CW_{v}+\lambda(C)\,\mathrm{softmax}(DW_{q}W_{k}^{\top}P^{\top})\,PW_{v}\\
&=(1-\lambda(C))\,\mathrm{Attn}(DW_{q},CW_{k},CW_{v})+\lambda(C)\,\mathrm{Attn}(DW_{q},PW_{k},PW_{v})\\
&=(1-\lambda(C))\,\mathrm{Attn}(\mathrm{cat}(P,C)W_{q},CW_{k},CW_{v})+\lambda(C)\,\mathrm{Attn}(\mathrm{cat}(P,C)W_{q},PW_{k},PW_{v})
\end{aligned}\tag{5}$$
where $\lambda(C)$ is a scalar (dependent on $C$) to normalize softmax over the sequence and the prefixes and is computed by:

$$\lambda(C)=\frac{\sum_{i}DW_{q}W_{k}^{\top}P^{\top}}{\sum_{i}DW_{q}W_{k}^{\top}P^{\top}+\sum_{j}DW_{q}W_{k}^{\top}C^{\top}}\tag{6}$$

We consider the two terms of Equation 5 as kernelized attention modules, which brings us back to the complete kernel decomposition:

$$H=\mathrm{Kern}(\mathrm{cat}(P,C)W_{q},\,CW_{k},\,CW_{v})+\alpha\,\mathrm{Kern}(\mathrm{cat}(P,C)W_{q},\,PW_{k},\,PW_{v})\tag{7}$$
where α is an introduced hyperparameter that replaces the fixed weighting of λ. This change allows us to explicitly increase the weighting of prefixes
| Artifact | Version | License |
|----------------------------------------|-----------|--------------|
| transformers (Wolf et al., 2020) 3 | 4.23.1 | Apache 2.0 |
| datasets (Lhoest et al., 2021) 4 | 2.6.1 | Apache 2.0 |
| GPyTorch (Gardner et al., 2018) 5 | 1.9.0 | MIT |
| RoBERTa (Liu et al., 2019) 6 | base | MIT |
| Longformer (Beltagy et al., 2020) 7 | base | Apache 2.0 |
| P-Tuning (Liu et al., 2022) 8 | 2.0 | Apache 2.0 |
| ArXiv (He et al., 2019) 9 | no_ref | Unspecified |
| Hyperpartisan (Kiesel et al., 2019) 10 | 1.0 | CC BY 4.0 |
| 20-newsgroup (Lang, 1995) 11 | 1.0 | Unspecified |
| WikiHop (Welbl et al., 2018) 12 | 1.1 | CC BY SA 3.0 |
Table 4: Complete list of artifacts used in our experiments along with their versions and licenses.
by scaling the prefix kernel's coefficients. Kern is the kernelized attention variant described in Tsai et al. (2019):
$$\mathrm{Kern}(Q,K,V)_{i}=\sum_{j=1}^{N}{\frac{k(Q_{i},K_{j})}{\sum_{j^{\prime}=1}^{N}k(Q_{i},K_{j^{\prime}})}}V_{j}\ \ \mathrm{(8)}$$
where subscripts (e.g. i) index the rows of a matrix, N is the number of key and value vectors, and k is a kernel function that calculates the similarity score between two vectors. We do not experiment with altering the kernel type since the default exponential kernel inherent to softmax attention already implicitly maps the input vectors to an infinite feature space. Therefore, the kernel function in Equation 8 takes the form:
$$k(x_{q},x_{k})=\exp\!\left(\frac{\langle x_{q},x_{k}\rangle}{\sqrt{d_{k}}}\right)\tag{9}$$

where $\langle\cdot,\cdot\rangle$ signifies the dot product and $d_{k}$ is the dimension of key projections.
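As a quick sanity check (our own snippet, not part of the paper), kernelized attention with this exponential kernel can be verified numerically to coincide with standard softmax attention:

```python
import torch

# Check that Eq. 8 with the exponential kernel (Eq. 9) recovers softmax attention.
torch.manual_seed(0)
N, d_k = 5, 8
Q, K, V = torch.randn(N, d_k), torch.randn(N, d_k), torch.randn(N, d_k)

scores = torch.exp(Q @ K.T / d_k ** 0.5)                    # k(Q_i, K_j) for all i, j
kern_out = (scores / scores.sum(-1, keepdim=True)) @ V      # Eq. 8
softmax_out = torch.softmax(Q @ K.T / d_k ** 0.5, dim=-1) @ V
assert torch.allclose(kern_out, softmax_out, atol=1e-6)
```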
## B Experimental Details
Artifact Notes Table 4 summarizes the complete list of artifacts we used in our experiments along with their licenses and versions. All libraries were used for their intended purpose of open-source development. The ArXiv, Hyperpartisan, and WikiHop datasets were released in research contexts to evaluate and/or develop state-of-the-art algorithms.

![8_image_0.png](8_image_0.png)

![8_image_1.png](8_image_1.png)
The intended use of 20-newsgroups is not explicit, although it is commonly used for natural language processing in research. We therefore believe we have adhered to the intended usages of the datasets we included.
We do not anonymize the data for 20-newsgroups as (a) the trained models are not being deployed (only used for evaluation purposes) and
(b) the non-anonymized variant is already publicly available. We chose to use the datasets in the current form for fair comparison with other baselines and therefore did not do a detailed analysis for those artifacts. We refer readers to the cited original works in Table 4 for complete documentation.
Training For our experiments, we use and adapt the prefix-tuning implementation provided in Liu et al. (2022). Training was conducted on 12 NVIDIA GeForce 1080 Ti cards, for an estimated 2300 single GPU hours (including preliminary experiments). All models tested fit on a single card, so we did not use any model parallelism. Throughout experiments, we use gradient accumulation for an effective batch size of 32. We use early stopping for our hyperparameter search, and show results for the run with the best validation F1-score. For learning rate, we search between {1e-2, 5e-2, 1e-3, 5e-3, 5e-4} for prefix-based methods, and {3e-5, 5e-5}
for fine-tuning. For kernelized prefix-propagation, we search for a scale factor (hyperparameter α) of
{1e-2, 4e-2, 1e-3, 3e-3, 5e-3, 7e-3} (after choosing the best learning-rate). Other hyperparameters are listed in Table 5.
Despite seeding random number generators for Hugging Face's transformer library through the set_seed method, slight deviations will propagate if using GPUs due to some non-deterministic CUDA methods that do not respect the seed setting mechanisms of Pytorch (Paszke et al., 2019).
Upon further analysis, we found this issue with non-deterministic algorithms to be widely overlooked in the field, and believe that this area needs further discussion in the research community. However, we note that our results should be reproducible when running across multiple seeds.
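For reference, the extra configuration typically needed for fully deterministic GPU runs, beyond calling set_seed, is sketched below; this is general PyTorch/Transformers guidance, not the authors' actual setup.

```python
import os
import torch
from transformers import set_seed

os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # required by some deterministic cuBLAS kernels
set_seed(42)                                       # seeds Python, NumPy, and PyTorch RNGs
torch.use_deterministic_algorithms(True)           # raise an error on non-deterministic CUDA ops
torch.backends.cudnn.benchmark = False             # disable non-deterministic autotuning
```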
Task Details All datasets used have a considerable portion of documents greater than RoBERTa's max sequence limit of 512 tokens, as shown in Figure 3. Number of samples and number of classes for each dataset are in Table 6.
For all classification tasks, we prepend a globally-attended [CLS] token to the start of the sequence and pass the output into a learned classification head. We truncate document lengths to 4096 and 512 tokens for Longformer and RoBERTa, respectively. For Hyperpartisan, we use the same data pre-processing and training split as Beltagy et al. (2020). However, we noticed overlap between training and testing samples, so we instead show validation results. We use the ArXiv dataset from He et al. (2019) that is available on Hugging Face Datasets (which we reviewed for correctness). The original dataset has labels leaked in the source text, so we use the no_ref version that has those labels filtered. We use the 20-newsgroups dataset and follow the preprocessing recommended by the scikit-learn authors, removing headers, quotations, and signatures from each sample to prevent the model from learning spurious correlations.
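A minimal sketch of this preprocessing and of marking the [CLS] token for global attention is shown below. The checkpoint name is the standard Hugging Face Longformer release and scikit-learn's "footers" option is what strips signature blocks; the exact data pipeline is our assumption rather than the released code.

```python
import torch
from sklearn.datasets import fetch_20newsgroups
from transformers import LongformerTokenizerFast

# Strip headers, quoted replies, and signature blocks ("footers"), per the scikit-learn docs.
train = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes"))

tokenizer = LongformerTokenizerFast.from_pretrained("allenai/longformer-base-4096")
enc = tokenizer(train.data[0], truncation=True, max_length=4096, return_tensors="pt")

# Only the [CLS] token (position 0) attends globally; other tokens use sliding-window attention.
global_attention_mask = torch.zeros_like(enc["input_ids"])
global_attention_mask[:, 0] = 1
```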
WikiHop instances include a question, candidate answers, and multiple context documents. For
| Dataset | # Samples | # Classes | Train/Dev/Test split (%) |
|---|---|---|---|
| HY. | 645 | 2 | 80/10/10 |
| NG. | 18,846 | 20 | 60/20/20 |
| ArXiv | 33,388 | 11 | 85/7.5/7.5 |
| WikiHop | 48,867 | - | 90/5/5 |

Table 6: Dataset statistics for Hyperpartisan (HY.), 20-newsgroups (NG.), ArXiv, and WikiHop.
![9_image_0.png](9_image_0.png)
a fair comparison, we follow the WikiHop setup in Beltagy et al. (2020) to the best of our ability. In summary, we pass the dataset fields into the model in the format:

`[q] <question> [/q] [ent] <candidate 1> [/ent] ... [ent] <candidate N> [/ent] [sep] <context 1> [sep] ... [sep] <context N>`

Because the context documents are often longer than the maximum sequence length of Longformer, we split the context documents into chunks of 4096 (or 512 for RoBERTa) and pass them separately through the model while concatenated to the question and candidate pair. We then train a classifier to predict a single logit for each [ent] token, take the average over all chunks, apply softmax, and finally use cross-entropy loss. We also train the new special tokens [ent] and [q] in prefix-based methods to better learn an effective representation (as they did not appear in pre-training).
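The string assembly for one chunk could look like the sketch below; the helper name and the example question and candidates are hypothetical, and the special tokens would additionally need to be registered with the tokenizer (e.g., via add_tokens) since they do not appear in pre-training.

```python
def build_wikihop_chunk(question, candidates, context_chunk):
    """Format one WikiHop chunk in the [q]/[ent]/[sep] scheme described above."""
    cand_str = " ".join(f"[ent] {c} [/ent]" for c in candidates)
    ctx_str = " [sep] ".join(context_chunk)
    return f"[q] {question} [/q] {cand_str} [sep] {ctx_str}"

# Hypothetical example.
chunk = build_wikihop_chunk(
    question="country_of_origin some_film",
    candidates=["canada", "united states"],
    context_chunk=["Document one text ...", "Document two text ..."],
)
```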
## C Impact Of Training Time On Ece
As is apparent in Figure 4, prefix-propagation is better calibrated relative to other approaches throughout training. Prefix-tuning and fine-tuning, however,
| Method | Absolute Runtime (s) | Relative Runtime |
|--------------------|------------------------|--------------------|
| No PEFT | 2192 | 0% |
| Prefix-Tuning | 2239 | +2.1% |
| Prefix-Propagation | 2196 | +0.2% |
either start less calibrated or deviate from prefix-propagation as training progresses.
## D Runtime Performance
We test the inference time of the studied methods and show the results in Table 7. We use the same 8000 randomly generated sequences of length 4096 across methods and test on an NVIDIA GTX 1080 Ti. We notice that prefix-propagation is slightly more efficient than prefix-tuning. We theorize that this discrepancy is caused by prefix-propagation only needing to concatenate a matrix in the first layer (and sum on the rest), whereas prefix-tuning concatenates before every layer.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
See the limitations section on page 5.
✓ A2. Did you discuss any potential risks of your work?
See the ethics statement on page 5.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
See the abstract and introduction (Section 1).
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
Yes, we develop our own code for our experiments (Section 4.1). The code will be released.
✓ B1. Did you cite the creators of artifacts you used?
We provided a complete list of artifacts along with citations in Appendix B (Table 4).
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
See Appendix B (Table 4).
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
See Appendix B.
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
See Appendix B.
✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
We did not create dataset artifacts, so we did not feel it was necessary. We wanted to use the datasets as-is to get a sense of how well our variation performs compared to other models. Doing an in-depth analysis of the dataset was beyond the scope of our paper. See Appendix B for an in-paper explanation.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
See Appendix B (Table 6).
## C ✓ **Did You Run Computational Experiments?**

See Section 4.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
See section 4.1 and Appendix B.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
See Appendix B.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
In various places, for example, in Table 1, Figure 2, and Limitations.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
See Appendix B.
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
wu-etal-2023-listener | Listener Model for the {P}hoto{B}ook Referential Game with {CLIPS}cores as Implicit Reference Chain | https://aclanthology.org/2023.acl-short.121 | PhotoBook is a collaborative dialogue game where two players receive private, partially-overlapping sets of images and resolve which images they have in common. It presents machines with a great challenge to learn how people build common ground around multimodal context to communicate effectively. Methods developed in the literature, however, cannot be deployed to real gameplay since they only tackle some subtasks of the game, and they require additional reference chain inputs, whose extraction process is imperfect. Therefore, we propose a reference chain-free listener model that directly addresses the game{'}s predictive task, i.e., deciding whether an image is shared with the partner. Our DeBERTa-based listener model reads the full dialogue, and utilizes CLIPScore features to assess utterance-image relevance. We achieve {\textgreater}77{\%} accuracy on unseen sets of images/game themes, outperforming the baseline by {\textgreater}17 points.
Shih-Lun Wu and **Yi-Hui Chou** and **Liangze Li**
{shihlunw, yihuic, liangzel}@andrew.cmu.edu Language Technologies Institute, Carnegie Mellon University, Pittsburgh, PA, USA
## Abstract
PhotoBook is a collaborative dialogue game where two players receive private, partially-overlapping sets of images and resolve which images they have in common. It presents machines with a great challenge to learn how people build common ground around multimodal context to communicate effectively. Methods developed in the literature, however, cannot be deployed to real gameplay since they only tackle some subtasks of the game, and they require additional reference chain inputs, whose extraction process is imperfect. Therefore, we propose a reference chain-free listener model that directly addresses the game's predictive task, i.e., deciding whether an image is shared with the partner. Our DeBERTa-based listener model reads the full dialogue, and utilizes CLIPScore features to assess utterance-image relevance. We achieve >77% accuracy on unseen sets of images/game themes, outperforming the baseline by >17 points.
## 1 Introduction
PhotoBook (Haber et al., 2019) is a collaborative dialogue game of two players. In a game round, each player receives 6 images of an identical theme: the two largest objects in all images share the same categories, e.g., dog, car, etc. The players have some of their images in common. Their goal is to communicate through text dialogue, and individually mark 3 privately highlighted images as either *common* (i.e., shared with the partner) or *different*. A
full game lasts 5 rounds. After each round, some of each player's images are replaced with different ones under the same theme. Images may reappear in later rounds after being swapped out. This game setup encourages building and leveraging common ground with multimodal contexts, which humans are known to do to facilitate conversation (Clark and Wilkes-Gibbs, 1986; Brennan and Clark, 1996).
Fig. 1 displays an example of a PhotoBook game (in this case, the game theme is *person & bench*).
Models proposed in past works on the dataset
(Haber et al., 2019; Takmaz et al., 2020) are unable to realistically play the game due to several reasons:
(i) they only address subtasks in the game whose time span is *one utterance*, rendering it unnecessary for the models to keep track of the entire game's, or round's, progress; (ii) the models operate on additional input of *reference chains*, i.e., past utterances referring to each image, whose (rule-based)
extraction process is imperfect and hence complicates learning and evaluation; and, (iii) utterances outside of reference chains, e.g., 'I don't have that one', may also be important pieces of information.
To address the drawbacks above, we propose a full (i.e., able to play real games), reference chain-free listener model, which accepts all dialogue utterances of a round2 and the 6 context images, and predicts whether the 3 target (highlighted) images are *common/different*. Our listener model is based on a pretrained DeBERTa Transformer (He et al.,
2021). To incorporate visual context, CLIPScores
(Hessel et al., 2021) between each utterance and the 6 given images are infused with DeBERTa hidden states. We employ CLIPScore as it offers strong prior knowledge about the relevance of an utterance to each of the 6 images, which may serve as a soft, implicit version of reference chain used in previous studies. Also, we chose DeBERTa since it is one of the top performers in the SuperGLUE benchmark
(Sarlin et al., 2020) which provides a reasonably-sized (∼100M parameters) version to suit our purpose and computation resources. We further devise a label construction scheme to create dense learning signals. Our model scores a >77% accuracy on the novel listener task and improves by >17% (absolute) over the baseline adapted from (Takmaz et al.,
![1_image_0.png](1_image_0.png)
2020). Our code is available at github.com/slSeanWU/photobook-full-listener.
## 2 Related Work
In typical collaborative dialogue tasks, two agents
(i.e., players) hold incomplete or partially overlapping information and communicate through text to reach a predefined goal. The task-oriented setup enables simple evaluation for dialogue systems via task success rate, instead of resorting to costly human evaluation. Tasks and datasets proposed in the literature focus either on set logic (He et al.,
2017), image understanding (De Vries et al., 2017; Haber et al., 2019), or spatial reasoning (Udagawa and Aizawa, 2019). They challenge dialogue systems to process multiple modalities, discard irrelevant information, and build common ground. Researchers have utilized graph neural networks (He et al., 2017), vision-and-language Transformers
(Lu et al., 2019; Tu et al., 2021), and pragmatic utterance generation (Frank and Goodman, 2012; Fried et al., 2021) to tackle the tasks (Table 2 in the appendix summarizes these tasks and methods). To our knowledge, there has not been a system that fully addresses the PhotoBook task. It may be particularly challenging due to the setup with multiple highly similar images and an unbounded set of information (e.g., scene, actions) the images may contain. Previous PhotoBook works targeted two subtasks: *reference resolution* (Haber et al., 2019; Takmaz et al., 2020) and *referring utterance* generation (Takmaz et al., 2020). The former resolves which of the 6 context images an utterance is referring to, while the latter generates an informative utterance for a pre-selected image. Proposed models take in extracted reference chains, whose rule-based extraction processes4 try to identify which utterances speak about each of the images. To obtain such chains, Haber et al. (2019)
broke the dialogue into segments using a set of heuristics based on player marking actions. Takmaz et al. (2020), on the other hand, computed each utterance's BERTScore (Zhang et al., 2019) and METEOR (Banerjee and Lavie, 2005) respectively against ground-truth MSCOCO captions (Lin et al.,
2014), and VisualGenome attributes (Krishna et al.,
2017) of each image to match (at most) one utterance per round to an image.
As for the reference resolution task, Haber et al.
(2019) employed LSTM encoders. One (query)
encoder takes a current dialogue segment, while the other (i.e., context encoder) receives the 6 images' ResNet features, and the associated reference chain segments.5 Dot products between query encoder output and 6 context encoder outputs are taken to predict the image the current segment refers to.
Takmaz et al. (2020) largely kept the setup, but they used BERT (Devlin et al., 2019) embeddings and contextualized utterances via weighted averaging instead of LSTMs.
Takmaz et al. (2020) claimed an 85% reference resolution accuracy, but they also reported an 86%
precision6 on reference chain extraction, making it difficult to conclude whether prediction errors are due to model incompetence, or incorrect input data/labels. (We find that some parts of extracted reference chains either point to the wrong image or
![2_image_0.png](2_image_0.png)
provide no information at all; we rerun Takmaz et al. (2020)'s experiment and show some of the problematic examples in Appendix F and Table 5.) Yet, we do agree that keeping track of which images have been referred to is vital for the game. Therefore, we aim to build a full listener model that does not depend on explicit reference chains, but gathers such information from implicit hints given by an image-text matching model, i.e., CLIP (Radford et al., 2021).
## 3 Method

## 3.1 Functionality Of CLIPScore
Based on CLIP vision-and-language Transformer
(Radford et al., 2021), CLIPScore (Hessel et al.,
2021) is a reference-free metric (i.e., one that does not take ground-truth text as input) to measure semantic image-text similarity. On image captioning, Hessel et al. (2021) showed that CLIPScore correlates better with human judgment than reference-dependent metrics like BERTScore (Zhang et al.,
2019) and SPICE (Anderson et al., 2016).
In our pilot study, we find that the CLIPScore of an utterance-image pair is particularly high when the utterance describes the image (see Fig. 1 for an example). These score peaks thus form an implicit reference chain for the dialogue, giving strong hints on whether the mentioned images are common/different when seen with subsequent partner feedback (e.g., '*I have that one*'). Also, the reference chain extraction method in (Takmaz et al.,
2020) achieves higher precision (86%→93%) and recall (60%→66%) when we simply replace its core scoring metrics (i.e., BERTScore and METEOR; details in Appendix F) with CLIPScore. The findings above show that CLIPScore captures well the utterance-image relationships in PhotoBook, and hence should be helpful to our listener model.
Computation-wise, reference chain extraction algorithms in the literature either rely on complex turn-level heuristics (Haber et al., 2019), or compute multiple external metrics (i.e., BERTScore and METEOR) (Takmaz et al., 2020). More importantly, they have to wait until completion of a round to compute the chains. Our utterance-level CLIPScores can be computed on the fly as utterances arrive, and are relatively time-efficient as they involve only one model (i.e., CLIP) and that batch computation may be used to increase throughput.
Modeling-wise, reference chain extraction explicitly selects which utterances the listener model should see, so when it is wrong, the model either sees something irrelevant, or misses important utterances. On the other hand, utterance-level CLIPScores resemble using a highlighter to mark crucial dialogue parts for the model. Even when CLIPScores are sometimes inaccurate, the model could still access the full dialogue to help its decisions.
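To make the computation above concrete, the following is a minimal sketch (not the authors' released code) of producing one 6-dimensional CLIPScore vector per utterance, using HuggingFace's CLIP (ViT-B/32) as an assumed stand-in for the official `clipscore` repository; the helper name is illustrative.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def clipscore_vector(utterance, image_paths):
    """One utterance vs. the 6 context images -> c_k in R^6."""
    images = [Image.open(p).convert("RGB") for p in image_paths]
    inputs = processor(text=[utterance], images=images,
                       return_tensors="pt", padding=True, truncation=True)
    text_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                       attention_mask=inputs["attention_mask"])
    image_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
    cosine = (image_emb @ text_emb.T).squeeze(-1)     # shape: (6,)
    return 2.5 * torch.clamp(cosine, min=0.0)         # CLIPScore (Hessel et al., 2021)
```

Because each call only needs the current utterance, such vectors can be computed as the dialogue unfolds, which is what makes the approach deployable in real gameplay.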
## 3.2 The Full Listener Model

## 3.2.1 Inputs
An overview of our listener model is depicted in Fig. 2. Our model operates on three types of input features, which collectively represent a game round from one of the players' perspective:
Dialogue tokens: $\mathcal{X}=\{\mathbf{x}_{k}\in\mathcal{W}^{|T_{k}|}\}_{k=1}^{K}$ (1)
CLIPScores: $\mathcal{C}=\{\mathbf{c}_{k}\in\mathbb{R}^{6}\}_{k=1}^{K}$ (2)
Image features: $\mathcal{V}=\{\mathbf{v}_{j}\in\mathbb{R}^{512}\}_{j=1}^{6}$ (3)
We use k, j to index utterances and images, respectively. $\mathcal{W}$ is the text token vocabulary, and $T_k = \{t_{k,\text{start}}, \ldots, t_{k,\text{end}}\}$ is the corresponding set of token timesteps for the k-th utterance. To the start of each utterance, we prepend either a [CLS] or
[SEP] token to distinguish whether it comes from the player itself or the partner. All utterances are concatenated to form one text input sequence to our model (the average total text length, $\sum_k |T_k|$, is about 120 tokens). CLIPScore vectors ($\mathbf{c}_k$'s) are computed in a per-utterance manner, i.e., between one utterance and each of the 6 images. Images are represented by the pooled11 features from SegFormer
(Xie et al., 2021). It is trained on semantic image segmentation (Zhou et al., 2017), and hence should encode crucial visual information for the game, i.e.,
objects in the scene and their spatial relationships.
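A minimal sketch of how the text input could be assembled, assuming per-utterance strings and speaker ids are available from the game logs (field names are illustrative): the DeBERTa tokenizer's [CLS]/[SEP] tokens mark own vs. partner utterances, and the per-utterance token spans $T_k$ are recorded for the CLIPScore infusion described below.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")

def build_dialogue_input(utterances, speakers, self_id):
    """Concatenate one round's utterances; prefix [CLS] (own) or [SEP] (partner)."""
    token_ids, spans = [], []
    for utt, spk in zip(utterances, speakers):
        marker = tokenizer.cls_token if spk == self_id else tokenizer.sep_token
        ids = tokenizer.encode(marker + " " + utt, add_special_tokens=False)
        spans.append((len(token_ids), len(token_ids) + len(ids)))   # span T_k
        token_ids.extend(ids)
    return token_ids, spans
```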
## 3.2.2 Labels And Output
Rather than training the model to predict just once after seeing the entire dialogue, we construct labels for all timesteps, forming a label sequence $\mathbf{y}_j \in \mathcal{L}^T$, where $T = \sum_k |T_k|$, for each target image, where $\mathcal{L}$ is the label set. As there are only 3 target images out of the 6, we also only have 3 such label sequences ($\mathbf{y}_j$'s) for a training instance.

At each timestep t, the label of a target image, $y_{j,t} \in \mathcal{L}$, is one of {*undecided*, *common*, *different*}. It always starts as *undecided*, changes to *common* or *different* at the moment of player marking action, and remains there for the rest of the dialogue. Our model's output for a (target) image j at timestep t is hence a distribution $\hat{\mathbf{y}}_{j,t} \in \mathbb{R}^3$, which is a temporary belief about that image. Also, we apply causal masking on DeBERTa self-attention. Such a labeling and masking scheme creates dense learning signals: our model must judge an image at every timestep based on the growing dialogue context.
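A toy sketch of this label construction, under the assumption that the token timestep of each marking action is known from the game logs (names are illustrative):

```python
UNDECIDED, COMMON, DIFFERENT = 0, 1, 2

def build_label_sequence(num_tokens, mark_step, mark_label):
    """Dense labels y_j for one target image: `undecided` before the marking
    action at token timestep `mark_step`, then `mark_label` until the end."""
    return [UNDECIDED if t < mark_step else mark_label for t in range(num_tokens)]
```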
## 3.2.3 Model Components

The backbone of our model is a pretrained base DeBERTa (He et al., 2021), which takes in the concatenated utterances $\mathcal{X}=\{\mathbf{x}_{k}\in\mathcal{W}^{|T_{k}|}\}_{k=1}^{K}=\{x_{t}\in\mathcal{W}\}_{t=1}^{T}$, and contextualizes them into hidden states:
$$\mathcal{H}^{(l)}=\{\mathbf{h}_{t}^{(l)}\in\mathbb{R}^{d}\}_{t=1}^{T}\,,\;\;l\in\{1,\ldots,L\}\,,\tag{4}$$
where d (= 768) is DeBERTa's hidden size, and l is the layer index (number of layers L = 12). We do not adopt vision-and-language Transformers (Lu et al., 2019; Wang et al., 2022) for they are pretrained on 'single image-short text' pairs, which mismatches our scenario. Following Wu and Yang (2022)'s recommendation on feeding time-varying conditions to Transformers, utterance-level CLIPScores (i.e., C)
are projected and summed with DeBERTa hidden states at all layers:12
$${\cal H}^{(l)}\leftarrow\{{\cal H}^{(l)}_{T_{k}}={\mathbf{h}^{(l)}_{t\in T_{k}}}+{\mathbf{W}}_{\rm proj}\,{\mathbf{c}}_{k}\}_{k=1}^{K}\,,\tag{5}$$
where $\mathbf{W}_{\mathrm{proj}} \in \mathbb{R}^{d\times 6}$ is a learnable matrix.
To make predictions, we place a 2-layer MLP
(with GELU activation) on top of DeBERTa. It takes in the concatenation of the pooled target image features and the last-layer DeBERTa hidden state, and produces a distribution over the label set L = {undecided, common, *different*}:
$$\hat{\mathbf{y}}_{j,t}=\mathrm{MLP}_{\mathbb{R}^{512+d}\to\mathbb{R}^{3}}([\mathbf{v}_{j};\mathbf{h}_{t}^{(L)}])\,.\tag{6}$$
We add learnable positional embeddings to vj 's to make our model aware of the target image's index.
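The two model-specific pieces around the DeBERTa backbone can be sketched as follows; this is a simplified stand-in for Eqns. 5–6 (the per-layer hooks into DeBERTa and the positional embeddings on $\mathbf{v}_j$ are omitted), not the released implementation.

```python
import torch
import torch.nn as nn

class CLIPScoreInfusion(nn.Module):
    """Project a 6-dim CLIPScore vector and add it to the hidden states of the
    tokens belonging to that utterance (Eq. 5)."""
    def __init__(self, d_model=768):
        super().__init__()
        self.proj = nn.Linear(6, d_model, bias=False)   # W_proj in R^{d x 6}

    def forward(self, hidden, clip_scores, utt_spans):
        # hidden: (T, d) token states; clip_scores: (K, 6); utt_spans: [(start, end)] = T_k
        bonus = torch.zeros_like(hidden)
        for k, (s, e) in enumerate(utt_spans):
            bonus[s:e] = self.proj(clip_scores[k])
        return hidden + bonus

class ListenerHead(nn.Module):
    """2-layer GELU MLP mapping [v_j ; h_t] to a 3-way label distribution (Eq. 6)."""
    def __init__(self, d_model=768, d_img=512, num_labels=3):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d_img + d_model, d_model),
                                 nn.GELU(),
                                 nn.Linear(d_model, num_labels))

    def forward(self, img_feat, hidden_last):
        # img_feat: (d_img,); hidden_last: (T, d) -> per-timestep logits (T, 3)
        img = img_feat.unsqueeze(0).expand(hidden_last.size(0), -1)
        return self.mlp(torch.cat([img, hidden_last], dim=-1))
```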
## 4 Experiments And Results
Our listener model is trained with the maximum likelihood estimation (MLE) loss function:
$$\mathbb{E}_{(\mathcal{X},\mathcal{C},\mathcal{V},\mathcal{Y})\in\mathcal{D}_{\text{train}}}\sum_{j,t}-\log p_{\hat{\mathbf{y}}_{j,t}}(y_{j,t}\mid\mathcal{X},\mathcal{C},\mathbf{v}_{j}),\tag{7}$$
where Dtrain is the training split, and Y is the set of label sequences associated with a data instance.
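Eq. 7 amounts to a token-level cross-entropy summed over all timesteps and the 3 target images; a minimal sketch (padding masks omitted, names illustrative):

```python
import torch.nn.functional as F

def listener_loss(logits, labels):
    # logits: (3, T, 3) = y_hat_{j,t} for the 3 target images over T timesteps
    # labels: (3, T) integer labels in {undecided, common, different}
    return F.cross_entropy(logits.reshape(-1, 3), labels.reshape(-1), reduction="sum")
```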
The same images/themes are guaranteed not to appear in multiple dataset splits. We refer readers to Appendix A for more implementation and training details. The evaluation metric adopted here is accuracy measured at the end of the dialogue, i.e., at evaluation, we ignore the temporary beliefs in the chat. To set a baseline, we modify the reference resolution model in (Takmaz et al., 2020) to suit our listener task (modification details are in Appendix C). Table 1 lists the evaluation results. Our method outperforms the baseline by 17∼20 percentage points, closing the gap to human performance by more than half. Examining the ablations, we can observe
that removing either the CLIPScore inputs or the dense learning signals (i.e., having labels at all timesteps, see Sec. 3.2.2) causes serious accuracy degradation, indicating their essentiality in our model, and that a pretrained Transformer does not trivially beat a fully MLP-based baseline. Besides, though adding cross-attention to image features (explained in Appendix B; i.e., ablations a. & c.) seems to be a more intuitive way to involve visual context, it leads to more severe overfitting (likely due to the limited dataset size and configuration; more analysis and exploration can be found in Appendix E) and hence does not help in our case. We provide more detailed observations on our best-performing model's behavior and outputs in Appendix G.
## 5 Conclusions And Future Work
In this paper, we first discussed why it is difficult to deploy existing reference chain-dependent PhotoBook models to real gameplay, and demonstrated that CLIPScore's image-text matching capability may provide implicit reference chains to the task.
We then developed a novel listener model that is reference chain-free, and able to realistically play the game given text dialogue and the set of context images, just as what human players see. The model is built on a DeBERTa Transformer backbone, and brings in visual context by infusing utterance-level CLIPScores with its hidden states. On the newly proposed full listener task, i.e., predicting whether an image is shared with partner, our model achieves 77∼84% accuracy on unseen sets of images, surpassing baseline (Takmaz et al., 2020) by over 17 points. Ablation studies also showed that feeding CLIPScores and imposing dense learning signals are both indispensable to our model's success.
Future studies may leverage parameter-efficient transfer learning (He et al., 2022; Houlsby et al.,
2019; Hu et al., 2022; Perez et al., 2018) to cope with image data scarcity of PhotoBook (and potentially other datasets and tasks). It is also interesting to develop a speaker model that uses temporary beliefs from our listener model and takes pragmatics
(Frank and Goodman, 2012; Fried et al., 2021) into account to generate informative responses. Pairing such a model with our listener model may complete the collaborative dialogue task end-to-end.
## 6 Limitations
The PhotoBook dataset has a very limited number of images (i.e., 360) and image combinations (i.e., 5 per game theme), which may lead to undesirable overfitting behavior, as we discuss in Appendix E.
Also, since our model depends heavily on CLIP
(Radford et al., 2021), it is likely to inherit CLIP's biases and weaknesses. For example, Radford et al.
(2021) mentioned that CLIP fails to perform well on abstract or more complex tasks, such as counting or understanding spatial relationships between objects. Finally, whether our listener model can be easily applied/adapted to productive real-world tasks (e.g., automated customer service with image inputs) requires further exploration.
## Acknowledgements
We would like to express our utmost thanks to Dr. Daniel Fried, Emmy Liu and Dr. Graham Neubig for their guidance and insightful suggestions.
We also appreciate the valuable feedback from the reviewers and the area chair.
## References
Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2016. SPICE: Semantic propositional image caption evaluation. In *Proc. ECCV*.
Satanjeev Banerjee and Alon Lavie. 2005. METEOR:
An automatic metric for MT evaluation with improved correlation with human judgments. In Proc.
ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization.
Susan E Brennan and Herbert H Clark. 1996. Conceptual pacts and lexical choice in conversation. *Journal* of Experimental Psychology: Learning, Memory, and Cognition.
Herbert H Clark and Deanna Wilkes-Gibbs. 1986. Referring as a collaborative process. *Cognition*.
Harm De Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, and Aaron Courville.
2017. Guesswhat?! visual object discovery through multi-modal dialogue. In *Proc. CVPR*.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional Transformers for language understanding. In *Proc. NAACL*.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2021.
An image is worth 16x16 words: Transformers for image recognition at scale. In *Proc. ICLR*.
Michael C Frank and Noah D Goodman. 2012. Predicting pragmatic reasoning in language games. *Science*.
Daniel Fried, Justin Chiu, and Dan Klein. 2021.
Reference-centric models for grounded collaborative dialogue. In *Proc. EMNLP*.
Janosch Haber, Tim Baumgärtner, Ece Takmaz, Lieke Gelderloos, Elia Bruni, and Raquel Fernández. 2019.
The photobook dataset: Building common ground through visually-grounded dialogue. In *Proc. ACL*.
He He, Anusha Balakrishnan, Mihail Eric, and Percy Liang. 2017. Learning symmetric collaborative dialogue agents with dynamic knowledge graph embeddings. In *Proc. ACL*.
Junxian He, Chunting Zhou, Xuezhe Ma, Taylor BergKirkpatrick, and Graham Neubig. 2022. Towards a unified view of parameter-efficient transfer learning. In *Proc. ICLR*.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. DeBERTa: Decoding-enhanced BERT with disentangled attention. In *Proc. ICLR*.
Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. 2021. CLIPScore: a referencefree evaluation metric for image captioning. In *Proc.*
EMNLP.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019.
Parameter-efficient transfer learning for NLP. In Proc. ICML.
Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. 2022. LoRA: Low-rank adaptation of large language models. In *Proc. ICLR*.
Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al.
2017. Visual Genome: Connecting language and vision using crowdsourced dense image annotations.
IJCV.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft COCO: Common objects in context. In *Proc. ECCV*.
Ilya Loshchilov and Frank Hutter. 2018. Decoupled weight decay regularization. In *Proc. ICLR*.
Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee.
2019. ViLBERT: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks.
In *Proc. NeurIPS*.
Ethan Perez, Florian Strub, Harm De Vries, Vincent Dumoulin, and Aaron Courville. 2018. FiLM: Visual reasoning with a general conditioning layer. In *Proc.*
AAAI.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In *Proc. ICML*.
Paul-Edouard Sarlin, Daniel DeTone, Tomasz Malisiewicz, and Andrew Rabinovich. 2020. SuperGLUE: Learning feature matching with graph neural networks. In *Proc. CVPR*.
Ece Takmaz, Mario Giulianelli, Sandro Pezzelle, Arabella Sinclair, and Raquel Fernández. 2020. Refer, reuse, reduce: Generating subsequent references in visual and conversational contexts. In *Proc. EMNLP*.
Tao Tu, Qing Ping, Govindarajan Thattai, Gokhan Tur, and Prem Natarajan. 2021. Learning better visual dialog agents with pretrained visual-linguistic representation. In *Proc. CVPR*.
Takuma Udagawa and Akiko Aizawa. 2019. A natural language corpus of common grounding under continuous and partially-observable context. In Proc.
AAAI.
Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. 2022. OFA: Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. In Proc.
ICML.
Shih-Lun Wu and Yi-Hsuan Yang. 2022. MuseMorphose: Full-song and fine-grained piano music style transfer with one Transformer VAE. *IEEE/ACM*
TASLP.
Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M Alvarez, and Ping Luo. 2021. SegFormer: simple and efficient design for semantic segmentation with transformers. In *Proc. NeurIPS*.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. In *Proc. ICLR*.
Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. 2017. Scene parsing through ADE20k dataset. In *Proc. CVPR*.
## Appendices

## A Details On Model Implementation And Training
Our listener model's implementation is based on HuggingFace's DeBERTa module (github.com/huggingface/transformers/blob/main/src/transformers/models/deberta/modeling_deberta.py). The 16×16 (512-dimensional) patch features for each context image are extracted from the last encoder layer of the publicly released SegFormer-b4 model (huggingface.co/nvidia/segformer-b4-finetuned-ade-512-512) trained on the ADE20k (Zhou et al., 2017) semantic image segmentation dataset. CLIPScores between utterances and images are computed using the official repository (github.com/jmhessel/clipscore), which employs Vision Transformer-base
(ViT-B/32) (Dosovitskiy et al., 2021) as the image encoder. Our listener model adds ∼1M trainable parameters to the 12-layer base DeBERTa backbone, which originally has 100M parameters.
We split our dataset to train/validation/test with a 70/10/20 ratio and make sure that a theme (i.e.,
categories of the 2 largest objects appearing in all 6 context images in a game round), and hence any image, does not lie across multiple splits. Since a game round has 2 perspectives (i.e., players), it also spawns 2 instances. Rounds in which players make mistakes, or mark images before the first utterance, are filtered out. We finally obtain 13.7K/1.8K/3.7K
instances for each of the splits respectively.
We train the model for 100 epochs and early stop on validation accuracy with 10 epochs of patience.
The AdamW (Loshchilov and Hutter, 2018) optimizer with a weight decay of $10^{-3}$ is used. We warm up the learning rate linearly for 500 steps to $2\times10^{-5}$, and then linearly decay it to 0 for the rest of the training. The batch size is set to 16. Training takes around 8 hours to complete on an NVIDIA A100 GPU with 40G memory. For a fair comparison across model settings and baselines, we randomly draw 3 seeds and run training on all settings/baselines with them.
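A minimal sketch of this optimization setup, assuming a standard HuggingFace/PyTorch training loop (this is not the released training script):

```python
import torch
from transformers import get_linear_schedule_with_warmup

def build_optimizer(model, total_steps, peak_lr=2e-5, warmup_steps=500):
    # AdamW with 1e-3 weight decay; linear warmup to peak_lr, then linear decay to 0
    optimizer = torch.optim.AdamW(model.parameters(), lr=peak_lr, weight_decay=1e-3)
    scheduler = get_linear_schedule_with_warmup(optimizer,
                                                num_warmup_steps=warmup_steps,
                                                num_training_steps=total_steps)
    return optimizer, scheduler
```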
## B Details On The Attempt To Infuse Visual Features With Cross Attention
In addition to fusing CLIPScores into DeBERTa self-attention, we also attempt cross-attending DeBERTa hidden states to the 6 context images' SegFormer features to incorporate visual information.
We denote the SegFormer patch features by:
$$\mathcal{V}^{\mathrm{(pt)}}=\{\mathbf{v}_{j,p}^{\mathrm{(pt)}}\in\mathbb{R}^{512}\}_{j=1,\,p=1}^{6,\;16\times16}\,,\tag{8}$$
where *j, p* respectively indexes images and patches.
All image features (16×16×6 = 1536 vectors) are concatenated into one long sequence for the DeBERTa hidden states (with text & CLIPScore information) to cross-attend to. As a sequence with length over 1.5K would lead to large memory footprint for attention operations, we downsample the patch features (to 8×8×6 = 384 vectors) through strided 2D group convolution before feeding them 18github.com/jmhessel/clipscore to cross-attention, i.e.,
$$\dot{\mathcal{V}}^{\mathrm{(pt)}}=\mathrm{StridedGroupConv2D}(\mathcal{V}^{\mathrm{(pt)}})\qquad\mathrm{(9)}$$ $$\mathcal{H}^{(l)}\leftarrow\mathrm{Attention}(\mathcal{H}^{(l)},\dot{\mathcal{V}}^{\mathrm{(pt)}},\dot{\mathcal{V}}^{\mathrm{(pt)}})\,,\quad\mathrm{(10)}$$
where $\mathcal{H}^{(l)}$ is the $l$-th-layer DeBERTa hidden states. The patch features in $\dot{\mathcal{V}}^{\mathrm{(pt)}}$ are further mean-pooled to form the inputs (for target images), i.e., $\mathcal{V}$, to our final MLP classifier (please check Eqns. 3 & 6, too):
$${\mathcal{V}}=\{\mathbf{v}_{j}\}_{j=1}^{6}=\{\,\mathrm{MeanPool}(\{\,\mathbf{\hat{v}}_{j,p}^{(\mathrm{pt})}\}_{p=1}^{8\times8})\,\}_{j=1}^{6}\tag{11}$$
In the model settings whose performance is reported in Table 1 (i.e., ablations a. & c.), we place two such cross-attention layers with tied weights before all DeBERTa self-attention layers to give the model more chances to digest and reason with visual inputs. Doing so introduces 8M new trainable parameters (cf. ∼1M for our best model). We also try to place these cross-attention layers later in the model in unreported experiments. However, when using visual cross-attention, our listener model always suffers more from overfitting—lower training loss but worse evaluation accuracy.
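A hypothetical sketch of the downsampling step in Eq. 9, assuming the grouping is done per image so that the 6 images' channels stay separate (the cross-attention layers themselves are omitted):

```python
import torch
import torch.nn as nn

class PatchDownsampler(nn.Module):
    def __init__(self, d_img=512, num_images=6):
        super().__init__()
        # groups=num_images keeps each image's 512 channels in its own group
        self.conv = nn.Conv2d(num_images * d_img, num_images * d_img,
                              kernel_size=2, stride=2, groups=num_images)

    def forward(self, patches):
        # patches: (B, 6, 512, 16, 16) -> (B, 6*8*8, 512) key/value sequence
        b, n, c, h, w = patches.shape
        x = self.conv(patches.reshape(b, n * c, h, w))          # (B, 6*512, 8, 8)
        x = x.reshape(b, n, c, h // 2, w // 2)
        return x.flatten(3).permute(0, 1, 3, 2).reshape(b, n * (h // 2) * (w // 2), c)
```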
## C Adapting Takmaz et al. (2020)'s Model for Our Listener Task
The reference resolution model in (Takmaz et al.,
2020) contains two components: query encoder and context encoder:
- Query encoder: takes in BERT embeddings of a *current utterance* and the concatenation of 6 context images' ResNet features, and outputs one representation through learnable weighted averaging (across utterance timesteps).
- Context encoder: encodes each of the 6 images and the associated *reference chain* (i.e., past utterances referring to that image) separately.
The average of each reference chain utterance's BERT embeddings gets summed with that image's ResNet features to form the context representation for that image.
The model is based on fully-connected layers entirely. Finally, dot products between the query representation and 6 context representations are taken, and the arg max is deemed the referent image of the current utterance.
To adapt their model to our full listener task, we feed to the query encoder BERT embeddings of the *whole round of dialogue* and ResNet features of the *target image* instead. We *mean-pool* the 6
| | Dataset size | Inputs | Tgt. resolution | SoTA E2E performance | SoTA techniques |
|---|--------------|--------|-----------------|----------------------|-----------------|
| MutualFriends (He et al., 2017) | 11K dialogues | Text (tabular) | Bilateral | 96% (He et al., 2017) | GNN, LSTM |
| GuessWhat?! (De Vries et al., 2017) | 150K dialogs, 66K imgs | Text & image | Unilateral | 63% (Tu et al., 2021) | ViLBERT |
| OneCommon (Udagawa and Aizawa, 2019) | 5K dialogues | Text & dots on plane | Bilateral | 76% (Fried et al., 2021) | LSTM, CRF, RSA |
| PhotoBook (Haber et al., 2019) | 12.5k dialogs, 360 imgs | Text & 6 images | Bilateral | No complete system yet | ResNet, LSTM |
Table 2: Some datasets for collaborative dialogue tasks. Bilateral (or unilateral) 'Tgt. resolution' means whether it
requires both (or just one) players to figure out the entities/objects they should focus on. (Performance is measured by end-to-end task success.)
context encoder representations, concatenate this pooled representation with the query representation, and apply a GELU-activated 2-layer MLP
(similar to our model's) on top of the concatenated representations to predict whether the target image is common or *different*. This modified baseline model can hence be trained using an objective similar to our model's (i.e., Eqn. 7). Note that there is no dense learning signal for this adapted baseline, as the representation from query encoder is already pooled across timesteps.
| Layers fed | valid | test |
|----------------|------------|------------|
| [emb] | 72.4 ± 0.7 | 66.3 ± 0.5 |
| [emb, 1st] | 78.7 ± 1.4 | 71.9 ± 1.6 |
| [emb, 1st∼5th] | 82.2 ± 1.0 | 76.5 ± 1.1 |
| [4th∼9th] | 82.7 ± 0.7 | 76.1 ± 0.6 |
| [7th∼12th] | 83.0 ± 0.6 | 75.9 ± 0.6 |
| All layers | 84.8 ± 1.3 | 77.3 ± 0.3 |
| w/o CLIPScores | 70.7 ± 1.1 | 64.8 ± 1.5 |
| Human | 95.0 | 94.5 |
## D Experiments On Clipscore Injection Layers
Wu and Yang (2022) maintained that feeding timevarying conditions to Transformers more times over the attention layers enhances the conditions' influence, and hence improves performance. Therefore, we choose to infuse CLIPScores with DeBERTa at all attention layers by default. Table 3 shows the performance when we inject CLIPScores to fewer layers. As expected, the more layers CLIPScores are fed to, the better the performance (6 layers > 2 layers > 1 layer, all with p < .01). Yet, infusing at earlier or later layers (3rd ∼5 th columns in Table 3) does not make a meaningful difference.
Table 4: Accuracy (%) with repartitioned train/val sets.
Test sets **(I)/(P)** are identical and are the same as the one used in Tables 1 & 3. They are meant to report test accuracy under **(I)/(P)** partitioning. All results are from the same random seed.
## E Experiments On Overfitting Behavior
| | val (I) | val (P) | test (I/P) |
|---|---------|---------|------------|
| Full model | 63.7 | 97.4 | 71.2 / 76.6 |
| b. − CLIPSc | 58.6 | 91.7 | 63.8 / 63.6 |
| c. − CLIPSc + VisAttn | 57.4 | 99.1 | 63.9 / 57.2 |
Haber et al. (2019) stated that to collect a sufficient number of reference chains for each game theme, only 5 unique combinations (of two sets of 6 images) were picked and shown to the players (in the 5 rounds of a game, with randomized order). This number is drastically smaller than the total number of possible combinations. (Suppose we want the players to have 2∼4 images in common; then there would be $\binom{12}{6}\binom{6}{2}\binom{10}{4}+\binom{12}{6}\binom{6}{3}\binom{9}{3}+\binom{12}{6}\binom{6}{4}\binom{8}{2}\approx4.85\text{M}$
combinations.) Also, we observe that models with full access to image features (i.e., those with visual cross-attention) exhibit worse overfitting. Hence, we suspect that our model overfits to specific image combinations, i.e., memorizing the labels from them. To test this hypothesis out, we repartition our train & validation sets such that a game theme appears in both sets, but in two different ways:
- train/val (I): val set has **unseen** image combinations, but **seen** pairs of players
- train/val (P): val set has **unseen** pairs of players, but **seen** image combinations

The test set is left unchanged. We train the models for 50 epochs without early stopping here.
Performance resulting from these repartitions is shown in Table 4. The numbers support our hypothesis in general. Across different settings, our model does almost perfectly when an image combination
(and hence the correct common/different answers)
is seen during training (i.e., val (P)), and fails when being presented with a new image combination of a seen game theme. As anticipated, the accuracy gap is the worst when visual cross-attention is added.
Moreover, it is worth mentioning that our models perform even worse on 'seen images, unseen image combination' (i.e., val (I)) than on 'unseen images'
(i.e., test set). Therefore, we conjecture that, with such a limited number of images and image combinations, it becomes trivial for deep models to exploit the (prescribed) relationships between inputs and labels, hindering the desirable learning goal—knowing the differences across similar images, and identifying crucial ones for the predictive task with the help of dialogue. This is a major limitation of the PhotoBook dataset.
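As a quick numerical check of the combination count quoted at the start of this appendix (assuming a pool of 12 images per theme, 6 shown to each player, and 2∼4 shared images):

```python
from math import comb

total = (comb(12, 6) * comb(6, 2) * comb(10, 4)
         + comb(12, 6) * comb(6, 3) * comb(9, 3)
         + comb(12, 6) * comb(6, 4) * comb(8, 2))
print(total)   # 4,851,000 -- roughly 4.85M, vs. only 5 combinations used per theme
```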
## F The (Imperfect) Reference Chain Extraction Process
Previous works on reference resolution (Haber et al., 2019; Takmaz et al., 2020) require extracted reference chains for training and evaluation. We rerun experiments for the reference resolution model in (Takmaz et al., 2020) and get an 85% accuracy
(on reference resolution, not our full listener task),
which is similar to the reported number. Upon examining the predictions, we find that 9 out of 10 wrong predictions (w.r.t. extracted labels) with the highest confidence are caused by problematic input data/labels resulting from reference chain extraction. These cases are either due to mislabeled ground truth (while the model actually makes a reasonable prediction), low-quality utterances that provide vague or irrelevant information, reference chains not consistently pointing to one image, or a mix of all the above. Table 5 presents some examples.
## G Further Observations On Our Listener Model Behavior And Outputs
First, we are interested in how characteristics of those $\mathbb{R}^6$ CLIPScore vectors might influence our listener model's decisions. As mentioned in Sec. 3.1, an image tends to get a much higher CLIPScore when being spoken about by the utterance. Therefore, we look at the 3 CLIPScore vectors per round with the largest difference between the highest and 2nd-highest CLIPScore values (a player has to deal with 3 images per round, and we observe that in most cases, there is one utterance talking specifically about each image). We then group rounds (in the test set) according to whether the model predicts all 3 target images correctly as *common* or *different*.21 For the *all-correct* cases, the difference between the top two values in the CLIPScore vectors (3 per round, as said above) has a mean of 0.112 (std = 0.063), whereas in the cases where the model makes one or more mistakes, the mean is 0.101 (std = 0.062). An unpaired t-test indicates a significant difference (p < .001) between the pair of statistics. This suggests that our model works better when CLIPScores contrast different images more clearly.
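A sketch of this analysis, with assumed array shapes and variable names (the actual grouping of rounds comes from the model's end-of-dialogue predictions):

```python
import numpy as np
from scipy.stats import ttest_ind

def top_gaps(round_clipscores, top_n=3):
    # round_clipscores: (K, 6) per-utterance CLIPScore vectors of one round
    scores = np.sort(round_clipscores, axis=1)
    gaps = scores[:, -1] - scores[:, -2]          # highest minus 2nd-highest
    return np.sort(gaps)[-top_n:]                 # the 3 largest gaps in the round

def compare_groups(all_correct_rounds, other_rounds):
    a = np.concatenate([top_gaps(r) for r in all_correct_rounds])
    b = np.concatenate([top_gaps(r) for r in other_rounds])
    return a.mean(), b.mean(), ttest_ind(a, b)    # unpaired t-test
```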
Next, we inspect the cases where our model predicts all 3 target images incorrectly. Out of 111 such rounds, 72 are concentrated in two themes, i.e., *cup & dining table*, and *car & motorcycle*. Images in the two themes are usually more difficult to be told apart. Human players also score a lower 94.1% accuracy on either of the two themes, compared to the 95.3% overall, and 94.5% over the test set. Table 6 displays two examples of such all-wrong rounds (respectively from cup & dining table and *car & motorcycle* game themes). In the first example, target images 1 and 2 are highly similar such that player used 'sandwhich' and 'mug' to describe both of them. In the second example, apart from similar images, multiple questions were thrown at the same time and answered as many as 4 utterances later. Typos (e.g., sandwhich, *vlack*)
and automatically filtered words (e.g., *m**fin*) may also confuse the model. However, we note that with so many inputs (i.e., text, CLIPScores, pooled target image feature) to our listener model, it is not straightforward to figure out the actual causes of wrong predictions.
| "Mislabeled" ground truth and | Not enough | | | | | | |
|---------------------------------|----------------------------------------------------------|-------------------------------------------------|-------------------------------------|------------------------------------------------------------------------|---------------------------|--------|--------|
| "correct" prediction | | information in an utterance | Correct label but wrong prediction | | | | |
| Reference | - | guy riding | | | | | |
| Chain | | biycle with red stripped bicyle | - | My last on is a family sitting at a gray table next to some steps - | A man M on his shirt N/A | | |
| - | does he have gla*ses and and is the elephant a statue | | | | | | |
| - | No sharkboard guy | - | I don't have that one | | | | |
| - | guy on bike with red striped board | | | | | | |
| Utterance | | I have two kids, one holding a red surfboard. | guy in a grey shirt | Yes. | two phones on red | | |
| with laptop | | laptop | | | | | |
| Probability | | 29.98% | | 29.69% | | 28.71% | 28.63% |
| Ground-truth | | | | | | | |
| Prediction | | | | | | | |
| Context and Target Images | Utterances | Labels and Predictions True labels: - Tgt. Image 1: Different - Tgt. Image 2: Common - Tgt. Image 3: Common |
|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------|
| - A: I need to get my eyes checked lol - A: Okay so same english m**fin sandwhich - A: green tea mug - B: Nope - A: okay and I have the half keyboard latte one - B: yes - A: and the last one.. idk - A: it's a sandwhich but it looks like a mess - A: there is a black mug in the bottom left corner - B: Yup and something blue to the top left and striped to the top right? I have that - A: yeah that's it - A: that's all I have - B: Do you have the donut, with the blue mug and red/white staw? - A: nope - B: All done here too! | Model predictions: - Tgt. Image 1: Common - Tgt. Image 2: Different - Tgt. Image 3: Different True labels: - Tgt. Image 1: Common - Tgt. Image 2: Common - Tgt. Image 3: Different | |
| - B: I have the checkered shirt guy. do you have him? - A: do you have man vlack jacket and helmet next to silver car ? - A: Yes i do have the checkered shirt - B: Is that the one at a gas station - A: no its on a street - B: oh then I don't have it - A: do you have red parked motorcycle in fornt of black car ? - B: Do you have one with a guy on a motorcycle in front of a gas station? - B: Yeah I have that one - A: no i do not have gas station - B: ok I'm set - A: me too | Model predictions: - Tgt. Image 1: Different - Tgt. Image 2: Different - Tgt. Image 3: Common | |
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 6 discusses the limitations of our work, including the dataset-specific implications in overfitting, which is expanded in Appendix E, and potential issues coming from upstream CLIP model.
✗ A2. Did you discuss any potential risks of your work?
The decisions made by our model are restricted to the PhotoBook game, and are binary answers on whether an image is common/not common between two players. It's hard to elicit harmful outputs given this setup.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
See last paragraph in Section 1.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
We use the PhotoBook dataset, which is briefly introduced in Section 1.
✓ B1. Did you cite the creators of artifacts you used?
PhotoBook dataset paper, and previous works about it are cited in the introduction. We also provide URLs to public code used in our paper.
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
PhotoBook dataset is publicly available through this website: https://dmg- photobook.github.io/datasets.html, but it lacks license information. Our implementation is based on Python packages installable via PyPI, which come with permissive license terms.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
We make it clear in Section 1 that we address the intended task of the dataset.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. We don't collect extra data. As for the PhotoBook dataset we use, participants are anonymized in the dataset already.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. We don't collect extra data.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
These statistics are discussed in Appendix A.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
## C ✓ **Did You Run Computational Experiments?** Section 4 And Appendices A, D & E.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Model/training settings are discussed in Sections 3.2 & 4, and in Appendices A & B in more detail.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
These details are discussed in Section 4, and expanded in Appendix A. We perform almost no hyperparameter search, and instead just use default DeBERTa hyperparameters in HuggingFace.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
In Section 4, Appendices A & D, and the tables reporting performance, we state that our experiments are run with 3 randomly selected and fixed random seeds. Standard deviations and statistical significance are also reported.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 3.1 & 3.2 discuss our use of the CLIPScore. Appendix A provides more CLIPScore-related implementation details.
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left Blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
jung-etal-2023-bring | Bring More Attention to Syntactic Symmetry for Automatic Postediting of High-Quality Machine Translations | https://aclanthology.org/2023.acl-short.122 | Automatic postediting (APE) is an automated process to refine a given machine translation (MT). Recent findings present that existing APE systems are not good at handling high-quality MTs even for a language pair with abundant data resources, English–German: the better the given MT is, the harder it is to decide what parts to edit and how to fix these errors. One possible solution to this problem is to instill deeper knowledge about the target language into the model. Thus, we propose a linguistically motivated method of regularization that is expected to enhance APE models' understanding of the target language: a loss function that encourages symmetric self-attention on the given MT. Our analysis of experimental results demonstrates that the proposed method helps improving the state-of-the-art architecture's APE quality for high-quality MTs. | # Bring More Attention to Syntactic Symmetry for Automatic Postediting of High-Quality Machine Translations

Baikjin Jung♢ Myungji Lee♡ Jong-Hyeok Lee♢♡ **Yunsu Kim**♢♡
♢Department of Computer Science and Engineering
♡Graduate School of Artificial Intelligence Pohang University of Science and Technology, Republic of Korea
{bjjung, mjlee7, jhlee, yunsu.kim}@postech.ac.kr
## Abstract
Automatic postediting (APE) is an automated process to refine a given machine translation
(MT). Recent findings present that existing APE systems are not good at handling highquality MTs even for a language pair with abundant data resources, English–German: the better the given MT is, the harder it is to decide what parts to edit and how to fix these errors. One possible solution to this problem is to instill deeper knowledge about the target language into the model. Thus, we propose a linguistically motivated method of regularization that is expected to enhance APE models' understanding of the target language: a loss function that encourages symmetric self-attention on the given MT. Our analysis of experimental results demonstrates that the proposed method helps improving the state-of-the-art architecture's APE quality for high-quality MTs.
## 1 Introduction
Automatic postediting (APE) is an automated process to transform a given machine translation (MT)
into a higher-quality text (Knight and Chander, 1994). Since 2015, Conference on Machine Translation (WMT) has been hosting an annual shared task for APE, and most of the recently developed APE systems are within the common framework of representation learning using artificial neural networks to learn postediting patterns from the training data (Chatterjee et al., 2018, 2019, 2020; Akhbardeh et al., 2021).
Since 2018, all participants in the shared task have used Transformer-based models (Vaswani et al., 2017), but recent findings of the shared task (Chatterjee et al., 2018, 2019, 2020; Akhbardeh et al., 2021) cast doubt on whether Transformer-based APE models learn good generalizations because such models' APE quality appears to be significantly affected by external factors such as the source–target language pair, the qualitative
![0_image_0.png](0_image_0.png)
characteristics of the provided data, and the quality of the given MT.
Especially, the good quality of the given MTs has brought great difficulty in performing APE on the WMT 2019 test data set: the better the given MT is, the harder it is to decide what parts to edit and how to correct these errors (Chatterjee et al.,
2018, 2019). The thing to notice is that this outcome is not a question of data scarcity because the language pair of this test data set, English–German, is a language pair provided with abundant training, validation, and test data. Also, it is not a question of data heterogeneity, either: the domain of this test data set, IT, shows a high degree of lexical repetition, which indicates that data sets in this domain use the same small set of lexical items (Chatterjee et al., 2018, 2019; Akhbardeh et al., 2021). Thus, it would be a question of modeling, and one possible solution is to implant deeper knowledge about the target language into the model.
To this end, we propose a new method of regularization that is expected to enhance Transformer-based APE models' understanding of German translations. Specifically, the proposed method is based on *Feldermodell* (§2), an established linguistic model, which implies the need for proper treatment of the underlying symmetry of German sentence structures. To instill the idea of syntactic symmetry into Transformer-based APE models, we introduce a loss function that encourages symmetric self-attention on the given MT. Based on experimental results, we conduct a careful analysis and conclude that the proposed method has a positive effect on improving the state-of-the-art architecture's APE quality for high-quality MTs.
## 2 Linguistic Theory
In German linguistics, *das topologische Satzmodell*
('the topological sentence model') or *das Feldermodell* ('the field model') (Reis, 1980; Wöllstein, 2018; Höhle, 2019) describes how constituents of a sentence are closely related even if they are far apart from each other. Usually, *Feldermodell* divides a clause into *das Vorfeld* ('the prefield'; VF),
die linke Satzklammer ('the left bracket'; LSK),
das Mittelfeld ('the middlefield'; MF), *die rechte* Satzklammer ('the right bracket'; RSK), and das Nachfeld ('the postfield'; NF).
(1) [ Heute VF] [ habe LSK] [ ich MF] [ gesehen RSK] [ zufällig NF],
(2) [ [ dass LSK] [ du eine Tasse Kaffee MF] [ getrunken hast RSK] NF].
These parts are all interrelated; LSK and RSK
are a typical example: while the former holds a finite verb or a complementizer, the latter holds a past participle, an infinitive, and a particle. In
(1), VF holds "*Heute*" ('today'); LSK holds "*habe*"
('have'); MF holds "ich" ('I'); RSK holds "*gesehen*"
('seen'); and NF holds "*zufällig*" ('by chance'). (2)
is an additional NF of (1) and includes its own LSK
holding "*dass*" ('that'); MF holding "du eine Tasse Kaffee" ('you a cup of coffee'); and RSK holding
"*getrunken hast*" ('drank').
For such analyses, special tree structures such as Doppelbaum (Wöllstein, 2018) ('double tree') can be used, which is a bimodal tree (Fig. 1), where two CP, C, IP, I, and VP subtrees are '**symmetric**'
with respect to V. We assume that this structural symmetry is parameterized from the perspective, not only of generative linguistics (Wöllstein, 2018; Höhle, 2019), but also of a parametric model $\mathcal{P} = \{P_\theta \mid \theta \in \Theta\}$, where $P_\theta$ and $\Theta$ are a probability distribution and the parameter space, respectively.
Especially, if we look at APE in terms of sequence-to-sequence learning (Sutskever et al.,
2014), the probability distribution of the output sequence (y1,⋯, yLy) is obtained in the following manner:
$$\begin{array}{l}{{P_{\theta}(y_{1},\cdots,y_{L_{y}}\mid x_{1},\cdots,x_{L_{x}},z_{1},\cdots,z_{L_{z}})}}\\ {{=\prod_{t=1}^{L_{y}}P_{\theta}(y_{t}\mid u,v,y_{1},\cdots,y_{t-1}),}}\end{array}$$
where u and v are the representations of a source text (x1,⋯, xLx) and its MT (z1,⋯, zLz), respectively. In this process, we presume that the syntactic symmetry of the target language affects the resulting distribution Pθ; in other words, this syntactic symmetry would be an inductive bias (Mitchell, 1980) that should be handled properly.
## 3 Methodology
We implement a multi-encoder Transformer model consisting of the "Joint-Final" encoder and the
"Parallel" decoder, which is a state-of-the-art architecture for APE (Shin et al., 2021), and conduct a controlled experiment without concern for usage of performance-centered tuning techniques. Specifically, the Joint-Final encoder consists of a sourcetext encoder and an MT encoder, which process the given source text and MT, respectively. Based on this baseline architecture, we propose a method to encourage the MT encoder to perform symmetric self-attention by minimizing the skewness of each self-attention layer's categorical distribution pself.
The used measure of skewness is
$$(\ddot{\mu}_{3})_{i}=\left(\sum_{j=1}^{\lfloor{\frac{L_{z}}{2}}\rfloor}p_{\mathrm{self}}[i,j]-\sum_{j=\lceil{\frac{L_{z}}{2}}\rceil+1}^{L_{z}}p_{\mathrm{self}}[i,j]\right)^{2},$$
for each token $z_i$ in the given MT $(z_1,\cdots,z_{L_z})$.
Accordingly, the basic cross-entropy loss LCE
is regularized by (µ¨3)i, resulting in a new loss function
$${\mathcal{L}}_{\mathrm{Doppelbaum}}={\mathcal{L}}_{\mathrm{CE}}+\mathbb{E}[\alpha]\,\mathbb{E}[{\ddot{\mu}}_{3}]+(1-\alpha),$$
where $$\mathbb{E}[\alpha]=\frac{\sum_{b=1}^{B}\sum_{i=1}^{L_{z}}\alpha_{b,i}}{B\times L_{z}}$$ is the expected value of coefficients
$$\alpha_{b,i}=\sigma(W^{\mathrm{T}}v_{b,i}+\beta)$$
in the given minibatch, and
$$\mathbb{E}\!\left[{\ddot{\mu}}_{3}\right]={\frac{\sum_{b=1}^{B}\sum_{n=1}^{N}\sum_{h=1}^{H}\sum_{i=1}^{L_{z}}({\ddot{\mu}}_{3})_{b,n,h,i}}{B\times N\times H\times L_{z}}}$$
is the expected value of $(\ddot{\mu}_3)_{b,n,h,i}$. In addition, $(1-\alpha)$ is an initial inducement to utilizing $\ddot{\mu}_3$. In the equations above, $\sigma$ is the sigmoid function, $v$ is the output of the final layer of the MT encoder, $W \in \mathbb{R}^{d_{\text{model}}}$ and $\beta \in \mathbb{R}$ are learned parameters, $B$ is the number of data examples, $N$ is the number of layers, and $H$ is the number of heads.
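A simplified PyTorch sketch (not the authors' implementation) of this regularizer; it assumes the MT encoder exposes its self-attention distributions over the MT tokens (e.g., via an `output_attentions`-style flag), and all tensor names are illustrative.

```python
import torch

def attention_skewness(p_self):
    # p_self: (B, N, H, L_z, L_z) self-attention distributions over MT tokens
    L_z = p_self.size(-1)
    left = p_self[..., : L_z // 2].sum(dim=-1)            # mass on the left half
    right = p_self[..., (L_z + 1) // 2 :].sum(dim=-1)     # mass on the right half
    return (left - right) ** 2                            # (B, N, H, L_z)

def doppelbaum_loss(ce_loss, p_self, v_mt, w, beta):
    # v_mt: (B, L_z, d) final MT-encoder states; w: (d,), beta: scalar parameters
    alpha = torch.sigmoid(v_mt @ w + beta)                # (B, L_z)
    skew = attention_skewness(p_self)
    return ce_loss + alpha.mean() * skew.mean() + (1.0 - alpha.mean())
```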
## 4 Experiment
In the conducted experiment, all hyperparameters are the same as those of Shin et al. (2021) except the learning rate (Appendix A); we basically reproduce their experimental design.
| | DATA SETS | SIZES |
|------------|------------|-----------|
| TRAINING | eSCAPE-NMT | 5,065,187 |
| | WMT 2019 | 13,442 |
| VALIDATION | WMT 2019 | 1,000 |
| TEST | WMT 2019 | 1,023 |
Both the baseline model and the proposed model are trained by using the training data sets and the validation data set listed in Table 1; we first train the models by using eSCAPE-NMT mixed with the WMT 2019 training data in the ratio of 27 ∶ 1, and then tune them by using the WMT 2019 training data solely.
## 5 Results And Analysis
The result of automatic evaluation (Table 2) indicates that the proposed model improves on the baseline model in terms of BLEU (75.47) but does not in terms of TER (16.54), which is unusual. Although those measures have a strong correlation overall (Fig. 2), the proposed model has more outliers whose δBLEU (the value obtained by subtracting a given MT's BLEU from the postedited result's BLEU) is over 20, compared to the baseline model; these must be the ones that bring the improvement in BLEU.
Thus, we present an additional evaluation result to further investigate this mismatch between TER
improvements and BLEU improvements: a relative frequency distribution of successes and failures in APE with regard to the TER difference
![2_image_0.png](2_image_0.png)
| SYSTEMS | TER↓ (σ) | BLEU↑ (σ) |
|------------|------------------|------------------|
| Given MT | 16.84 (19.52) | 74.73 (25.89) |
| Baseline | 16.60† (19.51) | 75.11† (26.21) |
| DOPPELBAUM | 16.54† (19.48) | 75.47†* (26.16) |

(Results on the WMT 2019 test set.)
between a given MT and each model's output (Table 3). Then, the mentioned outliers correspond to PERF, which is the set of the cases where an APE system succeeds in perfectly correcting the given MT with one or more errors, considering that the proposed model's PERF has a µδBLEU (the average of sentence-level BLEU improvements) of 27.21. We see that the proposed model has substantially more PERF cases (5.87%) than the baseline model (4.30%) and that because most of those
'new' (1.57pp) cases are results of nontrivial postediting (Table 4), this increase in the proportion of perfect postediting is valid evidence of the proposed method's effect on enhancing the baseline model's APE quality for high-quality MTs.
| SYSTEMS    |        | MODIFIED |        |       |       |       | INTACT |       | F1   |
|------------|--------|----------|--------|-------|-------|-------|--------|-------|------|
|            |        | RUIN     | DEGR   | EVEN  | IMPR  | PERF  | ACCE   | NEGL  |      |
| Baseline   | %      | 1.86     | 6.65   | 5.67  | 7.14  | 4.30  | 23.36  | 51.03 |      |
|            | µδBLEU | −24.48   | −13.51 | 0.50  | 9.22  | 27.23 | 0.00   | 0.00  | 22.8 |
|            | σδBLEU | 15.48    | 9.42   | 3.38  | 8.43  | 16.39 | 0.00   | 0.00  |      |
| DOPPELBAUM | %      | 1.56     | 7.33   | 5.77  | 7.14  | 5.87  | 23.66  | 48.68 |      |
|            | µδBLEU | −26.12   | −11.72 | −0.42 | 10.04 | 27.21 | 0.00   | 0.00  | 25.4 |
|            | σδBLEU | 16.09    | 9.16   | 3.82  | 8.69  | 16.37 | 0.00   | 0.00  |      |
| TYPES OF APE |            |               | NUMBERS |
|--------------|------------|---------------|---------|
| PERF         | Linguistic | Nouns         | 5       |
|              |            | Expressions   | 5       |
|              |            | Agreement     | 3       |
|              |            | Prepositions  | 2       |
|              | Other      | Punctuation   | 5       |
|              |            | URLs          | 2       |
|              |            | Noise Removal | 2       |
|              | Total      |               | 24      |
| ACCE         | Linguistic | Nouns         | 3       |
|              |            | Expressions   | 2       |
|              |            | Adjectives    | 1       |
|              | Other      | Punctuation   | 2       |
|              | Total      |               | 8       |
In addition, in an actual example where only the proposed model corrects the given MT perfectly
(Table 5), we observe that the proposed model successfully captures the close relation between the verb "*enthält*" ('contains') and its object so that the correct form "*Variablen*" ('variables') is used.
Considering that the adverbial phrase "*zum Beispiel*" ('for example') in the given MT puts some distance between the verb and its object, it appears that the proposed model integrates information from a wider range of constituents than the baseline model; we thus conclude that the proposed method instills *Feldermodell*'s idea of syntactic symmetry into Transformer-based APE models and enhances their understanding of German translations.
Another example (Table 6) suggests that the increase in the proportion of ACCE (0.3pp), the set of cases where an APE system adopts the given, already perfect MT, should be interpreted cautiously. Although professional translators tend to perform "only the necessary and sufficient corrections" (Bojar et al., 2015), the validity of test data created by professional translators, including the WMT 2019 test data set, can also be disputed because other native speakers might argue that they could perform better postediting. For example, some people may consider the hyphenated compound "*Zoom-Werkzeug*" ('Zoom tool') more natural than the closed compound "*Zoomwerkzeug*" (Table 6).
However, considering the large differences in the proportion of NEGL (2.35pp), the set of cases where an APE system neglects to postedit the given MT, and in the F1 score (Table 3), such a risk need not be considered in this analysis. Moreover, the proposed model has fewer RUIN cases (1.56%), where it injects errors into the given, already perfect MT, than the baseline model (1.86%). Although the proposed model has more DEGR cases (7.33%), where it degrades the given MT, than the baseline
| CASE 1: PERF       |                                                                                                             | TER↓ | BLEU↑  |
|--------------------|-------------------------------------------------------------------------------------------------------------|------|--------|
| Source Text        | For example , the following function contains variables that are defined in various block scopes .           |      |        |
| Given MT           | Die folgende Funktion enthält zum Beispiel Variable , die in verschiedenen Codebereichen definiert sind .     | 6.67 | 80.03  |
| Baseline           | Die folgende Funktion enthält zum Beispiel Variable , die in verschiedenen Codebereichen definiert sind .     | 6.67 | 80.03  |
| DOPPELBAUM         | Die folgende Funktion enthält zum Beispiel Variablen , die in verschiedenen Codebereichen definiert sind .    | 0.00 | 100.00 |
| Manual Postediting | Die folgende Funktion enthält zum Beispiel Variablen , die in verschiedenen Codebereichen definiert sind .    |      |        |
Table 5: A case where only the proposed model corrects the given MT perfectly. Considering the manually postedited result, wrong words in the given MT, the APE result of the baseline model, and that of the proposed model are highlighted in pink while correct words are highlighted in green. All the texts are tokenized or detokenized using Moses (Koehn et al., 2007).
| CASE 2: ACCE       |                                           | TER↓  | BLEU↑  |
|--------------------|-------------------------------------------|-------|--------|
| Source Text        | Double-click the Zoom tool .              |       |        |
| Given MT           | Doppelklicken Sie auf das Zoomwerkzeug .  | 0.00  | 100.00 |
| Baseline           | Doppelklicken Sie auf das Zoom-Werkzeug . | 16.67 | 53.73  |
| DOPPELBAUM         | Doppelklicken Sie auf das Zoomwerkzeug .  | 0.00  | 100.00 |
| Manual Postediting | Doppelklicken Sie auf das Zoomwerkzeug .  |       |        |
Table 6: A case where only the proposed model adopts the given, already perfect MT. Details are the same as in Table 5.
(6.65%), the proposed model's quality degradation (µδBLEU = −11.72) is less severe than that of the baseline (µδBLEU = −13.51). Therefore, we conclude that the proposed method results in small but definite improvements.
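To make the categorization behind Table 3 concrete, the following sketch assigns a single APE output to one of the seven categories. It follows the descriptions of PERF, ACCE, NEGL, RUIN, and DEGR given above; the exact treatment of EVEN and IMPR, and the use of sentence-level TER against the postedited reference, are our assumptions.

```python
def categorize(given_mt: str, ape_output: str, ter_mt: float, ter_ape: float) -> str:
    """ter_mt / ter_ape: sentence-level TER of the given MT and of the APE output
    against the manually postedited reference (0.0 means a perfect translation)."""
    if ape_output == given_mt:                 # INTACT: the system kept the given MT
        return "ACCE" if ter_mt == 0.0 else "NEGL"
    # MODIFIED: the system changed the given MT
    if ter_mt == 0.0:
        return "RUIN"                          # injected errors into an already perfect MT
    if ter_ape == 0.0:
        return "PERF"                          # perfectly corrected an imperfect MT
    if ter_ape > ter_mt:
        return "DEGR"                          # made the translation worse
    if ter_ape < ter_mt:
        return "IMPR"                          # improved it, but not perfectly
    return "EVEN"                              # changed it without changing TER
```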
## 6 Conclusion
To improve the APE quality for high-quality MTs, we propose a linguistically motivated method of regularization that enhances Transformer-based APE models' understanding of the target language:
a loss function that encourages APE models to perform symmetric self-attention on a given MT. Experimental results suggest that the proposed method helps improve the state-of-the-art architecture's APE quality for high-quality MTs; we also present a relative frequency distribution of successes and failures in APE and observe increases in the proportion of perfect postediting and in the F1 score.
This evaluation method could be useful for assessing the APE quality for high-quality MTs in general. Actual cases support that the proposed method successfully instills the idea of syntactic symmetry into APE models. Future research should consider different language pairs and different sets of hyperparameters.
## 7 Acknowledgements
This work was supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (Ministry of Science and ICT) (No. 2019-0-01906, Artificial Intelligence Graduate School Program (POSTECH)). We thank Richard Albrecht for assistance in the manual categorization of cases.
## 8 Limitations
First, neither *Feldermodell* (Reis, 1980; Wöllstein, 2018; Höhle, 2019) nor *Doppelbaum* (Wöllstein, 2018) has obtained complete concurrence among linguists. Also, we limit our scope to the English–
German language pair and the IT domain using the WMT 2019 training, validation, and test data sets.
A broader scope would not necessarily provide more confidence in the validity of the conducted experiments because there are hardly any standard setups for experimental research (Chatterjee et al., 2018, 2019; Akhbardeh et al., 2021).
In addition, the conducted experiment should take into consideration the effect of the randomness inherent in training artificial neural networks; different techniques, different hyperparameters, and multiple runs of optimizers (Clark et al., 2011) may produce different results. However, as previous studies (Chatterjee et al., 2018, 2019, 2020; Akhbardeh et al., 2021), including the study on the baseline model (Shin et al., 2021), do not consider the effect of randomness, we also do not investigate it further, considering that training multiple models (Appendix A) to obtain good estimates of TER and BLEU would be computationally expensive.
## References
Farhad Akhbardeh, Arkady Arkhangorodsky, Magdalena Biesialska, Ondˇrej Bojar, Rajen Chatterjee, Vishrav Chaudhary, Marta R. Costa-jussa, Cristina España-Bonet, Angela Fan, Christian Federmann, Markus Freitag, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Leonie Harter, Kenneth Heafield, Christopher Homan, Matthias Huck, Kwabena Amponsah-Kaakyire, Jungo Kasai, Daniel Khashabi, Kevin Knight, Tom Kocmi, Philipp Koehn, Nicholas Lourie, Christof Monz, Makoto Morishita, Masaaki Nagata, Ajay Nagesh, Toshiaki Nakazawa, Matteo Negri, Santanu Pal, Allahsera Auguste Tapo, Marco Turchi, Valentin Vydrin, and Marcos Zampieri. 2021. Findings of the 2021 Conference on Machine Translation (WMT21). In *Proceedings of the Sixth Conference on Machine Translation*,
pages 1–88, Online. Association for Computational Linguistics.
Ondˇrej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi. 2015. Findings of the 2015 Workshop on Statistical Machine Translation.
In *Proceedings of the Tenth Workshop on Statistical*
Machine Translation, pages 1–46, Lisbon, Portugal.
Association for Computational Linguistics.
Rajen Chatterjee, Christian Federmann, Matteo Negri, and Marco Turchi. 2019. Findings of the WMT
2019 Shared Task on Automatic Post-Editing. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2),
pages 11–28, Florence, Italy. Association for Computational Linguistics.
Rajen Chatterjee, Markus Freitag, Matteo Negri, and Marco Turchi. 2020. Findings of the WMT 2020 Shared Task on Automatic Post-Editing. In *Proceedings of the Fifth Conference on Machine Translation*,
pages 646–659, Online. Association for Computational Linguistics.
Rajen Chatterjee, Matteo Negri, Raphael Rubino, and Marco Turchi. 2018. Findings of the WMT 2018 Shared Task on Automatic Post-Editing. In *Proceedings of the Third Conference on Machine Translation:*
Shared Task Papers, pages 710–725, Belgium, Brussels. Association for Computational Linguistics.
Jonathan H. Clark, Chris Dyer, Alon Lavie, and Noah A.
Smith. 2011. Better Hypothesis Testing for Statistical Machine Translation: Controlling for Optimizer Instability. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics:
Human Language Technologies, pages 176–181, Portland, Oregon, USA. Association for Computational Linguistics.
Tilman N. Höhle. 2019. Topologische Felder. In Stefan Müller, Marga Reis, and Frank Richter, editors, Beiträge zur deutschen Grammatik: Gesammelte Schriften von Tilman N. Höhle, 2 edition, volume 5 of *Classics in Linguistics*, pages 7–90. Language Science Press, Berlin, Germany.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A
Method for Stochastic Optimization. In *3rd International Conference on Learning Representations,*
ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. OpenNMT: OpenSource Toolkit for Neural Machine Translation. In Proceedings of ACL 2017, System Demonstrations, pages 67–72, Vancouver, Canada. Association for Computational Linguistics.
Kevin Knight and Ishwar Chander. 1994. Automated Postediting of Documents. In *Proceedings of the* AAAI Conference on Artificial Intelligence, 12, pages 779–784.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation.
In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180, Prague, Czech Republic. Association for Computational Linguistics.
Tom M. Mitchell. 1980. The Need for Biases in Learning Generalizations. Technical report, Rutgers University, New Brunswick, NJ.
Matteo Negri, Marco Turchi, Rajen Chatterjee, and Nicola Bertoldi. 2018. eSCAPE: a Large-scale Synthetic Corpus for Automatic Post-Editing. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018),
pages 24–30, Miyazaki, Japan. European Language Resources Association (ELRA).
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Nikolaos Pappas, Lesly Miculicich, and James Henderson. 2018. Beyond Weight Tying: Learning Joint Input-Output Embeddings for Neural Machine Translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 73–83, Brussels, Belgium. Association for Computational Linguistics.
Marga Reis. 1980. On Justifying Topological Frames: 'Positional Field' and the Order of Nonverbal Constituents in German. Documentation et Recherche en Linguistique Allemande Vincennes, 22-23:59–85.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–
1725, Berlin, Germany. Association for Computational Linguistics.
Jaehun Shin, Wonkee Lee, Byung-Hyun Go, Baikjin Jung, Youngkil Kim, and Jong-Hyeok Lee. 2021. Exploration of Effective Attention Strategies for Neural Automatic Post-Editing with Transformer. ACM
Transactions on Asian and Low-Resource Language Information Processing, 20(6).
Matthew Snover, Bonnie Dorr, Rich Schwartz, Linnea Micciulla, and John Makhoul. 2006. A Study of Translation Edit Rate with Targeted Human Annotation. In *Proceedings of the 7th Conference of the* Association for Machine Translation in the Americas: Technical Papers, pages 223–231, Cambridge, Massachusetts, USA. Association for Machine Translation in the Americas.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014.
Dropout: A Simple Way to Prevent Neural Networks from Overfitting. *Journal of Machine Learning Research*, 15(56):1929–1958.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to Sequence Learning with Neural Networks.
In *Advances in Neural Information Processing Systems*, volume 27. Curran Associates, Inc.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.
Angelika Wöllstein. 2018. Topologisches Satzmodell.
In Jörg Hagemann and Sven Staffeldt, editors, *Syntaxtheorien: Analysen im Vergleich*, volume 28 of Stauffenburg Einführungen, pages 145–166. Stauffenburg, Tübingen, Germany.
Yilin Yang, Liang Huang, and Mingbo Ma. 2018. Breaking the Beam Search Curse: A Study of (Re-)Scoring Methods and Stopping Criteria for Neural Machine Translation. In *Proceedings of the 2018 Conference* on Empirical Methods in Natural Language Processing, pages 3054–3059, Brussels, Belgium. Association for Computational Linguistics.
## A Experimental Details
We use the following hyperparameters: the number of layers N = 6, the number of heads H = 8, the dimension of key vectors dk = 64, the dimension of value vectors dv = 64, the vector dimension for multi-head attention layers dmodel = 512, the vector dimension for the inner layer of position-wise feedforward networks dff = 2,048, the dropout (Srivastava et al., 2014) probability Pdrop = 0.1, the label smoothing value ϵLS = 0.1, minibatches of 25,000 tokens, a learning rate of 2.0, warmup for 18,000 training steps, and a shared vocabulary consisting of 32,000 subword units (Sennrich et al., 2016)
1.
We also use weight tying (Pappas et al., 2018) and the Adam optimizer (Kingma and Ba, 2015) with β1 = 0.9, β2 = 0.998, and ϵ = 10−8. Decoding options are beam search with a beam size b = 5, a length penalty multiplied by a strength coefficient α = 0.6, and beam search stopping (Yang et al.,
2018) with the length ratio lr = 1.3.
We use OpenNMT-py 3.0 (Klein et al., 2017)
2 with the random seed 1128. We first train the models for 100,000 steps, about 36 hours on one NVIDIA GeForce RTX™ 3090, and then tune them around 1,000 steps.
1We used SentencePiece (Apache License 2.0) 2The MIT License.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 8.
✗ A2. Did you discuss any potential risks of your work?
With the standard setup, studies in the field of automatic postediting, including this work, do not involve potential risks.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4 And Section 5.
✓ B1. Did you cite the creators of artifacts you used?
Section 4 and Section 5.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section 4, Section 5, and Appendix A.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
The used artifacts are not considered to have any extraordinary usages.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
The used artifacts are not considered to contain any information that names or uniquely identifies individual people or offensive content.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 1 and Section 4.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 4 and Section 5.
## C ✓ **Did You Run Computational Experiments?** Section 4.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A. The number of parameters are reported in the study on the baseline system.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix A.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 5.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4, Section 5, and Appendix A.
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Section 5. However, the person who helped us only double-checked our analysis; he did not annotate any data.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
yan-etal-2023-embarrassingly | An Embarrassingly Easy but Strong Baseline for Nested Named Entity Recognition | https://aclanthology.org/2023.acl-short.123 | Named entity recognition (NER) is the task to detect and classify entity spans in the text. When entity spans overlap between each other, the task is named as nested NER. Span-based methods have been widely used to tackle nested NER. Most of these methods get a score matrix, where each entry corresponds to a span. However, previous work ignores spatial relations in the score matrix. In this paper, we propose using Convolutional Neural Network (CNN) to model these spatial relations. Despite being simple, experiments in three commonly used nested NER datasets show that our model surpasses several recently proposed methods with the same pre-trained encoders. Further analysis shows that using CNN can help the model find more nested entities. Besides, we find that different papers use different sentence tokenizations for the three nested NER datasets, which will influence the comparison. Thus, we release a pre-processing script to facilitate future comparison. | # An Embarrassingly Easy But Strong Baseline For Nested Named Entity Recognition
Hang Yan∗, Yu Sun∗, Xiaonan Li, Xipeng Qiu†
Shanghai Key Laboratory of Intelligent Information Processing, Fudan University School of Computer Science, Fudan University
{hyan19,lixn20,xpqiu}@fudan.edu.cn [email protected]
## Abstract
Named entity recognition (NER) is the task of detecting and classifying entity spans in text. When entity spans overlap with each other, the task is called nested NER. Span-based methods have been widely used to tackle nested NER. Most of these methods produce a score matrix in which each entry corresponds to a span. However, previous work ignores the spatial relations in the score matrix. In this paper, we propose using a Convolutional Neural Network (CNN) to model these spatial relations. Despite being simple, experiments on three commonly used nested NER datasets show that our model surpasses several recently proposed methods with the same pre-trained encoders. Further analysis shows that using a CNN can help the model find more nested entities. Besides, we find that different papers use different sentence tokenizations for the three nested NER datasets, which influences the comparison. Thus, we release a pre-processing script to facilitate future comparison. 1
## 1 Introduction
Named Entity Recognition (NER) is the task of extracting entities from raw text. It has been a fundamental task in the Natural Language Processing (NLP) field. Previously, this task was mainly solved with the sequence labeling paradigm by assigning a label to each token (Huang et al., 2015; Ma and Hovy, 2016; Yan et al., 2019). However, this method is not directly applicable to the nested NER scenario, since a token may be included in two or more entities. To overcome this issue, the span-based method, which assigns labels to each span, was introduced (Eberts and Ulges, 2020; Li et al., 2020; Yu et al., 2020).
scenario, since a token may be included in two or more entities. To overcome this issue, the spanbased method which assigns labels to each span is introduced (Eberts and Ulges, 2020; Li et al., 2020; Yu et al., 2020).
![0_image_0.png](0_image_0.png)
Eberts and Ulges (2020) use a pooling method over token representations to get the span representation, and then conduct classification on this span representation. Li et al. (2020) transform the NER task into a Machine Reading Comprehension
(MRC) form: they use the entity type as the query and ask the model to select spans that belong to this entity type. Yu et al. (2020) utilize the Biaffine decoder from dependency parsing (Dozat and Manning, 2017) to convert span classification into classifying pairs of start and end tokens. However, these works do not take advantage of the spatial correlations between adjacent spans.
As depicted in Figure 1, spans surrounding a span have special relationships with the center span.
It should be beneficial if we can leverage these spatial correlations. In this paper, we use the Biaffine decoder (Dozat and Manning, 2017) to get a 3D
feature matrix, where each entry represents one span. After that, we view the span feature matrix as a spatial object with channels (like images) and utilize a Convolutional Neural Network (CNN) to model the local interactions between spans.
We compare this simple method with recently proposed methods (Wan et al., 2022; Li et al., 2022; Zhu and Li, 2022; Yuan et al., 2022). To make sure our method is strictly comparable to theirs, we ask the authors for their versions of the data. Although all of them use the same datasets, we find that the statistics, such as the number of sentences and entities, are not the same. The difference is caused by the use of distinct sentence tokenization methods, which influences the performance, as shown in our experiments. To facilitate future comparison, we release a pre-processing script for the ACE2004, ACE2005 and Genia datasets.
Our contributions can be summarized as follows.
- We find that the adjacent spans have special correlations between each other, and we propose using CNN to model the interaction between them. Despite being very simple, it achieves a considerable performance boost in three widely used nested NER datasets.
- We release a pre-processing script for the three nested NER datasets to facilitate direct and fair comparison.
- The way we view the span feature matrix as a spatial object with channels shall shed some light on future exploration of span-based methods for nested NER task.
## 2 Proposed Method
In this section, we first introduce the nested NER task, then describe how to get the feature matrix.
After that, we present the CNN module to model the spatial correlation on the feature matrix. A
general framework can be viewed in Figure 2.
## 2.1 Nested Ner Task
Given an input sentence X = [x1, x2*, . . . , x*n] with n tokens, the nested NER task aims to extract all
![1_image_0.png](1_image_0.png)
entities in X. Each entity can be expressed as a tuple (si, ei, ti), where si and ei are the start and end indices of the entity, and ti ∈ {1, . . . , |T|} is its entity type, with T the set of entity types. As the task name suggests, entities may overlap with each other, but different entities are not allowed to have crossing boundaries. For a sentence with n tokens, there are n(n + 1)/2 valid spans.
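For illustration, enumerating the n(n + 1)/2 valid spans of a sentence is straightforward (indices are 0-based and inclusive in this sketch):

```python
def valid_spans(n: int) -> list[tuple[int, int]]:
    """All n * (n + 1) / 2 spans (start, end) with start <= end."""
    return [(i, j) for i in range(n) for j in range(i, n)]

assert len(valid_spans(4)) == 4 * 5 // 2  # 10 spans for a 4-token sentence
```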
## 2.2 Span-Based Representation
We follow Yu et al. (2020) to formulate this task into a span classification task. Namely, for each valid span, the model assigns an entity label to it.
The method first uses an encoder to encode the input sentence as follows:
$$\mathbf{H} = \mathrm{Encoder}(X),$$

where H ∈ R^{n×d} and d is the hidden size. Various pre-trained models, such as BERT (Devlin et al., 2019), are usually used as the encoder. For a word tokenized into several pieces, we use max-pooling over its pieces' hidden states to aggregate its representation.
Next, we use a multi-head Biaffine decoder (Dozat and Manning, 2017; Vaswani et al.,
2017) to get the score matrix R as follows:
$$\begin{array}{r}{\mathbf{H}_{s}=\mathrm{LeakyReLU}(\mathbf{H}W_{s}),}\\ {\mathbf{H}_{e}=\mathrm{LeakyReLU}(\mathbf{H}W_{e}),}\\ {\mathbf{R}=\mathrm{MHBiaffine}(\mathbf{H}_{s},\mathbf{H}_{e}),}\end{array}$$
|                               | # Param. (Million) | ACE2004 P | R       | F1      | ACE2005 P | R       | F1      |
|-------------------------------|--------------------|-----------|---------|---------|-----------|---------|---------|
| *Data from Li et al. (2022)*  |                    |           |         |         |           |         |         |
| W2NER (2022)[BERT-large]      | 355.4              | 87.33     | 87.71   | 87.52   | 85.03     | 88.62   | 86.79   |
| Ours[BERT-large]              | 345.1              | 87.8238   | 87.4020 | 87.6118 | 86.3961   | 87.2434 | 86.8245 |
| w.o. CNN[BERT-large]          | 343.6              | 86.5448   | 87.0941 | 86.8121 | 84.8826   | 86.9933 | 85.9227 |
| *Data from Wan et al. (2022)* |                    |           |         |         |           |         |         |
| SG (2022)[BERT-base]          | 112.3              | 86.70     | 85.93   | 86.31   | 84.37     | 85.87   | 85.11   |
| Ours[BERT-base]               | 110.5              | 86.8561   | 86.4536 | 86.6522 | 84.9449   | 85.4027 | 85.1616 |
| w.o. CNN[BERT-base]           | 109.1              | 85.7946   | 85.7812 | 85.7822 | 82.9121   | 84.8923 | 83.8916 |
| *Data from Zhu and Li (2022)* |                    |           |         |         |           |         |         |
| BS (2022)[RoBERTa-base]       | 125.6              | 88.43     | 87.53   | 87.98   | 86.25     | 88.07   | 87.15   |
| Ours[RoBERTa-base]            | 125.6              | 87.7727   | 88.2836 | 88.0314 | 86.5878   | 87.9446 | 87.2548 |
| w.o. CNN[RoBERTa-base]        | 125.2              | 86.7127   | 87.4042 | 87.0518 | 85.4839   | 87.5459 | 86.5026 |
| *Data from this work*         |                    |           |         |         |           |         |         |
| W2NER (2022)[BERT-large]†     | 355.4              | 87.1711   | 87.7019 | 87.4311 | 85.7830   | 87.8124 | 86.7721 |
| Ours[BERT-large]              | 345.1              | 87.9830   | 87.5022 | 87.7416 | 86.2665   | 87.5631 | 86.9123 |
| w.o. CNN[BERT-large]          | 343.6              | 86.6068   | 86.4836 | 86.5419 | 84.9134   | 87.3926 | 86.1330 |
| BS (2022)[RoBERTa-base]†      | 125.6              | 87.3240   | 86.8416 | 87.0824 | 86.5838   | 87.8459 | 87.2032 |
| Ours[RoBERTa-base]            | 125.6              | 87.3341   | 87.2925 | 87.3116 | 86.7029   | 88.1654 | 87.4226 |
| w.o. CNN[RoBERTa-base]        | 125.2              | 86.0936   | 86.8823 | 86.4817 | 85.1767   | 88.035  | 86.5638 |
where Ws, We ∈ R^{d×h}, h is the hidden size, MHBiaffine(·, ·) is the multi-head Biaffine decoder2, and R ∈ R^{n×n×r}, where r is the feature size. Each cell (i, j) in R can be seen as the feature vector v ∈ R^r for the corresponding span. For the lower triangle of R (where i > j), the span contains the words from the j-th to the i-th token (therefore, one span will have two entries if its length is larger than 1).
## 2.3 Cnn On Feature Matrix
As shown in Figure 1, the cell has relations with cells around. Therefore, we propose using CNN to model these interactions. We repeat the following CNN block several times in our model:
$$\mathbf{R}' = \mathrm{Conv2d}(\mathbf{R}), \quad \mathbf{R}'' = \mathrm{GeLU}(\mathrm{LayerNorm}(\mathbf{R}' + \mathbf{R})),$$

where Conv2d, LayerNorm and GeLU are the 2D CNN, layer normalization (Ba et al., 2016) and the GeLU activation function (Hendrycks and Gimpel, 2016). The layer normalization is conducted over the feature dimension. Note that since the number of tokens n varies across sentences, their Rs have different shapes. To make sure results are the same when R is processed in a batch, the 2D CNN has no bias term, and all the paddings in R are filled with 0.
2The detailed description is in the Appendix A.1.
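A minimal PyTorch sketch of one such CNN block follows; the module and argument names are our own, and the handling of the padding mask is simplified.

```python
import torch
import torch.nn as nn

class SpanConvBlock(nn.Module):
    """One CNN block applied to the span feature matrix R of shape (B, n, n, r)."""

    def __init__(self, r: int, kernel_size: int = 3):
        super().__init__()
        # bias=False so that zero-padded cells remain zero, which keeps results
        # identical whether or not sentences are processed in a batch.
        self.conv = nn.Conv2d(r, r, kernel_size, padding=kernel_size // 2, bias=False)
        self.norm = nn.LayerNorm(r)  # normalizes over the feature dimension
        self.act = nn.GELU()

    def forward(self, R: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # R: (B, n, n, r); mask: (B, n, n) with 1 for real cells and 0 for padding.
        conv_out = self.conv(R.permute(0, 3, 1, 2)).permute(0, 2, 3, 1)  # R'
        out = self.act(self.norm(conv_out + R))                          # R''
        return out * mask.unsqueeze(-1)  # re-zero padded cells after every block
```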
## 2.4 The Output
We use a perceptron to get the prediction logits P as follows:3

$$\mathbf{P} = \mathrm{Sigmoid}(W_o(\mathbf{R} + \mathbf{R}'') + b),$$

where Wo ∈ R^{|T|×r}, b ∈ R^{|T|}, and P ∈ R^{n×n×|T|}. Then, we use the gold labels yij and the binary cross-entropy to calculate the loss as

$$\mathbb{L}_{\mathrm{BCE}}=-\sum_{0\leq i,j<n}y_{ij}\log(P_{ij}).$$

More details about our proposed method during training and inference are described in Appendix A.
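A sketch of the output layer and loss is given below, assuming the score matrix R and the CNN output R'' from above and symmetric 0/1 gold labels y (see Appendix A.2). Note that the standard binary cross-entropy used here also includes the (1 − y) log(1 − P) term, whereas the equation above writes only the positive part.

```python
import torch
import torch.nn.functional as F

def span_predictions_and_loss(R, R_cnn, y, W_o, b):
    """R, R_cnn: (B, n, n, r); y: (B, n, n, T) symmetric gold labels;
    W_o: (T, r); b: (T,). Returns probabilities P and the BCE loss."""
    logits = torch.einsum("bijr,tr->bijt", R + R_cnn, W_o) + b
    P = torch.sigmoid(logits)
    loss = F.binary_cross_entropy(P, y.float())
    return P, loss
```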
## 3 Experiment

## 3.1 Experimental Setup
To verify the effectiveness of our proposed method, we conduct experiments in three widely used nested NER datasets, ACE 20044(Doddington et al.,
2004), ACE 20055(Walker and Consortium, 2005)
and Genia (Kim et al., 2003).
3We did not use the Softmax because in very rare cases (such as in the ACE2005 and Genia datasets), one span can have more than one entity tag.
4https://catalog.ldc.upenn.edu/LDC2005T09
5https://catalog.ldc.upenn.edu/LDC2006T06

Besides, we choose recently published papers as our baselines. To make sure our experiments are strictly comparable to theirs, we ask the authors for their versions of the data. The data statistics for each paper are listed in Appendix B. For ACE2004 and ACE2005, although all of them use the same document split as suggested (Lu and Roth, 2015),
they use different sentence tokenizations, resulting in different numbers of sentences and entities.
To facilitate future research on nested NER, we release the pre-processing code and fix some tokenization issues to avoid including unannotated text and dropping entities. As for the Genia data, there are some annotation conflicts. For example, the document with bibliomisc MEDLINE:97218353 is duplicated in the original data, and different works annotate it differently. We fix these conflicts. We replicate each experiment five times and report the average performance with standard deviation.
| Genia                          | # Param. (Million) | P       | R       | F1          |
|--------------------------------|--------------------|---------|---------|-------------|
| *Data from Li et al. (2022)*   |                    |         |         |             |
| W2NER (2022)                   | 113.6              | 83.10   | 79.76   | 81.39       |
| Ours                           | 112.6              | 83.1824 | 79.708  | **81.40**11 |
| w.o. CNN                       | 111.1              | 80.664  | 79.767  | 80.215      |
| *Data from Wan et al. (2022)*  |                    |         |         |             |
| SG (2022)                      | 112.7              | 77.92   | 80.74   | 79.30       |
| Ours                           | 112.2              | 81.0548 | 77.8765 | **79.42**20 |
| w.o. CNN                       | 111.1              | 78.6041 | 78.3552 | 78.4716     |
| *Data from Yuan et al. (2022)* |                    |         |         |             |
| Triaffine (2022)               | 526.5              | 80.42   | 82.06   | 81.23       |
| Ours                           | 128.4              | 83.379  | 79.4315 | **81.35**8  |
| w.o. CNN                       | 111.1              | 80.8723 | 79.4723 | 80.1616     |
| *Data from this work*          |                    |         |         |             |
| W2NER†                         | 113.6              | 81.5861 | 79.1149 | 80.3223     |
| Ours                           | 112.6              | 81.5221 | 79.1718 | **80.33**13 |
| w.o. CNN                       | 111.1              | 78.5928 | 79.8514 | 79.2212     |
## 3.2 Main Results
Results for ACE2004 and ACE2005 are listed in Table 1, and results for Genia are listed in Table 2. When using the same data as previous work, our simple CNN model surpasses the baselines with fewer or a similar number of parameters, which shows that using a CNN to model the interaction between neighboring spans can be beneficial to the nested NER task. Besides, in the bottom block, we reproduce some baselines on our newly processed data to facilitate future comparison. Comparing the last block (processed by us) with the upper blocks (data from previous work), different tokenizations can indeed influence the performance. Therefore, we appeal for using the same tokenization in future comparisons.
|          | FEPR    | FERE    | NEPR    | NERE    |
|----------|---------|---------|---------|---------|
| ACE2004  |         |         |         |         |
| Ours     | 86.90.2 | 87.30.5 | 88.40.6 | 88.80.9 |
| w.o. CNN | 86.30.8 | 86.80.3 | 89.40.8 | 86.61.3 |
| ACE2005  |         |         |         |         |
| Ours     | 86.20.6 | 88.30.1 | 91.40.5 | 89.00.8 |
| w.o. CNN | 85.20.7 | 87.90.3 | 91.30.5 | 86.20.8 |
| Genia    |         |         |         |         |
| Ours     | 81.70.2 | 79.40.2 | 71.71.6 | 75.51.3 |
| w.o. CNN | 79.00.3 | 80.00.1 | 72.71.2 | 64.81.0 |
## 3.3 Why Cnn Helps
To study why the CNN can boost the performance on the nested NER datasets, we split entities into two kinds: entities that overlap with other entities and entities that do not. We design 4 metrics, FEPR, FERE, NEPR and NERE, which are flat entity precision, flat entity recall, nested entity precision and nested entity recall, respectively,6 and list the results in Table 3. Compared with models without CNN, the NERE with CNN improves by 2.2, 2.8 and 10.7 points on ACE2004, ACE2005 and Genia, respectively. Namely, much of the performance improvement can be ascribed to finding more nested entities. This is expected, as the CNN can be more effective at exploiting neighboring entities when they are nested.
## 4 Related Work
Previously, four kinds of paradigms have been proposed to solve the nested NER task.
The first one is the sequence labeling framework (Straková et al., 2019); since one token can be contained in more than one entity, the Cartesian product of the entity labels is used. However, the Cartesian labels will suffer from the long-tail issue.

6The detailed calculation of the 4 metrics is described in Appendix D.
The second one is to use the hypergraph to efficiently represent spans (Lu and Roth, 2015; Muis and Lu, 2016; Katiyar and Cardie, 2018; Wang and Lu, 2018). The shortcoming of this method is the complex decoding.
The third one is the sequence-to-sequence
(Seq2Seq) framework (Sutskever et al., 2014; Lewis et al., 2020; Raffel et al., 2020) to generate the entity sequence. The entity sequence can be the entity pointer sequence (Yan et al., 2021; Fei et al., 2021) or the entity text sequence (Lu et al.,
2022). Nevertheless, the Seq2Seq method suffers from time-consuming decoding.
The fourth one is to conduct span classification.
Eberts and Ulges (2020) proposed to enumerate all possible spans within a sentence and use a pooling method to get the span representation, while Yu et al. (2020) proposed to use the start and end tokens of a span to pinpoint the span and use the Biaffine decoder to get the scores for each span. The span-based methods are friendly to parallelism and the decoding is easy. Therefore, this formulation has been widely adopted (Wan et al., 2022; Zhu and Li, 2022; Li et al., 2022; Yuan et al., 2022). However, the relation between neighboring spans was ignored in previous work.
## 5 Conclusion
In this paper, we propose using a CNN on the score matrix of a span-based NER model. Although this method is very simple, it achieves comparable or better performance than recently proposed methods. Analysis shows that exploiting the spatial correlation between neighboring spans through the CNN can help the model find more nested entities. Moreover, experiments show that different tokenizations indeed influence the performance. Therefore, it is necessary to make sure all comparative baselines use the same tokenization. To facilitate future comparison, we release a new pre-processing script for the three nested NER datasets.
## Limitations
While we discover that simply applying a CNN on top of the score matrix of a span-based NER model performs well in the nested NER scenario, there are still some limitations worth discussing. Firstly, we mainly choose three commonly used nested NER datasets, which may limit the generality of our findings. Secondly, we only focus on nested NER tasks, since the spatial relations between spans are more intuitive and common in the nested scenario than in flat NER. However, the principle of using a CNN to model these relations is also applicable to spans in the flat NER task. Future work can take flat NER into consideration based on our exploration and experiment on more datasets.
## Acknowledgements
We would like to thank the anonymous reviewers for their insightful comments. We also thank the developers of fastNLP7 and fitlog8. This work was supported by the National Natural Science Foundation of China (No. 62236004 and No. 62022027)
and CCF-Baidu Open Fund.
## References
Lei Jimmy Ba, Jamie Ryan Kiros, and Geoffrey E.
Hinton. 2016. Layer normalization. *CoRR*,
abs/1607.06450.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA,
June 2-7, 2019, Volume 1 (Long and Short Papers),
pages 4171–4186. Association for Computational Linguistics.
George R. Doddington, Alexis Mitchell, Mark A. Przybocki, Lance A. Ramshaw, Stephanie M. Strassel, and Ralph M. Weischedel. 2004. The automatic content extraction (ACE) program - tasks, data, and evaluation. In *Proceedings of the Fourth International* Conference on Language Resources and Evaluation, LREC 2004, May 26-28, 2004, Lisbon, Portugal. European Language Resources Association.
Timothy Dozat and Christopher D. Manning. 2017.
Deep biaffine attention for neural dependency parsing. In *5th International Conference on Learning* Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
7https://github.com/fastnlp/fastNLP. FastNLP is a natural language processing python package.
8https://github.com/fastnlp/fitlog. Fitlog is an experiment tracking package.

Markus Eberts and Adrian Ulges. 2020. Span-based joint entity and relation extraction with transformer pre-training. In ECAI 2020 - 24th European Conference on Artificial Intelligence, 29 August-8 September 2020, Santiago de Compostela, Spain, August 29 - September 8, 2020 - Including 10th Conference on Prestigious Applications of Artificial Intelligence
(PAIS 2020), volume 325 of *Frontiers in Artificial Intelligence and Applications*, pages 2006–2013. IOS
Press.
Hao Fei, Donghong Ji, Bobo Li, Yijiang Liu, Yafeng Ren, and Fei Li. 2021. Rethinking boundaries: Endto-end recognition of discontinuous mentions with pointer networks. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 12785–12793. AAAI Press.
Dan Hendrycks and Kevin Gimpel. 2016. Bridging nonlinearities and stochastic regularizers with gaussian error linear units. *CoRR*, abs/1606.08415.
Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging.
CoRR, abs/1508.01991.
Arzoo Katiyar and Claire Cardie. 2018. Nested named entity recognition revisited. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 861–871. Association for Computational Linguistics.
Jin-Dong Kim, Tomoko Ohta, Yuka Tateisi, and Jun'ichi Tsujii. 2003. GENIA corpus - a semantically annotated corpus for bio-textmining. In *Proceedings of* the Eleventh International Conference on Intelligent Systems for Molecular Biology, June 29 - July 3, 2003, Brisbane, Australia, pages 180–182.
Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang.
2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining.
Bioinform., 36(4):1234–1240.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7871–7880.
Association for Computational Linguistics.
Jingye Li, Hao Fei, Jiang Liu, Shengqiong Wu, Meishan Zhang, Chong Teng, Donghong Ji, and Fei Li. 2022.
Unified named entity recognition as word-word relation classification. In Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth
Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelveth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022 Virtual Event, February 22 - March 1, 2022, pages 10965–10973. AAAI Press.
Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2020. A unified MRC
framework for named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 5849–5859. Association for Computational Linguistics.
Wei Lu and Dan Roth. 2015. Joint mention extraction and classification with mention hypergraphs. In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015,*
Lisbon, Portugal, September 17-21, 2015, pages 857–
867. The Association for Computational Linguistics.
Yaojie Lu, Qing Liu, Dai Dai, Xinyan Xiao, Hongyu Lin, Xianpei Han, Le Sun, and Hua Wu. 2022. Unified structure generation for universal information extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics
(Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 5755–5772. Association for Computational Linguistics.
Xuezhe Ma and Eduard H. Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Linguistics.
Aldrian Obaja Muis and Wei Lu. 2016. Learning to recognize discontiguous entities. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 75–84. The Association for Computational Linguistics.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21:140:1–140:67.
Jana Straková, Milan Straka, and Jan Hajic. 2019. Neural architectures for nested NER through linearization.
In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 5326–5331. Association for Computational Linguistics.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014.
Sequence to sequence learning with neural networks.
In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104–3112.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems 30: Annual Conference on Neural* Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008.
C. Walker and Linguistic Data Consortium. 2005. ACE
2005 Multilingual Training Corpus. LDC corpora.
Linguistic Data Consortium.
Juncheng Wan, Dongyu Ru, Weinan Zhang, and Yong Yu. 2022. Nested named entity recognition with spanlevel graphs. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics*
(Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 892–903. Association for Computational Linguistics.
Bailin Wang and Wei Lu. 2018. Neural segmental hypergraphs for overlapping mention recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 204–
214. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Hang Yan, Bocao Deng, Xiaonan Li, and Xipeng Qiu.
2019. TENER: adapting transformer encoder for named entity recognition. *CoRR*, abs/1911.04474.
Hang Yan, Tao Gui, Junqi Dai, Qipeng Guo, Zheng Zhang, and Xipeng Qiu. 2021. A unified generative framework for various NER subtasks. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 5808–5822. Association for Computational Linguistics.
Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020.
Named entity recognition as dependency parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 6470–6476. Association for Computational Linguistics.
Zheng Yuan, Chuanqi Tan, Songfang Huang, and Fei Huang. 2022. Fusing heterogeneous factors with triaffine mechanism for nested named entity recognition. In Findings of the Association for Computational Linguistics: ACL 2022, Dublin, Ireland, May
22-27, 2022, pages 3174–3186. Association for Computational Linguistics.
Enwei Zhu and Jinpeng Li. 2022. Boundary smoothing for named entity recognition. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL
2022, Dublin, Ireland, May 22-27, 2022, pages 7096–
7108. Association for Computational Linguistics.
## A Detailed Proposed Method

## A.1 Multi-Head Biaffine Decoder
The input of the multi-head Biaffine decoder is two matrices Hs, He ∈ R^{n×h}, and the output is R ∈ R^{n×n×r}. The formulation of the multi-head Biaffine decoder is as follows:

$$\begin{array}{c}{\mathbf{S}_{1}[i,j]=(\mathbf{H}_{s}[i]\oplus\mathbf{H}_{e}[j]\oplus\mathbf{w}_{|i-j|})W,}\\ {\{\mathbf{H}_{s}^{(k)}\},\{\mathbf{H}_{e}^{(k)}\}=\mathrm{Split}(\mathbf{H}_{s}),\mathrm{Split}(\mathbf{H}_{e}),}\\ {\mathbf{S}_{2}^{(k)}[i,j]=\mathbf{H}_{s}^{(k)}[i]\,U\,\mathbf{H}_{e}^{(k)}[j]^{T},}\\ {\mathbf{S}_{2}=\mathrm{Concat}(\mathbf{S}_{2}^{(1)},...,\mathbf{S}_{2}^{(K)}),}\\ {\mathbf{R}=\mathbf{S}_{1}+\mathbf{S}_{2},}\end{array}$$

where Hs, He ∈ R^{n×h}, h is the hidden size, w_{|i−j|} ∈ R^c is the span length embedding for length |i − j|, W ∈ R^{(2h+c)×r}, S1 ∈ R^{n×n×r}, r is the biaffine feature size, Split(·) equally splits a matrix in the last dimension (thus H_s^{(k)}, H_e^{(k)} ∈ R^{n×h_k}, where h_k is the hidden size for each head), U ∈ R^{h_k×r_k×h_k}, S2 ∈ R^{n×n×r}, and R ∈ R^{n×n×r}.

We do not use multi-head for W, because it does not occupy too many parameters and using multi-head for W harms the performance slightly.
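A minimal PyTorch sketch of this decoder follows; the parameter initialization, the maximum length for the span length embedding, and other such details are our own assumptions.

```python
import torch
import torch.nn as nn

class MultiHeadBiaffine(nn.Module):
    """Sketch of the multi-head Biaffine decoder described above."""

    def __init__(self, h: int, r: int, c: int, K: int, max_len: int = 512):
        super().__init__()
        assert h % K == 0 and r % K == 0
        self.K, self.hk, self.rk = K, h // K, r // K
        self.len_emb = nn.Embedding(max_len, c)              # w_{|i-j|}
        self.W = nn.Linear(2 * h + c, r, bias=False)         # projection for S1
        self.U = nn.Parameter(torch.randn(K, self.hk, self.rk, self.hk) * 0.02)

    def forward(self, Hs: torch.Tensor, He: torch.Tensor) -> torch.Tensor:
        B, n, h = Hs.shape
        dist = (torch.arange(n, device=Hs.device)[:, None]
                - torch.arange(n, device=Hs.device)[None, :]).abs()
        w = self.len_emb(dist)                               # (n, n, c)
        pair = torch.cat([Hs[:, :, None, :].expand(B, n, n, h),
                          He[:, None, :, :].expand(B, n, n, h),
                          w[None].expand(B, n, n, -1)], dim=-1)
        S1 = self.W(pair)                                    # (B, n, n, r)
        Hs_k = Hs.view(B, n, self.K, self.hk)
        He_k = He.view(B, n, self.K, self.hk)
        # S2^{(k)}[i, j] = Hs^{(k)}[i] U^{(k)} He^{(k)}[j]^T for each head k
        S2 = torch.einsum("bikh,khrg,bjkg->bijkr", Hs_k, self.U, He_k)
        S2 = S2.reshape(B, n, n, self.K * self.rk)           # concatenate heads
        return S1 + S2
```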
## A.2 Training Loss
Unlike previous works that only use the upper triangle part to get the loss (Yu et al., 2020; Zhu and Li, 2022), we use both upper and lower triangles to calculate the loss, as depicted in section 2.4. The reason is that in order to conduct batch computation, we cannot solely compute features from the upper triangle part. Since features from the lower triangle part have been computed, we also use them for the output. The tag for the score matrix is symmetric, namely, the tag in the (*i, j*)-th entry is the same as that in the (*j, i*)-th.
| Dataset | Data version | Sent. #Train | Sent. #Dev | Sent. #Test | Sent. Avg. Len | Ment. #Ovlp. | Ment. #Train | Ment. #Dev | Ment. #Test | Ment. Avg. Len |
|---------|--------------|--------------|------------|-------------|----------------|--------------|--------------|------------|-------------|----------------|
| ACE2004 | W2NER        | 6,802        | 813        | 897         | 20.12          | 12,571       | 22,056       | 2,492      | 3,020       | 2.5            |
|         | SG           | 6,198        | 742        | 809         | 21.55          | 12,666       | 22,195       | 2,514      | 3,034       | 2.51           |
|         | BS           | 6,799        | 829        | 879         | 20.43          | 12,679       | 22,207       | 2,511      | 3,031       | 2.51           |
|         | Ours         | 6,297        | 742        | 824         | 23.52          | 12,690       | 22,231       | 2,514      | 3,036       | 2.64           |
| ACE2005 | W2NER        | 7,606        | 1,002      | 1,089       | 17.77          | 12,179       | 24,366       | 3,188      | 2,989       | 2.26           |
|         | SG           | 7,285        | 968        | 1,058       | 18.60          | 12,316       | 24,700       | 3,218      | 3,029       | 2.26           |
|         | BS           | 7,336        | 958        | 1,047       | 18.90          | 12,313       | 24,687       | 3,217      | 3,027       | 2.26           |
|         | Ours         | 7,178        | 960        | 1,051       | 20.59          | 12,405       | 25,300       | 3,321      | 3,099       | 2.40           |
| Genia   | W2NER        | 15,023       | 1,669      | 1,854       | 25.41          | 10,263       | 45,144       | 5,365      | 5,506       | 1.97           |
|         | SG           | 15,022       | 1,669      | 1,855       | 26.47          | 10,412       | 47,006       | 4,461      | 5,596       | 2.07           |
|         | Triaffine    | 16,692       | -          | 1,854       | 25.41          | 10,263       | 50,509       | -          | 5,506       | 1.97           |
|         | Ours         | 15,038       | 1,765      | 1,732       | 26.47          | 10,315       | 46,203       | 4,714      | 5,119       | 2.0            |
## A.3 Inference
During inference, we calculate the scores in the upper triangle part as

$$\hat{P}_{ij}=(P_{ij}+P_{ji})/2,$$

where i ≤ j. Then we only use this upper triangle score to get the final prediction. The decoding process generally follows Yu et al. (2020)'s method. We first prune out the non-entity spans (spans for which none of the scores is above 0.5), then we sort the remaining spans based on their maximum entity score. We pick spans in this order; if a span's boundary clashes with those of already selected spans, it is ignored.
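A sketch of this decoding procedure is given below, where P_hat is the symmetrized (n, n, |T|) score tensor for one sentence; we take a 'clash' to mean crossing (partially overlapping) boundaries, since nested spans are allowed.

```python
def crosses(i, j, s, e):
    """True if span (i, j) and span (s, e) partially overlap without nesting."""
    return (i < s <= j < e) or (s < i <= e < j)

def decode(P_hat, threshold: float = 0.5):
    """Greedy decoding over the upper triangle of the symmetrized score tensor."""
    n = P_hat.shape[0]
    candidates = []
    for i in range(n):
        for j in range(i, n):
            best = int(P_hat[i, j].argmax())
            if float(P_hat[i, j, best]) > threshold:   # prune non-entity spans
                candidates.append((float(P_hat[i, j, best]), i, j, best))
    candidates.sort(reverse=True)                      # highest maximum score first
    selected = []
    for _, i, j, label in candidates:
        if any(crosses(i, j, s, e) for s, e, _ in selected):
            continue                                   # boundary clash: ignore this span
        selected.append((i, j, label))
    return selected
```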
## B Data
We list the statistics for each dataset in Table 4.10 As shown in the table, the number of sentences and even the number of entities differ for each paper on the same dataset. Therefore, it is not fair to directly compare results. For ACE2004 and ACE2005, we release the pre-processing code to get the data from the LDC files. We make sure no entities are dropped because of the sentence tokenization. Thus, the pre-processed ACE2004 and ACE2005 data from this work in Table 4 have the most entities.
10 As shown in the table, the number of sentences and even the number of entities are different for each paper on the same dataset. Therefore, it is not fair to directly compare results. For the ACE2004 and ACE2005, we release the pre-processing code to get data from the LDC files. We make sure no entities are dropped because of the sentence tokenization. Thus, the pre-processed ACE2004 and ACE2005 data from this work in Table 4 have the most entities.
10The number of entities is different from that reported in their paper, because we found some duplicated sentences in their data.
As for Genia, we appeal for the usage of a consistent train/dev/test split, and we release our data split within the code repository. Moreover, in order to facilitate document-level NER study, we split the Genia dataset based on documents. Therefore, sentences in the train/dev/test splits come from different documents, and the document ratio for train/dev/test is 8:1:1. Besides, we find one conflicting document annotation in Genia and fix this conflict. After comparing different versions of Genia, we find that W2NER (Li et al., 2022) and Triaffine (Yuan et al., 2022) drop the spans with more than one entity tag (there are 31 such entities). Thus, they have fewer nested entities than us, while SG (Wan et al., 2022) includes discontinuous entities and therefore has more nested entities than us.
## C Implementation Details
We use the AdamW optimizer to optimize the model and the transformers package for the pre-trained models (Wolf et al., 2020). The hyper-parameter ranges used in this paper are listed in Table 5.
## D Fepr Fere Nepr Nere
We split entities into two kinds based on whether they overlap with other entities, and the statistics for each dataset are listed in Table 6. When calculating the flat entity precision (FEPR), we first get all flat entities in the prediction and calculate their
|                  | ACE2004    | ACE2005    | Genia |
|------------------|------------|------------|-------|
| # Epoch          | 50         | 50         | 5     |
| Learning Rate    | 2e-5       | 2e-5       | 7e-6  |
| Batch size       | 48         | 48         | 8     |
| # CNN Blocks     | [2, 3]     | [2, 3]     | 3     |
| CNN kernel size  | 3          | 3          | 3     |
| CNN Channel dim. | [120, 200] | [120, 200] | 200   |
| # Head           | [1, 5]     | [1, 5]     | 4     |
| Hidden size h    | 200        | 200        | 400   |
| Warmup factor    | 0.1        | 0.1        | 0.1   |
Table 5: The hyper-parameters in this paper.
|         | # Ent. | # Flat Ent. | # Nested Ent. |
|---------|--------|-------------|---------------|
| ACE2004 | 3,036  | 1,614       | 1,422         |
| ACE2005 | 3,099  | 1,913       | 1,186         |
| Genia   | 5,119  | 3,963       | 1,156         |
proportion of them that appear in the gold. For the flat entity recall (FERE), we get all flat entities in the gold and calculate the proportion of them that appear in the prediction. We compute the nested entity precision (NEPR) and nested entity recall (NERE) similarly.
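A sketch of these four metrics follows, with entities represented as (start, end, label) tuples; deciding whether an entity counts as flat or nested within its own set (prediction for precision, gold for recall) is our reading of the description above.

```python
def overlaps(a, b):
    """True if the token spans of two entities (start, end, label) overlap."""
    return not (a[1] < b[0] or b[1] < a[0])

def split_flat_nested(entities):
    nested = {x for x in entities for y in entities if x != y and overlaps(x, y)}
    return set(entities) - nested, nested

def four_metrics(pred, gold):
    """Returns (FEPR, FERE, NEPR, NERE) for one dataset."""
    pred, gold = set(pred), set(gold)
    p_flat, p_nested = split_flat_nested(pred)
    g_flat, g_nested = split_flat_nested(gold)
    fepr = len(p_flat & gold) / max(len(p_flat), 1)
    fere = len(g_flat & pred) / max(len(g_flat), 1)
    nepr = len(p_nested & gold) / max(len(p_nested), 1)
    nere = len(g_nested & pred) / max(len(g_nested), 1)
    return fepr, fere, nepr, nere
```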
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section "Limitations" (5th section)
✓ A2. Did you discuss any potential risks of your work?
Section "Limitations" (5th section)
✓ A3. Do the abstract and introduction summarize the paper's main claims?
"Abstract" and section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Left Blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? No response.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3 and Appendix C
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 3
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix C
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
amini-etal-2023-hexatagging | Hexatagging: Projective Dependency Parsing as Tagging | https://aclanthology.org/2023.acl-short.124 | We introduce a novel dependency parser, the hexatagger, that constructs dependency trees by tagging the words in a sentence with elements from a finite set of possible tags. In contrast to many approaches to dependency parsing, our approach is fully parallelizable at training time, i.e., the structure-building actions needed to build a dependency parse can be predicted in parallel to each other. Additionally, exact decoding is linear in time and space complexity. Furthermore, we derive a probabilistic dependency parser that predicts hexatags using no more than a linear model with features from a pretrained language model, i.e., we forsake a bespoke architecture explicitly designed for the task. Despite the generality and simplicity of our approach, we achieve state-of-the-art performance of 96.4 LAS and 97.4 UAS on the Penn Treebank test set. Additionally, our parser's linear time complexity and parallelism significantly improve computational efficiency, with a roughly 10-times speed-up over previous state-of-the-art models during decoding. | # Hexatagging: Projective Dependency Parsing as Tagging
Afra Amini∗ Tianyu Liu∗ **Ryan Cotterell**
{afra.amini, tianyu.liu, ryan.cotterell}@inf.ethz.ch
## Abstract
We introduce a novel dependency parser, the hexatagger, that constructs dependency trees by tagging the words in a sentence with elements from a *finite* set of possible tags. In contrast to many approaches to dependency parsing, our approach is fully parallelizable at training time, i.e., the structure-building actions needed to build a dependency parse can be predicted in parallel to each other.
Additionally, exact decoding is linear in time and space complexity. Furthermore, we derive a probabilistic dependency parser that predicts hexatags using no more than a linear model with features from a pretrained language model, i.e., we forsake a bespoke architecture explicitly designed for the task. Despite the generality and simplicity of our approach, we achieve state-of-the-art performance of 96.4 LAS and 97.4 UAS on the Penn Treebank test set. Additionally, our parser's linear time complexity and parallelism significantly improve computational efficiency, with a roughly 10-times speed-up over previous state-of-the-art models during decoding.
https://github.com/rycolab/ parsing-as-tagging
## 1 Introduction
The combination of parallel computing hardware and highly parallelizable neural network architectures (Vaswani et al., 2017) has enabled the pretraining of language models on increasingly large amounts of data. In order to apply pretrained language models to downstream NLP tasks, many practitioners fine-tune the pretrained model while the task-specific architecture is jointly trained from scratch. Typically, the task-specific architecture is built upon the hidden representations generated by the final layer of a pretrained model. Exploiting pretrained language models in this manner has boosted the performance considerably on many NLP tasks (Devlin et al., 2019; Clark et al., 2020;
∗Equal contribution.
![0_image_0.png](0_image_0.png)
Aghajanyan et al., 2021). However, for the end-to-end fine-tuning process to be fully parallelizable, it is also necessary to parallelize the training of the task-specific architecture. Unfortunately, due to the complexity of the output in many structured prediction tasks in natural language, e.g., in dependency parsing, state-of-the-art models still use architectures with limited parallelization during training (Mrini et al., 2020; Yang and Tu, 2022).
In an attempt to develop parsers parallelizable during training, a recent line of work recasts parsing as tagging (Li et al., 2018; Strzyz et al.,
2019; Kitaev and Klein, 2020; Amini and Cotterell, 2022). Under this approach, a parse tree is linearized into a sequence of tags.1 The benefit of such a paradigm is that tagging can be done by only adding a linear classifier on top of a pretrained language model and the tags can, thus, be predicted independently. This leads to a parser that is highly parallelizable and whose training can be easily harmonized with the (parallelizable) fine-tuning of pretrained language models. During decoding, an exact algorithm is used to recover a valid sequence of tags, which is then converted back to a parse tree.

1In some tagging-based dependency parsers, the cardinality of the set of tags even grows as a function of the length of the input sequence and, thus, is unbounded.
Kitaev and Klein (2020) were the first to propose a parsing-as-tagging scheme with a constant tag space for *constituency parsing* and, additionally, the first to achieve results competitive with the state-of-the-art non-parallelizable constituency parsers using such a tagger. However, for dependency parsing, all dependency parsing-as-tagging schemes in the literature (Li et al., 2018; Strzyz et al., 2019; Vacareanu et al., 2020) have infinite tag sets whose cardinality grows with the length of the input sequence, which limits such parsers' efficiency and generality (Strzyz et al., 2019). Moreover, in some cases, this growth hinders generalization to sentences longer than the longest training sentence.
Furthermore, tagging-based dependency parsers still do not perform competitively with the best-performing parsers in the literature (Li et al., 2018).
In this paper, we propose a novel way of framing projective dependency parsing as a tagging task.
Our approach makes use of 6 distinct tags, motivating us to name the scheme **hexatagger**. In our experiments, hexatagger achieves state-of-the-art performance on the English Penn Treebank (PTB; Marcus et al., 1993) test set. Notably, it outperforms parsers with more computationally expensive training procedures and extra constituency annotations, e.g., the parser developed by Mrini et al. (2020).
Furthermore, hexatagger achieves results competitive with Yang and Tu's (2022) parser on the Chinese Penn Treebank (CTB; Xue et al., 2005) test set and on 12 languages from the pseudo-projectivized data of the Universal Dependencies (UD2.2; Nivre et al., 2018) benchmark. In terms of efficiency, our experiments suggest that hexatagger is 10 times faster than previous top-performing parsers and consumes significantly less memory, despite using an exact dynamic program for decoding.
## 2 Hexatagging
In this section, we introduce hexatagging, a tagging scheme that consists of 6 unique tag types.
We further prove by construction that there exists an injective mapping between valid sequences of hexatags and dependency trees.
## 2.1 Binary Head Trees
Before going into the details of how to represent dependency trees with a sequence of tags, we introduce **binary head trees** (BHTs), a simple formalism that serves as a useful intermediary between dependency trees and sequences of hexatags. Intuitively, a BHT is a special form of a constituency tree where each internal node is either labeled L
when the head of the derived constituent is in the left subtree or R when the head is in the right subtree. See Fig. 1 for a visual depiction of a BHT. In the next theorem, we formally state the relationship between the set of dependency trees and BHTs.
Theorem 1. *There exists a bijective2 function that maps every projective dependency tree to a BHT.*

In the following two paragraphs, we sketch a construction showing that such a function exists, i.e., we describe how to map any projective dependency tree to a BHT and how to map any BHT back to a dependency tree.
Projective Dependency Trees to BHTs. To convert a dependency tree to a BHT, we start from the root and do a depth-first traversal of the dependency tree. To avoid spurious ambiguity (Eisner and Satta, 1999), we canonically order arcs of the tree by processing the arcs left to right and inside out.3 Algorithmically, converting a dependency tree to a BHT proceeds as follows. When we first visit a word, we push it onto a stack and proceed with visiting its dependents. When there is no dependent word left to visit, we create a new node
( L or R ) and attach the top two elements in the stack as the left and right child of this node. A step-by-step demonstration of this algorithm is shown in Fig. 2 and pseudocode is provided in Alg. 1.
BHTs to Projective Dependency Trees. To convert a BHT back to the dependency tree we follow Alg. 2. Algorithmically, we process BHT in a depth-first fashion. Upon visiting R or L nodes, we combine the top two elements in the stack by creating a dependency arc between them. The direction of the arc is determined by the label of the node ( R or L ). See Fig. 3 for an example.
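To make this recovery step concrete, the following sketch implements the procedure of Alg. 2 as described above; the BHT representation (integer word indices at the leaves, ("L" | "R", left, right) tuples at internal nodes) is an assumption of this sketch rather than the paper's data structure.

```python
# Sketch: a BHT is assumed to be either a leaf (an integer word index) or a
# tuple (label, left_subtree, right_subtree) with label in {"L", "R"}.

def bht_to_arcs(bht):
    """Return the list of dependency arcs (head, dependent) and the root word index."""
    arcs = []

    def head_of(node):
        if isinstance(node, int):           # leaf: a single word
            return node
        label, left, right = node
        left_head = head_of(left)
        right_head = head_of(right)
        if label == "R":                    # head of the constituent is in the right subtree
            arcs.append((right_head, left_head))
            return right_head
        else:                               # label == "L": head is in the left subtree
            arcs.append((left_head, right_head))
            return left_head

    root = head_of(bht)
    return arcs, root
```

The depth-first recursion here plays the role of the explicit stack used in Alg. 2.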
Once the dependency tree is converted to a BHT,
we can linearize it to a sequence of hexatags in a straightforward manner. Theorem 2 states the relationship between BHTs and hexatags formally.
Theorem 2. *There exists a total and injective function that maps every BHT to a valid hexatag sequence, i.e., every BHT can be mapped to a unique hexatag sequence. However, some hexatag sequences do not correspond to BHTs, i.e., the function is not surjective.*

2We remark that the bijectivity follows from a canonical ordering (left-to-right and inside-out) of a node's dependents.

3One can also process the right arcs first. In our experiments, however, we observed no significant difference in the performance of the parser; see App. C for more analysis.

![2_image_0.png](2_image_0.png)

![2_image_1.png](2_image_1.png)
In the following subsections, we prove by construction that such a function exists. Throughout the rest of the paper, we refer to those hexatag sequences that do correspond to BHTs as **valid**.
## 2.2 From BHT To Hexatags
To transform a given BHT to a sequence of hexatags, we enumerate the action sequence that a left-corner shift–reduce parser would take when parsing this BHT (Johnson, 1998). Left-corner parsers have actions that align more closely with the input sequence than top-down or bottom-up shift–reduce actions and, thus, offer a better linearization for tagging tasks (Amini and Cotterell, 2022). A simple explanation of this linearization process is given by Kitaev and Klein (2020, §3.1).
Their algorithm involves an in-order traversal of the tree. Upon visiting each node, we generate a tag that includes the direction of the arc that attaches the node to its parent, i.e., whether that node is a left or a right child of its parent, and the label of the node. When traversing a BHT, this paradigm results in 6 distinct tag types:
- →: this terminal node is the right child of its parent;
- →: this terminal node is the left child of its parent;
- R⇒ (L⇒): this non-terminal node is the right child of its parent and the head of the corresponding constituent is on the right (respectively, left) subtree;
- ⇒R( ⇒L): this non-terminal node is the left child of its parent and the head of the corresponding constituent is on the right (respectively, left) subtree.
For an input sequence w = w1 · · · wN, this process gives us a hexatag sequence of length 2N − 1.
Fig. 1 depicts the tree-to-tags transformation through an example.
Labeled Dependency Trees. When converting a *labeled* dependency tree to a sequence of hexatags, the arc labels must be encoded in the tags.
Therefore, while reading a terminal node, we concatenate the label of the arc that connects the node to its parent with the hexatag. In this case, the number of distinct tags would be O(|A|), where |A| is the number of unique arc labels. For example, in Fig. 1 the hexatag generated while processing she is: ⟨ →, nsubj⟩.
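As a sketch of the linearization just described, the in-order traversal below emits one tag per BHT node (and optionally attaches the arc label to terminal tags); the ASCII tag names stand in for the arrow symbols of the paper, and treating the root as a right child is a convention assumed here.

```python
# Sketch: leaves may be word indices or (word_index, arc_label) pairs; internal
# nodes are ("L" | "R", left_subtree, right_subtree) tuples.

def bht_to_hexatags(bht):
    tags = []

    def is_internal(node):
        return isinstance(node, tuple) and node[0] in ("L", "R")

    def visit(node, is_left_child):
        if not is_internal(node):
            # Terminal tag: the direction w.r.t. the parent (plus, for labeled
            # trees, the dependency relation of the arc to the parent).
            rel = node[1] if isinstance(node, tuple) else None
            tag = "term_left" if is_left_child else "term_right"
            tags.append((tag, rel) if rel else tag)
            return
        label, left, right = node
        visit(left, is_left_child=True)          # in-order: left subtree first
        direction = "left" if is_left_child else "right"
        tags.append(f"nonterm_{direction}_{label}")
        visit(right, is_left_child=False)

    visit(bht, is_left_child=False)              # root: right-child convention (assumed)
    return tags                                  # length 2N - 1 for N words
```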
## 2.3 From Hexatags To Dependency Trees
To transform a sequence of hexatags back to a dependency tree, we again go through a two-step process. First, we again interpret hexatags as actions in a left-corner shift–reduce transition system to construct a BHT. The actions in such a transition system are as follows:
- →: shift the leaf node into the stack;
- ⇒R( ⇒L): create a new node labeled R (respectively, L ), attach the top element in the stack as its left child, and attach a dummy node as its right child (∅ in step 2 in Fig. 3);
- →: pop the subtree on the top of the stack. Replace the dummy node in the subtree with the terminal node. Push the subtree back to the stack;
- R⇒ (L⇒): create a new node labeled R (respectively, L). Pop the top element of the stack, attach it as the new node's left child, and set a dummy node as the node's right child. Pop another subtree off the stack, identify the dummy node in that subtree, and replace it with the newly created subtree. Push the subtree back onto the stack (step 6 in Fig. 2);
![3_image_0.png](3_image_0.png)
## 3 Probability Model
In this section, we explain how to predict hexatags in parallel. Our tagging model predicts two hexatags for each word in the input sequence, with the exception of the last word, for which we only predict one tag. As discussed in §2.1, a hexatagger produces a sequence of 2N − 1 tags t = [t1, t2, . . . , t2N−1] for an input sequence of length N, w = w1w2 · · · wN. Therefore, an intuitive way to match the tag sequence with the input sequence is to assign two tags to each word.
We denote a training corpus S of M tuples of input sequences and tag sequences {(w^m, t^m)}_{m=1}^M.
To learn the scoring function over tags, we follow the same independence assumption as in
(Kitaev and Klein, 2020), i.e., the probability of predicting each tag is independent of the other tags given the input sequence. This assumption barely harms model performance (see Amini and Cotterell, 2022, Table 3), but significantly speeds up the training process by enabling each tag to be predicted in parallel, reducing the complexity by a factor of O(N). The training objective is to minimize the negative log-likelihood of the gold-standard tag sequences, i.e.
$$
\begin{aligned}
\mathcal{L}(\theta) &= -\sum_{(\mathbf{w},\mathbf{t})\in\mathcal{S}} \log p_\theta(\mathbf{t} \mid \mathbf{w}) && \text{(1a)}\\
&= -\sum_{(\mathbf{w},\mathbf{t})\in\mathcal{S}} \log \prod_{n=1}^{2N-1} p_\theta(t_n \mid \mathbf{w}) && \text{(1b)}\\
&= -\sum_{(\mathbf{w},\mathbf{t})\in\mathcal{S}} \Bigl(\sum_{n=1}^{N} \log p_\theta(t_{2n-1} \mid \mathbf{w}) + \sum_{n=1}^{N-1} \log p_\theta(t_{2n} \mid \mathbf{w})\Bigr) && \text{(1c)}
\end{aligned}
$$
where θ refers collectively to the parameters of the two linear projections and the parameters of the pretrained model. To obtain pθ(t2n | w) and pθ(t2n+1 | w), we apply two independent linear projections on the contextualized representation of wn 4 given by a pretrained model and convert that to a probability distribution using softmax.
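To illustrate how little task-specific machinery this requires, here is a minimal PyTorch-style sketch of the two classification heads; the encoder interface, tag-set sizes, and the masking of the missing final even-position tag are illustrative assumptions rather than the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn

class HexatagHead(nn.Module):
    """Two independent linear projections over contextualized word representations:
    one predicts the odd-position tags t_{2n-1}, the other the even-position tags t_{2n}."""

    def __init__(self, hidden_size: int, n_odd_tags: int, n_even_tags: int):
        super().__init__()
        self.odd_proj = nn.Linear(hidden_size, n_odd_tags)
        self.even_proj = nn.Linear(hidden_size, n_even_tags)
        # The last word has no even-position tag; mark it with -100 so it is ignored.
        self.loss_fn = nn.CrossEntropyLoss(ignore_index=-100)

    def forward(self, word_reprs, odd_tags, even_tags):
        # word_reprs: (batch, N, hidden), e.g. the last-subword vector of each word.
        odd_logits = self.odd_proj(word_reprs)     # (batch, N, n_odd_tags)
        even_logits = self.even_proj(word_reprs)   # (batch, N, n_even_tags)
        # Negative log-likelihood of Eq. (1): tags are independent given the input.
        loss = self.loss_fn(odd_logits.transpose(1, 2), odd_tags) \
             + self.loss_fn(even_logits.transpose(1, 2), even_tags)
        return loss, odd_logits, even_logits
```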
## 4 Decoding
Our goal in this section is to develop an efficient algorithm to find the highest-scoring hexatag sequence under the model developed in §3. As stated in Theorem 2, the transformation function between BHTs and hexatag sequences is not surjective, i.e.,
not all the tag sequences can be transformed back into a BHT. Therefore, we need to find a *valid* hexatag sequence with the maximum probability under the model that can be transformed back to a BHT. Once such hexatag sequence is found, we can follow the two-step algorithm described in §2.3 to obtain the corresponding dependency tree.
To find the highest-scoring valid hexatag sequence, we follow the linear-time algorithm developed by Kitaev and Klein (2020). For a hexatag sequence to be valid, we should be able to interpret it as actions in a left-corner shift–reduce transition system, described in §2.3. Concretely:
- The first action can only be → because other actions need at least one item in the stack;
- The actions L⇒, R⇒ can only be performed if there are at least two items in the stack;

4If a word consists of more than one subword, we apply the projection to the last subword.

| Model | bg | ca | cs | de | en | es | fr | it | nl | no | ro | ru | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Zhang et al. (2020) | 90.77 | 91.29 | 91.54 | 80.46 | 87.32 | 90.86 | 87.96 | 91.91 | 88.62 | 91.02 | 86.90 | 93.33 | 89.33 |
| Wang and Tu (2020) | 90.53 | 92.83 | 92.12 | 81.73 | 89.72 | 92.07 | 88.53 | 92.78 | 90.19 | 91.88 | 85.88 | 92.67 | 90.07 |
| +BERTmultilingual | | | | | | | | | | | | | |
| Wang and Tu (2020) | 91.30 | 93.60 | 92.09 | 82.00 | 90.75 | 92.62 | 89.32 | 93.66 | 91.21 | 91.74 | 86.40 | 92.61 | 90.61 |
| Dozat and Manning (2017) | 90.30 | 94.49 | 92.65 | 85.98 | 91.13 | 93.78 | 91.77 | 94.72 | 91.04 | 94.21 | 87.24 | 94.53 | 91.82 |
| Yang and Tu (2022) | 91.10 | 94.46 | 92.57 | 85.87 | 91.32 | 93.84 | 91.69 | 94.78 | 91.65 | 94.28 | 87.48 | 94.45 | 91.96 |
| Hexatagger | 92.87 | 93.79 | 92.82 | 85.18 | 90.85 | 93.17 | 91.50 | 94.72 | 91.89 | 93.95 | 87.54 | 94.03 | 91.86 |

Table 1: Results on 12 languages from the UD2.2 benchmark.
- After performing all the actions, the stack should contain a single element.
The above shows that the validity of a hexatag sequence only depends on the *number* of elements in the stack at each point of the derivation.5
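Since validity depends only on the stack size, the search for the highest-scoring valid sequence can be written as a small dynamic program over (position, stack size) states, in the spirit of footnote 5. The sketch below is illustrative rather than the authors' implementation; in particular, the per-tag stack effects passed in (+1 for the shift-like terminal tag, -1 for tags that combine two stack items, 0 otherwise) are an assumption that must be matched to the transition system of §2.3.

```python
import math

def decode_valid_hexatags(log_probs, stack_effect, max_depth):
    """Find the highest-scoring valid hexatag sequence.

    log_probs:    list over positions of dicts {tag: log-probability}.
    stack_effect: dict {tag: change in stack size}; a tag with effect e is assumed
                  to require 1 - e items on the stack (0 for a shift, 2 for a combine).
    """
    best = {0: (0.0, [])}                      # stack size -> (score, tag sequence)
    for scores in log_probs:
        new_best = {}
        for depth, (score, tags) in best.items():
            for tag, lp in scores.items():
                eff = stack_effect[tag]
                if depth < 1 - eff:            # not enough items on the stack
                    continue
                nd = depth + eff
                if nd > max_depth:
                    continue
                cand = score + lp
                if nd not in new_best or cand > new_best[nd][0]:
                    new_best[nd] = (cand, tags + [tag])
        best = new_best
    # A valid derivation ends with exactly one element on the stack.
    return best.get(1, (-math.inf, None))[1]
```

For a fixed maximum stack depth d, this explores at most N × d states, matching the lattice view of footnote 5.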
## 5 Experiments
| Model | PTB UAS | PTB LAS | CTB UAS | CTB LAS |
|-------|---------|---------|---------|---------|
| Zhou and Zhao (2019)∗ | 97.0 | 95.4 | 91.2 | 89.2 |
| Mrini et al. (2020)∗ | 97.4 | 96.3 | 94.6 | 89.3 |
| Chen and Manning (2014) | 91.8 | 89.6 | 83.9 | 82.4 |
| Dozat and Manning (2017) | 95.7 | 94.1 | 89.3 | 88.2 |
| Yang and Tu (2022)# | 97.4 | 95.8 | 93.5 | 92.5 |
| Hexatagger | 97.4 | 96.4 | 93.2 | 91.9 |

Table 2: Results on PTB and CTB. ∗ indicates usage of extra constituency annotation. # is our reimplementation using the same pretrained encoder with hexatagger.
We conduct experiments on the English Penn Treebank (PTB; Marcus et al., 1993), the Chinese Penn Treebank (CTB; Xue et al., 2005), and the Universal Dependencies 2.2 (UD2.2; Nivre et al.,
2018). For UD2.2, we adopt the pseudo-projective transformation (Nivre and Nilsson, 2005) to convert non-projective trees into projective trees following previous work (Wang and Tu, 2020; Yang and Tu, 2022). We report dataset statistics in App. E and hyperparameter settings in App. F.
Accuracy. We train the hexatagger model based on XLNet (Yang et al., 2019) and report the results on PTB and CTB in Table 2. Furthermore, we evaluate hexatagger on a set of 12 typologically diverse languages from the UD corpus, where we use Multilingual BERT (Devlin et al., 2019) as the underlying model (see Table 1). On PTB, we observe that hexatagger achieves state-of-the-art results compared to models with custom architectures and, in some cases, even extra annotation. On CTB and UD, hexatagger follows the best performance closely.

5Specifically, the decoding algorithm can be thought of as constructing a lattice where each node corresponds to the number of elements in the stack for each transition step (N × d nodes for a maximum stack size of d, d ≤ N). Each transition corresponds to performing a valid action. The score of the tag at step n is set to the negative log probability − log pθ(tn | w) of the corresponding hexatag given by the model. Finally, we remark that our decoding algorithm is essentially a shortest-path dynamic program that finds the highest-scoring valid hexatag sequence. See Amini and Cotterell (2022, §5.1) for a deeper discussion of this point.
Efficiency. We compare the efficiency of hexatagger with biaffine modules,6 which are the backbone of many neural graph-based parsers (Kiperwasser and Goldberg, 2016; Dozat and Manning, 2017; Mrini et al., 2020; Yang and Tu, 2022). As depicted in Table 3, we observe that our hexatagger is an order of magnitude faster and consumes less memory. Further analysis is included in App. C.
| Sent. length | Hexatagger Speed (sent/s) ↑ | Biaffine Speed (sent/s) ↑ | Hexatagger Memory (GB) ↓ | Biaffine Memory (GB) ↓ |
|---|---|---|---|---|
| 32 | 2916 | 493 | 2.9 | 4.5 |
| 64 | 3011 | 328 | 3.0 | 10.1 |
| 128 | 2649 | 202 | 3.7 | 30.6 |
| 256 | 3270 | 98 | 4.5 | 56.2⋆ |
| overall | 3176 | 338 | 3.0 | 10.6 |

Table 3: Comparison of parsing speed and memory consumption on the PTB test set. Results are averaged over 3 random runs on the same server with one Nvidia A100-80GB GPU using BERT-large as the encoder. We use a batch size of 128 sentences, except for ⋆, which uses 64, as a larger batch otherwise results in an out-of-memory error.
## 6 Conclusion
In summary, hexatagging, our novel scheme, offers a parallelizable and efficiently decodable backbone for dependency parsing. Without relying on custom architecture for dependency parsing, the hexatagger achieves state-of-the-art accuracy on several datasets using no more than a pretrained language model and linear classifiers.
6By biaffine model we refer to a slim parameterization of a dependency parser that scores the existence of a dependency between wi and wj using a biaffine attention layer over the words' contextualized representations.
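For reference, the biaffine arc scorer described in footnote 6 can be sketched as follows; the MLP dimensions, nonlinearity, and bias handling here are illustrative choices and do not reproduce any particular cited parser exactly.

```python
import torch
import torch.nn as nn

class BiaffineArcScorer(nn.Module):
    """Scores every candidate head-dependent pair (w_i, w_j) with a biaffine form
    over the words' contextualized representations (cf. footnote 6)."""

    def __init__(self, hidden_size: int, arc_dim: int = 500):
        super().__init__()
        self.head_mlp = nn.Linear(hidden_size, arc_dim)
        self.dep_mlp = nn.Linear(hidden_size, arc_dim)
        # The extra input dimension on the head side implements the usual bias term.
        self.weight = nn.Parameter(torch.zeros(arc_dim + 1, arc_dim))

    def forward(self, h):                        # h: (batch, N, hidden)
        heads = torch.relu(self.head_mlp(h))     # (batch, N, arc_dim)
        deps = torch.relu(self.dep_mlp(h))       # (batch, N, arc_dim)
        ones = heads.new_ones(heads.shape[:-1] + (1,))
        heads = torch.cat([heads, ones], dim=-1) # (batch, N, arc_dim + 1)
        # scores[b, i, j]: score of word i being the head of word j.
        return torch.einsum("bix,xy,bjy->bij", heads, self.weight, deps)
```

Because every one of the N × N pairs is scored, such modules dominate the memory footprint reported in Table 3.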
## Limitations
Non-projectivity. The primary theoretical limitation of hexatagger is that it can only produce projective dependency trees. We would like to explore the possibility of extending hexatagger to non-projective parsing for future work.
Interpretability. As a trade-off for efficiency, hexatagger does not model dependency arcs directly. Compared to graph-based models that explicitly compute arc scores between pairs of words, it is more difficult to interpret the output of hexatagger.
## Ethics Statement
We do not believe the work presented here further amplifies biases already present in the datasets.
Therefore, we foresee no ethical concerns in this work.
## Acknowledgments
We would like to thank Tim Vieira for his invaluable feedback throughout the process of this paper.
Afra Amini is supported by an ETH AI Center doctoral fellowship.
## References
Armen Aghajanyan, Anchit Gupta, Akshat Shrivastava, Xilun Chen, Luke Zettlemoyer, and Sonal Gupta.
2021. Muppet: Massive multi-task representations with pre-finetuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5799–5811, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Afra Amini and Ryan Cotterell. 2022. On parsing as tagging. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 8884–8900, Abu Dhabi, United Arab Emirates.
Association for Computational Linguistics.
Srinivas Bangalore and Aravind K. Joshi. 1999. Supertagging: An approach to almost parsing. *Computational Linguistics*, 25(2):237–265.
Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks.
In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 740–750, Doha, Qatar. Association for Computational Linguistics.
Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: pretraining text encoders as discriminators rather than
generators. In International Conference on Learning Representations.
Shay B. Cohen and Daniel Gildea. 2016. Parsing Linear Context-Free Rewriting Systems with Fast Matrix Multiplication. *Computational Linguistics*,
42(3):421–455.
Marie-Catherine de Marneffe and Christopher D. Manning. 2008. The Stanford typed dependencies representation. In Coling 2008: Proceedings of the workshop on Cross-Framework and Cross-Domain Parser Evaluation, pages 1–8, Manchester, UK. Coling 2008 Organizing Committee.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Timothy Dozat and Christopher D. Manning. 2017.
Deep biaffine attention for neural dependency parsing. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
Timothy Dozat, Peng Qi, and Christopher D. Manning.
2017. Stanford's graph-based neural dependency parser at the CoNLL 2017 shared task. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 20–30, Vancouver, Canada. Association for Computational Linguistics.
Jason Eisner. 1996. Efficient normal-form parsing for combinatory categorial grammar. In *Proceedings of* the 34th Annual Meeting of the Association for Computational Linguistics (ACL), pages 79–86, Santa Cruz.
Jason Eisner and Giorgio Satta. 1999. Efficient parsing for bilexical context-free grammars and head automaton grammars. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 457–464, College Park, Maryland, USA. Association for Computational Linguistics.
Mark Johnson. 1998. Finite-state approximation of constraint-based grammars using left-corner grammar transforms. In *36th Annual Meeting of the Association for Computational Linguistics and 17th* International Conference on Computational Linguistics, Volume 1, pages 619–623, Montreal, Quebec, Canada. Association for Computational Linguistics.
Eliyahu Kiperwasser and Miguel Ballesteros. 2018.
Scheduled multi-task learning: From syntax to translation. *Transactions of the Association for Computational Linguistics*, 6:225–240.
Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. *Transactions of the* Association for Computational Linguistics, 4:313–
327.
Nikita Kitaev and Dan Klein. 2020. Tetra-tagging:
Word-synchronous parsing with linear-time inference.
In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 6255– 6261, Online. Association for Computational Linguistics.
Taku Kudo and Yuji Matsumoto. 2002. Japanese dependency analysis using cascaded chunking. In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002).
Marco Kuhlmann, Carlos Gómez-Rodríguez, and Giorgio Satta. 2011. Dynamic programming algorithms for transition-based dependency parsers. In *Proceedings of the 49th Annual Meeting of the Association for* Computational Linguistics: Human Language Technologies, pages 673–682, Portland, Oregon, USA.
Association for Computational Linguistics.
Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2016.
Global neural CCG parsing with optimality guarantees. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2366–2376, Austin, Texas. Association for Computational Linguistics.
Zuchao Li, Jiaxun Cai, Shexia He, and Hai Zhao. 2018.
Seq2seq dependency parsing. In *Proceedings of the* 27th International Conference on Computational Linguistics, pages 3203–3214, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330.
Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajic. 2005. ˇ Non-projective dependency parsing using spanning tree algorithms. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 523–530, Vancouver, British Columbia, Canada. Association for Computational Linguistics.
Ryan McDonald and Giorgio Satta. 2007. On the complexity of non-projective data-driven dependency parsing. In Proceedings of the Tenth International Conference on Parsing Technologies, pages 121–132, Prague, Czech Republic. Association for Computational Linguistics.
Khalil Mrini, Franck Dernoncourt, Quan Hung Tran, Trung Bui, Walter Chang, and Ndapa Nakashole.
2020. Rethinking self-attention: Towards interpretability in neural parsing. In *Findings of the Association for Computational Linguistics: EMNLP*
2020, pages 731–742, Online. Association for Computational Linguistics.
Joakim Nivre. 2003. An efficient algorithm for projective dependency parsing. In *Proceedings of the* Eighth International Conference on Parsing Technologies, pages 149–160, Nancy, France.
Joakim Nivre, Mitchell Abrams, Željko Agic, Lars ´
Ahrenberg, Lene Antonsen, Maria Jesus Aranzabe, Gashaw Arutie, Masayuki Asahara, Luma Ateyah, Mohammed Attia, Aitziber Atutxa, Liesbeth Augustinus, Elena Badmaeva, Miguel Ballesteros, Esha Banerjee, Sebastian Bank, Verginica Barbu Mititelu, John Bauer, Sandra Bellato, Kepa Bengoetxea, Riyaz Ahmad Bhat, Erica Biagetti, Eckhard Bick, Rogier Blokland, Victoria Bobicev, Carl Börstell, Cristina Bosco, Gosse Bouma, Sam Bowman, Adriane Boyd, Aljoscha Burchardt, Marie Candito, Bernard Caron, Gauthier Caron, Gül¸sen Cebiroglu Eryi ˘ git, Giuseppe G. A. Celano, Savas Cetin, ˘
Fabricio Chalub, Jinho Choi, Yongseok Cho, Jayeol Chun, Silvie Cinková, Aurélie Collomb, Çagrı Çöl- ˘ tekin, Miriam Connor, Marine Courtin, Elizabeth Davidson, Marie-Catherine de Marneffe, Valeria de Paiva, Arantza Diaz de Ilarraza, Carly Dickerson, Peter Dirix, Kaja Dobrovoljc, Timothy Dozat, Kira Droganova, Puneet Dwivedi, Marhaba Eli, Ali Elkahky, Binyam Ephrem, Tomaž Erjavec, Aline Etienne, Richárd Farkas, Hector Fernandez Alcalde, Jennifer Foster, Cláudia Freitas, Katarína Gajdošová, Daniel Galbraith, Marcos Garcia, Moa Gärdenfors, Kim Gerdes, Filip Ginter, Iakes Goenaga, Koldo Gojenola, Memduh Gökırmak, Yoav Goldberg, Xavier Gómez Guinovart, Berta Gonzáles Saavedra, Matias Grioni, Normunds Gruz¯ ¯ıtis, Bruno Guillaume, Céline Guillot-Barbance, Nizar Habash, Jan Hajic, Jan ˇ
Hajic jr., Linh Hà M ˇ y, Na-Rae Han, Kim Harris, Dag ˜
Haug, Barbora Hladká, Jaroslava Hlavácová, Florinel ˇ Hociung, Petter Hohle, Jena Hwang, Radu Ion, Elena Irimia, Tomáš Jelínek, Anders Johannsen, Fredrik Jørgensen, Hüner Ka¸sıkara, Sylvain Kahane, Hiroshi Kanayama, Jenna Kanerva, Tolga Kayadelen, Václava Kettnerová, Jesse Kirchner, Natalia Kotsyba, Simon Krek, Sookyoung Kwak, Veronika Laippala, Lorenzo Lambertino, Tatiana Lando, Septina Dian Larasati, Alexei Lavrentiev, John Lee, Phương Lê Hô`ng, Alessandro Lenci, Saran Lertpradit, Herman Leung, Cheuk Ying Li, Josie Li, Keying Li, KyungTae Lim, Nikola Ljubešic, Olga Loginova, ´
Olga Lyashevskaya, Teresa Lynn, Vivien Macketanz, Aibek Makazhanov, Michael Mandl, Christopher Manning, Ruli Manurung, Cat˘ alina M ˘ ar˘ an- ˘
duc, David Marecek, Katrin Marheinecke, Héctor ˇ
Martínez Alonso, André Martins, Jan Mašek, Yuji Matsumoto, Ryan McDonald, Gustavo Mendonça, Niko Miekka, Anna Missilä, Cat˘ alin Mititelu, Yusuke ˘
Miyao, Simonetta Montemagni, Amir More, Laura Moreno Romero, Shinsuke Mori, Bjartur Mortensen, Bohdan Moskalevskyi, Kadri Muischnek, Yugo Murawaki, Kaili Müürisep, Pinkey Nainwani, Juan Ignacio Navarro Horñiacek, Anna Nedoluzhko, Gunta Nešpore-Berzkalne, L ¯ ương Nguy˜ên Thi
., Huyê`n Nguy˜ên Thi
. Minh, Vitaly Nikolaev, Rattima Nitisaroj, Hanna Nurmi, Stina Ojala, Adédayò. Olúòkun, Mai Omura, Petya Osenova, Robert Östling, Lilja Øvrelid, Niko Partanen, Elena Pascual, Marco Passarotti, Agnieszka Patejuk, Siyao Peng, Cenel-Augusto Perez, Guy Perrier, Slav Petrov, Jussi Piitulainen, Emily Pitler, Barbara Plank, Thierry Poibeau, Martin Popel, Lauma Pretkalnin, a, Sophie Prévost, Prokopis Prokopidis, Adam Przepiórkowski, Tiina Puolakainen, Sampo Pyysalo, Andriela Rääbis, Alexandre Rademaker, Loganathan Ramasamy, Taraka Rama, Carlos Ramisch, Vinit Ravishankar, Livy Real, Siva Reddy, Georg Rehm, Michael Rießler, Larissa Rinaldi, Laura Rituma, Luisa Rocha, Mykhailo Romanenko, Rudolf Rosa, Davide Rovati, Valentin Ros, ca, Olga Rudina, Shoval Sadde, Shadi Saleh, Tanja Samardžic, Stephanie Samson, Manuela San- ´
guinetti, Baiba Saul¯ıte, Yanin Sawanakunanon, Nathan Schneider, Sebastian Schuster, Djamé Seddah, Wolfgang Seeker, Mojgan Seraji, Mo Shen, Atsuko Shimada, Muh Shohibussirri, Dmitry Sichinava, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simkó, Mária Šimková, Kiril Simov, Aaron Smith, Isabela Soares-Bastos, Antonio Stella, Milan Straka, Jana Strnadová, Alane Suhr, Umut Sulubacak, Zsolt Szántó, Dima Taji, Yuta Takahashi, Takaaki Tanaka, Isabelle Tellier, Trond Trosterud, Anna Trukhina, Reut Tsarfaty, Francis Tyers, Sumire Uematsu, Zdenka Urešová, Larraitz Uria, Hans Uszkor- ˇ
eit, Sowmya Vajjala, Daniel van Niekerk, Gertjan van Noord, Viktor Varga, Veronika Vincze, Lars Wallin, Jonathan North Washington, Seyi Williams, Mats Wirén, Tsegay Woldemariam, Tak-sum Wong, Chunxiao Yan, Marat M. Yavrumyan, Zhuoran Yu, Zdenek Žabokrtský, Amir Zeldes, Daniel Zeman, ˇ
Manying Zhang, and Hanzhi Zhu. 2018. Universal dependencies 2.2. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University.
Joakim Nivre and Jens Nilsson. 2005. Pseudoprojective dependency parsing. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 99–106, Ann Arbor, Michigan. Association for Computational Linguistics.
Michalina Strzyz, David Vilares, and Carlos GómezRodríguez. 2019. Viable dependency parsing as sequence labeling. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pages 717–723, Minneapolis, Minnesota. Association for Computational Linguistics.
Robert Vacareanu, George Caique Gouveia Barbosa, Marco A. Valenzuela-Escárcega, and Mihai Surdeanu. 2020. Parsing as tagging. In *Proceedings* of the Twelfth Language Resources and Evaluation Conference, pages 5225–5231, Marseille, France. European Language Resources Association.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz
Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.
Xinyu Wang and Kewei Tu. 2020. Second-order neural dependency parsing with message passing and end-to-end training. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 93–99, Suzhou, China. Association for Computational Linguistics.
Naiwen Xue, Fei Xia, Fu-dong Chiou, and Marta Palmer.
2005. The penn chinese treebank: Phrase structure annotation of a large corpus. *Natural Language Engineering*, 11(2):207–238.
Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machines. In *Proceedings of the Eighth International* Conference on Parsing Technologies, pages 195–206, Nancy, France.
Songlin Yang and Kewei Tu. 2022. Headed-span-based projective dependency parsing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2188–2200, Dublin, Ireland. Association for Computational Linguistics.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019.
Xlnet: Generalized autoregressive pretraining for language understanding. In *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc.
Yu Zhang, Zhenghua Li, and Min Zhang. 2020. Efficient second-order TreeCRF for neural dependency parsing. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 3295–3305, Online. Association for Computational Linguistics.
Yue Zhang and Stephen Clark. 2008. A tale of two parsers: Investigating and combining graph-based and transition-based dependency parsing. In *Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing*, pages 562–571, Honolulu, Hawaii. Association for Computational Linguistics.
Junru Zhou and Hai Zhao. 2019. Head-Driven Phrase Structure Grammar parsing on Penn Treebank. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2396–
2408, Florence, Italy. Association for Computational Linguistics.
Ran Zmigrod, Tim Vieira, and Ryan Cotterell. 2020.
Please mind the root: Decoding arborescences for dependency parsing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4809–4819, Online. Association for Computational Linguistics.
## A Algorithms
Algorithm 1 Create a BHT from a dependency tree.

![8_image_1.png](8_image_1.png)

![8_image_0.png](8_image_0.png)

Algorithm 2 Create a dependency tree from a BHT.

![8_image_4.png](8_image_4.png)

![8_image_3.png](8_image_3.png)
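Because the pseudocode above is only available as figures here, we add a hedged Python sketch of Alg. 1; the representation of the dependency tree and of BHT nodes is our own, and the order in which left and right dependents are attached is one possible canonical choice (cf. footnote 3 and the left-first/right-first comparison in App. C).

```python
# Sketch of Alg. 1 (an illustration, not the authors' code): build a BHT from a
# projective dependency tree. `deps` is assumed to map each head index to the
# list of its dependents' indices; leaves are word indices and internal nodes
# are ("L" | "R", left_subtree, right_subtree) tuples (an assumed representation).

def dep_tree_to_bht(root, deps):
    def build(head):
        left = sorted(d for d in deps.get(head, []) if d < head)
        right = sorted(d for d in deps.get(head, []) if d > head)
        tree = head                              # start from the head word itself
        for d in reversed(left):                 # left dependents, inside-out
            tree = ("R", build(d), tree)         # head sits in the right subtree
        for d in right:                          # right dependents, inside-out
            tree = ("L", tree, build(d))         # head sits in the left subtree
        return tree

    return build(root)
```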
## B Related Work
Traditionally, approaches to dependency parsing have been taxonomized into graph-based and transition-based parsers. The authors of this paper take the stance that this distinction is misleading because the difference lies not in the models themselves, but rather in whether exact or approximate inference algorithms are employed. For instance, Kuhlmann et al. (2011) gives exact algorithms for transition-based dependency parsers, which exposes the inability to formally distinguish graph-based and transition-based parsers. Thus, we classify our related work into sections: exact and approximate decoding. Further, we review works on tagging-based parsing which is the most relevant line of work to this paper.
Exact Decoding. Most exact algorithms for projective dependency parsing models apply a modified form of the CKY algorithm on nested dependency trees. The best runtime among the commonly deployed algorithms is O(N³) (Eisner, 1996), but algorithms based on fast matrix multiplication exist and can achieve a lower runtime bound (Cohen and Gildea, 2016). However, exact decoding of *non-projective parsers* is intractable unless independence assumptions are made, e.g., the edge-factored assumption (McDonald and Satta, 2007). Edge-factored parsers (McDonald et al., 2005; Dozat et al., 2017) construct graphs by scoring all possible arcs between each pair of words. They then use maximum spanning tree (MST) algorithms for decoding to build the valid dependency trees with maximum score in O(N²) (Zmigrod et al., 2020). The discussed algorithms are exact in inferring the dependency structure; however, they are neither fast nor parallelizable.
![8_image_2.png](8_image_2.png)

Approximate Decoding. Despite not being exact, transition-based parsers offer faster and typically linear-time parsing algorithms (Kudo and Matsumoto, 2002; Yamada and Matsumoto, 2003; Nivre, 2003). The dependency tree is inferred with a greedy search through transition system actions.
Following this approach, actions are not predicted in parallel and the configuration of the transition system (stack and buffer) needs to be modeled with a neural network (Chen and Manning, 2014), which prevents using pretrained models out of the box.
Tagging-based parsing. Inspired by Bangalore and Joshi's (1999) seminal work *supertagging*, a recent line of work aims to utilize pretrained models and parse dependency trees by inferring tags for each word in the input sequence. Li et al. (2018);
Kiperwasser and Ballesteros (2018) predict the relative position of the dependent with respect to its parent as the tag. They then use beam tree constraints (Lee et al., 2016) to infer valid dependency trees. Strzyz et al. (2019) provides a framework for analyzing similar tagging schemes. Although these works have demonstrated potential in this area, none achieved state-of-the-art results compared to custom architectures and algorithms developed for dependency parsing. Additionally, the output space, or size of the tag set, is unrestricted, which limits the efficiency of this approach.
## C Analysis
LEFT-FIRST vs. RIGHT-FIRST. We examine the effect of the two orders of binarization of Alg. 1 in Table 4. In our experiments, the choice of left-first or right-first order has little to no effect on parsing performance.
| Model | PTB UAS | PTB LAS | CTB UAS | CTB LAS |
|---|---|---|---|---|
| Right-first | 97.2 | 96.3 | 93.2 | 91.9 |
| Left-first | 97.4 | 96.4 | 93.1 | 91.9 |

Table 4: Comparison of right-first and left-first binarization on the PTB and CTB test sets.
## D Efficiency Evaluation
For efficiency comparison, we use BERT-large as the base feature encoder for both Hexatagger and Biaffine. We use the English PTB test set and truncate or pad the input sentences to the control length. The results are averaged over 3 random runs on the same server with one Nvidia A100-80GB
GPU. The other experimental settings are kept the same (i.e., the version of PyTorch and Transformer, FP32 precision, batching).
## E Datasets
Preprocessing. Following previous work (Kiperwasser and Goldberg, 2016; Dozat and Manning, 2017), the dependency annotations are derived by the Stanford Dependency converter v3.3.0
(de Marneffe and Manning, 2008) from the treebank annotations. Punctuation is omitted for evaluation. Gold part-of-speech tags are provided to the model both during training and evaluation following the code released by Mrini et al. (2020).
Some other authors use system-predicted part-of-speech tags (Zhou and Zhao, 2019) or use mixed configurations. E.g., Yang and Tu (2022) uses gold part-of-speech tags on CTB and UD, while not using any on PTB, and Dozat and Manning (2017) uses gold part-of-speech tags on CTB but system-predicted ones on PTB. Our preliminary experiments show that removing the usage of part-of-speech information barely affects the UAS metric, and gives us a performance of 97.4 UAS and 95.8 LAS on PTB.
Splits. All the datasets splits are consistent with previous work. For PTB, we follow the standard split of Marcus et al. (1993), resulting in 39,832 sentences for training, 1,700 for development, and 2,416 for testing. For CTB, we follow the split of Zhang and Clark (2008), resulting in 16,091 sentences for training, 803 for development, and 1,910 for testing. For UD2.2, we follow Yang and Tu
(2022) and use the standard splits of the following corpora for experiments: BG-btb, CA-ancora, CSpdt, DE-gsd, EN-ewt, ES-ancora, FR-gsd, IT-isdt, NL-alpino, NO-rrt, RO-rrt, RU-syntagrus.
Licenses. The PTB and CTB datasets are licensed under LDC User Agreement. The UD2.2 dataset is licensed under the Universal Dependencies License Agreement.
## F Hyperparameter Settings
We use the Python NLTK package to process the datasets, i.e., converting CoNLL-U formatted data to dependency trees, extracting dependency arcs from dependency trees for evaluation, and implementing Alg. 1 and 2. For UD, we apply MaltParser v1.9.2 to pseudo-projectivize the non-projective trees (Nivre and Nilsson, 2005).
We use xlnet-large-cased for English PTB, chinese-xlnet-mid for CTB, and bert-multilingual-cased for UD.
The dimension of POS tag embedding is set to 256 for all experiments. On top of concatenated pretrained representations and POS embedding, we use a 3-layer BiLSTM with a hidden size of 768 for base-sized models (bert-multilingual-cased on UD)
and 1024 for large-sized models (xlnet-large-cased on PTB and chinese-xlnet-mid on CTB).
Dropout layers with a rate of 0.33 are applied after the concatenated embedding layer, between LSTM layers, and before the MLP projection layer to hexatags.
For training, we used AdamW with a learning rate of 2e−5 for pretrained LMs and 1e−4 for POS
embedding, BiLSTM, and MLP. The gradient clipping threshold is set to 1.0. The batch size is set to 32.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations
✓ A2. Did you discuss any potential risks of your work?
Ethics Statement
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Sec. 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
5, App. D
✓ B1. Did you cite the creators of artifacts you used?
5
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
App D
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
App D
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
App D
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
App D
## C ✓ **Did You Run Computational Experiments?**
5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
App. E
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
App. E
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
App. F
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
zhang-yu-2023-understanding | Understanding Demonstration-based Learning from a Causal Perspective | https://aclanthology.org/2023.acl-short.125 | Demonstration-based learning has shown impressive performance in exploiting pretrained language models under few-shot learning settings. It is interesting to see that demonstrations, even those composed of random tokens, can still improve performance. In this paper, we build a Structural Causal Model (SCM) to understand demonstration-based learning from causal perspectives and interpret random demonstrations as interventions on the demonstration variable within the causal model. We investigate the causal effects and find that the concurrence of specific words in the demonstration will induce bias, while randomly sampled tokens in the demonstration do not. Based on this finding, we further propose simple ways to construct random demonstrations, which even outperform hand-crafted, meaningful demonstrations on public sequence labeling benchmarks. | # Understanding Demonstration-Based Learning From A Causal Perspective
Ruiyi Zhang Adobe Research [email protected]
Tong Yu Adobe Research [email protected]
## Abstract
Demonstration-based learning has shown impressive performance in exploiting pretrained language models under few-shot learning settings. It is interesting to see that demonstrations, even those composed of random tokens, can still improve performance. In this paper, we build a Structural Causal Model (SCM)
to understand demonstration-based learning from causal perspectives and interpret random demonstrations as interventions on the demonstration variable within the causal model. We investigate the causal effects and find that the concurrence of specific words in the demonstration will induce bias, while randomly sampled tokens in the demonstration do not. Based on this finding, we further propose simple ways to construct random demonstrations, which even outperform hand-crafted, meaningful demonstrations on public sequence labeling benchmarks1.
## 1 Introduction
Large pretrained language models (PLMs) have recently shown great progress (Devlin et al., 2019; Liu et al., 2019a; Lewis et al., 2020; Xie et al.,
2020; Huang et al., 2021). These models, such as GPT-4 (Peng et al., 2023), PALM (Anil et al.,
2023), and Llama (Touvron et al., 2023), have shown human-level capability with only a few illustrative examples (Lake et al., 2015). Specifically, demonstration-based learning has been introduced to augment the input with demonstrations, i.e., the input and expected output pairs. Brown et al. (2020) simply picked up to a small number of sampled instances and directly concatenated them with the input to perform *in-context learning*.
Lee et al. (2022) concatenated the input with task demonstrations to create augmented input and fed them into PLMs to obtain improved token representations to do sequence labeling in a classifier-based fine-tuning way.
1Code available at: github.com/zhangry868/RandDemo

However, how and why such demonstrations help still remains unclear, and there has been a growing amount of work investigating the mechanisms of demonstration-based learning. Min et al.
(2022) investigated in-context learning with demonstrations under zero-shot settings and found that input with random labels can still produce performance comparable to that of correct labels. Zhang et al. (2022a) replaced every token in the demonstration with random ones and still surprisingly observed good few-shot learners even when the demonstration is meaningless. These observations conflict with some existing hypotheses (Gao et al.,
2021; Lee et al., 2022) that models are learning meaningful knowledge from demonstrations.
To better understand demonstration-based learning, we take a deeper dive into the random construction of demonstrations. Specifically, we first build a Structural Causal Model (SCM) to understand demonstration-based learning from a *Causal Perspective*. A causal view is developed to explore the spurious correlations between demonstrations and few-shot training samples. Based on the intervention on the demonstration variable in the SCM, we design multiple simple and effective ways to construct random demonstrations. These methods are evaluated on structured prediction tasks with carefully designed experiment setups. Empirical results show that carefully designed random demonstrations can outperform meaningful demonstrations under the few-shot learning setting. This finding suggests that meaningless demonstrations can still provide valid information for PLMs. Moreover, random demonstrations allow the learning algorithm to identify important features and patterns in the data more effectively than homogeneous handcrafted demonstrations.
## 2 Background
In this section, we introduce the background of sequence labeling and demonstration-based learning.
|   |   |
|---|---|
| Sentence: | The Algerian War of Independence marked the end of French colonial rule in North Africa . |
| Labels: | O B-MISC I-MISC I-MISC I-MISC O O O O B-ORG O O O B-LOC I-LOC O |
|   | Biased: French -> [ORG]    Desired: French -> [MISC] |
| Standard: | [SEP] The unnamed suspect left the British colony after being detained and then freed by the Independent Commission Against Corruption ( ICAC ) , the radio said . Independent Commission Against Corruption is ORG . [SEP] [...] |
| Random: | [SEP] Lebanon First Ed ##up CBOE suspect CB Chicago K Chicago Board Options Exchange ##ty Paul Gascoigne CBOE Monday Les into vintage I ##tion Ferdinand ##ca Op [SEP] [...] |

![1_image_0.png](1_image_0.png)

Table 1: An example from the CoNLL03 dataset with different demonstrations. The NER model takes both the sentence and a demonstration as its inputs. The top two rows show examples of the NER model inputs and outputs with standard demonstrations. A biased prediction for 'French' is caused by the demonstration bias. The bottom three lines show three different demonstrations: Standard and Random demonstrations. The notation '[SEP] [...]' indicates that there are demonstrations for other classes, which have been omitted due to limited space.
Sequence Labeling Given an input sentence x =
[x1, x2, · · · , xn] composed of n tokens, the sequence labeling task is to predict a tag yi ∈
Y ∪ {O} for each token xi, where Y is a predefined set of tags, and O denotes being outside a tagged span. In the few-shot setting, we only have a K-shot support set S for training, which contains K examples for each tag type; this setting is usually referred to as K-shot learning. Modern sequence labeling models are usually composed of an encoder and a classification head. The encoders are PLMs such as BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019b), which provide contextualized representations h = [h1, h2, · · · , hn] for each token given the natural language sequence x = [x1, x2, · · · , xn]. The classification head takes these contextualized representations and predicts the label li for each token xi. The model is optimized with the standard cross-entropy loss.
Demonstration-based Learning Given some demonstration x˜, we concatenate the original input x with its demonstration x˜ as [x; x˜]. We then feed the demonstration-augmented input [x; x˜] into the encoder, and get the contextualized representation
[h; h˜]. The classification head takes h as the input and estimates the corresponding token's label li in the original natural-language sequence. Please note that we use identical demonstrations during training and testing (Lee et al., 2022).
Demonstration Construction To construct demonstrations, we first sample an entity $e^{(c)}$ for each label type $t^{(c)}$, together with its context $s^{(c)}$, from the support set $S$. We then convert them into a natural language sequence $d^{(c)} = T(s^{(c)}, e^{(c)}, t^{(c)})$, where $T$ is the template operator; previous work (Lee et al., 2022) focuses on finding more effective templates. Given these sequences $[d^{(c_i)}]_{i=1}^{|\mathcal{Y}|}$ for the different tags $c_i$, a demonstration $\tilde{x}$ is built by concatenating them together: $\tilde{x} = d^{(c_1)} \oplus d^{(c_2)} \oplus \cdots \oplus d^{(c_{|\mathcal{Y}|})}$, where $\oplus$ is the concatenation operator. An effective template, such as the one used in Lee et al. (2022), is "$s^{(c)}$. $e^{(c)}$ is $t^{(c)}$.". We refer to the "$e^{(c)}$ is $t^{(c)}$." part of the template as the labeling part of the demonstration.
## 3 Demonstration-Based Learning From A Causal Perspective
In this section, we give a specific example to show the potential bias and interpret demonstration-based learning from a causal perspective. Specifically, we first introduce a Structural Causal Model (SCM) (Pearl et al., 2000) to describe the mechanism and identify the induced bias. Then, we perform interventions on the demonstration variable and propose multiple simple and effective random demonstration templates inspired by our causal model.

We observe that the frequent co-occurrence of tokens in classical demonstrations generates harmful superficial patterns that mislead the model and lead to biased predictions (Zhang et al., 2022a; Min et al., 2022). A specific example with different demonstrations is provided in Table 1, where the entity to predict is French. Following previous work (Zhang et al., 2022a), the observed demonstration (*i.e.*, the standard demonstration) provides biased information: the co-occurrence of British and ICAC, which is an organization (ORG), may lead to a biased prediction in which French is labeled as an organization, while its desired label is MISC. Intuitively, the co-occurrence of two specific words in the demonstration may induce bias, whereas randomly sampled tokens in the demonstration do not. This specific example suggests why random demonstrations may sometimes perform better than standard ones.
## 3.1 Causal Model
To study the causal relationship between the NER model and its training data, and to explain the role of the demonstration, we introduce an SCM to describe the inference step in NER models. Figure 1 shows the SCM of NER models.

![2_image_0.png](2_image_0.png)

There are mainly 6 variables in NER models: 1) *Demonstration Tokens* D, the tokens which form the demonstration; 2) *Context Tokens* C, the tokens that are related to the context; 3) *Entity Tokens* E, the tokens which are entities; 4) *Input Example* X, which is composed of C and E in the traditional model and of C, E and D in the demonstration-based model; 5) *Unobserved confounders* G, a confounding variable (not a concrete token) that influences the generation of C, E and D; 6) *Evaluation result* Y, the evaluation result (the F1 score) of the NER model.
Under the causal view, the key difference between the traditional NER model and the demonstration-based NER model is that the demonstration-based NER model has an additional node D. With the introduction of the demonstration D, a backdoor path G → D → X exists, which further introduces bias.
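To make the graph concrete, the edge structure implied by this description can be written out explicitly; the snippet below is a plain illustration based on the text and Figure 1, not code from the paper.

```python
# Edges of the SCM for demonstration-based NER, as described in the text:
# the confounder G generates C, E and D; the input X is assembled from C, E and D;
# Y is the evaluation result of the model on X. The backdoor path is G -> D -> X.
scm_edges = [
    ("G", "C"), ("G", "E"), ("G", "D"),   # unobserved confounder generates the tokens
    ("C", "X"), ("E", "X"), ("D", "X"),   # the input example is composed of them
    ("X", "Y"),                           # the evaluation result depends on the input
]
backdoor_path = ["G", "D", "X"]
```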
Inspired by our SCM (Figure 1b), we develop sampling techniques that generate new counterfactual examples via interventions on the existing observational examples, in order to alleviate this bias. The benefits of interventions on E and C have been studied by Zeng et al. (2020). In this paper, we focus on understanding the role of demonstrations in NER models under the causal view: we interpret the co-occurrence of tokens and the resulting harmful superficial patterns from the causal perspective, and use interventions on the demonstration variable to create new counterfactual demonstrations.
## 3.2 Controllable Random Demonstrations
In this section, we first provide a running example to better understand the bias induced by human-crafted demonstrations and then present different ways of intervening on the demonstration tokens. The intervention is implemented via controllable random demonstrations that create new counterfactual examples, since replacing standard demonstrations with random tokens can remove the induced bias and still make the model a good few-shot learner (Zhang et al., 2022a).
In Lee et al. (2022), an effective template $T$ is "$s^{(c)}$. $e^{(c)}$ is $t^{(c)}$.", and an example demonstration $d^{(c)}$ can be "[SEP] Obama returns to White House. Obama is PER.". Intuitively, the model understands the demonstrations and then better performs inference. However, random demonstrations can still bring performance improvement (Zhang et al., 2022a). The random template is as simple as "$[s_i]_{i=1}^{L}$", where $s_i \sim p$ and $p$ is a token distribution. Random demonstrations are composed of $L$ tokens randomly sampled from $p$.
Demonstration Intervention We use interventions on the demonstration tokens to create new counterfactual examples and thereby alleviate the biases. If we do not carefully design D, the backdoor path will exist and model performance is degraded. Our causal framework enables us to think about the problem from a causal perspective and guides us in how to properly design D. We denote the uniform distribution over the vocabulary words of the PLM as $p_{\mathcal{V}}$: for any word $w_i \in \mathcal{V}$, we have $p_{\mathcal{V}}(w_i) = \frac{1}{|\mathcal{V}|}$. This gives a plain way to construct random demonstrations.
An important observation is that not all counterfactual examples are correct or useful. Hence, the intervention can be better implemented by replacing the uniform distribution with a non-uniform distribution, *i.e.*, by adding or removing words and changing specific words' probabilities. Some mechanism is needed to identify good counterfactual demonstrations and avoid introducing noise. An intuitive solution is to consider tokens from the support set as more helpful, since the PLMs are fine-tuned on the support set. We expect to see a better downstream predictor when the demonstrations are constructed randomly from an intervened token distribution.

| Mode | CoNLL03 F1 | CoNLL03 Precision | CoNLL03 Recall | OntoNotes 5.0 F1 | OntoNotes 5.0 Precision | OntoNotes 5.0 Recall | CoNLL00 F1 | CoNLL00 Precision | CoNLL00 Recall |
|---|---|---|---|---|---|---|---|---|---|
| No Demo. | 28.71±10.31 | 39.96±11.25 | 22.68±9.09 | 37.37±7.58 | 33.80±6.79 | 41.92±8.85 | 63.17±4.22 | 59.28±5.05 | 67.72±3.51 |
| Standard | 45.86±6.08 | 47.38±5.93 | 44.75±7.07 | 40.21±7.65 | 32.51±6.87 | 52.82±8.28 | 70.55±3.08 | 66.53±4.40 | 75.21±2.11 |
| Random | 41.33±7.36 | 45.41±7.37 | 38.22±7.65 | 39.71±7.56 | 32.28±6.56 | 51.63±8.75 | 69.28±2.78 | 64.75±3.85 | 74.57±1.66 |
| Rand-S | 45.55±8.02 | 46.84±7.71 | 44.60±8.62 | 41.60±7.05 | 33.96±6.29 | 53.75±7.80 | 70.63±3.01 | 66.24±4.29 | 75.75±1.70 |
| Rand-W | 45.93±7.57 | 47.79±7.42 | 44.50±8.13 | 45.49±3.77 | 37.82±3.64 | 57.18±4.17 | 72.15±3.16 | 68.00±4.42 | 76.94±1.67 |
| Rand-E | 47.32±7.42 | 48.96±7.02 | 46.02±8.11 | 46.06±3.84 | 38.32±3.65 | 57.81±4.31 | 74.02±2.93 | 70.37±4.23 | 78.18±1.75 |

Table 2: F1, Precision, and Recall (mean ± std over support sets and seeds) of different demonstration modes on CoNLL03 and OntoNotes 5.0 (NER) and on CoNLL00 (chunking).
The difference between the random demonstration variants lies in the vocabulary and its associated probability distribution. We perform the interventions by controlling the vocabulary and changing the probability of random tokens, encouraging entity words (*e.g.*, ICAC, British) to appear more frequently than the others (*e.g.*, is). Based on the previous theoretical justification, we consider the following random demonstration construction methods as counterfactual alternatives of the standard demonstrations (the random template is "[SEP] {random context}" and the standard template is "[SEP] {context} {entity} is {tag}."); a code sketch of these variants follows the list:

- **Random**: random context with tokens uniformly sampled from the PLM vocabulary V.
- **Rand-S**: random context with tokens uniformly sampled from the unique words (*i.e.*, the vocabulary) of the support set, denoted as S.
- **Rand-W**: random context with tokens sampled from S together with the entity tokens in the support set, denoted as W; tokens from W have a four times higher probability than those from S (empirical results show that sampling only from W leads to poor performance).
- **Rand-E**: similar to Rand-W, but replaces the entity tokens with complete entities composed of coherent tokens in the support set, denoted as U.
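The sketch below illustrates the four sampling strategies (our own illustration; the function and variable names are hypothetical, and the fourfold weight follows the Rand-W description above).

```python
import random

def sample_random_demo(length, plm_vocab, support_vocab, entity_tokens, entities, mode):
    if mode == "Random":                      # uniform over the PLM vocabulary V
        pool, weights = plm_vocab, None
    elif mode == "Rand-S":                    # uniform over the support-set vocabulary S
        pool, weights = support_vocab, None
    elif mode == "Rand-W":                    # S plus entity tokens W, with W weighted 4x
        pool = support_vocab + entity_tokens
        weights = [1] * len(support_vocab) + [4] * len(entity_tokens)
    elif mode == "Rand-E":                    # like Rand-W, but with complete entities U
        pool = support_vocab + entities
        weights = [1] * len(support_vocab) + [4] * len(entities)
    else:
        raise ValueError(mode)
    tokens = random.choices(pool, weights=weights, k=length)
    return "[SEP] " + " ".join(tokens)

print(sample_random_demo(18, ["the", "of", "a"], ["suspect", "colony", "radio"],
                         ["ICAC", "British"],
                         ["Independent Commission Against Corruption"], "Rand-E"))
```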
## 4 Experimental Results

## 4.1 Experiment Setup
Datasets We conduct experiments on two sequence labeling tasks: (i) named entity recognition (NER) on **CoNLL03** (Tjong Kim Sang and De Meulder, 2003) and **OntoNotes 5.0** (Weischedel et al., 2013); and (ii) chunking on **CoNLL00** (Tjong Kim Sang and Buchholz, 2000). Following previous works (Ma et al., 2021; Zhang et al., 2022a), we omit the 7 value types in OntoNotes and only consider the 6 most frequent types in CoNLL00. For few-shot data sampling, we follow the greedy sampling strategy proposed by Yang and Katiyar (2020), which samples K shots for each type in increasing order of type frequency (we refer the reader to Yang and Katiyar (2020) for the exact algorithm; a rough sketch is given below). For each dataset, we sample 5 different K-shot support sets and report the mean and standard deviation of the metrics. For each K-shot support set, we run the experiments with 3 random seeds.
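The following is a rough sketch of a greedy K-shot support-set sampler in the spirit of Yang and Katiyar (2020); it is an assumption for illustration only, and the details of the original algorithm may differ.

```python
import random
from collections import Counter

def greedy_k_shot(sentences, k, seed=0):
    """sentences: list of (tokens, set_of_entity_types_in_sentence) pairs."""
    rng = random.Random(seed)
    freq = Counter(t for _, types in sentences for t in types)
    counts, support = Counter(), []
    for etype, _ in sorted(freq.items(), key=lambda kv: kv[1]):  # rarest types first
        candidates = [s for s in sentences if etype in s[1] and s not in support]
        rng.shuffle(candidates)
        for sent in candidates:
            if counts[etype] >= k:
                break
            support.append(sent)
            counts.update(sent[1])  # one sentence may add shots for several types
    return support
```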
Main Results Table 2 shows the results for demonstration-based learning with different modes of demonstrations, as well as classical sequence labeling with no demonstration. The results show that demonstration-based methods consistently improve model performance. Among the demonstration-based methods, the Random approach shows the worst performance and Rand-S shows results comparable to the standard demonstrations; this is consistent with previous work (Zhang et al., 2022a). Interestingly, if we modify the token sampling distribution and sample more entity or entity-related words, as in Rand-W and Rand-E, our model performs even better than with standard meaningful demonstrations. The difference between Rand-W and Rand-E lies in whether complete entities are used, and the results show that adding complete entities instead of random entity words leads to better performance. At the same time, this shows that adding random tokens related to the support set can reduce the bias induced by fine-tuning, which verifies our hypothesis in Section 3.1. Intuitively, the benefits of demonstration-based methods come from tokens of the support set S rather than from meaningful demonstrations, as the standard demonstration sampled from the support set also shows good performance.
![4_image_0.png](4_image_0.png)

Figure 2: Results of different demonstration modes under K = 5, 10, 20 shots.
| Mode | CoNLL03 | OntoNotes 5.0 | CoNLL00 |
|----------|------------|----------------|------------|
| No Demo. | 45.70±8.13 | 51.62±2.76 | 72.80±3.53 |
| Standard | 45.73±7.29 | 54.76±2.36 | 75.90±1.95 |
| Rand-S | 46.86±6.50 | 54.35±2.67 | 72.23±3.42 |
| Rand-W | 52.11±6.15 | 54.48±2.35 | 73.84±2.19 |
| Rand-E | 52.87±7.64 | 55.94±2.38 | 75.30±3.06 |

Table 3: Results of different demonstration modes with RoBERTa as the base model on CoNLL03, OntoNotes 5.0, and CoNLL00.
## 4.2 Analysis
Ablation Studies We further investigate whether the performance gain of demonstration-based learning changes with the size of the support set. We present results of different modes of demonstrations under K = 5, 10, 20 shots in Figure 2. With more training examples in the support set, the relative performance gap between Rand-E and Standard remains, but it becomes smaller. This indicates that carefully designed random demonstrations yield a consistent performance improvement over standard demonstrations. We also observe that the variance within each group becomes smaller as more data becomes available. Among random demonstrations, Rand-E consistently shows better performance than Rand-W and Rand-S, which verifies our hypothesis based on the SCM.
Additionally, we investigate the effect of using different base models and replace BERT with RoBERTa. The observed results for RoBERTa in Table 3 are consistent with those of BERT, demonstrating that Rand-E exhibits superior performance across different model architectures.
Name Regularity Bias Name Regularity Bias
(Ghaddar et al., 2021; Lin et al., 2020) in NER
occurs when a model relies on a signal from the entity name to make predictions and disregards evidence from the local context. Ghaddar et al. (2021)
carefully designed a testbed utilizing Wikipedia disambiguation pages to diagnose the Name Regularity Bias of NER models. Details about the NRB
dataset are provided in the appendix.
We use both the NRB and WTS (as control sets)
datasets to evaluate the model trained with different modes of demonstrations on CoNLL03. The results show a smaller gap for random demonstrations, suggesting that random demonstration-based learning can better leverage context information instead of the name regularity patterns.
## 5 Conclusions
In this paper, we present a causal view to understand demonstration-based learning. Based on the structural causal model we construct, we investigate the causal effects and discover that the co-occurrence of specific words in the demonstration can induce bias. To address this issue, we perform interventions by constructing random demonstrations. Our empirical results indicate that carefully designed random demonstrations consistently outperform meaningful demonstrations on public sequence labeling benchmarks.
## 6 Limitations
All our experiments are conducted on sequence labeling tasks; the approach could be further evaluated on sentence classification tasks with classifier-based fine-tuning, since the [CLS] token used for classification represents the whole sentence. We provide a causal perspective on demonstration-based learning and a simple but not systematic method to alleviate the induced bias. Our demonstration-based learning builds upon previous works (Lee et al., 2022; Zhang et al., 2022a), where BERT or RoBERTa are used instead of Large Language Models such as InstructGPT (Ouyang et al., 2022), PaLM (Chowdhery et al., 2022), and OPT (Zhang et al., 2022b). Furthermore, our conclusions are drawn from few-shot learning settings and cannot be directly applied to zero-shot inference.
## References
Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. 2023. Palm 2 technical report. *arXiv* preprint arXiv:2305.10403.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.
2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33:
Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Tianyu Gao, Adam Fisch, and Danqi Chen. 2021.
Making pre-trained language models better few-shot learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 3816–3830, Online. Association for Computational Linguistics.
Abbas Ghaddar, Philippe Langlais, Ahmad Rashid, and Mehdi Rezagholizadeh. 2021. Context-aware Adversarial Training for Name Regularity Bias in Named Entity Recognition. Transactions of the Association for Computational Linguistics, 9:586–604.
Jiaxin Huang, Chunyuan Li, Krishan Subudhi, Damien Jose, Shobana Balakrishnan, Weizhu Chen, Baolin Peng, Jianfeng Gao, and Jiawei Han. 2021. Fewshot named entity recognition: An empirical baseline
study. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 10408–10423, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Brenden Lake, Ruslan Salakhutdinov, and Joshua Tenenbaum. 2015. Human-level concept learning through probabilistic program induction. *Science*, 350:1332–
1338.
Dong-Ho Lee, Akshen Kadakia, Kangmin Tan, Mahak Agarwal, Xinyu Feng, Takashi Shibuya, Ryosuke Mitani, Toshiyuki Sekiya, Jay Pujara, and Xiang Ren. 2022. Good examples make a faster learner: Simple demonstration-based learning for low-resource NER.
In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 2687–2700, Dublin, Ireland.
Association for Computational Linguistics.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Hongyu Lin, Yaojie Lu, Jialong Tang, Xianpei Han, Le Sun, Zhicheng Wei, and Nicholas Jing Yuan. 2020.
A rigorous study on named entity recognition: Can fine-tuning pretrained model lead to the promised land? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing
(EMNLP), pages 7291–7300, Online. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019a.
Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Ruotian Ma, Xin Zhou, Tao Gui, Yiding Tan, Qi Zhang, and Xuanjing Huang. 2021. Template-free prompt tuning for few-shot NER. *CoRR*, abs/2109.13532.
Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the role of demonstrations: What makes in-context learning work? *arXiv* preprint arXiv:2202.12837.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al.
2022. Training language models to follow instructions with human feedback. *NeurIPS*.
Judea Pearl et al. 2000. Models, reasoning and inference. *Cambridge, UK: CambridgeUniversityPress*,
19(2).
Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. 2023. Instruction tuning with gpt-4. *arXiv preprint arXiv:2304.03277*.
Erik F. Tjong Kim Sang and Sabine Buchholz. 2000. Introduction to the CoNLL-2000 shared task chunking.
In *Fourth Conference on Computational Natural Language Learning and the Second Learning Language* in Logic Workshop.
Erik F. Tjong Kim Sang and Fien De Meulder.
2003. Introduction to the CoNLL-2003 shared task:
Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142–
147.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, et al. 2013. Ontonotes release 5.0 ldc2013t19.
Linguistic Data Consortium, Philadelphia, PA, 23.
Qizhe Xie, Zihang Dai, Eduard Hovy, Thang Luong, and Quoc Le. 2020. Unsupervised data augmentation for consistency training. In Advances in Neural Information Processing Systems, volume 33, pages 6256–6268. Curran Associates, Inc.
Yi Yang and Arzoo Katiyar. 2020. Simple and effective few-shot named entity recognition with structured nearest neighbor learning. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6365–6375, Online. Association for Computational Linguistics.
Xiangji Zeng, Yunliang Li, Yuchen Zhai, and Yin Zhang. 2020. Counterfactual generator: A weaklysupervised method for named entity recognition. In EMNLP.
Hongxin Zhang, Yanzhe Zhang, Ruiyi Zhang, and Diyi Yang. 2022a. Robustness of demonstration-based learning under limited data scenario. In *EMNLP*.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al.
2022b. Opt: Open pre-trained transformer language models. *arXiv preprint arXiv:2205.01068*.
| Dataset | \|Y\| | L | \|D_test\| |
|---|---|---|---|
| CoNLL03 | 4 | 18 | 3453 |
| OntoNotes 5.0 | 11 | 21 | 12217 |
| CoNLL00 | 6 | 36 | 2012 |
## A Appendix
NRB Dataset Details The NRB dataset contains examples whose labels can be easily inferred from the local context but are difficult for a popular NER system to tag correctly. The WTS dataset is a domain control set that includes the same query terms covered by NRB, but these can be correctly labeled by both the popular NER tagger and a local context-only tagger. Therefore, the gap between the NRB and WTS sets measures how effectively the model captures context information to predict token labels.
Effects of Sampling Probability We present two variants, Random-E[X] and Random-W[X], where X refers to how many times higher the probability of the preferred tokens is. In this ablation study, we consistently observe that Random-E4 performs better than Random-E2, and Random-W4 outperforms Random-E4. However, if we increase X to a very large number, performance deteriorates.
![8_image_0.png](8_image_0.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 6
✓ A2. Did you discuss any potential risks of your work?
Section 6
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Not applicable. Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Not applicable. Left blank.
## C ✓ **Did You Run Computational Experiments?** Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 4
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
sarti-etal-2023-ramp | {RAMP}: Retrieval and Attribute-Marking Enhanced Prompting for Attribute-Controlled Translation | https://aclanthology.org/2023.acl-short.126 | Attribute-controlled translation (ACT) is a subtask of machine translation that involves controlling stylistic or linguistic attributes (like formality and gender) of translation outputs. While ACT has garnered attention in recent years due to its usefulness in real-world applications, progress in the task is currently limited by dataset availability, since most prior approaches rely on supervised methods. To address this limitation, we propose Retrieval and Attribute-Marking enhanced Prompting (RAMP), which leverages large multilingual language models to perform ACT in few-shot and zero-shot settings. RAMP improves generation accuracy over the standard prompting approach by (1) incorporating a semantic similarity retrieval component for selecting similar in-context examples, and (2) marking in-context examples with attribute annotations. Our comprehensive experiments show that RAMP is a viable approach in both zero-shot and few-shot settings. | # Ramp**: Retrieval And Attribute-Marking Enhanced Prompting For** Attribute-Controlled Translation
Gabriele Sarti∗†, Phu Mon Htut‡, Xing Niu‡**, Benjamin Hsu**‡,
Anna Currey‡, Georgiana Dinu‡**, Maria Nadejde**‡
†University of Groningen ‡AWS AI Labs [email protected], {hphu, xingniu, benhsu, ancurrey, gddinu, mnnadejd}@amazon.com
## Abstract
Attribute-controlled translation (ACT) is a subtask of machine translation that involves controlling stylistic or linguistic attributes (like formality and gender) of translation outputs.
While ACT has garnered attention in recent years due to its usefulness in real-world applications, progress in the task is currently limited by dataset availability, since most prior approaches rely on supervised methods. To address this limitation, we propose *Retrieval and AttributeMarking enhanced Prompting* (RAMP), which leverages large multilingual language models to perform ACT in few-shot and zero-shot settings. RAMP improves generation accuracy over the standard prompting approach by (1) incorporating a semantic similarity retrieval component for selecting similar in-context examples, and (2) marking in-context examples with attribute annotations. Our comprehensive experiments show that RAMP is a viable approach in both zero-shot and few-shot settings.
## 1 Introduction
Text style transfer (TST) is a task that aims to control stylistic attributes of an input text without affecting its semantic content (Jin et al., 2022).
Research in TST has largely focused on English, thanks to the availability of large monolingual English datasets covering stylistic attributes like formality and simplicity (Rao and Tetreault 2018, Zhu et al. 2010, *inter alia*). In recent years, however, multilingual and cross-lingual applications of TST have seen a steady gain in popularity (Briakou et al.,
2021; Garcia et al., 2021; Krishna et al., 2022). A notable instance of cross-lingual TST is *attribute-controlled translation* (ACT), in which attribute conditioning is performed alongside machine translation (MT) to ensure that translations are not only correct but match user-specified preferences, such as formality/honorifics (Sennrich et al., 2016; Niu et al., 2017; Michel and Neubig, 2018; Niu and Carpuat, 2020; Nadejde et al., 2022; Wang et al., 2022), gender (Rabinovich et al., 2017; Vanmassenhove et al., 2018; Saunders and Byrne, 2020), and length (Lakew et al., 2019; Schioppa et al., 2021).

| Formality-Controlled Translation (COCOA-MT) | |
|---|---|
| Neutral Src (EN) | OK, then please follow me to your table. |
| Formal Ref (JA) | ではテーブルまで私について来てください。 |
| Informal Ref (JA) | ではテーブルまで私について来て。 |
| Gender-Controlled Translation (MT-GENEVAL) | |
| Neutral Src (EN) | After retiring from teaching, Cook became a novelist. |
| Feminine Ref (NL) | Nadat ze stopte met lesgeven, werd Cook schrijfster. |
| Masculine Ref (NL) | Nadat hij stopte met lesgeven, werd Cook schrijver. |

![0_image_0.png](0_image_0.png)

Table 1: Examples of attribute triplets from COCOA-MT and MT-GENEVAL. Attribute markers in the attribute-controlled translations are underlined.
ACT is especially important for sectors like customer service and business communication, where stylistic differences can have an impact on user perception (e.g., misgendering customers or speaking to them in an inappropriately informal tone can be offensive or disconcerting). Table 1 gives examples of ACT for formality and gender.
Most prior work on ACT relies on a supervised adaptation component that conditions the generative model on the selected attribute. However, few annotated ACT datasets are available, and they generally cover only a limited set of languages and attributes. Thus, enabling few-shot or zero-shot ACT would facilitate applying attribute control to less-resourced attributes and languages.
In this paper, we introduce a new approach for ACT: Retrieval and Attribute-Marking enhanced Prompting (RAMP). Recent studies have shown that large language models (LLMs) can perform MT out of the box using the prompting paradigm (Brown et al., 2020; Lin et al., 2022; Chowdhery et al.,
2022). We build on this, prompting LLMs to perform *attribute-controlled* MT through two innovations: (1) *retrieval of similar examples* and (2) *explicit attribute marking*.

![1_image_0.png](1_image_0.png)

Figure 1: An example of RAMP using 2 in-context examples. (Left) The input sentence is embedded by a sentence similarity model, and the top-k most similar labeled examples are retrieved from a pool of training data to build the prompt context. (Right) Labeled cross-lingual examples are used to fill in the English prompt template, which is then provided to the LLM to generate the output.
Recent works adopting the prompting paradigm for text style transfer have mainly focused on the generalization capabilities of large English-centric LMs for zero-shot style transfer using previously unseen style descriptions (Suzgun et al., 2022; Reif et al., 2022). However, prior work on other NLP tasks has shown that cross-lingual prompting of multilingual LLMs can be effective (Zhao and Schütze, 2021; Zhou et al., 2022; Huang et al., 2022). As such, we leverage multilingual LLMs and extend their ACT capabilities cross-lingually to languages not covered by the in-context examples, thus enabling zero-shot ACT.
## 2 Method

## 2.1 Preliminaries
Attribute-Controlled Translation ACT takes two inputs, a sentence x and a desired target attribute a ∈ A (with A being the space of attributes),
and outputs a translation y that complies with the specified attribute. It can be formulated as a function f : (x, a) → y. In our experiments, we use attribute values provided by the COCOA-MT formality translation dataset and the MT-GENEVAL
gender translation dataset, i.e., A = {formal, informal} or {female, male}.2 Prompting In the prompting paradigm for decoder-only LLMs, inputs are given as decoding prefixes to the model, usually combined with natural language instructions for output generation.
In style-controlled translation, we formulate the prompt for target language l and attribute a using the text "Here is a sentence: {x*} Here is its* l translation written in a a *style:"* to produce the output y.
3In the few-shot setting, we provide a sequence of k labeled *in-context examples* before the unlabeled input, which can be formulated as a function f : {(x1, l1, a, y1), . . . ,(xk+1, lk+1, a)} →
yk+1.
## 2.2 Our Approach: Ramp
RAMP builds on the success of the prompting paradigm on few-shot generation tasks such as monolingual text style transfer (Reif et al., 2022)
and MT (Garcia and Firat, 2022; Agrawal et al.,
2022) by creating more informative prompts through *similarity retrieval* and *attribute marking*.
See Figure 1 for an illustration of RAMP.
Similarity Retrieval In standard prompting, incontext examples are sampled randomly from the pool of labeled examples DA. In RAMP, we select examples based on their similarity with the input text. We first embed both the input text and the source texts of DA using all-MiniLM-L6-v2 (Wang et al., 2020). Then, the top-k most similar examples are retrieved for the input text based on cosine similarity. These are then used in a descending order w.r.t. similarity as the in-context examples in the inference prompt. As demonstrated in Figure 1, the in-context example "You will always be welcome here." has the highest similarity to the test example "You're welcome." so it is prompted first.
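As an illustration of this retrieval step, the following is a minimal sketch (not the authors' released code): it embeds the input and the source side of the labeled pool with all-MiniLM-L6-v2 and returns the top-k most similar examples, ordered by descending cosine similarity. The function name and the field names of the pool entries are assumptions.

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def retrieve_in_context_examples(src, pool, k=16):
    """pool: list of dicts with 'src' and 'tgt' fields (assumed schema for D_A)."""
    src_emb = encoder.encode(src, convert_to_tensor=True)
    pool_emb = encoder.encode([ex["src"] for ex in pool], convert_to_tensor=True)
    scores = util.cos_sim(src_emb, pool_emb)[0]        # cosine similarity to each example
    top = scores.topk(k=min(k, len(pool)))
    return [pool[i] for i in top.indices.tolist()]     # most similar example first
```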
Attribute Marking In standard prompting, incontext examples are provided without explicit information on why they satisfy the prompting objective. Inspired by recent studies that have shown that decomposition of complex tasks can improve prompting quality (Nye et al., 2021; Wei et al.,
3We adopt prompt templates similar to the one used by Reif et al. (2022), and we write the prompt template in English.
Complete templates are provided in Appendix A.
2See Section 5 for ethical considerations.
![2_image_0.png](2_image_0.png)
Table 2: Target languages in the test sets and languages **seen**
by LLMs in pre-training. We report results on languages seen by both LLMs. Language codes are defined in Appendix B.
2022), we include for every in-context example an additional sentence directly after the target sentence that specifies which text spans convey the desired attribute (e.g., "The translated sentence conveys a formal style by using words such as
'Vous'."). In our experiments, we use the gold attribute spans included in the CoCoA-MT and MT-GenEval datasets. In section 4 we suggest possibilities for automatically deriving attribute spans when gold training labels are not available.
## 2.3 Cross-Lingual Prompting
The similarity retrieval component of RAMP requires a large pool DA from which to find appropriate incontext examples for prompting. Low-resource attributes or language pairs may have insufficient or no annotated data from which to retrieve such examples. To mitigate this issue, we introduce *crosslingual prompting*, in which the target side of the in-context examples differs from the desired target language of the translation task. As demonstrated in Figure 1, we study whether the system can leverage examples in one language (e.g., attribute indicators in Spanish) to produce the same attribute in another (e.g., French). Two main features of our RAMP model allow us to perform cross-lingual prompting: (1) the use of multilingual LLMs, and
(2) the example retrieval step, which is done on the source language only.
## 3 Experiments 3.1 Datasets
We experiment on two multilingual ACT datasets:
- COCO**A-MT** (Nadejde et al., 2022) covers formality-controlled translation in the conversation domain. Source sentences are underspecified for formality, and references require formality markings (formal or informal).
- MT-GENEVAL (Currey et al., 2022) covers gendered translation in the Wikipedia domain.
We use the *contextual* subset, in which sentences are gender ambiguous in the source while the reference requires gender marking.
We do not use the disambiguating sentences,
| Dataset | Attribute | # Train | # Test | Acc. |
|------------|-------------|-----------|----------|--------|
| COCOA-MT | Formality | 7,600 | 1,596 | 0.990 |
| MT-GENEVAL | Gender | 4,900 | 9,854 | 0.970 |
Table 3: Dataset statistics. We report \# of triplets in the train/**test** split aggregated across all languages and the classification accuracy on the test split of the classifiers.
instead explicitly controlling target gender.
Both datasets have gold annotations for attributemarked target spans, and both cover translation from English into multiple diverse target languages.
We list their target languages in Table 2.
## 3.2 Large Language Models (Llms)
We select three massively multilingual decoderonly LLMs for the prompting experiments: XGLM
(Lin et al., 2022), BLOOM (BigScience, 2022)
and GPT-NEOX (Black et al., 2022). The selected models span three orders of magnitude in terms of number of parameters and differ in the languages that they cover (see Table 2). Appendix D motivates our choice of models in more detail. GPT-3 is not included because it is not freely accessible and it is not intended for multilingual use-cases.
## 3.3 Baseline
Attribute tagging is a standard method for ACT,
so we include a baseline following the approach and configuration used by Nadejde et al. (2022):
a transformer MT model (Vaswani et al., 2017)
pre-trained on public parallel data and further finetuned on contrastive training pairs with attribute tags (from either COCOA-MT or MT-GENEVAL).
We refer to this as **adapted MT**.
## 3.4 Evaluation Metrics
We measure translation quality with BLEU (Papineni et al., 2002) and COMET (Rei et al., 2020).
For attribute accuracy, we use both (1) the lexical matching metrics provided with COCOA-MT
and MT-GENEVAL (**Lexical-Accuracy**) and (2)
sentence encoders trained on contrastive examples
(**Sentential-Accuracy**). For (2), we train multilingual classifiers on top of the mDeBERTa-v3 encoder (He et al., 2021). High-performance pretrained classifiers have been shown to produce attribute accuracy estimates closer to human judgments for style transfer (Lai et al., 2022). Table 3 presents the accuracy of the classification models on the test sets of their respective datasets, averaged over all languages.4 4More details of datasets and classifiers are in Appendix C.
base 28.6 **0.463** 0.835 0.846 23.7 0.445 0.790 0.727
XGLM 7.5B +mark 28.7 0.423 0.920 0.902 23.7 0.444 0.789 0.732
RAMP **30.0** 0.451 0.938 0.923 **24.8 0.473 0.836 0.820** base 39.9 0.691 0.930 0.940 33.3 0.679 0.748 0.704
BLOOM 175B +mark 40.3 0.688 0.970 **0.970** 33.1 0.674 0.759 0.725
RAMP 41.9 0.711 0.973 0.970 **34.3 0.699 0.817 0.818**
Adapted MT 38.5 0.454 0.691 0.693 39.6 0.750 0.842 0.864
| COCOA-MT | MT-GENEVAL | | | | | | |
|------------|--------------|-------|-------|------|-------|-------|-------|
| BLEU | COMET | L-Acc | S-Acc | BLEU | COMET | L-Acc | S-Acc |
| Same-Language |
|-----------------|
Cross-Lingual BLOOM 175B base 32.1 0.644 0.567 0.596 28.5 0.469 0.777 0.633
RAMP 31.8 0.646 0.625 0.622 **29.4 0.502 0.788 0.673**
Table 4: BLEU, COMET, Lexical- and Sentential-Accuracy of selected LLMs using 16 same-language in-context examples on two tasks, alongside adapted MT models. Scores are aggregated across **seen** languages (w.r.t. BLOOM pre-training) and both attributes for each task. (Decomposed results are included in Table 6–9.)
Unlike lexical accuracy, the multilingual attribute classifier does not penalize text generated in incorrect languages. Thus, in cross-lingual prompting experiments, we include a step of language detection5so that generated sentences not in the requested target language are considered incorrect.
## 3.5 Results: Same-Language Prompting
We first evaluate the effectiveness of RAMP for formality- and gender-controlled translation where the language pair used for in-context examples is the same as the one used in the prompt candidate
(e.g., EN→ES formality-controlled translation using EN→ES in-context examples). We test XGLM
7.5B and BLOOM 175B with 16 in-context examples on both tasks.6 Table 4 presents our results alongside the adapted MT baseline. The base model uses in-context examples that are sampled randomly from the pool of labeled examples. We also include an ablation that adds attribute marking only on top of base, without similarity retrieval
(**+mark**).
Using just attribute marking consistently improves attribute accuracy of the generated text, but it leads to degradation of COMET on COCOAMT. The complete RAMP with similarity retrieval not only compensates for the COMET degradation but also improves quality and attribute metrics across the board, especially for the high-capacity BLOOM 175B model.
Adapted MT outperforms BLOOM 175B on MT-GENEVAL in all metrics, but underperforms it on COCOA-MT. This suggests that it is challenging to do fine-grained comparison between LLMs and standard MT systems as they might have different domain coverage. BLOOM 175B consistently outperforms XGLM 7.5B in both generic translation quality and attribute control accuracy, so we proceed with using BLOOM 175B in the crosslingual prompting setting.
## 3.6 Results: Cross-Lingual Prompting
We have demonstrated the effectiveness of selecting similar same-language examples to build the prompt, echoing contemporary work (Liu et al.,
2022; Agrawal et al., 2022). In this section, we evaluate the cross-lingual prompting option, i.e., retrieving in-context examples from other target languages besides the desired language of translation.
We test this zero-shot setting using the leave-oneout strategy, and results of tested language pairs are averaged.7 Table 4 presents our results using BLOOM
175B. On both test sets, compared to the baseline, we observe improved attribute accuracy and comparable or better generic translation quality when using RAMP with cross-lingual prompting.
We do observe translation quality degradation with RAMP on some target languages of COCOAMT, e.g., ES. Manual analysis shows that **repeated**
inaccurate retrieval results could lead to hallucinations.8 For example, RAMP retrieves multiple sentences containing *"million"* for the input *"If you* got it why not? He is worth over 20 billion dollars after all.". This results in mistranslation of *billion* to million (millionario): *"Si lo tienes, ¿por qué no?*
Es millonario después de todo.". We give detailed examples in Appendix H.
## 4 Conclusions
We introduced the new RAMP in-context learning approach to leverage attribute annotations and similar same-language or cross-lingual examples for better prompting quality. We demonstrated its effectiveness with multilingual LLMs for both formalitycontrolled and gender-controlled translation. We use gold annotations for attribute marking, but we leave unsupervised automatic attribute span extraction as future work.
## 5 Limitations
- We currently rely on gold annotations for attribute marking, which are not always available depending on the dataset. However, RAMP
could be easily extended to unsupervised settings through LLM feature attribution (Sarti et al., 2023), i.e., extracting salient tokens driving the attribute prediction. This approach builds upon recent techniques in unsupervised language generation metrics (Fomicheva et al.,
2021, 2022; Leiter et al., 2022). We leave an empirical evaluation of its effectiveness to future work.
- Besides the choice of in-context examples, prompting is also sensitive to their ordering
(Lu et al., 2022) and the design of the template (Jiang et al., 2020). We refrain from tuning example orders and templates to avoid introducing too many variables.
- Multilingual LLMs perform competitive MT
out of the box for languages seen during their pre-training. However, we noticed that BLOOM 175B produces better EN-IT translations than XGLM 7.5B even though IT is not listed as a training language of BLOOM. This could possibly be due to typological similarity between Italian and the Romance languages included in BLOOM training. We leave experiments of unseen languages as future work.
- Multilingual LLMs like the ones used in this paper require larger GPU resources for inference than standard bilingual MT systems.
- One test set we use (MT-GENEVAL) provides only two gender values (female and male), but we do not intend to imply that other genders do not exist.
## References
Sweta Agrawal, Chunting Zhou, Mike Lewis, Luke Zettlemoyer, and Marjan Ghazvininejad. 2022. Incontext examples selection for machine translation.
CoRR, abs/2212.02437.
BigScience. 2022. BLOOM: A 176b-parameter open-access multilingual language model. *CoRR*,
abs/2211.05100.
Sidney Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, Usvsn Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, and Samuel Weinbach. 2022. GPT-NeoX-20B: An opensource autoregressive language model. In Proceedings of BigScience Episode \#5 - Workshop on Challenges & Perspectives in Creating Large Language Models, pages 95–136, virtual+Dublin. Association for Computational Linguistics.
Eleftheria Briakou, Di Lu, Ke Zhang, and Joel Tetreault.
2021. Olá, bonjour, salve! XFORMAL: A benchmark for multilingual formality style transfer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3199–3216, Online. Association for Computational Linguistics.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.
2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33:
Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, and et al. 2022. Palm: Scaling language modeling with pathways. *CoRR*, abs/2204.02311.
Anna Currey, Maria Nadejde, Raghavendra Reddy Pappagari, Mia Mayer, Stanislas Lauly, Xing Niu, Benjamin Hsu, and Georgiana Dinu. 2022. MT-GenEval:
A counterfactual and contextual dataset for evaluating gender accuracy in machine translation. In *Proceedings of the 2022 Conference on Empirical Methods* in Natural Language Processing, pages 4287–4299, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Marina Fomicheva, Piyawat Lertvittayakumjorn, Wei Zhao, Steffen Eger, and Yang Gao. 2021. The Eval4NLP shared task on explainable quality estimation: Overview and results. In *Proceedings of* the 2nd Workshop on Evaluation and Comparison of NLP Systems, pages 165–178, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Marina Fomicheva, Lucia Specia, and Nikolaos Aletras. 2022. Translation error detection as rationale extraction. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 4148–4159, Dublin, Ireland. Association for Computational Linguistics.
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2021. The pile: An 800gb dataset of diverse text for language modeling.
CoRR, abs/2101.00027.
Xavier Garcia, Noah Constant, Mandy Guo, and Orhan Firat. 2021. Towards universality in multilingual text rewriting. *CoRR*, abs/2107.14749.
Xavier Garcia and Orhan Firat. 2022. Using natural language prompts for machine translation. *CoRR*,
abs/2202.11822.
Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2021.
Debertav3: Improving deberta using electra-style pretraining with gradient-disentangled embedding sharing. *CoRR*, abs/2111.09543.
Lianzhe Huang, Shuming Ma, Dongdong Zhang, Furu Wei, and Houfeng Wang. 2022. Zero-shot crosslingual transfer of prompt-based tuning with a unified multilingual prompt. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language* Processing, pages 11488–11497, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know? Transactions of the Association for Computational Linguistics, 8:423–438.
Di Jin, Zhijing Jin, Zhiting Hu, Olga Vechtomova, and Rada Mihalcea. 2022. Deep learning for text style transfer: A survey. *Computational Linguistics*,
48(1):155–205.
Kalpesh Krishna, Deepak Nathani, Xavier Garcia, Bidisha Samanta, and Partha Talukdar. 2022. Fewshot controllable style transfer for low-resource multilingual settings. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7439–7468, Dublin, Ireland. Association for Computational Linguistics.
Huiyuan Lai, Jiali Mao, Antonio Toral, and Malvina Nissim. 2022. Human judgement as a compass to navigate automatic metrics for formality transfer. In Proceedings of the 2nd Workshop on Human Evaluation of NLP Systems (HumEval), pages 102–115, Dublin, Ireland. Association for Computational Linguistics.
Surafel Melaku Lakew, Mattia Di Gangi, and Marcello Federico. 2019. Controlling the output length of neural machine translation. In Proceedings of the 16th International Conference on Spoken Language Translation, Hong Kong. Association for Computational Linguistics.
Christoph Leiter, Piyawat Lertvittayakumjorn, Marina Fomicheva, Wei Zhao, Yang Gao, and Steffen Eger. 2022. Towards explainable evaluation metrics for natural language generation. *CoRR*, abs/2203.11131.
Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, and Xian Li. 2022. Few-shot learning with multilingual generative language models. In *Proceedings of the 2022 Conference on Empirical Methods* in Natural Language Processing, pages 9019–9052, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2022. What makes good in-context examples for GPT-3? In Proceedings of Deep Learning Inside Out (DeeLIO
2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 100–114, Dublin, Ireland and Online. Association for Computational Linguistics.
Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2022. Fantastically ordered prompts and where to find them: Overcoming fewshot prompt order sensitivity. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8086–8098, Dublin, Ireland. Association for Computational Linguistics.
Paul Michel and Graham Neubig. 2018. Extreme adaptation for personalized neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2:
Short Papers), pages 312–318, Melbourne, Australia.
Association for Computational Linguistics.
Maria Nadejde, Anna Currey, Benjamin Hsu, Xing Niu, Marcello Federico, and Georgiana Dinu. 2022.
CoCoA-MT: A dataset and benchmark for contrastive controlled MT with application to formality. In Findings of the Association for Computational Linguistics:
NAACL 2022, pages 616–632, Seattle, United States.
Association for Computational Linguistics.
Xing Niu and Marine Carpuat. 2020. Controlling neural machine translation formality with synthetic supervision. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, New York, NY, USA,
February 7-12, 2020, pages 8568–8575. AAAI Press.
Xing Niu, Marianna Martindale, and Marine Carpuat.
2017. A study of style in machine translation: Controlling the formality of machine translation output.
In *Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing*, pages 2814–2819, Copenhagen, Denmark. Association for Computational Linguistics.
Maxwell I. Nye, Anders Johan Andreassen, Guy GurAri, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, and Augustus Odena. 2021. Show your work: Scratchpads for intermediate computation with language models. *CoRR*,
abs/2112.00114.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Ella Rabinovich, Raj Nath Patel, Shachar Mirkin, Lucia Specia, and Shuly Wintner. 2017. Personalized machine translation: Preserving original author traits. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1074–1084, Valencia, Spain. Association for Computational Linguistics.
Sudha Rao and Joel Tetreault. 2018. Dear sir or madam, may I introduce the GYAFC dataset: Corpus, benchmarks and metrics for formality style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 129–140, New Orleans, Louisiana. Association for Computational Linguistics.
Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT
evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics.
Emily Reif, Daphne Ippolito, Ann Yuan, Andy Coenen, Chris Callison-Burch, and Jason Wei. 2022. A recipe for arbitrary text style transfer with large language models. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics*
(Volume 2: Short Papers), pages 837–848, Dublin, Ireland. Association for Computational Linguistics.
Gabriele Sarti, Nils Feldhus, Ludwig Sickert, and Oskar van der Wal. 2023. Inseq: An interpretability toolkit for sequence generation models. *CoRR*,
abs/2302.13942.
Danielle Saunders and Bill Byrne. 2020. Reducing gender bias in neural machine translation as a domain adaptation problem. In *Proceedings of the 58th Annual Meeting of the Association for Computational* Linguistics, pages 7724–7736, Online. Association for Computational Linguistics.
Andrea Schioppa, David Vilar, Artem Sokolov, and Katja Filippova. 2021. Controlling machine translation for multiple attributes with additive interventions.
In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 6676–6696, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Controlling politeness in neural machine translation via side constraints. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 35–40, San Diego, California. Association for Computational Linguistics.
Mirac Suzgun, Luke Melas-Kyriazi, and Dan Jurafsky. 2022. Prompt-and-rerank: A method for zero-shot and few-shot arbitrary textual style transfer with small language models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2195–2222, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Eva Vanmassenhove, Christian Hardmeier, and Andy Way. 2018. Getting gender right in neural machine translation. In *Proceedings of the 2018 Conference* on Empirical Methods in Natural Language Processing, pages 3003–3008, Brussels, Belgium. Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems 30: Annual Conference on Neural* Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008.
David Vilar, Markus Freitag, Colin Cherry, Jiaming Luo, Viresh Ratnakar, and George F. Foster. 2022.
Prompting palm for translation: Assessing strategies and performance. *CoRR*, abs/2211.09102.
Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. MiniLM: Deep self-attention distillation for task-agnostic compression of pre-trained transformers. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Yifan Wang, Zewei Sun, Shanbo Cheng, Weiguo Zheng, and Mingxuan Wang. 2022. Controlling styles in neural machine translation with activation prompt.
CoRR, abs/2212.08909.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In NeurIPS.
Mengjie Zhao and Hinrich Schütze. 2021. Discrete and soft prompting for multilingual models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8547–8555, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Meng Zhou, Xin Li, Yue Jiang, and Lidong Bing. 2022.
Enhancing cross-lingual prompting with mask token augmentation. *CoRR*, abs/2202.07255.
Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych.
2010. A monolingual tree-based translation model for sentence simplification. In *Proceedings of the* 23rd International Conference on Computational Linguistics (Coling 2010), pages 1353–1361, Beijing, China. Coling 2010 Organizing Committee.
## A Prompt Templates
Formality-Controlled Translation Here is a sentence: {x} Here is its {l} translation written in a {a} style: {y} The translated sentence conveys a {a} style by using words such as '{w1}', '{w2}'.

Gender-Controlled Translation Here is a sentence: {x} Here is its {l} translation in which the person is {a}: {y} In the translation, the {a} gender of the person is made explicit by words such as '{w1}', '{w2}'.
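As a minimal illustration of how a filled prompt might be assembled (this is a sketch, not the released code; the example sentence, translation, and attribute words are invented), the formality template can be instantiated with simple string formatting:

```python
# Minimal sketch: assembling one formality-controlled in-context example.
# x, y, l, a, w1, w2 mirror the placeholders in the template above;
# the concrete values below are invented for illustration only.
FORMALITY_TEMPLATE = (
    "Here is a sentence: {x} "
    "Here is its {l} translation written in a {a} style: {y} "
    "The translated sentence conveys a {a} style by using words such as '{w1}', '{w2}'."
)

def fill_formality_prompt(x, y, l, a, attribute_words):
    w1, w2 = attribute_words[:2]
    return FORMALITY_TEMPLATE.format(x=x, y=y, l=l, a=a, w1=w1, w2=w2)

example = fill_formality_prompt(
    x="How are you doing?",
    y="Wie geht es Ihnen?",
    l="German",
    a="formal",
    attribute_words=["Ihnen", "geht"],
)
print(example)
```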
## B Language Code
| Code | Language | Code | Language | Code | Language |
|------|----------|------|----------|------|----------|
| AR | Arabic | DE | German | EN | English |
| ES | Spanish | FR | French | HI | Hindi |
| IT | Italian | JA | Japanese | NL | Dutch |
| RU | Russian | | | | |
## C Additional Details Of Dataset Splits And Pre-Trained Attribute Classifiers
We use the original train/test split provided by the COCOA-MT dataset. Each split contains the *telephony* and *topical_chat* domains. We use the topical_chat domain in our experiments. MT-GENEVAL contains a dev and a test split, and we use the dev split as training data for the classification model and the prompting experiments.

We finetune the mDeBERTa-v3-base model (https://huggingface.co/microsoft/mdeberta-v3-base) on the contrastive examples in the respective training sets to obtain the attribute classifiers. We finetune each classifier for 2 epochs with a batch size of 8, a learning rate of 2e-5, 500 warmup steps, and a maximum sequence length of 256, saving a checkpoint every 500 steps. We do not perform hyperparameter tuning; thus, no validation set is used.
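A hedged sketch of this finetuning setup using the Hugging Face `transformers` Trainer is given below (assuming a binary attribute, e.g. formal vs. informal; the dataset variables `train_texts` and `train_labels` are placeholders and not defined in the paper):

```python
# Sketch of the attribute-classifier finetuning described above
# (mDeBERTa-v3-base, 2 epochs, batch size 8, lr 2e-5, 500 warmup steps,
# max length 256, checkpoint every 500 steps).
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "microsoft/mdeberta-v3-base"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

class AttributeDataset(torch.utils.data.Dataset):
    """Wraps raw texts/labels; truncates to the 256-token limit used above."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, max_length=256, padding="max_length")
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

args = TrainingArguments(
    output_dir="attribute-classifier",
    num_train_epochs=2,
    per_device_train_batch_size=8,
    learning_rate=2e-5,
    warmup_steps=500,
    save_steps=500,
)
# train_texts / train_labels are assumed to come from the contrastive training split:
# trainer = Trainer(model=model, args=args,
#                   train_dataset=AttributeDataset(train_texts, train_labels))
# trainer.train()
```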
## D Selection Of Large Language Models
XGLM (Lin et al., 2022) is a 7.5B-parameter model trained on a balanced corpus containing 30 languages (excluding NL). It was shown to outperform much larger models such as GPT-3 on tasks related to machine translation and cross-lingual language understanding. We select it due to its broad linguistic coverage and its manageable size.
BLOOM (BigScience, 2022) is a model available in multiple sizes, trained on a curated corpus spanning 46 natural languages (and 13 programming languages). However, many of the test set languages are not part of its pre-training corpus (see Table 2). We evaluate two variants of the model (7.1B and 175B parameters) to assess how it is affected by a massive scaling in model parameters. The larger variant has a parameter count comparable to that of GPT-3, and it is presently the largest publicly available multilingual LLM.
GPT-NEOX (Black et al., 2022) is a 20B-parameter model trained on The Pile (Gao et al., 2021), a large English-centric corpus covering a broad range of domains. While the model saw mainly English data during pre-training and as such is not intended for multilingual usage, it exhibits interesting generalization performance for many of our target languages.
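For reference, all three models are available on the Hugging Face Hub and can be used for prompted generation roughly as sketched below. This is only an illustration with assumed public checkpoint names, not the authors' evaluation pipeline, and the larger checkpoints require substantial hardware:

```python
# Hedged sketch: loading one of the evaluated LLM families for prompted generation.
# Checkpoint identifiers are the public Hugging Face names and are assumptions
# about what was used; the prompt below is an invented toy example.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "facebook/xglm-7.5B"  # alternatives: "bigscience/bloom", "EleutherAI/gpt-neox-20b"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt = ("Here is a sentence: How are you? "
          "Here is its German translation written in a formal style:")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```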
## E Preliminary Evaluation Of Same-Language Prompting
We conduct preliminary evaluations aimed at reducing the number of experimental settings.
We perform formality-controlled translation using COCOA-MT, and evaluate LLMs by varying the number of in-context examples (i.e., 4-8-16-32, selected based on the feasible context length10).
Figure 2 presents results averaged across all four languages **seen** by BLOOM during its pretraining.11 Observations:
- RAMP generally outperforms base prompting
(i.e., random in-context examples and no attribute marking) across most LLMs and example settings for both BLEU and formality accuracy.
- BLEU and formality accuracy improve with increased model size and with the number of examples, until this number reaches 16.
Based on these results we move forward with the XGLM 7.5B and BLOOM 175B models and 16 examples.
## F Detailed Scores Of Aggregated Results
- Table 5: Detailed scores of same-language prompting on COCOA-MT (preliminary evaluation).12
- Table 6: Decomposed results of same-language prompting on COCOA-MT (full evaluation).
- Table 7: Decomposed results of same-language prompting on MT-GENEVAL (full evaluation).
- Table 8: Decomposed results of cross-lingual prompting on COCOA-MT.
- Table 9: Decomposed results of cross-lingual prompting on MT-GENEVAL.

12 ... evaluation. Early truncating leads to slightly lower scores in Table 5 than in Table 4.
## G Amended Details Of Cross-Lingual Prompting
We test the zero-shot setting using the leave-one-out strategy, i.e., we retrieve in-context examples from every language except the desired language of translation. We ensure that we retrieve an equal number of examples from all languages: the number of examples retrieved from each language is the total desired number of in-context examples divided by the number of training languages. In COCOA-MT, we retrieve 14 in-context examples from 7 languages. In MT-GENEVAL, we retrieve 8 in-context examples from 8 languages. We reduced the number of in-context examples in this setting to avoid out-of-memory errors with BLOOM 175B.
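A minimal sketch of this per-language budget is shown below; the language identifiers are illustrative placeholders (the actual language sets are those of the two datasets), and the retrieval function itself is not shown:

```python
# Split the in-context example budget evenly over the remaining languages
# after leaving out the target language of translation.
def per_language_budget(train_languages, target_language, total_examples):
    pool = [lang for lang in train_languages if lang != target_language]
    return {lang: total_examples // len(pool) for lang in pool}

# Illustrative 8-language pool; in the CoCoA-MT setting this yields
# 2 examples each from the 7 remaining languages (14 in total).
languages = [f"lang{i}" for i in range(8)]
print(per_language_budget(languages, "lang0", 14))
```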
## H Error Analysis Of Cross-Lingual Prompting
Table 10 shows two examples where RAMP performs significantly worse than the base model in terms of COMET. In the first example, having multiple in-context examples containing *"million"* led the model to mis-translate "billion" to *"million"*.
In the second example, we observe that the color-related in-context examples led the model to produce hallucinated output about clothing colors.
Repeated misleading in-context examples are less often observed on MT-GENEVAL and in the same-language setting because (1) COCOA-MT translates the same set of English sentences into different languages while MT-GENEVAL collects English sentences independently; and (2) there are no duplicated source (English) sentences for each language.
(Therefore, if RAMP retrieves duplicated English sentences as in Table 10, their reference translations are guaranteed to be in different languages.)
| BLEU | COMET | Sentential Accuracy | | | | | | | | | | | | | | |
|--------|---------|-----------------------|-------|------|-------|-------|--------|-------|-------|-------|--------|-------|-------|-------|-------|-------|
| 0 | 4 | 8 | 16 | 32 | 0 | 4 | 8 | 16 | 32 | 0 | 4 | 8 | 16 | 32 | | |
| BLOOM | base | 21.8 | 28.8 | 30.1 | 30.9 | 20.5 | 0.162 | 0.578 | 0.594 | 0.603 | -0.092 | 0.558 | 0.759 | 0.836 | 0.875 | 0.728 |
| 7.1B | RAMP | 30.9 | 32.3 | 32.9 | 24.6 | 0.597 | 0.613 | 0.621 | 0.150 | 0.842 | 0.887 | 0.907 | 0.840 | | | |
| XGLM | base | 11.8 | 25.3 | 26.6 | 28.3 | 29.2 | -0.534 | 0.443 | 0.449 | 0.499 | 0.517 | 0.524 | 0.764 | 0.841 | 0.854 | 0.893 |
| 7.5B | RAMP | 27.0 | 28.1 | 28.2 | 29.5 | 0.450 | 0.480 | 0.474 | 0.484 | 0.862 | 0.896 | 0.909 | 0.918 | | | |
| GPTNEOX 20B | base | 22.7 | 27.6 | 28.7 | 28.8 | 28.8 | 0.108 | 0.268 | 0.272 | 0.272 | 0.275 | 0.559 | 0.803 | 0.854 | 0.849 | 0.953 |
| RAMP | 29.0 | 29.8 | 30.0 | 29.2 | 0.284 | 0.310 | 0.307 | 0.284 | 0.854 | 0.886 | 0.889 | 0.874 | | | | |
| base | 29.9 | 37.7 | 38.5 | 39.1 | - | 0.476 | 0.731 | 0.744 | 0.750 | - | 0.612 | 0.898 | 0.949 | 0.953 | - | |
| BLOOM | RAMP | 39.2 | 39.75 | 40.3 | - | 0.740 | 0.744 | 0.761 | - | 0.946 | 0.967 | 0.967 | - | | | |
| 175B | | | | | | | | | | | | | | | | |
| ES | FR | HI | PT | | | | | | | |
|------------|-------|-------|---------|-------|--------|--------|-------|-------|-------|-------|
| F | I | F | I | F | I | F | I | AVG | | |
| BLEU | 30.1 | 33.0 | 30.7 | 28.8 | 18.5 | 16.9 | 35.7 | 35.4 | 28.6 | |
| COMET | 0.500 | 0.527 | 0.348 | 0.350 | 0.454 | 0.425 | 0.547 | 0.554 | 0.463 | |
| base | L-Acc | 0.524 | 0.966 | 0.977 | 0.633 | 0.976 | 0.744 | 0.931 | 0.928 | 0.835 |
| S-Acc | 0.507 | 0.958 | 0.953 | 0.840 | 0.963 | 0.748 | 0.888 | 0.912 | 0.846 | |
| BLEU | 31.0 | 33.2 | 29.4 | 27.4 | 19.2 | 18.6 | 35.7 | 35.5 | 28.7 | |
| COMET | 0.498 | 0.541 | 0.207 | 0.188 | 0.439 | 0.409 | 0.552 | 0.552 | 0.423 | |
| +mark | L-Acc | 0.728 | 0.972 | 0.985 | 0.923 | 0.986 | 0.860 | 0.960 | 0.947 | 0.920 |
| S-Acc | 0.697 | 0.958 | 0.963 | 0.917 | 0.983 | 0.838 | 0.927 | 0.937 | 0.902 | |
| XGLM 7.5B | BLEU | 32.8 | 33.5 | 32.7 | 31.0 | 21.0 | 20.3 | 34.2 | 34.4 | 30.0 |
| COMET | 0.480 | 0.511 | 0.314 | 0.302 | 0.502 | 0.491 | 0.488 | 0.522 | 0.451 | |
| RAMP | L-Acc | 0.842 | 0.963 | 0.989 | 0.926 | 0.993 | 0.885 | 0.961 | 0.943 | 0.938 |
| S-Acc | 0.803 | 0.952 | 0.975 | 0.922 | 0.98 | 0.873 | 0.928 | 0.948 | 0.923 | |
| BLEU | 44.3 | 45.0 | 42.9 | 41.0 | 27.1 | 25.8 | 47.3 | 45.7 | 39.9 | |
| COMET | 0.728 | 0.759 | 0.611 | 0.600 | 0.673 | 0.645 | 0.762 | 0.750 | 0.691 | |
| base | L-Acc | 0.795 | 0.96032 | 0.987 | 0.890 | 0.978 | 0.885 | 0.987 | 0.954 | 0.930 |
| S-Acc | 0.889 | 0.963 | 0.987 | 0.888 | 0.980 | 0.863 | 0.987 | 0.960 | 0.940 | |
| BLEU | 45.8 | 44.5 | 43.3 | 41.8 | 28.4 | 27.1 | 46.4 | 45.3 | 40.3 | |
| COMET | 0.726 | 0.745 | 0.610 | 0.594 | 0.677 | 0.659 | 0.751 | 0.745 | 0.688 | |
| +mark | L-Acc | 0.930 | 0.987 | 0.996 | 0.958 | 0.995 | 0.936 | 0.989 | 0.972 | 0.970 |
| S-Acc | 0.942 | 0.985 | 0.992 | 0.957 | 0.992 | 0.925 | 0.990 | 0.977 | 0.970 | |
| BLOOM 175B | BLEU | 46.4 | 46.2 | 43.9 | 42.9 | 30.8 | 29.2 | 48.8 | 47.4 | 41.9 |
| COMET | 0.718 | 0.759 | 0.611 | 0.610 | 0.721 | 0.713 | 0.782 | 0.771 | 0.711 | |
| RAMP | L-Acc | 0.956 | 0.984 | 0.998 | 0.952 | 0.991 | 0.947 | 0.993 | 0.962 | 0.973 |
| S-Acc | 0.957 | 0.982 | 0.995 | 0.945 | 0.993 | 0.935 | 0.990 | 0.967 | 0.970 | |
| BLEU | 44.4 | 43.7 | 43.4 | 37.8 | 19.1 | 17.0 | 53.0 | 49.9 | 38.5 | |
| COMET | 0.712 | 0.724 | 0.559 | 0.547 | -0.191 | -0.263 | 0.783 | 0.764 | 0.454 | |
| L-Acc | 0.697 | 0.598 | 0.822 | 0.377 | 0.869 | 0.449 | 0.972 | 0.744 | 0.691 | |
| S-Acc | 0.700 | 0.600 | 0.810 | 0.400 | 0.680 | 0.600 | 0.950 | 0.800 | 0.693 | |
| Adapted MT | | | | | | | | | | |
Table 6: Decomposed results of same-language prompting on COCOA-MT (full evaluation).
| AR | ES | FR | HI | PT | | | | | | | | |
|------------|-----------------------------------------------------------------------------------------|--------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| F | M | F | M | F | M | F | M | F | M | AVG | | |
| BLEU | 7.6 | 7.5 | 35.5 | 38.2 | 27.1 | 28.6 | 13.8 | 16.4 | 29.2 | 33.1 | 23.7 | |
| COMET | -0.040 | -0.012 | 0.694 | 0.738 | 0.509 | 0.555 | 0.304 | 0.332 | 0.661 | 0.713 | 0.445 | |
| base | L-Acc | 0.848 | 0.947 | 0.688 | 0.808 | 0.715 | 0.880 | 0.585 | 0.956 | 0.621 | 0.855 | 0.790 |
| S-Acc | 0.617 | 0.866 | 0.651 | 0.938 | 0.581 | 0.920 | 0.303 | 0.962 | 0.494 | 0.934 | 0.727 | |
| BLEU | 7.7 | 7.8 | 35.4 | 38.2 | 27.5 | 28.7 | 14.0 | 16.7 | 29.1 | 32.4 | 23.7 | |
| COMET | -0.038 | -0.020 | 0.704 | 0.735 | 0.508 | 0.556 | 0.300 | 0.317 | 0.663 | 0.714 | 0.444 | |
| +mark | L-Acc | 0.868 | 0.939 | 0.665 | 0.811 | 0.701 | 0.881 | 0.581 | 0.955 | 0.626 | 0.860 | 0.789 |
| S-Acc | 0.664 | 0.856 | 0.612 | 0.937 | 0.562 | 0.919 | 0.355 | 0.966 | 0.519 | 0.927 | 0.732 | |
| XGLM 7.5B | BLEU | 9.2 | 8.8 | 37.5 | 39.4 | 27.5 | 29.2 | 14.8 | 16.6 | 31.4 | 33.3 | 24.8 |
| COMET | 0.037 | 0.043 | 0.723 | 0.759 | 0.528 | 0.571 | 0.325 | 0.337 | 0.681 | 0.723 | 0.473 | |
| RAMP | L-Acc | 0.939 | 0.961 | 0.750 | 0.806 | 0.781 | 0.885 | 0.667 | 0.956 | 0.759 | 0.854 | 0.836 |
| S-Acc | 0.836 | 0.901 | 0.722 | 0.936 | 0.716 | 0.937 | 0.509 | 0.974 | 0.729 | 0.940 | 0.820 | |
| BLEU | 14.8 | 16.9 | 45.6 | 50.3 | 38.1 | 41.7 | 20.8 | 24.6 | 37.6 | 42.2 | 33.3 | |
| COMET | 0.282 | 0.395 | 0.837 | 0.892 | 0.719 | 0.770 | 0.599 | 0.629 | 0.807 | 0.861 | 0.679 | |
| base | L-Acc | 0.665 | 0.966 | 0.578 | 0.814 | 0.660 | 0.902 | 0.480 | 0.951 | 0.594 | 0.872 | 0.748 |
| S-Acc | 0.411 | 0.934 | 0.515 | 0.965 | 0.581 | 0.961 | 0.212 | 0.973 | 0.525 | 0.960 | 0.704 | |
| BLEU | 15.2 | 17.1 | 45.8 | 50.0 | 37.9 | 41.3 | 20.3 | 23.8 | 37.6 | 42.2 | 33.1 | |
| COMET | 0.294 | 0.387 | 0.843 | 0.887 | 0.712 | 0.767 | 0.576 | 0.606 | 0.807 | 0.861 | 0.674 | |
| +mark | L-Acc | 0.707 | 0.969 | 0.610 | 0.818 | 0.663 | 0.902 | 0.493 | 0.958 | 0.594 | 0.872 | 0.759 |
| S-Acc | 0.482 | 0.936 | 0.568 | 0.973 | 0.588 | 0.962 | 0.284 | 0.974 | 0.525 | 0.960 | 0.725 | |
| BLOOM 175B | BLEU | 16.7 | 17.6 | 47.9 | 50.2 | 39.5 | 41.8 | 22.2 | 25.0 | 39.3 | 42.7 | 34.3 |
| COMET | 0.358 | 0.407 | 0.860 | 0.895 | 0.734 | 0.787 | 0.632 | 0.646 | 0.810 | 0.858 | 0.699 | |
| RAMP | L-Acc | 0.841 | 0.972 | 0.709 | 0.809 | 0.765 | 0.906 | 0.633 | 0.953 | 0.701 | 0.886 | 0.817 |
| S-Acc | 0.721 | 0.940 | 0.707 | 0.964 | 0.732 | 0.971 | 0.518 | 0.973 | 0.683 | 0.972 | 0.818 | |
| BLEU | 23.3 | 24.4 | 53.2 | 54.2 | 44.2 | 46.4 | 29.3 | 32.3 | 43.4 | 45.7 | 35.9 | |
| COMET | 0.496 | 0.522 | 0.876 | 0.902 | 0.759 | 0.797 | 0.722 | 0.743 | 0.825 | 0.857 | 0.528 | |
| L-Acc | 0.910 | 0.981 | 0.932 | 0.921 | 0.919 | 0.956 | 0.762 | 0.837 | 0.922 | 0.961 | 0.853 | |
| S-Acc | 0.940 | 0.970 | 0.910 | 0.960 | 0.950 | 0.960 | 0.280 | 0.750 | 0.930 | 0.990 | 0.863 | |
| Adapted MT | | | | | | | | | | | | |

Table 7: Decomposed results of same-language prompting on MT-GENEVAL (full evaluation).
BLOOM 175B
| ES | FR | HI | PT | | | | | | |
|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| F | I | F | I | F | I | F | I | AVG | |
| BLEU | 40.9 | 46.3 | 33.7 | 32.0 | 21.8 | 18.9 | 33.9 | 29.0 | 32.1 |
| COMET | 0.785 | 0.823 | 0.611 | 0.615 | 0.409 | 0.436 | 0.772 | 0.705 | 0.644 |
| L-Acc | 0.211 | 0.990 | 0.899 | 0.656 | 0.944 | 0.123 | 0.704 | 0.010 | 0.567 |
| S-Acc | 0.200 | 0.930 | 0.880 | 0.715 | 0.940 | 0.100 | 0.975 | 0.025 | 0.596 |
| BLEU | 39.4 | 44.6 | 35.3 | 34.7 | 22.4 | 18.4 | 32.2 | 27.5 | 31.8 |
| COMET | 0.749 | 0.788 | 0.575 | 0.614 | 0.488 | 0.480 | 0.770 | 0.702 | 0.646 |
| L-Acc | 0.169 | 0.978 | 0.949 | 0.770 | 0.973 | 0.143 | 1.000 | 0.015 | 0.625 |
| S-Acc | 0.175 | 0.950 | 0.930 | 0.790 | 0.975 | 0.140 | 0.975 | 0.040 | 0.622 |
Table 8: Decomposed results of cross-lingual prompting on COCOA-MT.
BLOOM 175B
| AR | ES | FR | HI | PT | | | | | | | | |
|-----------------------------------------------------------------------|--------|-------|-------|-------|-------|-------|--------|--------|-------|-------|-------|-------|
| F | M | F | M | F | M | F | M | F | M | AVG | | |
| BLEU | 10.6 | 11.6 | 43.3 | 47.4 | 34.2 | 38.2 | 11.4 | 15.0 | 34.4 | 38.6 | 28.5 | |
| COMET | 0.071 | 0.138 | 0.805 | 0.857 | 0.648 | 0.719 | -0.135 | -0.003 | 0.766 | 0.822 | 0.469 | |
| base | L-Acc | 0.843 | 0.956 | 0.627 | 0.810 | 0.561 | 0.899 | 0.653 | 0.962 | 0.588 | 0.874 | 0.777 |
| S-Acc | 0.541 | 0.785 | 0.529 | 0.936 | 0.389 | 0.944 | 0.051 | 0.745 | 0.475 | 0.939 | 0.633 | |
| BLEU | 10.0 | 10.5 | 44.6 | 47.8 | 35.7 | 39.1 | 13.9 | 16.6 | 36.0 | 39.4 | 29.4 | |
| COMET | -0.044 | 0.020 | 0.818 | 0.860 | 0.686 | 0.739 | 0.139 | 0.212 | 0.779 | 0.816 | 0.502 | |
| RAMP | L-Acc | 0.845 | 0.956 | 0.660 | 0.815 | 0.608 | 0.900 | 0.574 | 0.961 | 0.680 | 0.882 | 0.788 |
| S-Acc | 0.479 | 0.703 | 0.605 | 0.953 | 0.497 | 0.956 | 0.105 | 0.870 | 0.613 | 0.951 | 0.673 | |
Table 9: Decomposed results of cross-lingual prompting on MT-GENEVAL.
| In-context examples (EN) | 1. | Maybe he should. What did you think about that guy findin 3 million dollars worth of old baseball cards in his grandpas attic. |
|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------|
| 2. | Yeah that makes sense, did you heard about the $10 million bunker he has? | |
| 3. | I have. I heard that he started a library in 1895 with 32,000 books in it. All from his personal collection. Can you imagine? | |
| 4. | Yeah that makes sense, did you heard about the $10 million bunker he has? | |
| 5. | Yeah that makes sense, did you heard about the $10 million bunker he has? | |
| 6. | Maybe he should. What did you think about that guy findin 3 million dollars worth of old baseball cards in his grandpas attic. | |
| 7. | That is really expensive I agree, did you watch the Lego Batman movie? | |
| 8. | Yeah that makes sense, did you heard about the $10 million bunker he has? | |
| 9. | That is crazy. Do you like Tom Hanks, he's grossed over 8.5 billion at the box office | |
| 10. That is really expensive I agree, did you watch the Lego Batman movie? 11. That is crazy. Do you like Tom Hanks, he's grossed over 8.5 billion at the box office 12. That is crazy. Do you like Tom Hanks, he's grossed over 8.5 billion at the box office 13. He doesnt look like he has 56 years! I heard he made 75000000 from Mission Impossible 3 14. Really? I guess he made a valuable contribution to science and also to medicine, did you hear of that species of flying snakes | | |
| Input (EN) | If you got it why not? He is worth over 20 billion dollars after all. | |
| Reference (ES) | Si lo tiene, ¿por qué no? Al fin y al cabo, vale más de 20 000 millones de dólares. | |
| RAMP (ES) | Si lo tienes, ¿por qué no? Es millonario después de todo. | |
| base (ES) | Si lo tienes, ¿por qué no? Él vale más de 20 mil millones de dólares después de todo. | |
| In-context examples (EN) | 1. | thats such a fun thing to have in your code of conduct. are you more of a dog person than cat person? |
| 2. | For sure lol, it was so nice talking with you, say hi to your cats for me! | |
| 3. | thats such a fun thing to have in your code of conduct. are you more of a dog person than cat person? | |
| 4. | What can't dogs do! I know they aren't color blind like we were taught when young. It was so nice chatting with you, tell yuki hi! | |
| 5. | For sure lol, it was so nice talking with you, say hi to your cats for me! | |
| 6. | Oh yeah. That sucker was mean let me tell you. I think I would have a rabid dog chase me, which by the way dogs are not color blind! | |
| 7. | Thanks you too! And watch out for dogs, they're not actually colorblind but they just don't see as many colors as a normal human can | |
| 8. | LOL That's funny! Google prefers dogs over cats! The wrote in their code of conduct that they are a dog company. It's been fun chatting with you! | |
| 9. | Such gorgeous dogs! Do you & your dogs live in a northern climate? | |
| 10. LOL That's funny! Google prefers dogs over cats! The wrote in their code of conduct that they are a dog company. It's been fun chatting with you! 11. thats such a fun thing to have in your code of conduct. are you more of a dog person than cat person? 12. Such gorgeous dogs! Do you & your dogs live in a northern climate? 13. haha me too! I heard that they wore clothes that were full of color. Surprised me because I always thought of them as wearing black and white. 14. For sure lol, it was so nice talking with you, say hi to your cats for me! | | |
| Input (EN) | lol I had an African grey that could say everything I said around the house. Like clean your room! as well any loud sounds. | |
| Reference (PT) | "lol Tinha um papagaio-cinzento que conseguia dizer tudo o que ouvia em casa. Tipo ""Limpe o seu quarto!"" e também quaisquer sons altos" | |
| RAMP (PT) | haha eu também! Eu ouvi dizer que eles usam roupas de todas as cores. Surpreendeu-me porque eu sempre pensei neles usando preto e branco. | |
| base (PT) | hahaha eu tinha um papagaio cinza africano que dizia tudo o que eu dizia em casa. Como limpar o quarto! Bem como qualquer som alto. | |
Table 10: Examples of COCOA-MT (formal) where RAMP performs worse than the base model in cross-lingual zero-shot setting.
Potentially problematic in-context examples leading to mistranslations or hallucinations are highlighted.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Left Blank.
✓ B1. Did you cite the creators of artifacts you used?
Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C ✓ **Did You Run Computational Experiments?** Left Blank.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
wen-hauptmann-2023-zero | Zero-Shot and Few-Shot Stance Detection on Varied Topics via Conditional Generation | https://aclanthology.org/2023.acl-short.127 | Zero-shot and few-shot stance detection identify the polarity of text with regard to a certain target when we have only limited or no training resources for the target. Previous work generally formulates the problem into a classification setting, ignoring the potential use of label text. In this paper, we instead utilize a conditional generation framework and formulate the problem as denoising from partially-filled templates, which can better utilize the semantics among input, label, and target texts. We further propose to jointly train an auxiliary task, target prediction, and to incorporate manually constructed incorrect samples with unlikelihood training to improve the representations for both target and label texts. We also verify the effectiveness of target-related Wikipedia knowledge with the generation framework. Experiments show that our proposed method significantly outperforms several strong baselines on VAST, and achieves new state-of-the-art performance. |

# Zero-Shot and Few-Shot Stance Detection on Varied Topics via Conditional Generation
Haoyang Wen and **Alexander G. Hauptmann**
Language Technologies Institute, Carnegie Mellon University
{hwen3, alex}@cs.cmu.edu
## Abstract
Zero-shot and few-shot stance detection identify the polarity of text with regard to a certain target when we have only limited or no training resources for the target. Previous work generally formulates the problem into a classification setting, ignoring the potential use of label text.
In this paper, we instead utilize a conditional generation framework and formulate the problem as denoising from partially-filled templates, which can better utilize the semantics among input, label, and target texts. We further propose to jointly train an auxiliary task, target prediction, and to incorporate manually constructed incorrect samples with unlikelihood training to improve the representations for both target and label texts. We also verify the effectiveness of target-related Wikipedia knowledge with the generation framework. Experiments show that our proposed method significantly outperforms several strong baselines on VAST, and achieves new state-of-the-art performance.1
## 1 Introduction
Stance detection is an important task that identifies the polarity of text with regard to certain target (Somasundaran and Wiebe, 2010; Augenstein et al., 2016; Mohammad et al., 2016; Sobhani et al., 2017; Allaway and McKeown, 2020), as shown in Table 1. It is crucial for understanding opinionated information expressed in natural language, and it can facilitate downstream social science analyses and applications (Zhang et al., 2017; Hanselowski et al., 2018; Jang and Allan, 2018).
Input Text: Airports and the roads on east nor west coast can not handle the present volume adequately as is. I did ride the vast trains in Europe, Japan and China and found them very comfortable and providing much better connections and more efficient.

Target: high-speed rail **Stance Label:** Supportive (Pro)

Table 1: A stance detection example from VAST.

1 The resource for reproducing this paper is available at https://github.com/wenhycs/ACL2023-Zero-Shot-and-Few-Shot-Stance-Detection-on-Varied-Topics-via-Conditional-Generation.

Previous work on stance detection mostly focuses on in-domain or leave-out targets with only a few target choices (Mohtarami et al., 2018; Xu et al., 2018; Graells-Garrido et al., 2020; Zhang et al., 2020; Liang et al., 2021; Allaway et al., 2021; Jiang et al., 2022). Although achieving promising performance, those models are limited in their ability to generalize to a wide variety of targets. Zero-shot and few-shot stance detection on varied topics (VAST; Allaway and McKeown, 2020), instead, provides a diverse set of targets for training and testing. Efforts in this direction include incorporating graph modeling (Lin et al., 2021), commonsense (Liu et al., 2021) or Wikipedia knowledge (He et al., 2022), and contrastive learning (Liang et al., 2022a,b).
These methods generally formulate the problem into a classification setting, which directly trains the label representation from scratch, and does not fully utilize the semantics from those label and target texts.
However, connections among text semantics from input text, target, and label can be beneficial for stance detection. In this paper, we propose a new model by formulating the problem as a denoising task from text templates via conditional generation. Compared to direct classification, we can further exploit the label and topic semantics by learning to decode natural language text containing the predicted label. The denoising scheme can also take advantage of a pretrained language model with a similar pretraining task formulation (Lewis et al., 2020). To improve the target representation, we propose to jointly train target prediction with stance detection, which takes the input text and desired stance label to output possible targets. We use unlikelihood training (Welleck et al., 2020), which suppresses the likelihood of manually constructed incorrect samples, to enhance label
representations. Recently, He et al. (2022) show the effectiveness of target-related Wikipedia knowledge for classification-based stance detection. We also follow the idea and incorporate target-related Wikipedia knowledge for our generation model.
We evaluate our method on VAST. Experimental results show that the conditional generation formulation can achieve better performance compared to classification, demonstrating the effectiveness of connecting input, target, and label semantics for stance detection. Further analysis illustrates the benefits of joint target prediction, unlikelihood training, and Wikipedia knowledge. Our model can achieve new state-of-the-art performance, outperforming several strong baselines from previous work.
## 2 Approach
In this section, we will discuss our approach to zero-shot and few-shot stance detection. We will first introduce the problem formulation, and then discuss our generation-based framework.
## 2.1 Problem Formulation
Stance detection aims to identify the polarity of an input text with regard to a specific target. Formally, a sample instance can be considered as a triple
(x, t, y), where x and t are two sequences of tokens, representing input text and target respectively.
y ∈ {supportive (pro), opposite (con), neutral}
represents the stance label.
A stance-detection model infers the stance label y given x and t with parameters θ:
$$f\left(x,t;\theta\right)=y.$$
In the zero-shot and few-shot stance detection dataset with varied targets (Allaway and McKeown, 2020), many target tokens only occur zero or a few times in the training set.
## 2.2 A Generation-Based Framework
Generation-based frameworks have demonstrated their effectiveness for problems beyond traditional generation tasks (Lewis and Fan, 2019; Yan et al.,
2021; Li et al., 2021; Raffel et al., 2022). We use a conditional generation model for this problem, where the condition is a partially-filled template with the input text. The template is two sentences describing the target and stance with a <stance>
placeholder for stance detection. An example of the partially-filled template with input text and output is shown in Figure 1.
Our base model is BART (Lewis et al., 2020),
an encoder-decoder language model pretrained with denoising objectives, which is similar to our generation-based formulation. The generation process can be considered as using the conditional probability to select a new token at each step given input and previously generated tokens:
$$p\left(\mathbf{o}\mid g\left(\mathbf{x},\mathbf{t}\right);\theta\right)=\prod_{i=1}^{\left|\mathbf{o}\right|}p\left(o_{i}\mid\mathbf{o}_{<i},g\left(\mathbf{x},\mathbf{t}\right);\theta\right),$$
where g (x, t) represents the transformation function that fills the target t into the template and forms the input sequence with the input text x. Specifically, g (x, t) will generate a combination of input text and template with special tokens: "<s>
template </s></s> x </s>". The template contains two sentences: "The target is <target>. The stance is <stance>". We will fill in the <target> placeholder with the actual target and keep the <stance> placeholder for the decoder to generate.
The generated output o is a fully-filled template, where both the target and stance placeholders are replaced by actual or predicted values. The model is trained by minimizing the negative log-likelihood over the whole generated sequence:
$$\mathcal{L}_{s}=-\log p\left(\boldsymbol{o}\mid g\left(\boldsymbol{x},\boldsymbol{t}\right);\theta\right)=-\sum_{i=1}^{\left|\boldsymbol{o}\right|}\log p\left(o_{i}\mid\boldsymbol{o}_{<i},g\left(\boldsymbol{x},\boldsymbol{t}\right);\theta\right).$$

The final predicted stance label is obtained with a
post-processing function that tries to find the polarity word after the prompt for stance.
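A minimal sketch (not the authors' released code) of the input construction g(x, t) and the post-processing step is given below; in practice the special tokens would be handled by the BART tokenizer, and the fallback label is an assumption:

```python
# Sketch of building the encoder input "<s> template </s></s> x </s>" and of
# extracting the polarity word that follows the stance prompt in the output.
def build_input(text: str, target: str) -> str:
    template = f"The target is {target}. The stance is <stance>."
    return f"<s> {template} </s></s> {text} </s>"

def extract_stance(generated: str) -> str:
    """Find the polarity word after the stance prompt in the decoded template."""
    tail = generated.split("The stance is", 1)[-1].lower()
    for label in ("supportive", "opposite", "neutral"):
        if label in tail:
            return label
    return "neutral"  # fallback when no polarity word is found (assumption)

print(build_input("I did ride the vast trains in Europe ...", "high-speed rail"))
```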
## 2.2.1 Joint Target Prediction

Table 2: Example input and output templates for stance detection, target prediction, and unlikelihood training.

Another advantage of using a generation-based architecture is that we can leverage auxiliary generative tasks to help train stance detection. We use target prediction, which is to infer the target tokens t given stance label y and input text x:
$$f_{t}\left(x,y;\theta\right)=t.$$
Target prediction can provide the connection of stance to target in an opposite direction of stance detection. It can also enhance the representation of target tokens by learning to decode them.
The input sequence for target prediction is similar to that of stance detection, consisting of a partially-filled template and the input text. The template used for joint target prediction is slightly different from the one used for stance detection: we switch the positions of the two sentences so that the stance information appears first. We fill in the actual stance text in the input sequence and leave the
<target> placeholder for the decoder to generate.
## 2.2.2 Unlikelihood Training
The log-likelihood objective optimizes the likelihood over the entire distribution. However, in our task, especially when generating the stance labels, we should focus specifically on several candidate tokens. Therefore, we introduce unlikelihood training (Welleck et al., 2020), where we use unlikely tokens, *i.e.*, incorrect stance predictions, to replace the ground-truth sequence and optimize with the unlikelihood loss for the replaced tokens.
Specifically, for an output sequence o, we assume o_k is the stance label and replace it with an incorrect stance prediction o′_k while keeping the other tokens to form the incorrect sequence o′. The combination of likelihood and unlikelihood will be:
$$\mathcal{L}_{u}=\log p\left(o_{k}^{\prime}\mid\boldsymbol{o}_{<k}^{\prime},g\left(\boldsymbol{x},\boldsymbol{t}\right);\theta\right)-\sum_{i\neq k}\log p\left(o_{i}^{\prime}\mid\boldsymbol{o}_{<i}^{\prime},g\left(\boldsymbol{x},\boldsymbol{t}\right);\theta\right),$$
For each ground-truth sequence, we can construct two sequences for unlikelihood training with the other two incorrect stance labels. Table 2 illustrates the examples for different input and output templates for stance prediction, target prediction, and unlikelihood training.
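A hedged PyTorch sketch of the loss above is shown below: token-level log-probabilities of the corrupted output o′ are summed with a flipped sign at the stance position k. Tensor shapes and the decoder interface are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def unlikelihood_loss(logits, corrupted_ids, stance_position):
    """logits: (seq_len, vocab) decoder logits for o'; corrupted_ids: (seq_len,) token ids of o'."""
    log_probs = F.log_softmax(logits, dim=-1)
    token_log_probs = log_probs.gather(-1, corrupted_ids.unsqueeze(-1)).squeeze(-1)
    sign = -torch.ones_like(token_log_probs)
    sign[stance_position] = 1.0  # +log p for the incorrect stance token, -log p elsewhere
    return (sign * token_log_probs).sum()

# Toy usage with random logits for a 6-token corrupted sequence:
logits = torch.randn(6, 100)
corrupted_ids = torch.randint(0, 100, (6,))
print(unlikelihood_loss(logits, corrupted_ids, stance_position=3))
```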
## 2.2.3 Incorporating Wikipedia Knowledge
He et al. (2022) collect relevant Wikipedia snippets for each target and propose to incorporate Wikipedia knowledge to enhance target representations for BERT-based (Devlin et al., 2019) classification, which demonstrates a significant improvement. We follow He et al. (2022) and incorporate Wikipedia knowledge into our generation-based method. Specifically, we append Wikipedia snippets to the end of our input sequence: "<s>
template </s></s> x </s></s> Wikipedia snippet </s>". We use the new input sequence to perform both training and inference while the output sequences remain as the fully-filled templates.
## 2.2.4 Training Objective
The final training objective is the combination of loss functions from stance detection, target prediction, and unlikelihood training:
$${\mathcal{L}}={\mathcal{L}}_{s}+\alpha_{t}{\mathcal{L}}_{t}+\alpha_{u}{\mathcal{L}}_{u},$$
where Lt represents the log-likelihood loss over the output template for target prediction, and αt and αu are used to balance the different loss functions.
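As a one-line sketch of how the three terms might be combined per training batch (the component losses are assumed to be scalar tensors or floats computed as described above):

```python
def combined_loss(stance_loss, target_loss, unlikelihood_loss_value,
                  alpha_t=1.0, alpha_u=0.5):  # alpha values reported in Section 3.2
    return stance_loss + alpha_t * target_loss + alpha_u * unlikelihood_loss_value
```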
## 3 Experiments

## 3.1 Data
VAST contains 18,548 examples from the New York Times "Room for Debate" section with 5,630 different targets for zero-shot and few-shot stance detection. The original examples of VAST are collected from Habernal et al. (2018) under the Apache-2.0 license2. We use Wikipedia knowledge collected by He et al. (2022), which uses an API to crawl Wikipedia pages for targets. Wikipedia content can be used under the Creative Commons Attribution Share-Alike license (CC-BY-SA)3. We use the same training/development/test split as Allaway and McKeown (2020).
## 3.2 Experimental Setup
| Model | Precision | Recall | F1 |
|---------------------|-------------|----------|------|
| BERT Classification | 72.6 | 72.0 | 72.1 |
| BART w/ Template | 75.7 | 75.1 | 75.3 |
| + Topic Prediction | 76.0 | 75.6 | 75.7 |
| + Unlikelihood | 76.4 | 75.9 | 75.9 |
| + Wikipedia | 78.0 | 77.3 | 77.4 |
| Model | Zero-Shot | Few-Shot | Overall |
|-----------|-------------|------------|-----------|
| TGA-Net | 66.6 | 66.3 | 66.5 |
| BERT-GCN | 68.6 | 69.7 | 69.2 |
| CKE-Net | 70.2 | 70.1 | 70.1 |
| WS-BERT | 75.3 | 73.6 | 74.5 |
| Our Model | 76.4 | 78.0 | 77.3 |
We compare our model with several existing systems, including 1) TGA-Net (Allaway and McKeown, 2020); 2) BERT-GCN (Lin et al., 2021); 3) CKE-Net (Liu et al., 2021); 4) WS-BERT (He et al., 2022). Following their setup, we use macro-average F1 as the evaluation metric, and we report performance on the zero-shot and few-shot subsets of the test set as well as on the overall test set.
We use BART-base4 as our base model, whose number of parameters is roughly comparable to the BERT-base5 baselines. Our best model is optimized with AdamW (Loshchilov and Hutter, 2019) for 30 epochs with a learning rate of 1e-5. We use a linear scheduler with a warmup proportion of 0.1, and the training batch size is 32. We use greedy search during inference. We report performance on the development and test sets using results averaged over 5 different random seeds. Test results are reported based on the best overall F1 performance on the development set. αt is set to 1 and αu is set to 0.5. Our final model takes about 5 hours to train on one Nvidia RTX 3090 GPU.
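A sketch of this optimization setup with Hugging Face `transformers` is shown below; the checkpoint name and `steps_per_epoch` are assumptions, and the data loading and training loop are omitted:

```python
# AdamW with lr 1e-5, linear schedule with 10% warmup, batch size 32, 30 epochs,
# and greedy decoding at inference time, as described in Section 3.2.
from torch.optim import AdamW
from transformers import BartForConditionalGeneration, get_linear_schedule_with_warmup

model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
num_epochs, batch_size, steps_per_epoch = 30, 32, 1000  # steps_per_epoch is a placeholder
total_steps = num_epochs * steps_per_epoch

optimizer = AdamW(model.parameters(), lr=1e-5)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.1 * total_steps),
    num_training_steps=total_steps,
)
# Greedy decoding at inference time:
# outputs = model.generate(input_ids, num_beams=1, do_sample=False)
```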
Figure 2: t-SNE visualization of intermediate representations from our model and the BERT classification model on the development set.
## 3.3 Results

## 3.3.1 Comparing With Model Variants
We first compare some of our model variants to illustrate the effectiveness of the proposed components. The results are shown in Table 3. Comparing BERT-based classification (BERT Classification) with BART-based denoising generation from templates (BART w/ Template), we find that adopting the generation framework significantly improves model performance. Our proposed topic prediction and unlikelihood training further boost performance.
The final model, which adds knowledge from Wikipedia, verifies the effectiveness of Wikipedia knowledge for stance detection with a generative framework.
## 3.3.2 Comparing With Existing Systems
Our overall performance is shown in Table 4. Our method significantly outperforms the previous baselines, indicating the effectiveness of our proposed generation framework for zero-shot and few-shot stance detection with varied topics.
## 3.4 Qualitative Analysis
Figure 2 shows the t-SNE (van der Maaten and Hinton, 2008) visualization of intermediate representations before the classification layer from our model and from the BERT classification model on the development set. We use random initialization with a perplexity of 50 for visualization and color each visualized instance with its corresponding stance label. The visualization of the BERT classification model shows small clusters with mixed labels, while instances from our generation method are clustered by label, with neutral labels at the top and supportive labels generally at the bottom.
## 4 Related Work
Zero-shot and few-shot stance detection. Zero-shot and few-shot stance detection focus on detecting stances for unseen or low-resource targets. Allaway and McKeown (2020) construct a dataset with varied topics that can be used to test stance detection under zero-shot and few-shot settings. Previous efforts mostly focus on modeling targets, documents, or their connections. Allaway and McKeown (2020) obtain generalized topic representations through clustering. Liu et al. (2021) use a commonsense knowledge graph to enhance the connection between target and document. Liang et al. (2022a,b) use contrastive learning to learn target features. He et al. (2022) incorporate Wikipedia knowledge to enhance target representations. In contrast, our work uses a conditional generation framework to build connections among input, target, and label text semantics.
Text processing via conditional generation.
Our work is also motivated by the recent success of tackling text processing problems as conditional generation (Lewis et al., 2020; Raffel et al., 2022).
In addition to the conventional text generation problems, conditional generation frameworks are effectively applied in information extraction (Li et al.,
2021), question answering (Lewis and Fan, 2019; Raffel et al., 2022) and sentiment analysis (Yan et al., 2021). In our work, we further explore stance detection via conditional generation.
## 5 Conclusion
In this paper, we propose a generation-based framework for zero-shot and few-shot stance detection that generates stance labels from pre-defined templates. We further propose an auxiliary task, joint target prediction, which takes the stance label and input text to generate targets, and unlikelihood training on manually constructed incorrect generation outputs. Combined with target-related Wikipedia knowledge from He et al. (2022), our model achieves new state-of-the-art performance on VAST.
## Limitations
Because of the nature of our framework design, our work requires a diverse set of targets during training, which is important for target prediction and therefore for the stance detection method. It is difficult to apply to other stance detection datasets with limited training resources with regard to targets, such as Conforti et al. (2020) and Mohammad et al. (2016). Besides, the model is trained on a news-related debate corpus, so it may need further domain adaptation when applied to other domains such as social media.
We use an auto-regressive generation framework, which also requires extra inference time to generate the whole output sequence compared to a classification model. We encourage readers to compare it with classification methods for efficiency when applying it in a time-sensitive scenario.
## References
Emily Allaway and Kathleen McKeown. 2020. Zero-Shot Stance Detection: A Dataset and Model using Generalized Topic Representations. In *Proceedings* of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8913–
8931, Online. Association for Computational Linguistics.
Emily Allaway, Malavika Srikanth, and Kathleen McKeown. 2021. Adversarial learning for zero-shot stance detection on social media. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4756–4767, Online.
Association for Computational Linguistics.
Isabelle Augenstein, Tim Rocktäschel, Andreas Vlachos, and Kalina Bontcheva. 2016. Stance detection with bidirectional conditional encoding. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 876–885, Austin, Texas. Association for Computational Linguistics.
Costanza Conforti, Jakob Berndt, Mohammad Taher Pilehvar, Chryssi Giannitsarou, Flavio Toxvaerd, and Nigel Collier. 2020. Will-they-won't-they: A very large dataset for stance detection on Twitter. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 1715–
1724, Online. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Eduardo Graells-Garrido, Ricardo Baeza-Yates, and Mounia Lalmas. 2020. Representativeness of abortion legislation debate on Twitter: A case study in Argentina and Chile. In *Companion Proceedings of* the Web Conference 2020, WWW '20, page 765–774, New York, NY, USA. Association for Computing Machinery.
Ivan Habernal, Henning Wachsmuth, Iryna Gurevych, and Benno Stein. 2018. The argument reasoning comprehension task: Identification and reconstruction of implicit warrants. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers),
pages 1930–1940, New Orleans, Louisiana. Association for Computational Linguistics.
Andreas Hanselowski, Avinesh PVS, Benjamin Schiller, Felix Caspelherr, Debanjan Chaudhuri, Christian M.
Meyer, and Iryna Gurevych. 2018. A retrospective analysis of the fake news challenge stance-detection task. In *Proceedings of the 27th International Conference on Computational Linguistics*, pages 1859–
1874, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Zihao He, Negar Mokhberian, and Kristina Lerman.
2022. Infusing knowledge from Wikipedia to enhance stance detection. In Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis, pages 71–77, Dublin, Ireland. Association for Computational Linguistics.
Myungha Jang and James Allan. 2018. Explaining controversy on social media via stance summarization.
In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, SIGIR '18, page 1221–1224, New York, NY, USA.
Association for Computing Machinery.
Yan Jiang, Jinhua Gao, Huawei Shen, and Xueqi Cheng.
2022. Few-shot stance detection via target-aware prompt distillation. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '22, page 837–847, New York, NY, USA. Association for Computing Machinery.
Mike Lewis and Angela Fan. 2019. Generative question answering: Learning to answer the whole question.
In *7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May* 6-9, 2019. OpenReview.net.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Sha Li, Heng Ji, and Jiawei Han. 2021. Document-level event argument extraction by conditional generation.
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 894–908, Online. Association for Computational Linguistics.
Bin Liang, Zixiao Chen, Lin Gui, Yulan He, Min Yang, and Ruifeng Xu. 2022a. Zero-shot stance detection via contrastive learning. In Proceedings of the ACM
Web Conference 2022, WWW '22, page 2738–2747, New York, NY, USA. Association for Computing Machinery.
Bin Liang, Yonghao Fu, Lin Gui, Min Yang, Jiachen Du, Yulan He, and Ruifeng Xu. 2021. Target-adaptive graph for cross-target stance detection. In *Proceedings of the Web Conference 2021*, WWW '21, page 3453–3464, New York, NY, USA. Association for Computing Machinery.
Bin Liang, Qinglin Zhu, Xiang Li, Min Yang, Lin Gui, Yulan He, and Ruifeng Xu. 2022b. JointCL: A joint contrastive learning framework for zero-shot stance detection. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics
(Volume 1: Long Papers), pages 81–91, Dublin, Ireland. Association for Computational Linguistics.
Yuxiao Lin, Yuxian Meng, Xiaofei Sun, Qinghong Han, Kun Kuang, Jiwei Li, and Fei Wu. 2021. BertGCN:
Transductive text classification by combining GNN
and BERT. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 1456–1462, Online. Association for Computational Linguistics.
Rui Liu, Zheng Lin, Yutong Tan, and Weiping Wang.
2021. Enhancing zero-shot and few-shot stance detection with commonsense knowledge graph. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3152–3157, Online. Association for Computational Linguistics.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *International Conference on Learning Representations*.
Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, and Colin Cherry. 2016.
SemEval-2016 task 6: Detecting stance in tweets.
In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 31–
41, San Diego, California. Association for Computational Linguistics.
Mitra Mohtarami, Ramy Baly, James Glass, Preslav Nakov, Lluís Màrquez, and Alessandro Moschitti. 2018. Automatic stance detection using end-to-end memory networks. In *Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)*, pages 767–776, New Orleans, Louisiana. Association for Computational Linguistics.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2022. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(1).
Parinaz Sobhani, Diana Inkpen, and Xiaodan Zhu. 2017.
A dataset for multi-target stance detection. In *Proceedings of the 15th Conference of the European* Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 551–557, Valencia, Spain. Association for Computational Linguistics.
Swapna Somasundaran and Janyce Wiebe. 2010. Recognizing stances in ideological on-line debates. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, pages 116–124, Los Angeles, CA. Association for Computational Linguistics.
Laurens van der Maaten and Geoffrey Hinton. 2008.
Visualizing data using t-SNE. *Journal of Machine Learning Research*, 9(86):2579–2605.
Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. 2020. Neural text generation with unlikelihood training. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Chang Xu, Cécile Paris, Surya Nepal, and Ross Sparks.
2018. Cross-target stance classification with selfattention networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 778–783, Melbourne, Australia. Association for Computational Linguistics.
Hang Yan, Junqi Dai, Tuo Ji, Xipeng Qiu, and Zheng Zhang. 2021. A unified generative framework for aspect-based sentiment analysis. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 2416–2429, Online.
Association for Computational Linguistics.
Rong Zhang, Qifei Zhou, Bo An, Weiping Li, Tong Mo, and Bo Wu. 2020. Enhancing neural models with vulnerability via adversarial attack. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 1133–1146, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Shaodian Zhang, Lin Qiu, Frank Chen, Weinan Zhang, Yong Yu, and Noémie Elhadad. 2017. We make choices we think are going to save us: Debate and stance identification for online breast cancer CAM discussions. In *Proceedings of the 26th International* Conference on World Wide Web Companion, WWW
'17 Companion, page 1073–1081, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations
✓ A2. Did you discuss any potential risks of your work?
Limitations
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract, Introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Introduction, Section 3.1 Data
✓ B1. Did you cite the creators of artifacts you used?
Introduction, Section 3.1 Data
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section 3.1 Data
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 3.1 Data, Section 3.2 Experimental Setup
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We use an existing resource and detail of the data is discussed and introduced in their own published paper.
✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
We use an existing resource and detail of the data is discussed and introduced in their own published paper.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 3.1 Data
## C ✓ **Did you run computational experiments?**
Left blank.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 3.2 Experimental Setup
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3.2 Experimental Setup
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 3.2 Experimental Setup, Table 1
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 3.2 Experimental Setup

D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
juhng-etal-2023-discourse | Discourse-Level Representations can Improve Prediction of Degree of Anxiety | https://aclanthology.org/2023.acl-short.128 | Anxiety disorders are the most common of mental illnesses, but relatively little is known about how to detect them from language. The primary clinical manifestation of anxiety is worry associated cognitive distortions, which are likely expressed at the discourse-level of semantics. Here, we investigate the development of a modern linguistic assessment for degree of anxiety, specifically evaluating the utility of discourse-level information in addition to lexical-level large language model embeddings. We find that a combined lexico-discourse model outperforms models based solely on state-of-the-art contextual embeddings (RoBERTa), with discourse-level representations derived from Sentence-BERT and DiscRE both providing additional predictive power not captured by lexical-level representations. Interpreting the model, we find that discourse patterns of causal explanations, among others, were used significantly more by those scoring high in anxiety, dovetailing with psychological literature. | # Discourse-Level Representations Can Improve Prediction Of Degree Of Anxiety
Swanie Juhng1, Matthew Matero1, Vasudha Varadarajan1, Johannes C. Eichstaedt2, Adithya V. Ganesan1, and H. Andrew Schwartz1
1Department of Computer Science, Stony Brook University 2Department of Psychology, Stanford University
{sjuhng,mmatero,vvaradarajan,avirinchipur,has}@cs.stonybrook.edu [email protected]
## Abstract
Anxiety disorders are the most common of mental illnesses, but relatively little is known about how to detect them from language. The primary clinical manifestation of anxiety is worry associated cognitive distortions, which are likely expressed at the discourse-level of semantics.
Here, we investigate the development of a modern linguistic assessment for degree of anxiety, specifically evaluating the utility of discourselevel information in addition to lexical-level large language model embeddings. We find that a combined *lexico-discourse* model outperforms models based solely on state-of-theart contextual embeddings (RoBERTa), with discourse-level representations derived from Sentence-BERT and DiscRE both providing additional predictive power not captured by lexical-level representations. Interpreting the model, we find that discourse patterns of causal explanations, among others, were used significantly more by those scoring high in anxiety, dovetailing with psychological literature.
## 1 Introduction
Anxiety disorders are one of the most prevalent mental health conditions, affecting an estimated 284 million people worldwide (Roth, 2018) and with an estimated financial burden of $46.6 billion annually in the U.S. alone (DeVane et al., 2005).
This puts the impact of anxiety on par with depression (Guntuku et al., 2017; Mahdy et al., 2020), yet much less work in the NLP community has focused on detecting anxiety disorders as has been done for depressive disorders.
One of the key characteristics of anxiety disorders is cognitive distortion (Muran and Motta, 1993; Maric et al., 2011), or an illogical reasoning in dealing with life events (Kaplan et al., 2017).
The primary window into such distortions is language, including one's own explanatory style - the way they reason about the occurrence of events
(Peterson, 1991).
Explanatory style may not be well represented by single words or words in context (i.e., *lexical-level* features). For example, consider the *catastrophizing* statement (i.e., worrying that a bad event will lead to an extreme outcome) "*I'm sick. Now* I'm going to miss my classes and fail them all."
(Hazlett-Stevens and Craske, 2003). To see that
"*fail them all*" is catastrophizing the event *"I'm sick"*
requires understanding that the latter is a causal explanation for the expected falling behind. This is discourse-level information - semantics at the level of complete clausal statements or relating statements to each other (discourse relations) (Pitler et al., 2008).
Here, we propose a language-based assessment of anxiety utilizing both lexical-level and discourselevel representations. We first compare models that leverage discourse-level representations alone. We then propose a dual lexical- and discourse-level
(*lexico-discourse*) approach and evaluate whether the combination of both types of representations leads to improved performance. Finally, we explore specific types of discourse relations that are thought to be associated with cognitive distortions, and look at their association with anxiety in order to illuminate what our lexico-discourse approach can pick up on at the discourse semantics level.
Our **contributions** include: (1) proposal of a novel user-level language assessment model that integrates both discourse-level and lexical-level representations; (2) empirical exploration of different discourse and lexical-level contextual embeddings and their value towards predicting the degree of anxiety as continuous values; (3) examination of the association between a person's anxiety and their discourse relation usage, finding that causal explanations are the most insightful for prediction; and
(4) finding that to the best of our knowledge, this is the first model of anxiety from language specifically fit against a screening survey (rather than users self-declaring having experienced anxiety symptoms, or annotators perceiving the presence of the condition).
## 2 Related Work
Anxiety is characterized by disruptive feelings of uncertainty, dread, and fearfulness, and is generally defined as anticipation of future threats (Cohen et al., 2016). Researchers have recently been turning to social media language as a potential alternative source for mental health assessment, investigating, e.g., depression (Schwartz et al., 2014; Bathina et al., 2021; Kelley and Gillan, 2022), PTSD (Coppersmith et al., 2014; Benton et al., 2017b; Son et al., 2021), and suicide risk (Coppersmith et al.,
2016; Mohammadi et al., 2019; Matero et al., 2019).
Such an approach was also utilized in analyzing anxiety (Shen and Rudzicz, 2017; Tyshchenko, 2018; Guntuku et al., 2019; Budiyanto et al., 2019; Owen et al., 2020; Saifullah et al., 2021). Work towards this goal includes Shen and Rudzicz (2017), who attempted to classify Reddit posts into binary levels of anxiety using lexical features, and Guntuku et al. (2019), who explored Ngram associations with anxiety in Twitter users. Few have attempted to capture discourse-level information in such systems.
While some have focused on cognitive distortions in patient-therapist interactions (Simms et al.,
2017; Burger et al., 2021; Shreevastava and Foltz, 2021), none have attempted to combine discourse-level information with more standard lexical-level embeddings in studying ecological (i.e., everyday, happening in the course of life) online language patterns. For mental health tasks, state-of-the-art systems have primarily relied on contextual word-level information from transformers like BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) (Mohammadi et al., 2019; Matero et al., 2019). Furthermore, Ganesan et al. (2021) improved mental health task performance by reducing the dimensions of contextual embeddings to approximately 1/12 of the original. Here, we seek to establish the role of the contextual embeddings as well as propose and evaluate a model that integrates discourse-level modeling with contextual embeddings, motivated by the ability of discourse relations to capture cognitive distortions.
## 3 Method
Discourse-Level Embeddings. We consider a variety of discourse-level embeddings, ranging from those capturing phrases or sentences to one capturing relations between clauses. *Sentence-BERT* (Reimers and Gurevych, 2019) is a variant of BERT that captures a whole sentence by optimizing for semantic similarity using siamese and triplet networks. *Phrase-BERT* (Wang et al., 2021)
attempts to capture shorter phrasal semantics using contrastive learning with machine-generated paraphrases and mined phrases. Finally, *DiscRE* (Son et al., 2022) captures representations of the *relationship* between discourse units (i.e., clauses rooted with a main verb) using a weakly supervised, multitask approach over bidirectional sequence models.
Lexical Embeddings. Amongst potential options for state-of-the-art auto-encoder language models, we consider BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019). Such selection is supported by empirical evidence; these two models have previously been found to result in top performance in related mental health assessment tasks
(Matero et al., 2019; Ganesan et al., 2021). Beyond the fact that these models have led to state-of-the-art performance in language understanding tasks, they are also known to capture *some* discourse information (Kishimoto et al., 2020; Liu et al., 2021).
Thus, they form a very high benchmark to try to out-predict with discourse-level embeddings.
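As a rough illustration (not the authors' released pipeline), message-level lexical representations of this kind can be derived with the Hugging Face transformers library by mean-pooling token states from a chosen hidden layer; the specific checkpoint name and pooling below are assumptions consistent with the description above.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModel.from_pretrained("roberta-large", output_hidden_states=True)

def lexical_message_embedding(text, layer=-2):
    # hidden_states[-2] is the second-to-last transformer layer (L23 for a 24-layer model).
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).hidden_states[layer]  # [1, seq_len, hidden_dim]
    return hidden.mean(dim=1).squeeze(0)               # mean-pool over tokens
```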
Overall Model. The architecture of our prediction models is laid out in Figure 1. Each model consists of a discourse submodel and lexical submodel, and the two following equations demonstrate the aggregation of representations in each submodel.
d, m, u each denotes discourse unit, message, and user.
The discourse submodel takes discourse units parsed from a message1 to derive discourse-level embeddings, denoted as $e^d_m$, which are aggregated into a message-level embedding $e^m_u$ (Eq. 1) and then into a user-level embedding $e_u$ (Eq. 2):
$$e_{u}^{m}=\mathrm{compose}_{d\in m}(e_{m}^{d})\qquad\qquad(1)$$ $$e_{u}=\mathrm{compose}_{m\in u}(e_{u}^{m})\qquad\qquad(2)$$
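The composition in Eqs. 1 and 2 can be sketched as follows (a minimal illustration, not the released code), assuming discourse-unit embeddings have already been produced, e.g., by Sentence-BERT or DiscRE.

```python
import numpy as np

def compose(embeddings):
    # Mean composition; min and max are the alternatives mentioned below.
    return np.mean(np.stack(embeddings), axis=0)

def user_discourse_embedding(user_messages):
    # user_messages: list of messages, each a list of discourse-unit
    # embedding vectors (np.ndarray of shape [emb_dim]).
    message_embs = [compose(units) for units in user_messages]  # Eq. 1
    return compose(message_embs)                                # Eq. 2
```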
The lexical submodel takes the embeddings derived from the word-based transformer models as message-level representations and aggregates them to user-level. Compose is the embedding aggregation function at each step, which can be mean, min, or max. Here we follow the practice from Ganesan et al. (2021) and Matero et al. (2021) and use the mean.2 Finally, the concatenation of the representations acts as input to our feed-forward network (FFN) that predicts the degree of anxiety.3

1Discourse units are sentences for Sentence-BERT and clauses for DiscRE and Phrase-BERT.

![2_image_0.png](2_image_0.png)

Theoretically Relevant Discourse Dimensions.
Previous work has suggested open vocabulary (latent) embeddings of discourse relations (i.e., DiscRE, Sentence-BERT) are more powerful than explicitly defined relations (Son et al., 2022), thus we utilize models that score specific type of relations
(e.g., causal explanation) as a means to *explain* what the embeddings and models are able to capture. We evaluate four discourse relations relevant to anxiety. *Causal explanations* are a statement of why an event happened. Using the model of Son et al. (2018) with F1 of approximately .87 over social media, we computed the percentage of the messages written by a user that contain causal explanation. *Counterfactuals* imagine what could have happened as an alternative to actual events.
Using the model of Son et al. (2017), we calculate the proportion of the messages from each user that communicates counterfactual thoughts. Finally, *dissonance* refers to situations in which one's stated behavior or belief contradicts a prior belief; consonance is its opposite concept. We use the RoBERTa-based topic-independent classifier that evaluates whether a pair of messages composes dissonance (Varadarajan et al., 2022, 2023). Instead of assessing all pairs, we take two temporally adjacent messages (maximum distance of 2) to reduce computation time.

2We also experimented with min, max, and combinations of the three as well as alternative compositions but found no benefit. Given we are focused primarily on integrating discourse-level information, we suggest future work explore more sophisticated aggregation and compositional methods.

3Using a single hidden layer of size 32 with *tanh* activation trained with a learning rate of 5e-3 and batch size of 500 users; Code available here: https://github.com/swaniejuhng/lexico-discourse/
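To make the prediction head concrete, the following is a minimal PyTorch sketch (not the released implementation) of an FFN matching the configuration in footnote 3: it concatenates the user-level lexical and discourse-level representations and regresses the degree of anxiety.

```python
import torch
import torch.nn as nn

class LexicoDiscourseRegressor(nn.Module):
    def __init__(self, lexical_dim, discourse_dims, hidden=32):
        super().__init__()
        in_dim = lexical_dim + sum(discourse_dims)  # e.g., RoBERTa + Sentence-BERT + DiscRE
        self.ffn = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, lexical_emb, discourse_embs):
        # lexical_emb: [batch, lexical_dim]; discourse_embs: list of [batch, dim_i] tensors.
        x = torch.cat([lexical_emb, *discourse_embs], dim=-1)
        return self.ffn(x).squeeze(-1)  # predicted degree of anxiety
```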
## 4 Dataset
Our primary dataset comprises 12,489 Facebook users who took a personality questionnaire, including assessment of anxiety, and consented to share their status updates for academic research (Stillwell and Kosinski, 2012). The anxiety assessment consists of the anxiety facet of the neuroticism factor
(Johnson, 2014), which has shown to correlate with other measures of anxiety such as GAD-7 (Milic´
et al., 2019) and STAI (Teachman, 2006) as well as have high convergence with anxiety disorders themselves (Rector et al., 2012). Each user was asked the following five questions: Get stressed out easily, Am not easily bothered by things (inverse coded),
Am relaxed most of the time (inverse coded), *Fear* for the worst, *Worry about things*. Users responded
| Inputs | MSE | MAE | rdis |
|--------------------|-------|-------|--------|
| sentiment lexicon | .799 | .722 | .110 |
| PB (Phrase-BERT) | .726 | .688 | .430 |
| SB (Sentence-BERT) | .725 | .686 | .438 |
| DiscRE | .751 | .704 | .382 |

Table 1: Evaluation of baseline (sentiment lexicon) and our three discourse-level models. **Bold** represents best in column.

| Inputs | MSE | MAE | rdis |
|----------------|-------|-------|--------|
| BERT L23 | .720 | .682 | .452 |
| BERT L21-24 | .717 | .679 | .446 |
| RoBERTa L23 | .717 | .683 | .458 |
| RoBERTa L21-24 | .714 | .680 | .453 |

Table 2: Evaluation of models trained on lexical contextual embeddings from the second-to-last (L23) and top-4 (L21-24) hidden layers of BERT and RoBERTa.
Users responded on 1-5 Likert scales ("Very inaccurate." to "Very accurate."). The responses to these questions are averaged together to form a continuous variable which determines the degree of anxiety.
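For concreteness, a small sketch of how such a facet score can be formed from the five 1-5 Likert responses; reverse-scoring the two inverse-coded items as 6 minus the response is the standard convention and is assumed here rather than taken from the paper.

```python
def anxiety_facet_score(responses, inverse_items=("not_easily_bothered", "relaxed")):
    # responses: dict mapping item name -> Likert rating in 1..5.
    scored = [6 - v if item in inverse_items else v for item, v in responses.items()]
    return sum(scored) / len(scored)

# Example: a respondent endorsing the worry items strongly.
print(anxiety_facet_score({
    "stressed_easily": 5, "not_easily_bothered": 2,
    "relaxed": 1, "fear_worst": 4, "worry": 5,
}))  # -> 4.6
```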
Secondary Evaluation Data. We also include an evaluation using another smaller dataset that was collected by the authors. It was collected from consenting participants and asked the same facet of anxiety questions. In this case, only the past 2 years of Facebook posts were used to build representations of each user to be used for prediction. This dataset is used only for evaluation, where training occurs over the previously described large Facebook set.
## 5 Results And Discussion
We evaluate our models by disattenuated Pearson correlation coefficient rdis (Spearman, 1987; Lynn et al., 2018) between the model predictions and anxiety scores derived from the survey as our main metric, but include mean squared error as well.
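For reference, a sketch of the Spearman correction for attenuation that underlies rdis; the reliability estimates follow Lynn et al. (2018) and are assumed to be supplied rather than computed here.

```python
import numpy as np

def disattenuated_r(predictions, labels, rel_pred, rel_label):
    # Observed Pearson correlation between model predictions and survey scores.
    r = np.corrcoef(predictions, labels)[0, 1]
    # Spearman (1987) correction: divide by the square root of the product of reliabilities.
    return r / np.sqrt(rel_pred * rel_label)
```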
Table 1 displays the performances of the models trained solely on discourse-level representations as well as a sentiment lexicon baseline model
(Mohammad and Turney, 2013). Models utilizing Phrase-BERT or Sentence-BERT yielded decent results, while the DiscRE-based model is by itself somewhat less informative.
| Inputs | MSE | MAE | rdis |
|----------------------|-------|-------|--------|
| base: mean | .352 | .486 | .0 |
| base: sentiment | .905 | .838 | .131 |
| RB L23 | 1.103 | .937 | .421 |
| RB L23 + SB + DiscRE | 1.047 | .912 | .496 |

Table 4: Evaluation of our model on a different dataset. Bold represents best in column.

| Inputs | MSE | MAE | rdis |
|---------------------------|-------|-------|--------|
| RB L23 | .717 | .683 | .458 |
| RB L23 + PB | .715 | .682 | .456 |
| RB L23 + SB | .711 | .680 | .466* |
| RB L23 + DiscRE | .714 | .681 | .464* |
| RB L23 + SB + PB | .712 | .680 | .462 |
| RB L23 + PB + DiscRE | .712 | .681 | .461 |
| RB L23 + SB + DiscRE | .707 | .678 | .473* |
| RB L23 + PB + SB + DiscRE | .710 | .679 | .465 |

Table 3: Evaluation of models combining RoBERTa (RB) L23 lexical embeddings with discourse-level representations (PB, SB, DiscRE).
Table 2 compares BERT and RoBERTa using the embeddings from the second-to-last hidden layer
(L23) and the top-4 hidden layers (L21-24). We choose the RoBERTa L23 embeddings to represent the performances of the contextual embeddings in the following experiments.
While Phrase-BERT performs well in isolation, Table 3 suggests utility did not increase when used alongside RoBERTa. Alternatively, the model that employed RoBERTa, Sentence-BERT, and DiscRE representations achieves the best performance among all. This implies the two discourse-level embeddings have non-overlapping utility that contextual embeddings lack.
In Table 4, we verified the performance of our models on the alternate, held-out Facebook dataset as described in Section 4. Our central finding, that utilizing discourse-level semantics improves performance, is replicated in this entirely new dataset with the model having RoBERTa L23 with Sentence-BERT and DiscRE having significantly lower error. The improvement is similar to the first dataset showing the generalization of our approach.
Explaining Discourse Improvement. We shine light on what the model is able to capture in terms of discourse-level information by finding whether theoretically-related dimensions of cognitive distortions are associated with the models' predictions.
| Discourse relation type | Cohen's d |
|---------------------------|-------------|
| causal explanation | .695 |
| counterfactuals | .227 |
| dissonance | .229 |
| consonance | .231 |
Table 5 shows the Cohen's d, which was computed using the following equation:
$$d=\zeta_{high}\left(\frac{\mathrm{posts}_{rel}}{\mathrm{posts}_{all}}\right)-\zeta_{low}\left(\frac{\mathrm{posts}_{rel}}{\mathrm{posts}_{all}}\right)\tag{3}$$
Here, *high* and *low* indicate the groups of users with predicted degree of anxiety higher or lower than the median, respectively, and ζ is the "z-score" (mean-centered, standardized) of the proportions per user.
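A minimal sketch of Eq. 3, assuming per-user proportions of posts containing a given relation and per-user predicted anxiety scores have already been computed:

```python
import numpy as np

def relation_effect_size(rel_proportions, predicted_anxiety):
    # rel_proportions: per-user fraction of posts containing the discourse relation.
    # predicted_anxiety: per-user predicted degree of anxiety.
    p = np.asarray(rel_proportions, dtype=float)
    z = (p - p.mean()) / p.std()                       # z-score the proportions
    high = np.asarray(predicted_anxiety) > np.median(predicted_anxiety)
    return z[high].mean() - z[~high].mean()            # Eq. 3
```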
We see that all discourse dimensions were related to the score, but causal explanations, often related to overgeneralization, had the highest difference (e.g., "You know life is going to be permanently complicated when your in-laws start turning their backs on you like a domino effect."). This suggests that the causal explanation discourse relation may account for unique information to improve the overall results.
Potential for Use in Practical Applications. Other than use in medical settings, secondary use cases of our models include assessments from public entities such as public health officials, schools, and human resource department of companies to quantify levels of expressed anxiety.
## 6 Conclusion
Anxiety is one of the most prevalent mental health disorders, and the ability to more accurately assess it in a way that can capture cognitive distortions
(i.e., via discourse-level features) could lead to improved diagnostics and treatment of the condition.
We analyzed the effects of using both discourseand lexical-level information within a single model for the assessment of degree of anxiety from Facebook status updates. We found benefit from the discourse-level information beyond lexical-level contextual embeddings (i.e., transformer language models) that have been found to produce state-ofthe-art results for other mental health assessment tasks, motivating the idea that anxiety-based models can benefit from capturing not only contextual lexical information but also higher-level semantics at the level of thought patterns. Lastly, we examined the effect of theoretically relevant discourse relations in assessing anxiety, discovering that causal explanation is the most informative.
## 7 Ethics Statement
Our work is contributing to an area of research that requires valid assessments of mental health to robustly evaluate the progress the new approaches can make in order to ultimately improve mental health assessment (De Choudhury et al., 2013; Coppersmith et al., 2018; Zirikly et al., 2019; Son et al., 2021). The intention of this work for its stakeholders at this point in time, clinical psychology and the interdisciplinary area of NLP and psychology, is its use toward developing more accurate and validated techniques for the benefit of society and human well-being.
We view this work as a step toward an assessment tool that could be used alongside professional oversight from trained clinicians. In this interdisciplinary work, we aim to improve the state-of-theart automatic assessment models. However, at this time, we do not enable use of our model(s) independently in practice to label a person's mental health states. Clinical diagnosis requires more information such as interviews and physical examinations in addition to surveys. In addition, use of such models for targeted messaging or any assessment based on private language without author consent is prohibited among our terms of use. This research has been approved by an independent academic institutional review board (IRB).
Before our models are used by trained clinicians, they must demonstrate validity in a clinical setting for the target clinical population. The study steps for said evaluation should be reviewed by an external ethical review board, and practice should follow clinical guidelines. Unlike an invasive medical device, the majority of measures used in psychiatry are not required to go through regulatory agency reviews (e.g., through the Food and Drug Administration (FDA) in the U.S.), but rather are indicated based on clinical practice guidelines after reliability and validity of these measures have been established in a large body of research. If future use cases of this technique seek to apply it as a marker or indicator for a specific condition, they may seek that the U.S. FDA officially declare it as a biomarker of the condition.
## 8 Limitations
This work has several key limitations. First, we have relied on evaluation against self-reported
(questionnaires) assessment of anxiety. Self-reporting the degree of anxiety on a survey instrument is not entirely dependable in diagnostic accuracy. However, it has shown reliable associations with diagnoses, serving clinical assessment and treatment purposes beyond diagnosis (Kroenke et al.,
2001). For example, anxiety scores from self-reported surveys have been robustly associated with consequential real-world outcomes such as mortality (Kikkenborg Berg et al., 2014). Clinical evaluation of the assessments proposed in this work should be evaluated against clinical outcomes.
Furthermore, the sample may not fully reflect the language use of the general population as it is skewed towards young and female4 and only focused on English spoken by those from the U.S. and U.K., although previous work suggests this dataset contains a diverse representation of socioeconomic status (Matz et al., 2019). Additionally, we do not focus on actual utilization of discourse relations in assessing anxiety, as the scope of this work limits us to showing the viability of modeling anxiety on a continuous scale and the importance of discourse information towards modeling it. Lastly, the strong associations of theoretical discourse relations come from models that themselves are not perfect, with F1 scores ranging from 0.770 for counterfactuals to 0.868 for causal explanations, though one might expect this error to lead to underestimates of correlation with anxiety.
With NLP increasingly working towards better human-focused applications (e.g., improving mental health assessment), we are presented with increasing considerations for human privacy as a trade-off with considerations for open data sharing.
In this case, the data used was shared with consent only for academic research use. Open sharing of such data violates trust with research participants (and agreements with ethical review boards).
These and additional issues are discussed at length in Benton et al. (2017a). While it would be ideal to release everything and preserve privacy, in this situation, we believe the fact that the unprecedented data is not universally available suggests an imperative for those with access to openly share our work as best possible within ethical guidelines. We are thus releasing aggregated anonymized features from the secondary evaluation dataset that allows one to qualitatively replicate the associations in our results while preserving the privacy of participants.

4The self-reported user age averaged 22.6 (SD 8.2), and over half (58.1%) marked their gender as female.
## References
Krishna C Bathina, Marijn Ten Thij, Lorenzo LorenzoLuaces, Lauren A Rutter, and Johan Bollen. 2021.
Individuals with depression express more distorted thinking on social media. *Nature Human Behaviour*, 5(4):458–466.
Adrian Benton, Glen Coppersmith, and Mark Dredze.
2017a. Ethical research protocols for social media health research. In *Proceedings of the first ACL workshop on ethics in natural language processing*, pages 94–102.
Adrian Benton, Margaret Mitchell, and Dirk Hovy.
2017b. Multitask learning for mental health conditions with limited social media data. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 152–162, Valencia, Spain.
Association for Computational Linguistics.
Setiyo Budiyanto, Harry Candra Sihombing, and Fajar Rahayu IM. 2019. Depression and anxiety detection through the closed-loop method using dass-21.
TELKOMNIKA (Telecommunication Computing Electronics and Control), 17(4):2087–2097.
Franziska Burger, Mark A. Neerincx, and Willem-Paul Brinkman. 2021. Natural language processing for cognitive therapy: Extracting schemas from thought records. *PLOS ONE*, 16:1–24.
Scott D Cohen, Daniel Cukor, and Paul L Kimmel. 2016.
Anxiety in patients treated with hemodialysis. *Clinical Journal of the American Society of Nephrology*,
11(12):2250–2255.
Glen Coppersmith, Craig Harman, and Mark Dredze.
2014. Measuring post traumatic stress disorder in twitter. In Eighth international AAAI Conference on Weblogs and Social Media.
Glen Coppersmith, Ryan Leary, Patrick Crutchley, and Alex Fine. 2018. Natural language processing of social media as screening for suicide risk. *Biomedical* informatics insights, 10:1178222618792860.
Glen Coppersmith, Kim Ngo, Ryan Leary, and Anthony Wood. 2016. Exploratory analysis of social media prior to a suicide attempt. In Proceedings of the Third Workshop on Computational Linguistics and Clinical
Psychology, pages 106–117, San Diego, CA, USA.
Association for Computational Linguistics.
Munmun De Choudhury, Michael Gamon, Scott Counts, and Eric Horvitz. 2013. Predicting depression via social media. In *Proceedings of the international* AAAI conference on web and social media, volume 7, pages 128–137.
C. Lindsay DeVane, Evelyn Chiao, Meg Franklin, and Eric J Kruep. 2005. Anxiety disorders in the 21st century: Status, challenges, opportunities, and comorbidity with depression. *AJMC*.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–
4186.
Adithya V Ganesan, Matthew Matero, Aravind Reddy Ravula, Huy Vu, and H Andrew Schwartz. 2021.
Empirical evaluation of pre-trained transformers for human-level nlp: The role of sample size and dimensionality. In *Proceedings of the 2021 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4515–4532.
Sharath Chandra Guntuku, Daniel Preotiuc-Pietro, Johannes C Eichstaedt, and Lyle H Ungar. 2019. What twitter profile and posted images reveal about depression and anxiety. In *Proceedings of the international AAAI Conference on Web and Social Media*,
volume 13, pages 236–246.
Sharath Chandra Guntuku, David B Yaden, Margaret L
Kern, Lyle H Ungar, and Johannes C Eichstaedt.
2017. Detecting depression and mental illness on social media: an integrative review. Current Opinion in Behavioral Sciences, 18:43–49. Big data in the behavioural sciences.
Holly Hazlett-Stevens and Michelle G. Craske. 2003.
The catastrophizing worry process in generalized anxiety disorder: A preliminary investigation of an analog population. *Behavioural and Cognitive Psychotherapy*, 31(4):387–401.
John A Johnson. 2014. Measuring thirty facets of the five factor model with a 120-item public domain inventory: Development of the ipip-neo-120. *Journal* of Research in Personality, 51:78–89.
Simona C Kaplan, Amanda S Morrison, Philippe R
Goldin, Thomas M Olino, Richard G Heimberg, and James J Gross. 2017. The cognitive distortions questionnaire (cd-quest): validation in a sample of adults with social anxiety disorder. Cognitive therapy and research, 41(4):576–587.
Sean W Kelley and Claire M Gillan. 2022. Using language in social media posts to study the network dynamics of depression longitudinally. *Nature Communications*, 13(1):1–11.
Selina Kikkenborg Berg, Lau Caspar Thygesen, Jesper HASTRUP Svendsen, Anne Vinggaard Christensen, and Ann-Dorthe Zwisler. 2014. Anxiety predicts mortality in icd patients: results from the crosssectional national copenhearticd survey with register follow-up. *Pacing and Clinical Electrophysiology*,
37(12):1641–1650.
Yudai Kishimoto, Yugo Murawaki, and Sadao Kurohashi. 2020. Adapting BERT to implicit discourse relation classification with a focus on discourse connectives. In *Proceedings of the Twelfth Language* Resources and Evaluation Conference, pages 1152–
1158, Marseille, France. European Language Resources Association.
Kurt Kroenke, Robert L Spitzer, and Janet BW Williams.
2001. The phq-9: validity of a brief depression severity measure. *Journal of general internal medicine*,
16(9):606–613.
Xin Liu, Jiefu Ou, Yangqiu Song, and Xin Jiang. 2021.
On the importance of word and sentence representation learning in implicit discourse relation classification. In *Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence*,
IJCAI'20.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
RoBERTa: A robustly optimized BERT pretraining approach. *arXiv preprint arXiv:1907.11692*.
Veronica Lynn, Alissa Goodman, Kate Niederhoffer, Kate Loveys, Philip Resnik, and H. Andrew Schwartz.
2018. CLPsych 2018 shared task: Predicting current and future psychological health from childhood essays. In *Proceedings of the Fifth Workshop on* Computational Linguistics and Clinical Psychology:
From Keyboard to Clinic, pages 37–46, New Orleans, LA. Association for Computational Linguistics.
Nourane Mahdy, Dalia A Magdi, Ahmed Dahroug, and Mohammed Abo Rizka. 2020. Comparative study:
different techniques to detect depression using social media. In *Internet of Things—Applications and* Future, pages 441–452. Springer.
Marija Maric, David A Heyne, Brigit M van Widenfelt, and P Michiel Westenberg. 2011. Distorted cognitive processing in youth: the structure of negative cognitive errors and their associations with anxiety.
Cognitive Therapy and Research, 35(1):11–20.
Matthew Matero, Akash Idnani, Youngseo Son, Salvatore Giorgi, Huy Vu, Mohammadzaman Zamani, Parth Limbachiya, Sharath Chandra Guntuku, and H Andrew Schwartz. 2019. Suicide risk assessment with multi-level dual-context language and bert. In
Proceedings of the sixth workshop on computational linguistics and clinical psychology, pages 39–44.
Matthew Matero, Nikita Soni, Niranjan Balasubramanian, and H. Andrew Schwartz. 2021. MeLT:
Message-level transformer with masked document representations as pre-training for stance detection.
In *Findings of the Association for Computational* Linguistics: EMNLP 2021, pages 2959–2966, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Sandra C Matz, Jochen I Menges, David J Stillwell, and H Andrew Schwartz. 2019. Predicting individuallevel income from facebook profiles. *PLOS ONE*,
14(3):e0214369.
Jakov Milić, Ivana Škrlec, Iva Milić Vranješ, Matea Podgornjak, and Marija Heffer. 2019. High levels of depression and anxiety among Croatian medical and nursing students and the correlation between subjective happiness and personality traits. *International Review of Psychiatry*, 31(7-8):653–660.
Saif M. Mohammad and Peter D. Turney. 2013. Crowdsourcing a word-emotion association lexicon. *Computational Intelligence*, 29(3):436–465.
Elham Mohammadi, Hessam Amini, and Leila Kosseim.
2019. CLaC at CLPsych 2019: Fusion of neural features and predicted class probabilities for suicide risk assessment based on online posts. In *Proceedings* of the Sixth Workshop on Computational Linguistics and Clinical Psychology, pages 34–38, Minneapolis, Minnesota. Association for Computational Linguistics.
Elizabeth M. Muran and Robert W. Motta. 1993. Cognitive distortions and irrational beliefs in post-traumatic stress, anxiety, and depressive disorders. Journal of Clinical Psychology.
David Owen, Jose Camacho Collados, and Luis Espinosa-Anke. 2020. Towards preemptive detection of depression and anxiety in twitter. In Proceedings of the 5th Social Media Mining for Health Applications (\#SMM4H) Workshop & Shared Task.
Gregory Park, H Andrew Schwartz, Johannes C Eichstaedt, Margaret L Kern, Michal Kosinski, David J
Stillwell, Lyle H Ungar, and Martin EP Seligman.
2015. Automatic personality assessment through social media language. *Journal of personality and* social psychology, 108(6):934.
Christopher Peterson. 1991. The meaning and measurement of explanatory style. *Psychological Inquiry*,
2(1):1–10.
Emily Pitler, Mridhula Raghupathy, Hena Mehta, Ani Nenkova, Alan Lee, and Aravind Joshi. 2008. Easily identifiable discourse relations. In Coling 2008:
Companion volume: Posters, pages 87–90, Manchester, UK. Coling 2008 Organizing Committee.
Neil A Rector, Robert Michael Bagby, Veronika Huta, and Lindsay E Ayearst. 2012. Examination of the trait facets of the five-factor model in discriminating specific mood and anxiety disorders. Psychiatry Research, 199(2):131–139.
Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.
GA Roth. 2018. Global burden of disease collaborative network. global burden of disease study through 2017
(gbd 2017) results. *The Lancet*, 392:1736–1788.
Shoffan Saifullah, Yuli Fauziah, and Agus Sasmito Aribowo. 2021. Comparison of machine learning for sentiment analysis in detecting anxiety based on social media data. *arXiv preprint arXiv:2101.06353*.
H Andrew Schwartz, Johannes Eichstaedt, Margaret Kern, Gregory Park, Maarten Sap, David Stillwell, Michal Kosinski, and Lyle Ungar. 2014. Towards assessing changes in degree of depression through facebook. In *Proceedings of the Workshop on Computational Linguistics and Clinical Psychology*, pages 118–125.
H. Andrew Schwartz, Salvatore Giorgi, Maarten Sap, Patrick Crutchley, Lyle Ungar, and Johannes Eichstaedt. 2017. DLATK: Differential language analysis ToolKit. In *Proceedings of the 2017 Conference on* Empirical Methods in Natural Language Processing:
System Demonstrations, pages 55–60, Copenhagen, Denmark. Association for Computational Linguistics.
Judy Hanwen Shen and Frank Rudzicz. 2017. Detecting anxiety through Reddit. In Proceedings of the Fourth Workshop on Computational Linguistics and Clinical Psychology - From Linguistic Signal to Clinical Reality, pages 58–65, Vancouver, BC. Association for Computational Linguistics.
Sagarika Shreevastava and Peter Foltz. 2021. Detecting cognitive distortions from patient-therapist interactions. In *Proceedings of the Seventh Workshop on* Computational Linguistics and Clinical Psychology:
Improving Access, pages 151–158, Online. Association for Computational Linguistics.
T. Simms, C. Ramstedt, M. Rich, M. Richards, T. Martinez, and C. Giraud-Carrier. 2017. Detecting cognitive distortions through machine learning text analytics. In 2017 IEEE International Conference on Healthcare Informatics (ICHI), pages 508–512.
Youngseo Son, Nipun Bayas, and H. Andrew Schwartz.
2018. Causal explanation analysis on social media.
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
Youngseo Son, Anneke Buffone, Joe Raso, Allegra Larche, Anthony Janocko, Kevin Zembroski, H Andrew Schwartz, and Lyle Ungar. 2017. Recognizing counterfactual thinking in social media texts. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2:
Short Papers), pages 654–658, Vancouver, Canada.
Association for Computational Linguistics.
Ayah Zirikly, Philip Resnik, Ozlem Uzuner, and Kristy Hollingshead. 2019. Clpsych 2019 shared task: Predicting the degree of suicide risk in reddit posts. In Proceedings of the sixth workshop on computational linguistics and clinical psychology, pages 24–33.
Youngseo Son, Sean AP Clouston, Roman Kotov, Johannes C Eichstaedt, Evelyn J Bromet, Benjamin J
Luft, and H Andrew Schwartz. 2021. World trade center responders in their own words: Predicting ptsd symptom trajectories with ai-based language analyses of interviews. *Psychological Medicine*.
Youngseo Son, Vasudha Varadarajan, and H. Andrew Schwartz. 2022. Discourse relation embeddings:
Representing the relations between discourse segments in social media. In *Proceedings of the Workshop on Unimodal and Multimodal Induction of* Linguistic Structures (UM-IoS), pages 45–55, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
Charles Spearman. 1987. The proof and measurement of association between two things. The American Journal of Psychology, 100(3/4):441–471.
David Stillwell and Michal Kosinski. 2012. mypersonality project: Example of successful utilization of online social networks for large-scale social research.
Bethany A Teachman. 2006. Aging and negative affect:
the rise and fall and rise of anxiety and depression symptoms. *Psychology and aging*, 21(1):201.
Yevhen Tyshchenko. 2018. Depression and anxiety detection from blog posts data. *Nature Precis. Sci.,*
Inst. Comput. Sci., Univ. Tartu, Tartu, Estonia.
Vasudha Varadarajan, Swanie Juhng, Syeda Mahwish, Xiaoran Liu, Jonah Luby, Christian C. Luhmann, and H. Andrew Schwartz. 2023. Transfer and active learning for dissonance detection: Addressing the rare-class challenge. In Proceedings of The 61st Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.
Vasudha Varadarajan, Nikita Soni, Weixi Wang, Christian Luhmann, H. Andrew Schwartz, and Naoya Inoue. 2022. Detecting dissonant stance in social media: The role of topic exposure. In Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS).
Association for Computational Linguistics.
Shufan Wang, Laure Thompson, and Mohit Iyyer. 2021.
Phrase-bert: Improved phrase embeddings from bert with an application to corpus exploration. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*. Association for Computational Linguistics.
## A Appendix
i hate feel so sick tired i don't i can't anymore me i'm my hurts sad her pain she wish why stupid really :( want alone fucking ugh sleep cry feeling i have
Table 6: Top 30 Ngrams most associated with predicted anxiety score from our best model; extracted using DLATK (Schwartz et al., 2017).
For the main dataset, a 10-fold cross validation was used with a 9:1 split at the user-level for each fold on 11,773 users that wrote 2,077,115 messages, while 168,044 messages written by 716 users who took the full version of anxiety questionnaire were used for testing. Following the practice of Park et al. (2015) to ensure adequate representation of language, the test set also limited the users to those writing at least 1,000 words. On average, each user wrote approximately 180 messages, 298 sentences, and 581 clauses. The label of training subset has a mean of 2.983 and standard deviation of 0.915, whereas those of test set are 3.004 and 0.895.
The secondary evaluation dataset spans 165 users and 52,773 messages, the result of filtering for each user to have written 500 or more words total. Each user wrote around 320 messages, 674 sentences, and 1,045 clauses on average. The mean and standard deviation of the label are 3.769 and 0.593.
Table 6 shows Ngram (lexical-level) features associated with high scores: negative emotions
('hate', 'sick', 'tired', 'cry') as well as absolutes
('anymore') and negations ('I can't', 'I don't'). Notably, conjunctions are not present among the most distinguishing Ngrams, suggesting that many of the discourse relations are not explicitly signaled with connective words (e.g., "because", "while").
Although predicting anxiety as a continuous variable reflects recent work suggesting it should be treated on a spectrum, from a practical point of view, it is sometimes necessary to make a binary classification. We therefore evaluated classifying into low and high bins at the median (Table 7), showing that our model leveraging representations from RoBERTa, Sentence-BERT, and DiscRE again yields significant improvement compared to baseline and sentiment lexicon models.

![9_image_0.png](9_image_0.png)
## ACL 2023 Responsible NLP Checklist

A For Every Submission:
A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Left blank.
A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Left blank. |
ozdayi-etal-2023-controlling | Controlling the Extraction of Memorized Data from Large Language Models via Prompt-Tuning | https://aclanthology.org/2023.acl-short.129 | Large Language Models (LLMs) are known to memorize significant portions of their training data. Parts of this memorized content have been shown to be extractable by simply querying the model, which poses a privacy risk. We present a novel approach which uses prompt-tuning to control the extraction rates of memorized content in LLMs. We present two prompt training strategies to increase and decrease extraction rates, which correspond to an attack and a defense, respectively. We demonstrate the effectiveness of our techniques by using models from the GPT-Neo family on a public benchmark. For the 1.3B parameter GPT-Neo model, our attack yields a 9.3 percentage point increase in extraction rate compared to our baseline. Our defense can be tuned to achieve different privacy-utility trade-offs by a user-specified hyperparameter. We achieve an extraction rate reduction of up to 97.7{\%} relative to our baseline, with a perplexity increase of 16.9{\%}. | # Controlling The Extraction Of Memorized Data From Large Language Models Via Prompt-Tuning
Mustafa Safa Ozdayi1∗, Charith Peris2†, Jack Fitzgerald2, Christophe Dupuy2, Jimit Majmudar2, Haidar Khan2, Rahil Parikh2, Rahul Gupta2
1Department of Computer Science, The University of Texas at Dallas 2Alexa AI, Amazon
## Abstract
Large Language Models (LLMs) are known to memorize significant portions of their training data. Parts of this memorized content have been shown to be extractable by simply querying the model, which poses a privacy risk. We present a novel approach which uses prompttuning to control the extraction rates of memorized content in LLMs. We present two prompt training strategies to increase and decrease extraction rates, which correspond to an attack and a defense, respectively. We demonstrate the effectiveness of our techniques by using models from the GPT-Neo family on a public benchmark. For the 1.3B parameter GPTNeo model, our attack yields a 9.3 percentage point increase in extraction rate compared to our baseline. Our defense can be tuned to achieve different privacy-utility trade-offs by a user-specified hyperparameter. We achieve an extraction rate reduction of up to 97.7% relative to our baseline, with a perplexity increase of 16.9%.
## 1 Introduction
Pretrained large language models (LLMs; Devlin et al., 2019; Radford et al., 2019; Raffel et al., 2020; Soltan et al., 2022), commonly trained on massive crowd-sourced corpora, have been of much interest in the recent past due to their usage as backbones in state-of-the-art models across multiple downstream NLU tasks. However, they have been shown to memorize significant portions of their training data that can be extracted using appropriately-crafted prompts (Carlini et al., 2020, 2022; Zhang et al.,
2021). Such extractions pose a privacy risk to the contributors of the training data.
In this context, methods that allow developers to control the extractability of memorized examples from LLMs are of much value. For example, methods that increase extraction rates correspond to attacks in an adversarial setting, and provide developers with the ability to analyze privacy-risk.
Methods that decrease extraction rates, referred to as defenses, are useful for protecting against such attacks. Historically, defense methods tend to be compute intensive (Abadi et al., 2016; Dupuy et al.,
2021).
In this work, we train continuous *soft-prompts*
(Lester et al. 2021; hereafter referred to simply as prompts) and leverage them as a way of passing an external signal into an LLM, to control the extraction of memorized data. We freeze the model weights, and only use the trained prompt to control the generation. First, we train prompts in an attack setting and study the extent of extractable memorized content in our models. Second, we explore a defense setting where we create prompts that reduce extraction rates and achieve different privacy-utility trade-offs, via a user-specified hyperparameter. Since the original model weights are frozen in both these settings, our methods are compute efficient across the board.
To the best of our knowledge, our work is the first to adapt the use of instructive prompts for the analysis and mitigation of privacy in LLMs. We have released the code developed for our experiments1.
## 2 Background And Related Work
Previous work has shown that LLMs display memorization and has explored a range of methods that quantify extractability (Carlini et al., 2018, 2020, 2022). Differentially-private training (Dwork, 2006; Abadi et al., 2016) is a popular method that has been used to mitigate this risk. However, it tends to reduce model utility and requires retraining of the LLM, which might not be feasible due to heavy computational burden.
1https://github.com/amazon-science/controlling-llm-memorization
∗ Work done while the author was an intern at Amazon; [email protected]
†[email protected]

The use of instructive prompts for language models has been extensively researched, including use during pretraining (Raffel et al., 2020), as a second stage of training (Sanh et al., 2022; Wei et al.,
2021), and during inference to guide model output
(Brown et al., 2020). Within the third category, in order to improve upon manual prompt engineering researchers have implemented methods to learn discrete natural language prompts (Shin et al., 2020),
to mine them (Jiang et al., 2020), or, neglecting natural language, to learn continuous prompts (Li and Liang, 2021; Lester et al., 2021).
Our work leverages continuous prompts as a way of passing an external signal to a model to trigger a desired model behavior (i.e., less or more memorized data in open language generation, which map to an extraction attack and defense, respectively).
## 3 Method
Prompt-tuning requires the prepending of a prompt to the prefix embedding and access to the training loss (see Figure 1). Given these constraints, we explore a white-box attack where the adversary has access to the target model parameters, and a blackbox defense where the adversary interacts with the target model via an API. We therefore do not test our defense against our own attack.
![1_image_0.png](1_image_0.png)

Let [prefix || suffix] be a sequence in the training set where the prefix is of length k tokens. Carlini et al. (2022) defined a suffix to be *k-extractable* if the model generates the suffix exactly, after being prompted with the corresponding length-k prefix. Our white-box attack aims to increase the number of k-extractable sequences, while our black-box defense aims to reduce the number of k-extractable sequences that can be extracted by an adversary who submits prefixes via an API.
## 3.1 Attack
In the attack setting, we assume that the adversary has a set of [ prefix || suffix ] sequences S*train*,
sampled from the training set of the target model.
Their goal is to extract the suffixes corresponding to a disjoint set of prefixes, denoted by S*test* 2.
To do so, the adversary first initializes a prompt:
a continuous set of l × e parameters where e is the embedding size of the model, and l is the length of the prompt, a hyperparameter decided by the adversary. The prompt is trained over S*train* to facilitate the correct generation of suffixes. To do this, we first prepend the prompt to the embedding of the prefix and pass the joint embedding through the model for generation. We then minimize the loss objective (see below) with respect to the prompt while keeping the parameters of the model frozen.
We explore two loss objectives. The first is causal language modeling (hereafter referred to as CLM), where we minimize the cross-entropy loss over the entire sequence (Radford et al., 2019). In the second, the prompt is optimized by minimizing the cross entropy loss of only the suffixes, given the prefixes. Here, the training is aligned with our inference task such that during training the model is penalized only on the suffix tokens; hence we refer to it as *aligned CLM*. During inference, the learned prompt is prepended to each embedding of the prefixes in S*test*, and the joint embedding is passed to the model for generation (see Figure 1).
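As an illustration only (not the released code), the following sketches one aligned-CLM training step for the soft prompt with a frozen GPT-Neo model; the optimizer, learning rate, and initialization scale are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")
model.requires_grad_(False)                       # model weights stay frozen
embed = model.get_input_embeddings()
prompt = torch.nn.Parameter(0.02 * torch.randn(100, embed.embedding_dim))  # prompt length l = 100
optimizer = torch.optim.AdamW([prompt], lr=5e-4)

def aligned_clm_step(prefix_ids, suffix_ids):
    # prefix_ids, suffix_ids: LongTensors of shape [batch, 50].
    seq = torch.cat([prefix_ids, suffix_ids], dim=1)
    inputs_embeds = torch.cat(
        [prompt.unsqueeze(0).expand(seq.size(0), -1, -1), embed(seq)], dim=1
    )
    # Penalize only the suffix tokens: label -100 masks prompt and prefix positions.
    ignore = torch.full((seq.size(0), prompt.size(0) + prefix_ids.size(1)), -100, dtype=torch.long)
    labels = torch.cat([ignore, suffix_ids], dim=1)
    loss = model(inputs_embeds=inputs_embeds, labels=labels).loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

For the plain CLM objective, the labels would also cover the prefix tokens rather than masking them.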
## 3.2 Defense
In the defense setting, the defender (API owner)
trains the prompt, and prepends it to the incoming prefixes before passing them to the model. Our algorithm is inspired by the machine-unlearning literature (Halimi et al., 2022) and by defenses against membership inference and backdoor attacks (Chen et al., 2022; Ozdayi et al., 2021). We introduce a hyperparameter named *learning threshold*, denoted by θ. During prompt training (see Section 3.1), when the loss is *less* than θ we perform *gradient ascent* to penalize the prompt. If the loss is *greater* than θ, we perform gradient descent with respect to the prompt as usual. Training is stopped once the average epoch loss is equal to or above θ. This allows us to increase the training loss in a controlled manner and stabilize it around θ. Through this process, we can achieve various privacy-utility trade-offs efficiently without re-training any part of the model.

2For simplicity, we assume all prefixes are k-length. This can easily be ensured by padding or truncating different-length prefixes if needed in a real-world setting.
To explore θ, we set the initial value to be slightly above the model training loss and increase in steps of 0.25 until desired performance is achieved.
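The update rule can be sketched as follows; `loss_fn` stands for the same prompt-tuning forward pass as in the attack sketch above (forward pass only, no optimizer step), and the stopping condition mirrors the description of θ.

```python
def defense_epoch(batches, loss_fn, optimizer, theta):
    """One defense epoch: gradient ascent on the prompt when the loss is below the
    learning threshold theta, ordinary gradient descent otherwise.  Returns the
    average epoch loss so the caller can stop once it reaches theta."""
    total = 0.0
    for prefix_ids, suffix_ids in batches:
        loss = loss_fn(prefix_ids, suffix_ids)
        objective = -loss if loss.item() < theta else loss  # flip the sign below theta
        optimizer.zero_grad()
        objective.backward()
        optimizer.step()
        total += loss.item()
    return total / max(len(batches), 1)

# stop training once the average epoch loss stabilizes at or above theta
# while defense_epoch(train_batches, loss_fn, optimizer, theta) < theta:
#     pass
```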
## 4 Experiments
For our experiments, we use the 125M and 1.3B
parameter variants of the GPT-Neo models (Black et al., 2021). These are public, decoder-only transformer models (Vaswani et al., 2017) trained using CLM on the Pile dataset (Gao et al., 2020). We extract S*train* and S*test* from the Language Model Extraction Benchmark dataset (Google-Research).
This dataset contains 15k sequences sampled from the training split of the Pile, where each sequence is partitioned into a prefix and a suffix. In the default evaluation setting, both prefix and suffix consist of 50 tokens. We use a random train/test split of 14k/1k samples.
Our evaluation metric of choice is *Exact extraction rate*, which is the fraction of correctly generated suffixes (i.e., all tokens of the generated suffix match the ground-truth suffix) over the test set.
We additionally discuss fractional extraction rate and present results in Appendix A. As a baseline, we use the attack analyzed in Carlini et al. (2022),
which consists of feeding the prefixes to the model, and generating suffixes with greedy decoding. This is the only extraction attack for this setting apart from our work, to the best of our knowledge. Our training setup is discussed in Appendix B. All experiments are repeated over 5 runs with a new random train/test split in each run.
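For clarity, the baseline attack and the exact extraction rate can be sketched as follows; greedy decoding is done via the HuggingFace `generate` API, and batching/padding details are omitted.

```python
import torch

@torch.no_grad()
def baseline_attack(model, prefix_ids, suffix_len=50):
    """Baseline of Carlini et al. (2022): greedily decode a suffix for each prefix."""
    out = model.generate(prefix_ids, max_new_tokens=suffix_len, do_sample=False,
                         pad_token_id=model.config.eos_token_id)
    return out[:, prefix_ids.size(1):]            # keep only the generated suffix

def exact_extraction_rate(generated_suffixes, true_suffixes):
    """Fraction of test suffixes reproduced token-for-token."""
    hits = sum(bool(torch.equal(g, t)) for g, t in zip(generated_suffixes, true_suffixes))
    return hits / len(true_suffixes)
```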
## 4.1 Attack
We explore the performance of our attack across several dimensions: prompt length, suffix size, prefix size, and beam size. We use greedy-decoding in all cases, except the beam size experiments.
Prompt Length First, we explore prompt length in the context of the default setting (prefix and suffix consist of 50 tokens; Figures 2-A1 and 2-A2).
We note that prompts tuned with both CLM and aligned CLM provide improvements over the baseline in all cases, with aligned CLM providing the best performance. *Given this, we train prompts* using the aligned CLM objective for all other experiments, including our defense.
With aligned CLM, we achieve the highest extraction rates of 25.8% and 54.3% for the 125M
and 1.3B models, respectively (an improvement of 8.9 and 9.3 percentage points, respectively), with a 100-token prompt (blue line). We observe that extraction rates increase with prompt length and tend to saturate after prompt length 100. Over-fitting was ruled out as a potential cause of saturation, as no increase in test loss was observed during training. This suggests that there is an upper limit on the number of prompt parameters that adds value for extraction purposes given our objective.
We note that more sophisticated training strategies
(designing better loss functions, better prompt initialization etc.) might yield better extraction rates.
Suffix Size Next, we fix the prefix size to 50 and vary the suffix size. As shown in Figures 2-B1 and 2-B2, extraction rates decrease roughly exponentially with suffix size. We note that as suffix size increases, longer prompts (≥ 20) provide greater improvements over the baseline. For example, with a prompt length of 100 (blue line) using the 1.3B
model, at suffix size 5 we observe an extraction rate increase of 5.3 percentage points. Whereas at suffix size 50, the increase is 9.3 percentage points.
Prefix Size Next, we fix the suffix size to 50 and vary the prefix size. As shown in Figures 2-C1 and 2-C2, extraction rates increase roughly logarithmically (as in Carlini et al. 2022). Contrary to suffix size, we observe that the gaps between the baseline and our attacks decrease with increasing prefix size. This suggests that our attack is most beneficial to a less informed adversary (i.e., one with small prefix sizes) compared to the baseline.
Beam Decoding Finally, we utilize the default setting with prefix and suffix sizes at 50 tokens and vary the beam size (beam size=1 corresponds to greedy decoding). The results are shown in Figures 2-D1 and 2-D2. We observe that extraction rates increase across the board when increasing beam size from 1 to 5. However, improvements tend to plateau or oscillate when beam size is greater than 5. The 1.3B model benefits more from increasing beam size, achieving the highest extraction rate of 61.4% at a beam size of 20 (with a prompt length of 150). The highest extraction rate achieved for the 125M model was 28.3% at a beam size of 15 (with a prompt length of 100).

![3_image_0.png](3_image_0.png)

| Model | θ | Exact Extract Rate | Pile Test PPL |
|--------------|------|--------------------|----------------|
| GPT-Neo 125M | 0∗ | 0.169 ± 0.007 | 15.71 ± 0.431 |
| | 1.25 | 0.031 ± 0.005 | 16.601 ± 0.197 |
| | 1.5 | 0.006 ± 0.001 | 17.499 ± 0.156 |
| | 1.75 | 0.001 ± 0.0 | 19.691 ± 0.598 |
| GPT2 124M | - | 0.004 ± 0.002 | 30.323 ± 1.019 |
| GPT-Neo 1.3B | 0∗ | 0.450 ± 0.015 | 9.213 ± 0.232 |
| | 0.5 | 0.108 ± 0.02 | 9.758 ± 0.245 |
| | 0.75 | 0.022 ± 0.004 | 10.267 ± 0.094 |
| | 1 | 0.01 ± 0.002 | 10.775 ± 0.248 |
| GPT2 1.5B | - | 0.019 ± 0.002 | 17.155 ± 0.545 |

Table 1: Exact extraction rate and Pile test perplexity (PPL) of our defense for different values of θ, compared to GPT2 models of similar size.
## 4.2 Defense
Finally, we evaluate the privacy-utility trade-off of our black-box defense. As mentioned in Section 3, our defense is designed for a black-box adversary, and cannot be tested against our white-box attack.
Therefore, we utilize the baseline attack (Section 4)
to quantify privacy. We note that longer prompts did not add value in a defense setting, so we resort to using a prompt of length 1. We utilize perplexity (PPL) on generated suffixes to quantify the utility of the model, in addition to using exact extraction rate as in Section 3.1. To measure PPL, we use a random subset of 1k sequences sampled from the test split of the Pile, ensuring that PPL is measured on data unseen by the model. We also compare our metrics with those of similar-sized models that were not trained on the Pile dataset (GPT2 models). Our premise here is that better performance in terms of privacy and utility, when compared to an out-of-domain model of similar size, would mean that our defense mechanism is of value to an API owner.
In Table 1, we display our results obtained using the default evaluation setting (prefix and suffix consist of 50 tokens). Our defense achieves lower extraction rates with competitive PPL values. For the 125M model, we achieve an exact extraction rate reduction of 99.4% relative to baseline with a PPL increase of 25.3% at θ = 1.75. For the 1.3B model, the extraction rate is reduced by 97.7% relative to baseline with a PPL increase of 16.9% at θ = 1. The ability to achieve lower extraction rates and lower PPL values than the GPT2 models of corresponding size provides evidence that our defense is effective.
## 5 Conclusion
We present the first known effort to leverage prompt-tuning to control the extractability of memorized data from LLMs in an open language generation task. We develop a novel data extraction attack and defense, and illustrate their performance under various settings. Our attack consistently outperforms the baseline in terms of exact extraction rate.
Our defense provides competitive privacy-utility trade-offs and would prove beneficial to API owners with models trained on sensitive content. These results are achieved efficiently, without any change to the original model weights. We detail avenues of future work in Appendix C.
## 6 Limitations
We briefly mention some limitations of our work.
First, we have only used a single dataset and a single model family in our experiments. This is mainly because the benchmark we use is, to the best of our knowledge, the only publicly available dataset at this time. We also solely focused on extraction metrics and did not do a deeper analysis of the extracted sequences. A fine-grained analysis of extracted sequences could yield important insights for understanding memorization and extraction in LLMs. Similarly, we did not analyze what our prompts converge to, and whether they are interpretable at convergence. Such analysis could provide better insights as to why, for example, training prompts with aligned CLM performs better than the basic CLM setting. Finally, we believe the evaluation of our defense could be improved further by measuring other utility metrics (e.g., accuracy) on downstream tasks.
## 7 Ethical Considerations
We leverage prompt-tuning to control the extractability of memorized data from LLMs in an open language generation task and explore two settings; an attack and a defense. We acknowledge that our attack methodology could be misused by an adversary with white-box access to extract memorized private information from a target large language model. Our goal is to raise awareness in the community to the possibility and severity of this nature of attack. We hope that developers, armed with this knowledge, can use relevant defense mechanisms to avoid such potential misuse.
## Acknowledgements
The authors would like to thank Wael Hamza for helpful discussions on this topic and Stephen Rawls for help with securing the GPU instances that were required for experimentation.
## References
Huggingface accelerate.
Martín Abadi, Andy Chu, Ian J. Goodfellow, H. B.
McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. Deep learning with differential privacy. *Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security*.
Sid Black, Leo Gao, Phil Wang, Connor Leahy, and Stella Biderman. 2021. GPT-Neo: Large Scale Autoregressive Language Modeling with MeshTensorflow.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In *Advances in Neural Information Processing Systems*,
volume 33, pages 1877–1901. Curran Associates, Inc.
Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramèr, and Chiyuan Zhang.
2022. Quantifying memorization across neural language models. *ArXiv*, abs/2202.07646.
Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Xiaodong Song. 2018. The secret sharer: Evaluating and testing unintended memorization in neural networks. In USENIX Security Symposium.
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom B. Brown, Dawn Xiaodong Song, Úlfar Erlingsson, Alina Oprea, and Colin Raffel. 2020. Extracting training data from large language models. In *USENIX Security Symposium*.
Dingfan Chen, Ning Yu, and Mario Fritz. 2022. Relaxloss: Defending membership inference attacks without losing utility. *ArXiv*, abs/2207.05801.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Christophe Dupuy, Radhika Arava, Rahul Gupta, and Anna Rumshisky. 2021. An efficient dp-sgd mechanism for large scale nlu models. ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4118–4122.
Cynthia Dwork. 2006. Differential privacy. In *Encyclopedia of Cryptography and Security*.
Leo Gao, Stella Rose Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2020. The pile: An 800gb dataset of diverse text for language modeling.
ArXiv, abs/2101.00027.
Google-Research. google-research/lm-extraction-benchmark.
Anisa Halimi, Swanand Kadhe, Ambrish Rawat, and Nathalie Baracaldo. 2022. Federated unlearning:
How to efficiently erase a client in fl?
Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. *ArXiv*, abs/2106.09685.
Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know? *Transactions of the Association for* Computational Linguistics, 8:423–438.
Diederik P. Kingma and Jimmy Ba. 2014. Adam:
A method for stochastic optimization. *CoRR*,
abs/1412.6980.
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021.
The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045–3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning:
Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582–
4597, Online. Association for Computational Linguistics.
Jimit Majmudar, Christophe Dupuy, Charith S. Peris, Sami Smaili, Rahul Gupta, and Richard S. Zemel.
2022. Differentially private decoding in large language models. *ArXiv*, abs/2205.13621.
Mustafa Safa Ozdayi, Murat Kantarcioglu, and Yulia R.
Gel. 2021. Defending against backdoors in federated learning with robust learning rate. *Proceedings* of the AAAI Conference on Artificial Intelligence, 35(10):9268–9276.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch:
An imperative style, high-performance deep learning library. In *Advances in Neural Information Processing Systems 32*, pages 8024–8035. Curran Associates, Inc.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(1).
Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. 2020. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In *Proceedings of the 26th* ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '20, pages 3505–3506, New York, NY, USA. Association for Computing Machinery.
Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M Rush. 2022. Multitask prompted training enables zero-shot task generalization. In International Conference on Learning Representations.
Taylor Shin, Yasaman Razeghi, Robert L. Logan IV,
Eric Wallace, and Sameer Singh. 2020. AutoPrompt:
Eliciting knowledge from language models with automatically generated prompts. In *Empirical Methods* in Natural Language Processing (EMNLP).
Saleh Soltan, Shankar Ananthakrishnan, Jack FitzGerald, Rahul Gupta, Wael Hamza, Haidar Khan, Charith Peris, Stephen Rawls, Andy Rosenbaum, Anna Rumshisky, Chandana Satya Prakash, Mukund Sridhar, Fabian Triefenbach, Apurv Verma, Gokhan Tur, and Prem Natarajan. 2022. Alexatm 20b: Few-shot learning using a large-scale multilingual seq2seq model. *arXiv*.
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *ArXiv*, abs/1706.03762.
Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing nlp. In Conference on Empirical Methods in Natural Language Processing.
Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2021. Finetuned language models are zero-shot learners.
Chiyuan Zhang, Daphne Ippolito, Katherine Lee, Matthew Jagielski, Florian Tramèr, and Nicholas Carlini. 2021. Counterfactual memorization in neural language models. *ArXiv*, abs/2112.12938.
## A Fractional Extraction Rate Results
Fractional extraction rate is the fraction of generated tokens that are both *correct and in the right position*, over the dataset (see the lower section of Figure 2). Our reason for measuring this metric is to provide a more detailed assessment of the risks associated with extraction. Exact extraction rate is particularly important in cases where the attacker requires an exact match in order for the extraction to be of use; a good example is the case of extracting a credit card number. In such cases, even getting a few tokens incorrect will render the attack useless.
However, when the attacker cares more about the meaning of the extracted sequences, fractional extraction rate can be a better metric to assess the risk. This is because a human might be able to infer the correct meaning of the sequence even when a few tokens are wrong.
The results related to this metric are shown in Figure 3. Comparing these results with the exact extraction rate results (Figure 2), we observe the same trends across all of our experiments. The same shared trends are observed in the case of our defense; the corresponding fractional extraction rate results are tabulated in Table 2.
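Under this definition, the metric can be computed as sketched below; normalizing by the total number of ground-truth tokens over the whole test set is our reading of the definition above.

```python
def fractional_extraction_rate(generated_suffixes, true_suffixes):
    """Fraction of generated tokens that are correct and in the right position."""
    correct, total = 0, 0
    for gen, ref in zip(generated_suffixes, true_suffixes):
        correct += sum(int(g == r) for g, r in zip(gen, ref))
        total += len(ref)
    return correct / total
```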
## B Training Setup
Our soft-prompts are initialized to random word embeddings as described in Lester et al. (2021).
We use a batch size of 128 and the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 5e-4. For the attack setting, the prompts are trained for 15 epochs. In the defense case, the prompts are trained until the training loss stabilizes around the specified θ value (as described in Section 3.2), which happens within 2-3 epochs in our experiments.
We use a Pytorch (Paszke et al., 2019) implementation where we leverage the HuggingFace Accelerate (HF) and DeepSpeed (Rasley et al., 2020)
libraries to handle distributed training over 8 GPUs with fp16 mixed precision. On a p3dn.24xlarge instance, the average attack prompt training time was 0.9 hours per prompt, while the average defense prompt training time was 0.02 hours per prompt.

![7_image_0.png](7_image_0.png)

| Model | θ | Fract Extract Rate | Pile Test PPL |
|--------------|------|--------------------|----------------|
| GPT-Neo 125M | 0∗ | 0.35 ± 0.006 | 15.71 ± 0.431 |
| | 1.25 | 0.192 ± 0.011 | 16.601 ± 0.197 |
| | 1.5 | 0.123 ± 0.005 | 17.499 ± 0.156 |
| | 1.75 | 0.087 ± 0.003 | 19.691 ± 0.598 |
| GPT2 124M | - | 0.099 ± 0.003 | 30.323 ± 1.019 |
| GPT-Neo 1.3B | 0∗ | 0.634 ± 0.013 | 9.213 ± 0.232 |
| | 0.5 | 0.316 ± 0.022 | 9.758 ± 0.245 |
| | 0.75 | 0.171 ± 0.004 | 10.267 ± 0.094 |
| | 1 | 0.128 ± 0.006 | 10.775 ± 0.248 |
| GPT2 1.5B | - | 0.166 ± 0.003 | 17.155 ± 0.545 |

Table 2: Fractional extraction rate and Pile test perplexity (PPL) of our defense for different values of θ, compared to GPT2 models of similar size.
## C Future Work
We have several avenues that we would like to explore in the context of future work. We envision that more sophisticated training strategies might yield better extraction rates in our attack setting
(designing better loss objectives, better initialization of soft-prompts etc.) and we would like to explore this further.
We would like to explore different prompt learning algorithms such as other parameter-efficient training methods (Li and Liang, 2021; Hu et al.,
2021), and hard-prompt learning methods (Wallace et al., 2019), in order to conduct a more robust analysis of extraction rates.
We would like to test the transferability of trained prompts across different models and datasets.
Finally, we would like to combine our defense with other existing defenses such as those applied at training time (e.g. versions of differentially private stochastic gradient descent; Abadi et al. 2016; Dupuy et al. 2021) or those applied at decoding stage (e.g., differentially private decoding; Majmudar et al. 2022). The goal would be to achieve better privacy-utility trade-offs under a combination of such defenses.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
See Section 6
✓ A2. Did you discuss any potential risks of your work?
See Ethical Considerations under Section 7
✓ A3. Do the abstract and introduction summarize the paper's main claims?
See Abstract and Section 1.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** See Section 4
✓ B1. Did you cite the creators of artifacts you used?
We've cited the models. We cited the dataset in the right way to the best of our knowledge.
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
These are publicly available models and data and so their licenses are in accordance with our work.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. The artifacts we use have been used by multiple publications for the same purpose as ours and are in accordance with their intended use. We do not create any model or data related artifacts.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
This data is part of the Pile dataset (Gao et al. 2020) that has seen much study in previous publications in the context of large language model training. Therefore, we do not take special steps to discuss this.
✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
The dataset we use has been discussed in the Gao et al. 2020 citation, and an interested reader will be able to gather information there. The models are also discussed in the Black et al. 2021 citation.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
See Section 4.
## C ✓ **Did You Run Computational Experiments?** See Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
See Section 4 and Training set up in Appendix B
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
For training, we utilize a set of hyperparameters that have been commonly used in previous studies. For θ (a hyperparameter that we introduce), see Table 1 for the values that we explore, and Appendix B for the experimental setup.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
We show errorbars in all our plots, See Figure 2 and 3. We also report mean and stdev based on 5 runs.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
We clearly define our metrics in Section 4 and Appendix A. They do not use existing packages. We do not do any pre-processing or normalization.
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left Blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
inaba-etal-2023-multitool | {M}ulti{T}ool-{C}o{T}: {GPT}-3 Can Use Multiple External Tools with Chain of Thought Prompting | https://aclanthology.org/2023.acl-short.130 | Large language models (LLMs) have achieved impressive performance on various reasoning tasks. To further improve the performance, we propose MultiTool-CoT, a novel framework that leverages chain-of-thought (CoT) prompting to incorporate multiple external tools, such as a calculator and a knowledge retriever, during the reasoning process. We apply MultiTool-CoT to the Task 2 dataset of NumGLUE, which requires both numerical reasoning and domain-specific knowledge. The experiments show that our method significantly outperforms strong baselines and achieves state-of-the-art performance. | # Multitool-Cot: Gpt-3 Can Use Multiple External Tools With Chain Of Thought Prompting
Tatsuro Inaba1 Hirokazu Kiyomaru1 Fei Cheng1 **Sadao Kurohashi**1,2 1Kyoto University, Japan 2National Institute of Informatics, Japan
{inaba, kiyomaru, feicheng, kuro}@nlp.ist.i.kyoto-u.ac.jp
## Abstract
Large language models (LLMs) have achieved impressive performance on various reasoning tasks. To further improve the performance, we propose MultiTool-CoT, a novel framework that leverages chain-of-thought (CoT)
prompting to incorporate multiple external tools, such as a calculator and a knowledge retriever, during the reasoning process. We apply MultiTool-CoT to the Task 2 dataset of NumGLUE, which requires both numerical reasoning and domain-specific knowledge.
The experiments show that our method significantly outperforms strong baselines and achieves state-of-the-art performance. 1
## 1 Introduction
Reasoning refers to the logical process of inferring unknown facts from known facts. Solving reasoning problems requires language understanding, real-world knowledge, arithmetic calculation, and symbolic processing. Improving the reasoning capability of artificial intelligence has been a long-standing challenge and remains an active research topic to this day (Gordon et al., 2012; Sap et al., 2020).
Recently, large language models (LLMs) have achieved amazing performance on various reasoning tasks (Brown et al., 2020; Lewkowycz et al.,
2022; Zhang et al., 2022; Chowdhery et al., 2022). However, the amount of real-world knowledge learned by LLMs is still constrained by the size of model parameters and the training data. This problem could be more severe in the case of sparse domain-specific knowledge. Furthermore, LLMs are based on the computation among continuous token representations, which cannot ensure accurate arithmetic calculations.
To solve these problems, previous studies propose to complement the capabilities of LLMs with an external tool, such as a web browser or a calculator (Nakano et al., 2021; Cobbe et al., 2021; Yao et al., 2022). This is performed by invoking an external tool during reasoning with LLMs and injecting the results into the reasoning process. However, previous studies have focused on using a single external tool to solve a single problem with LLMs and have not addressed different problems together.

1Our code is publicly available at https://github.com/InabaTatsuro/MultiTool-CoT.
This paper proposes MultiTool-CoT, an interactive framework that allows LLMs to use multiple external tools during reasoning. Figure 1 provides an overview. In MultiTool-CoT, LLMs solve reasoning problems by generating reasoning processes including tool triggers to invoke external tools. We let LLMs learn to invoke multiple external tools at proper reasoning steps by chain-of-thought (CoT) prompting based on few-shot learning (Wei et al., 2022).
As a proof of concept, we apply MultiTool-CoT to the Task 2 dataset of NumGLUE (Mishra et al., 2022), which requires both numerical reasoning and domain-specific knowledge. Experiments show that MultiTool-CoT significantly outperforms strong baselines and achieves state-of-the-art performance.
## 2 Related Work
Large language models (LLMs) can perform various tasks by *prompting* (Liu et al., 2022). As for reasoning tasks, chain-of-thought (CoT) prompting (Wei et al., 2022; Kojima et al., 2022) is known for its effectiveness, which elicits the answer with intermediate reasoning steps from LLMs.
There is a growing body of work on using an external tool to improve reasoning with LLMs. Cobbe et al. (2021) use a calculator to process mathematical formulas that appear in reasoning processes by fine-tuning LLMs to generate mathematical formulas with a tool trigger to call the calculator. Nakano et al. (2021) allow LLMs to use a 1522
![1_image_0.png](1_image_0.png)
web browser by fine-tuning LLMs to generate action codes to operate the browser. Previous studies focus on a single problem of LLMs, namely, errorprone arithmetic calculation or incomplete realworld knowledge, and address it by fine-tuning LLMs so that they can call a single external tool. In contrast, this study addresses multiple problems together by allowing LLMs to use multiple external tools. Besides, this study presents a few-shot learning-based framework (Brown et al., 2020) for doing this, which does not require fine-tuning.
A very recent study (Yao et al., 2022) proposes a few-shot learning-based method for invoking a Wikipedia API to perform knowledge-intensive reasoning tasks. However, this study has not investigated the effectiveness of using multiple external tools. A Python library named LangChain2 implements a framework for allowing LLMs to use multiple external tools based on Yao et al. (2022),
which is similar to ours. However, its effectiveness has not been investigated in any benchmark datasets as of this submission.

2https://langchain.readthedocs.io/en/latest
## 3 Method
We propose MultiTool-CoT, an interactive framework that allows LLMs to use multiple external tools during reasoning. Figure 1 illustrates an overview.
MultiTool-CoT leverages chain-of-thought
(CoT) prompting based on few-shot learning (Wei et al., 2022). Our prompt consists of an instruction specifying the available external tools, few-shot examples demonstrating several question-answer pairs with reasoning processes, and a question to be solved. We manually annotate the reasoning processes shown as few-shot examples with tool triggers marked with corresponding input data, adhering to a specific format. In this study, we let the string <<External tool name>> be a tool trigger. For example, if we use a calculator as an external tool, we annotate the reasoning processes with the tool trigger <<Calculator>> after input formulas like 2 × 62.
When reasoning, GPT-3 follows the prompt and generates a reasoning process including tool triggers. If a tool trigger is generated, we stop text generation. We then extract the name of the external tool and the input for the tool from the reasoning process, execute the tool with the input, and append the result to the end of the reasoning process. After that, we restart text generation.
If we cannot execute an external tool for some reason (e.g., invalid tool input is generated), we fall back on GPT-3 and let it generate the output of the tool.
We observe that the final answer value is nearly always contained in the last sentence of the reasoning process. Therefore, we apply an additional GPT-3 few-shot learning process for mapping the last sentence to the answer value by prompting several sentence-answer pairs.
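A minimal Python sketch of this interactive loop is shown below. The prompt template strings, the `llm_generate` placeholder for a GPT-3 completion call (temperature 0), and the single-line input extraction are illustrative assumptions rather than the exact implementation; the chemical reaction predictor, for instance, would read its input from the two preceding lines.

```python
import re

TRIGGER = re.compile(r"<<(.+?)>>")

def multitool_cot(question, few_shot_prompt, tools, llm_generate, max_tool_calls=10):
    """Interactive loop: generate until a tool trigger appears, run the named tool
    on the text preceding the trigger, append its output, and resume generation.
    `tools` maps tool names to callables; `llm_generate(prompt)` returns a completion."""
    text = few_shot_prompt + "\nQuestion: " + question + "\nAnswer:"
    for _ in range(max_tool_calls):
        completion = llm_generate(text)
        match = TRIGGER.search(completion)
        if match is None:                  # no trigger generated -> reasoning finished
            return text + completion
        text += completion[: match.end()]  # stop generation right after the trigger
        tool_name = match.group(1).strip()
        # for the calculator / molar-mass tools the input precedes the trigger on the same line
        tool_input = TRIGGER.sub("", text.splitlines()[-1]).strip()
        try:
            result = str(tools[tool_name](tool_input))
        except Exception:                  # invalid input etc.: fall back on GPT-3 itself
            fallback = llm_generate(text).strip()
            result = fallback.splitlines()[0] if fallback else ""
        text += " " + result + "\n"        # inject the tool output and continue
    return text
```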
## 4 Experiment
As a proof of concept, we applied MultiTool-CoT to solve a knowledge-based numerical reasoning task.
## 4.1 Dataset
We used the Task 2 dataset of NumGLUE (Mishra et al., 2022), which requires both numerical reasoning and domain-specific knowledge, mainly related to chemistry. Example (1) shows a question in the dataset.
(1) Find the amount of Calcium hydroxide required to react with 2 moles of Carbon dioxide to form 2 moles of Calcium carbonate along with 2 moles of Water.
All the answers are given as numbers. We used 325 questions in the test split for evaluation. We evaluated the accuracy.
## 4.2 External Tools
We implemented the following external tools and used them in the proposed framework.
- **Calculator (CAL)**: The calculator is given a mathematical formula and outputs the calculation result. The calculator is implemented using Python's eval function3. Operators in mathematical formulas are replaced according to Python's syntax. We prompt GPT-3 to output the tool trigger, <<Calculator>>, with a mathematical formula on the same line (a minimal sketch of such a tool is shown after this list).
- **Chemical reaction predictor (CRP)**: The chemical reaction predictor is given the chemical formula of reactants and products and outputs the chemical reaction equation by adjusting the coefficients so that the reactants and products have the same number of each atom. We prompt GPT-3 to output the tool trigger, <<Chemical reaction predictor>>, with the reactants and products on the previous two lines.
- **Molar mass list (MML)**: The molar mass list is given a chemical formula and outputs its molar mass. The molar mass of the chemical formula is calculated from the atoms and their number in the formula. The molar mass of the atoms is obtained from the knowledge base listing the weight of all atoms. We prompt GPT-3 to output the tool trigger, <<Molar mass list>>, with a chemical formula on the same line.

3https://docs.python.org/3/library/functions.html#eval
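As referenced in the calculator description above, the following is a minimal sketch of an eval-based calculator tool; the exact operator mapping is an assumption on our part.

```python
def run_calculator(formula: str) -> str:
    """Evaluate the formula written before a <<Calculator>> trigger.
    Unicode operators are mapped to Python syntax before calling eval."""
    expr = formula.replace("×", "*").replace("÷", "/").replace("^", "**")
    # eval with empty builtins as a light safety measure for arithmetic expressions
    return str(eval(expr, {"__builtins__": {}}, {}))

# e.g. run_calculator("12 × 3/342 × 100") -> roughly 10.53
```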
## 4.3 Methods For Comparison
We used GPT-3 (text-davinci-003; 175B parameters) via the OpenAI API4 and compared the following methods.
Zero-Shot We fed only the question into GPT-3 and considered the generated text as the answer.
Zero-Shot+CoT (Kojima et al., 2022) We fed the question with the sentence "Let's think step by step." into GPT-3 and obtained the answer with the intermediate reasoning steps. We then added the sentence fragment "Therefore, the answer (Arabic numerals) is " after the generated text and fed it into GPT-3 to get the final answer.
Few-Shot We fed the question with few-shot examples of question-answer pairs into GPT-3 and obtained the generated text as the answer.
Few-Shot+CoT We performed the proposed method without invoking any external tools. If the tool triggers were generated, we used GPT-3 to output the result.
4https://openai.com/api/
![3_image_0.png](3_image_0.png)
MultiTool-CoT ({CAL|CRP|MML} only) We performed the proposed method with one of the external tools introduced in Section 4.2. As for the other external tools, we let GPT-3 generate the result.
MultiTool-CoT (Ours) We performed the proposed method with all the external tools introduced in Section 4.2.
In few-shot settings, we used 20 questions in the training split as few-shot examples. The questions were manually selected to avoid bias in the number of external tool calls. To annotate the questions with reasoning processes containing tool triggers, we followed a two-step process. First, we employed GPT-3 to generate the reasoning processes for solving these questions using zero-shot chain-of-thought prompting (Kojima et al., 2022), aiming to obtain reasoning processes that GPT-3 can easily follow. Then, we manually annotated the reasoning processes with tool triggers and the input and output for the corresponding external tools.
We set the temperature parameter of GPT-3 to 0 to generate deterministic predictions. Therefore, we report the results of single runs of the methods.
## 4.4 Results
| Method | Acc. |
|--------------------------|-------|
| Zero-Shot† | 1 |
| Zero-Shot+CoT | 32.62 |
| Few-Shot† | 42 |
| Few-Shot+CoT | 57.85 |
| MultiTool-CoT (CAL only) | 62.77 |
| MultiTool-CoT (CRP only) | 64.31 |
| MultiTool-CoT (MML only) | 69.23 |
| MultiTool-CoT (Ours) | **85.85** |

Table 1: Performance in the Task 2 dataset of NumGLUE. The best result is shown in **bold**. (†) is cited from Mishra et al. (2022).

Table 1 shows the results. The proposed method achieved an accuracy of 85.85, a state-of-the-art performance. We observed a significant performance improvement compared to methods that did not use external tools and methods that used only one external tool. Note that the performance improvement from using multiple external tools is larger than the sum of the performance improvements from using each tool individually. This is because GPT-3 can fail to provide accurate answers due to a combination of different types of errors, such as incorrect arithmetic calculation and knowledge. The use of multiple external tools addressed such cases effectively, thereby improving the overall accuracy.

![3_image_1.png](3_image_1.png)
## 4.5 Case Study
Figure 2 shows an improved example. Zero-Shot and Few-Shot generated wrong answers.
Zero-Shot+CoT and Few-Shot+CoT performed reasoning based on the incorrect molar mass of Al2(CO3)3, resulting in incorrect answers. Besides, Few-Shot+CoT failed to calculate 12 ×
3/342 × 100. Our method, MultiTool-CoT, was able to answer correctly based on correct knowledge and calculation, relying on external tools.
More examples are presented in Figure 3 and Figure 4 in Appendix.
Despite the excellent results, there were 46 instances in which the proposed method failed to deliver accurate answers. Upon manual investigation of all the errors, we identified that the majority of them were caused by incorrect reasoning processes (39%) and invalid tool inputs (35%).
The remaining errors were categorized into incorrect gold answers (15%) and variations in answer formats (11%). Examples can be found in Appendix B. These errors are beyond the scope of what external tools can assist with.
## 5 Conclusion
We proposed MultiTool-CoT, a framework that allows LLMs to use multiple external tools, such as a knowledge retriever and a calculator, during reasoning. We applied MultiTool-CoT to a numerical reasoning task that requires knowledge of chemistry and confirmed its effectiveness. The proposed framework is general and can be applied to various tasks by changing and extending external tools. We plan to verify the effectiveness of the proposed method in other tasks in the future.
## Limitations
The major limitation of the present study is that the effectiveness of the proposed method has been confirmed only for a single task. This is because most existing reasoning tasks are relatively simple that they can be solved by a single external tool at most. For example, most existing numerical reasoning tasks provide self-contained questions; that is, all the required knowledge is included in the questions. In such tasks, a calculator is all that is needed as an external tool. However, it would be rare for a single external tool to be sufficient in real-world applications such as medical text analysis. It is crucial for future work to validate the effectiveness in such realistic scenarios that necessitate the use of multiple external tools.
## References
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel HerbertVoss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901. Curran Associates, Inc.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts,
Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022.
PaLM: Scaling Language Modeling with Pathways.
arXiv preprint arXiv:2204.02311.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman.
2021. Training verifiers to solve math word problems. *arXiv preprint arXiv:2110.14168*.
Andrew Gordon, Zornitsa Kozareva, and Melissa Roemmele. 2012. SemEval-2012 task 7: Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In **SEM 2012: The First* Joint Conference on Lexical and Computational Semantics - Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 394–398, Montréal, Canada. Association for Computational Linguistics.
Takeshi Kojima, Shixiang (Shane) Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. In *Advances in Neural Information Processing Systems*,
volume 35, pages 22199–22213. Curran Associates, Inc.
Aitor Lewkowycz, Anders Johan Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Venkatesh Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, and Vedant Misra. 2022. Solving quantitative reasoning problems with language models. In *Advances in Neural* Information Processing Systems, volume 35, pages 3843–3857. Curran Associates, Inc.
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2022. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing.
ACM Computing Surveys, 55(9).
Swaroop Mishra, Arindam Mitra, Neeraj Varshney, Bhavdeep Sachdeva, Peter Clark, Chitta Baral, and Ashwin Kalyan. 2022. NumGLUE: A suite of fundamental yet challenging mathematical reasoning tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3505–3523, Dublin, Ireland. Association for Computational Linguistics.
Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight, Benjamin Chess, and John Schulman. 2021. Webgpt: Browser-assisted question-answering with human feedback. *arXiv preprint arXiv:2112.09332*.
Maarten Sap, Vered Shwartz, Antoine Bosselut, Yejin Choi, and Dan Roth. 2020. Commonsense reasoning for natural language processing. In *Proceedings of the 58th Annual Meeting of the Association* for Computational Linguistics: Tutorial Abstracts, pages 27–33, Online. Association for Computational Linguistics.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed Chi, Quoc V Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems, volume 35, pages 24824–24837. Curran Associates, Inc.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. 2022.
React: Synergizing reasoning and acting in language models. *arXiv preprint arXiv:2210.03629*.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022. OPT:
Open Pre-trained Transformer Language Models.
arXiv preprint arXiv:2205.01068.
| Method | Few-Shot Examples | Acc. |
|---------------|-------------------|-------|
| CoT | 5 | 55.38 |
| CoT | 10 | 56.31 |
| CoT | 20 | 57.85 |
| MultiTool-CoT | 5 | 83.69 |
| MultiTool-CoT | 10 | 84.00 |
| MultiTool-CoT | 20 | 85.85 |
Table 2: Performance for the different number of fewshot examples in the Task 2 dataset of NumGLUE. The best result is shown in **bold**.
## A Effect Of The Number Of Few-Shot Examples On Performance
We investigated the effect of the number of few-shot examples on performance. Table 2 shows the results. Reducing the number of few-shot examples decreased accuracy, regardless of whether external tools were used. Surprisingly, however, the drop in performance was not drastic, suggesting the strong generalization ability of GPT-3. Note that it is not feasible to further improve the performance by simply increasing the number of few-shot examples, because the total number of tokens in the 20 few-shot examples is nearly 3,000, while the number of tokens that GPT-3 can process is 4,000.
## B Analysis Of Error Types
We manually investigated all 46 errors as described in Section 4.5. There were four types of errors: incorrect reasoning processes (39%), invalid tool inputs (35%), incorrect gold answers (15%), and variations in answer formats (11%).
Incorrect Reasoning Processes Figure 5 shows an error due to an incorrect reasoning process.
GPT-3 generated an incorrect mathematical formula (underlined in red), which was expected to be 3 × 16/160 × 100. Consequently, even though the calculation was performed correctly using the calculator, the final answer turned out to be incorrect.
Invalid Tool Inputs Figure 6 shows an error caused by an invalid tool input. GPT-3 generated an invalid product, CH2Cl2 (underlined in red),
which was expected to be CCl4. Thus, the chemical reaction predictor encountered a run-time error, resulting in an incorrect final answer.
Incorrect Gold Answers Figure 7 shows an error resulting from an incorrect gold answer. The answer predicted by the proposed method was "85 g/mol," whereas the gold answer was "90 g/mol."
Variations in Answer Formats Figure 8 shows an error attributed to a variation in the answer format. The answer predicted by the proposed method was "1 mole," while the gold answer was "18 g". Since 1 mole of water is 18g, they both represent the same quantity. However, due to the difference in the answer formats, it is considered an error.
![7_image_0.png](7_image_0.png)
![7_image_1.png](7_image_1.png)
![7_image_2.png](7_image_2.png)
![8_image_0.png](8_image_0.png)
Figure 5: An example of incorrect reasoning processes.
![8_image_1.png](8_image_1.png)
Figure 6: An example of the invalid tool inputs.
![8_image_2.png](8_image_2.png)
Figure 7: An example of incorrect gold answers.
![8_image_3.png](8_image_3.png)
Figure 8: An example of variations in answer formats.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
the 'Limitations' section
✗ A2. Did you discuss any potential risks of your work?
This study focuses on improving the reasoning performance of language models. We cannot think of particular concerns.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
0,1
✓ A4. Have you used AI writing assistants when working on this paper?
We use Grammarly for grammar checking for some sections of the paper.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 4,5
✓ B1. Did you cite the creators of artifacts you used?
4,5

B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 5
## C ✓ **Did You Run Computational Experiments?** 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
5
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
5
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
5
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
xu-etal-2023-mpmr | m{PMR}: A Multilingual Pre-trained Machine Reader at Scale | https://aclanthology.org/2023.acl-short.131 | We present multilingual Pre-trained Machine Reader (mPMR), a novel method for multilingual machine reading comprehension (MRC)-style pre-training. mPMR aims to guide multilingual pre-trained language models (mPLMs) to perform natural language understanding (NLU) including both sequence classification and span extraction in multiple languages. To achieve cross-lingual generalization when only source-language fine-tuning data is available, existing mPLMs solely transfer NLU capability from a source language to target languages. In contrast, mPMR allows the direct inheritance of multilingual NLU capability from the MRC-style pre-training to downstream tasks. Therefore, mPMR acquires better NLU capability for target languages. mPMR also provides a unified solver for tackling cross-lingual span extraction and sequence classification, thereby enabling the extraction of rationales to explain the sentence-pair classification process. | # Mpmr: A Multilingual Pre-Trained Machine Reader At Scale∗
Weiwen Xu12,† Xin Li2,‡ Wai Lam1 **Lidong Bing**2 1The Chinese University of Hong Kong 2DAMO Academy, Alibaba Group
{wwxu,wlam}@se.cuhk.edu.hk {xinting.lx,l.bing}@alibaba-inc.com
## Abstract
We present multilingual Pre-trained Machine Reader (mPMR), a novel method for multilingual machine reading comprehension (MRC)-
style pre-training. mPMR aims to guide multilingual pre-trained language models (mPLMs)
to perform natural language understanding
(NLU) including both sequence classification and span extraction in multiple languages. To achieve cross-lingual generalization when only source-language fine-tuning data is available, existing mPLMs solely transfer NLU capability from a source language to target languages. In contrast, mPMR allows the direct inheritance of multilingual NLU capability from the MRC-style pre-training to downstream tasks. Therefore, mPMR acquires better NLU capability for target languages. mPMR also provides a unified solver for tackling cross-lingual span extraction and sequence classification, thereby enabling the extraction of rationales to explain the sentence-pair classification process.1
## 1 Introduction
Multilingual pre-trained language models, acronymed as mPLMs, have demonstrated strong Natural language understanding (NLU) capability in a wide range of languages (Xue et al., 2021; Cai et al., 2021, 2022; Conneau et al., 2020a; Ding et al., 2022; Li et al., 2020a). In particular, mPLMs can maintain exceptional cross-lingual language understanding (XLU) capability on unseen *target* languages though mPLMs are only fine-tuned on resource-rich *source* languages like English.
It has been proved that optimizing cross-lingual representations of mPLMs can improve XLU ca-
∗ This work was supported by Alibaba Group through Alibaba Research Intern Program. The work described in this paper was also partially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region, China (Project Code: 14200719). † This work was done when Weiwen Xu was an intern at Alibaba DAMO
Academy. ‡ Xin Li is the corresponding author.
1The code, data, and checkpoints are released at https://github.com/DAMO-NLP-SG/PMR
pability. For example, cross-lingual supervisions, such as parallel sentences (Conneau and Lample, 2019) or bilingual dictionaries (Conneau et al.,
2020b) could enhance cross-lingual representations with better language alignment. XLM-R (Conneau et al., 2020a) and mT5 (Xue et al., 2021)
showed that appropriately incorporating more languages during pre-training leads to better crosslingual representations. A few works enriched the cross-lingual representations with factual knowledge through the utilization of multilingual mentions of entities (Calixto et al., 2021; Ri et al., 2022)
and relations (Liu et al., 2022; Jiang et al., 2022)
annotated in knowledge graphs. Despite their differences, the above methods essentially constructed more diverse multilingual corpora for pre-training mPLMs. These mPLMs would presumably meet their saturation points and are known to suffer from curse of multilinguality (Conneau et al., 2020a; Pfeiffer et al., 2022; Berend, 2022). Under this situation, introducing more training data from either existing (Pfeiffer et al., 2022) or unseen (Conneau et al., 2020a) languages for enhancing mPLMs may not bring further improvement or even be detrimental to their cross-lingual representations.
In this paper, instead of training a new mPLM
with better cross-lingual representations, we propose multilingual Pre-trained Machine Reader
(mPMR) to directly guide existing mPLMs to perform NLU in various languages. As shown in Figure 1, mPMR resembles PMR (Xu et al., 2022) for constructing multilingual machine reading comprehension (MRC)-style data with Wikipedia hyperlinks. These data are used to retrofit an mPLM
into an mPMR through an MRC-style continual pre-training. During retrofitting process (i.e., pretraining), mPMR jointly learns the general sequence classification and span extraction capability for multiple languages. In XLU fine-tuning, mPLMs solely rely on cross-lingual representations to transfer NLU capability from a source language to target languages. By contrast, mPMR enables the direct inheritance of multilingual NLU capability from the MRC-style pre-training to downstream tasks in a unified MRC formulation, which alleviates the discrepancies between source-language fine-tuning and target-language inference (Zhou et al., 2022a,b, 2023). Therefore, mPMR shows greater potential in XLU than mPLMs.
To improve the scalability of mPMR across multiple languages, we further propose *Unified Q/C Construction* and *Stochastic Answer Position* strategies for refining the curation of MRC data. With these two strategies, mPMR generalizes better to low-resource languages and becomes more robust to position bias (Ko et al., 2020).
The experimental results show that mPMR obtains clear improvements over XLM-R (Conneau et al., 2020a) on span extraction, with an average improvement of up to 12.6 F1 on TyDiQA, and 8.7 F1 on WikiAnn respectively. The analysis reveals that mPMR benefits from more multilingual MRC data for pre-training. We also found that mPMR converges faster in downstream tasks and is capable of using its strong extraction capability for explaining the sequence classification process.
## 2 Mpmr
We present the MRC model and training data of mPMR. We closely follow PMR (Xu et al., 2022)
and introduce the modifications for enabling multilingual MRC-style pre-training.
## 2.1 Model Pre-Training
Our mPMR follows the same MRC architecture of Xu et al. (2022, 2023) with an encoder and an extractor. The encoder maps input tokens X, the concatenation of the query Q, the context C, and special markers (i.e., [CLS] and [SEP]), into hidden representations H. For any two tokens Xi and Xj
(*i < j*), the extractor receives their contextualized representations Hi and Hj and predicts the probability score Si,j indicating the probability of the token span Xi:j being the answer to the query Q.
mPMR is guided with the Wiki Anchor Extraction (WAE) objective to train both the encoder and the extractor. WAE checks if the answer to the query exists in the context. If so, WAE would first regard the query and the context to be relevant and extracts the [CLS] token as a sequence-level relevance indicator. WAE would then extract all corresponding answers from the context.
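The exact extractor design follows PMR (Xu et al., 2022) and is not spelled out above; the PyTorch sketch below only illustrates the interface just described, assuming a simple feed-forward scorer over concatenated token representations (the class and layer names are ours, not the released implementation):

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class MRCModel(nn.Module):
    """Minimal sketch: encoder produces H = f(X); extractor scores token pairs."""

    def __init__(self, plm_name="xlm-roberta-large"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(plm_name)      # pre-trained mPLM
        hidden = self.encoder.config.hidden_size
        # Illustrative scorer g([H_i; H_j]) for the probability that span X_{i:j} answers the query.
        self.extractor = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.GELU(), nn.Linear(hidden, 1)
        )

    def forward(self, input_ids, attention_mask):
        H = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state   # (B, L, d)
        B, L, d = H.size()
        # Dense pairwise scores S[b, i, j] (computed densely here only for clarity).
        Hi = H.unsqueeze(2).expand(B, L, L, d)
        Hj = H.unsqueeze(1).expand(B, L, L, d)
        S = torch.sigmoid(self.extractor(torch.cat([Hi, Hj], dim=-1)).squeeze(-1))
        # S[:, 0, 0], i.e. the [CLS]-[CLS] pair, acts as the sequence-level relevance
        # indicator that WAE supervises for answerable vs. unanswerable queries.
        return S
```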
## 2.2 Multilingual Mrc Data
Training mPMR requires the existence of labeled
(query, context, answer) triplets. To obtain such data, we collected Wikipedia articles with anchor annotations for 24 languages, which are the most widely used and cover a reasonable number of languages used in XLU tasks (Ri et al., 2022).
As shown in Figure 1, we utilized a Wikipedia anchor to obtain a pair of correlated articles. One side of the pair is the article that provides in-depth descriptions of the anchor entity, which we defined as the *definition article*. The other side of the pair is named as the *mention article*, which mentions the specific anchor text2. We composed an answerable MRC example in which the anchor is the answer, the surrounding text of the anchor in the mention article is the context, and the definition of the anchor entity in the definition article is the query. Additionally, we can generate an unanswerable MRC
example by pairing a query with an irrelevant context without anchor association.
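A schematic sketch of this construction step is given below; it assumes anchors, definition texts, and mention contexts have already been extracted as word lists, and the helper name and data layout are illustrative rather than the released pipeline:

```python
import random

def build_mrc_examples(anchor, definition_words, mention_words, other_contexts, q_len=50):
    """Sketch: one answerable and one unanswerable (query, context, answers) triplet.

    `anchor` is the hyperlinked entity mention, `definition_words` come from the
    article describing the entity, and `mention_words` surround the anchor.
    All inputs are assumed to be pre-tokenised word lists.
    """
    query = definition_words[:q_len]          # first Q words of the definition article
    answerable = {
        "query": query,
        "context": mention_words,             # contains the anchor text
        "answers": [anchor],                  # plus any span identical to the anchor
    }
    # Unanswerable: the same query paired with a context that has no anchor association.
    unanswerable = {
        "query": query,
        "context": random.choice(other_contexts),
        "answers": [],
    }
    return answerable, unanswerable
```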
Unified Q/C Construction. PMR constructed the MRC query and context as valid sentences so as to keep the text coherent. However, sentence segmentation tools are usually not available for low-resource languages. To remedy this, we did not apply sentence segmentation in mPMR but only preprocessed Wikipedia articles with word tokenization. For each anchor, the MRC query comprises the first Q words in the definition article. To prevent information leakage during pre-training, similar to PMR, we anonymized the anchor entity in the query to the [MASK] token.
2 definition/mention article refers to the home/reference article of Xu et al. (2022).
| Model | #Params | XQuAD | MLQA | TyDiQA | WikiAnn | CoNLL | SemEval16 | PAWS-X | XNLI | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|
| XLM-R | 550M | 76.6 / 60.8 | 71.6 / 53.2 | 65.1 / 45.0 | 65.4 | 82.0 | 66.9‡ | 86.4 | 79.2 | 74.2 |
| mT5 | 580M | 67.0 / 49.0 | 64.6 / 45.0 | 57.2 / 41.2 | 55.7 | 71.0‡ | 62.5‡ | 86.4 | 75.4 | 67.5 |
| VECO | 550M | 77.3 / 61.8 | 71.7 / 53.2 | 67.6 / 49.1 | 65.7 | 81.3‡ | 63.0‡ | 88.7 | 79.9 | 74.4 |
| mLUKE-W | 561M | 79.6 / - | 72.7 / - | 65.2 / 48.5‡ | 67.7‡ | 83.0 | 61.2‡ | 88.2‡ | 79.4‡ | 74.6 |
| Wiki-CL | 550M | 72.1 / 56.9 | 70.8 / 50.5 | 73.2 / 57.3 | 64.7 | - | - | 88.4 | 79.2 | - |
| KMLM | 550M | 77.3 / 61.7 | 72.1 / 53.7 | 67.9 / 50.4 | 66.7‡ | 83.2 | 66.1‡ | 88.0 | 79.2 | 75.1 |
| *Our MRC Formulation* | | | | | | | | | | |
| XLM-Rbase | 270M | 70.8 / 56.9 | 64.4 / 47.9 | 50.8 / 38.2 | 57.9 | 79.2 | 60.0 | 85.0 | 73.3 | 67.7 |
| mPMRbase | 270M | 74.0 / 59.5 | 65.3 / 48.7 | 63.4 / 49.0 | 66.6 | 81.7 | 62.1 | 86.1 | 73.6 | 71.6 |
| XLM-R | 550M | 77.1 / 61.3 | 71.5 / 53.9 | 67.4 / 51.6 | 63.6 | 81.4 | 66.1 | 86.9 | 78.6 | 74.1 |
| mPMR | 550M | 79.2 / 64.4 | 73.1 / 55.4 | 74.7 / 58.3 | 70.7 | 84.1 | 68.2 | 88.0 | 79.3 | 77.2 |

Table 1: Results on XLU tasks. EQA (XQuAD, MLQA, TyDiQA) is reported as F1 / EM; NER (WikiAnn, CoNLL) and ABSA (SemEval16) as F1; the sentence-pair tasks (PAWS-X, XNLI) as accuracy.
The MRC context consists of C words surrounding the anchor.
Stochastic Answer Position. As mentioned by Ko et al. (2020), the model is prone to overfitting to the position shortcut if the answer in the context exhibits a fixed position pattern. In our case, suppose that the MRC context consists of C/2 words on both the left and right sides of the anchor, the model may learn the shortcut that the middle part of the context is likely to be the answer. To prevent such position bias, we propose a stochastic answer position method, which allows the answer to be presented in any position within the context.
Specifically, given an anchor in a Wikipedia article, the context comprises ξ words preceding the anchor and the C − ξ words following the anchor, where ξ is a random integer ranging from 0 to C
and varies across different contexts. In accordance with PMR, we treated all text spans identical to the anchor in the current context as valid answers.
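A minimal sketch of this stochastic answer position, assuming word-tokenised articles (variable names are ours), is shown below:

```python
import random

def build_context(words, anchor_start, anchor_end, c_len=200):
    """Sketch: place the anchor at a random offset inside a C-word context window."""
    # xi ~ U{0, ..., C}: xi words are taken before the anchor and C - xi words after it,
    # so the answer position varies across examples instead of always sitting mid-window.
    xi = random.randint(0, c_len)
    left = words[max(0, anchor_start - xi):anchor_start]
    right = words[anchor_end:anchor_end + (c_len - xi)]
    context = left + words[anchor_start:anchor_end] + right
    # Answer offsets inside the new context window (all spans identical to the anchor
    # elsewhere in the window would also be marked as valid answers).
    new_start = len(left)
    new_end = new_start + (anchor_end - anchor_start)
    return context, (new_start, new_end)
```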
## 3 Experimental Setup
Implementation Details. In mPMR, the encoder is loaded from XLM-R (Conneau et al., 2020a) and the extractor is randomly initialized. Both components are then continually pre-trained using the multilingual MRC data that we constructed. More hyper-parameters can be found in Appendix A.1.
Downstream XLU Tasks. We evaluated mPMR
on a series of span extraction tasks, including Extractive Question Answering (EQA), Named Entity Recognition (NER), and Aspect-Based Sentiment Analysis (ABSA). We also evaluated our mPMR on two sequence classification tasks. We followed Xu et al. (2022) to convert all tasks into MRC formulation to effectively leverage the knowledge that is acquired during MRC-style pre-training. For EQA,
we used XQuAD (Artetxe et al., 2020), MLQA
(Lewis et al., 2020), and TyDiQA (Clark et al.,
2020). For NER, we used WikiAnn (Pan et al.,
2017) and CoNLL (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003). SemEval16
(Pontiki et al., 2016) was used for ABSA task. Regarding the sequence classification, we used XNLI
(Conneau et al., 2018) and PAWS-X (Yang et al.,
2019). Additional dataset information and concrete examples are provided in Appendix A.2 Baselines. We compared mPMR with recent methods on improving cross-lingual representations, including 1) models pre-trained on a large number of languages: XLM-R (Conneau et al.,
2020a), mT5 (Xue et al., 2021), and VECO (Luo et al., 2021); 2) models that exploited multilingual entity information: Wiki-CL (Calixto et al., 2021),
and mLUKE-W (Ri et al., 2022); and 3) Model that utilized multilingual relation information: KMLM
(Liu et al., 2022). For a fair comparison, all models have approximately the same parameter size.
## 4 Results And Analyses
XLU Performance. Table 1 shows the results on a variety of XLU tasks. mPMR outperforms all previous methods with an absolute improvement of 2.1 F1 over the best baseline (i.e. KMLM).
mPMR shows greater improvements over previ-
| Index | Model | #Lang | PAWS-X | XQuAD | WikiAnn | Avg. |
|---------|-------------------------------------------------|---------|-------------|-------------|-------------|-------------|
| #1 | XLM-Rbase | 0 | 85.0 | 70.8 | 57.9 | 71.2 |
| #2 | #1 + MRC data in English | 1 | 85.2 (0.2↑) | 71.0 (0.2↑) | 59.5 (1.6↑) | 71.9 (0.7↑) |
| #3 | #2 + Stochastic Answer Position | 1 | 85.5 (0.3↑) | 73.0 (2.0↑) | 60.0 (0.5↑) | 72.8 (0.9↑) |
| #4 | #3 + MRC data in more languages | 10 | 85.9 (0.4↑) | 73.5 (0.5↑) | 64.7 (4.7↑) | 74.7 (1.9↑) |
| #5 | #4 + MRC data in even more languages (mPMRbase) | 24 | 86.1 (0.2↑) | 74.0 (0.5↑) | 66.6 (1.9↑) | 75.6 (0.9↑) |
Table 2: The process of retrofitting XLM-R into mPMR using multilingual MRC data (English→10 languages→24 languages) and our Stochastic Answer Position method. Each row accumulates modifications from all rows above.
| Label | Sentence 1 | Sentence 2 |
|---|---|---|
| Entailment | Rami Nieminen ( born February 25 , 1966 ) is a Finnish footballer. | Rami Nieminen ( born 25 February 1966 ) is a Finnish former footballer. |
| Contradiction | In 1938 he became the Government Anthropologist of the Egyptian-Anglo Sudan and conducted fieldwork with the Nuba. | In 1938 he became the government anthropologist of the anglo-Egyptian Sudan and led fieldwork with the Nuba. |
| Entailment | Stipsits 出生于科尔新堡,并在维也纳施塔莫斯多夫度过了他的童年。 | 什蒂普西奇出生于德国科恩堡,在维也纳斯塔莫斯多夫度过了他的童年。 |
| Contradiction | 纳舒厄白银骑士团队加入了夏季大学联盟,是本市的现役球队。 | Nashua Silver Knights 队是当前夏季联赛的一部分,也是该市的大学体育队。 |
| Entailment | これらの見方は、福音主義的、清教徒的、プロテスタント的な動きが出現するとともに、しばしば表明されてきました。 | これらの見解は多くの場合、新教徒、清教徒、福音主義者が出現するなかで示されてきた。 |
| Contradiction | 1954 年にスリナムに戻った後、弁護士としてパラマリボに定住した。 | 1954 年、パラマリボに戻ると、彼はスリナムで弁護士として定住しました。 |

Table 3: Sentence-pair examples and their labels.
ous methods on span extraction tasks. In particular, mPMR achieves up to 7.3 and 7.1 F1 improvements over XLM-R on TyDiQA and WikiAnn respectively. Such significant improvements probably come from the following two facts: (1) WikiAnn comprises a larger number of target languages (i.e.
40). Therefore, existing methods may struggle to align these low-resource languages with English due to a lack of language-specific data. (2) TyDiQA is a more challenging cross-lingual EQA task with 2x less lexical overlap between the query and the answer than MLQA and XQuAD (Hu et al., 2020).
Our mPMR, which acquires target-language span extraction capability from both MRC-style pre-training and English-only QA fine-tuning, achieves larger performance gains on the more challenging task.
mPMR Pre-training. To reflect the impact of our MRC-style data and Stochastic Answer Position method on pre-training, we present a stepby-step analysis of the retrofitting process starting from XLM-R in Table 2. Our findings suggest that the significant improvements observed are largely due to the inclusion of multilingual MRC data. Introducing English MRC data (model \#2) gives marginal improvements because model \#2
can only rely on cross-lingual representations to transfer the knowledge acquired during MRC-style pre-training. When using MRC data on more languages (model \#4 and \#5), we can observe significant improvements on XLU tasks. This can be attributed to the NLU capability directly inherited from MRC-style pre-training in target languages.
Additionally, with our Stochastic Answer Position method (model \#3), mPMR becomes more robust to position bias and thus improves XLU tasks.
Explainable Sentence-pair Classification. Inspired by PMR (Xu et al., 2022), we investigated if the extraction capability of mPMR can be leveraged to explain sentence-pair classification. Note
that sentence-pair classification focuses on the inference between the two sentences. If we construct the query with only the task label as PMR does, such query does not solely correspond to any meaningful span in the context, and thus is hard to guide the span extraction. Therefore, we leveraged another template "[CLS] label Sen-1 [SEP] Sen-2
[SEP]", where the two sentences are represented separately in the query and the context. In this template, we can extract the exact span from Sen-2 that leads to a contraction or entailment relation (i.e.,
the task label) with Sen-1. Specifically, we passed the sentence pair to the model twice, with each sentence of the pair being designated as the Sen-2 respectively, and extract the context span with the highest probability score from both sentences.
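A sketch of this two-pass procedure is given below; `best_span` is an assumed helper that returns the highest-probability context span and its score for a given query/context pair (special tokens are assumed to be added by the tokenizer):

```python
def extract_rationale(model, label, sent_a, sent_b, best_span):
    """Sketch: query = 'label Sen-1', context = 'Sen-2'; run the pair in both directions."""
    # Pass 1: sent_b plays the role of Sen-2, so a span is extracted from sent_b.
    span_b, score_b = best_span(model, query=f"{label} {sent_a}", context=sent_b)
    # Pass 2: swap the roles so a span can also be extracted from sent_a.
    span_a, score_a = best_span(model, query=f"{label} {sent_b}", context=sent_a)
    # Keep the context span with the highest probability score from both sentences.
    return span_b if score_b >= score_a else span_a
```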
As shown in Table 3, the extracted spans are indeed important rationales that determine the relationship between two sentences. Such a finding confirms that the extraction capability of mPMR can be appropriately used for explaining the sentence-pair classification process. However, the extraction capability may slightly affect the learning of sequence classification during fine-tuning, resulting in a 0.4 Acc. decrease on XNLI.
mPMR Fine-tuning. We investigated the effects of mPMR on XLU fine-tuning. Figure 2 shows that mPMR converges faster than XLM-R on WikiAnn with an extremely low loss value even fine-tuned for 500 steps. In terms of test set performance, mPMR outperforms XLM-R comprehensively and exhibits greater stability. As a result, mPMR provides a better starting point for addressing XLU
tasks compared to XLM-R. More examples from XQuAD and PAWS-X are provided in Figure 3 and 4.
## 5 Conclusions
This paper presents a novel multilingual MRC-style pre-training method, namely mPMR. mPMR provides a unified solver for cross-lingual span extraction and sequence classification and enables direct transfer of NLU capability from pre-training to downstream tasks. mPMR clearly improves the previous baselines and provides a possible solution to explain the sentence-pair classification process.
## Limitations
We identify the following two limitations of our work:
- Different from raw text, constructing MRC-style data from Wikipedia requires the existence of hyperlinks. This idea works well for resource-rich languages, such as English and Chinese. However, it is less effective for languages with few hyperlink annotations in Wikipedia, because a small amount of MRC-style training data can hardly guide the learning of NLU capability in those languages.
A possible solution is to explore other data resources to automatically construct large-scale MRC data for pre-training.
- As observed in Table 1, the improvements of sequence classification tasks are less significant than those of span extraction tasks. We suggest that the existence of anchors is not a strong relevance indicator between our constructed query and context. Such a finding is also observed in Chang et al. (2020). Therefore, constructing more relevant query-context pairs for sequence classification pre-training can possibly remedy this issue.
## References
Mikel Artetxe, Sebastian Ruder, and Dani Yogatama.
2020. On the cross-lingual transferability of monolingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.
Giusepppe Attardi. 2015. Wikiextractor. https://
github.com/attardi/wikiextractor.
Gábor Berend. 2022. Combating the curse of multilinguality in cross-lingual WSD by aligning sparse contextualized word representations. In *Proceedings* of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies.
Deng Cai, Xin Li, Jackie Chun-Sing Ho, Lidong Bing, and Wai Lam. 2021. Multilingual AMR parsing with noisy knowledge distillation. In Findings of the Association for Computational Linguistics: EMNLP
2021.
Deng Cai, Xin Li, Jackie Chun-Sing Ho, Lidong Bing, and Wai Lam. 2022. Retrofitting multilingual sentence embeddings with Abstract Meaning Representation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing.
Iacer Calixto, Alessandro Raganato, and Tommaso Pasini. 2021. Wikipedia entities as rendezvous across languages: Grounding multilingual language models by predicting Wikipedia hyperlinks. In *Proceedings* of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies.
Wei-Cheng Chang, Felix X. Yu, Yin-Wen Chang, Yiming Yang, and Sanjiv Kumar. 2020. Pre-training tasks for embedding-based large-scale retrieval. In *International Conference on Learning Representations*.
Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020. TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages. *Transactions of the* Association for Computational Linguistics.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020a. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*.
Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. In Advances in Neural Information Processing Systems.
Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating crosslingual sentence representations. In Proceedings of
the 2018 Conference on Empirical Methods in Natural Language Processing.
Alexis Conneau, Shijie Wu, Haoran Li, Luke Zettlemoyer, and Veselin Stoyanov. 2020b. Emerging cross-lingual structure in pretrained language models.
In *Proceedings of the 58th Annual Meeting of the* Association for Computational Linguistics.
Bosheng Ding, Junjie Hu, Lidong Bing, Mahani Aljunied, Shafiq Joty, Luo Si, and Chunyan Miao. 2022.
GlobalWoZ: Globalizing MultiWoZ to develop multilingual task-oriented dialogue systems. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020.
Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In International Conference on Machine Learning.
Xiaoze Jiang, Yaobo Liang, Weizhu Chen, and Nan Duan. 2022. Xlm-k: Improving cross-lingual language model pre-training with multilingual knowledge. In Proceedings of the AAAI Conference on Artificial Intelligence.
Miyoung Ko, Jinhyuk Lee, Hyunjae Kim, Gangwoo Kim, and Jaewoo Kang. 2020. Look at the first sentence: Position bias in question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP).
Patrick Lewis, Barlas Oguz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2020. MLQA: Evaluating cross-lingual extractive question answering.
In *Proceedings of the 58th Annual Meeting of the* Association for Computational Linguistics.
Juntao Li, Ruidan He, Hai Ye, Hwee Tou Ng, Lidong Bing, and Rui Yan. 2020a. Unsupervised domain adaptation of a pretrained cross-lingual language model. In *Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence,*
IJCAI 2020.
Xin Li, Lidong Bing, Wenxuan Zhang, Zheng Li, and Wai Lam. 2020b. Unsupervised cross-lingual adaptation for sequence tagging and beyond. *arXiv preprint* arXiv:2010.12405.
Linlin Liu, Xin Li, Ruidan He, Lidong Bing, Shafiq Joty, and Luo Si. 2022. Enhancing multilingual language model with massive multilingual knowledge triples.
In *Proceedings of the 2022 Conference on Empirical* Methods in Natural Language Processing.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *International Conference on Learning Representations*.
Fuli Luo, Wei Wang, Jiahao Liu, Yijia Liu, Bin Bi, Songfang Huang, Fei Huang, and Luo Si. 2021. VECO:
Variable and flexible cross-lingual pre-training for
language understanding and generation. In *Proceedings of the 59th Annual Meeting of the Association for* Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers).
Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Cross-lingual name tagging and linking for 282 languages. In *Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long* Papers).
Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, and Mikel Artetxe. 2022.
Lifting the curse of multilinguality by pre-training modular transformers. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Mohammad AL-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orphée De Clercq, Véronique Hoste, Marianna Apidianaki, Xavier Tannier, Natalia Loukachevitch, Evgeniy Kotelnikov, Nuria Bel, Salud María Jiménez-Zafra, and Gül¸sen Eryigit. ˘
2016. SemEval-2016 task 5: Aspect based sentiment analysis. In *Proceedings of the 10th International* Workshop on Semantic Evaluation (SemEval-2016).
Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka.
2022. mLUKE: The power of entity representations in multilingual pretrained language models. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long* Papers).
Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition. In *COLING-02: The 6th* Conference on Natural Language Learning 2002
(CoNLL-2002).
Erik F. Tjong Kim Sang and Fien De Meulder.
2003. Introduction to the CoNLL-2003 shared task:
Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations.
Weiwen Xu, Xin Li, Yang Deng, Wai Lam, and Lidong Bing. 2023. Peerda: Data augmentation via modeling peer relation for span identification tasks. In *The 61th* Annual Meeting of the Association for Computational Linguistics.
Weiwen Xu, Xin Li, Wenxuan Zhang, Meng Zhou, Lidong Bing, Wai Lam, and Luo Si. 2022. From clozing to comprehending: Retrofitting pre-trained language model to pre-trained machine reader. *arXiv* preprint arXiv:2212.04755.
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies.
Yinfei Yang, Yuan Zhang, Chris Tar, and Jason Baldridge. 2019. PAWS-X: A cross-lingual adversarial dataset for paraphrase identification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).
Wenxuan Zhang, Ruidan He, Haiyun Peng, Lidong Bing, and Wai Lam. 2021. Cross-lingual aspectbased sentiment analysis with aspect term codeswitching. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing.
Meng Zhou, Xin Li, Yue Jiang, and Lidong Bing. 2022a.
Enhancing cross-lingual prompting with mask token augmentation. *arXiv preprint arXiv:2202.07255*.
Ran Zhou, Xin Li, Lidong Bing, Erik Cambria, and Chunyan Miao. 2023. Improving self-training for cross-lingual named entity recognition with contrastive and prototype learning. In *The 61th Annual* Meeting of the Association for Computational Linguistics.
Ran Zhou, Xin Li, Lidong Bing, Erik Cambria, Luo Si, and Chunyan Miao. 2022b. ConNER: Consistency training for cross-lingual named entity recognition.
In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing.
## A Appendix A.1 More Implementation Details
We collect the 2022-08-01 dump3 of Wikipedia articles for the 24 languages in consideration. The statistics of each language can be found in Table 4. Then for each article, we extract the plain text with anchors via WikiExtractor (Attardi, 2015).
Word tokenization is performed using spaCy4if the language is supported, otherwise, we utilize PyThaiNLP5for Thai and Sacremoses6for remaining languages. For each anchor entity, we construct 10 answerable MRC examples and 10 unanswerable MRC examples as described in Sec. 2.2. Anchor entities with low frequency (below 10 occurrences for English entities and 5 occurrences for entities in other languages) were excluded.
In mPMR, we use Huggingface's implementations of XLM-R (Wolf et al., 2020). During the pre-training stage, the query length Q is set to 50 words, and the context length C is set to 200 words.
Both are computed before the subword segmentation. We follow the default learning rate schedule and dropout settings used in XLM-R. We use AdamW (Loshchilov and Hutter, 2019) as our optimizer. We train both mPMRbase and mPMR on 4 A100 GPU. The learning rate is set to 1e-5, and the effective batch size for each step is set to 256 and 80 for mPMRbase and mPMR respectively in order to maximize the usage of the GPU memory. We use the average scores of XQuAD, CoNLL, and PAWSX to select the best mPMR checkpoint. In fact, we continually pre-train mPMRbase and mPMR for 250,000 and 100,000 steps. The training speed is around 6250 steps per hour. The hyper-parameters of mPMRlarge on downstream XLU tasks can be found in Table 5.
## A.2 Downstream Xlu Tasks
We evaluate mPMR on XLU tasks including both span extraction (EQA, NER, and ABSA) and sequence classification (sentence pair classification).
We follow (Xu et al., 2022) to convert all tasks into MRC formulation and tackle them accordingly.
We show concrete examples for each task in Table 6. Specifically, we evaluate the performance of EQA on three benchmarks: XQuAD (Artetxe et al., 2020), MLQA (Lewis et al., 2020), and Ty-3https://dumps.wikimedia.org/enwiki/latest 4https://github.com/explosion/spaCy 5https://github.com/PyThaiNLP/pythainlp 6https://github.com/alvations/sacremoses DiQA (Clark et al., 2020) covering 11, 7, and 9 languages respectively. For NER evaluation, we use the WikiAnn dataset (Pan et al., 2017) restricted to the 40 languages from XTREME (Hu et al.,
2020), as well as the CoNLL dataset with 4 languages (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003); We also evaluate the XLU
performance of SemEval16 ABSA on 6 languages
(Pontiki et al., 2016), where we collect the data from Li et al. (2020b); Zhang et al. (2021). Regarding the sequence classification task, we evaluate XNLI (Conneau et al., 2018) and PAWS-X (Yang et al., 2019) with 15 and 7 languages respectively.
## A.3 Mpmr Performance Per Language
We show the detailed results for each language in each task in Table 7 (XQuAD), Table 8 (MLQA),
Table 9 (TyDiQA), Table 10 (WikiAnn), Table 11
(CoNLL), Table 12 (SemEval16), Table 13 (PAWSX), and Table 14 (XNLI).
| Language | # Entities | # MRC examples | Language | # Entities | # MRC examples |
|------------|--------------|------------------|------------|--------------|------------------|
| ar | 118,292 | 2,020,502 | ko | 94,616 | 1,597,076 |
| bn | 25,081 | 410,634 | nl | 251,323 | 4,185,913 |
| de | 864,746 | 14,795,826 | pl | 283,925 | 4,765,015 |
| el | 56,383 | 946,114 | pt | 216,695 | 3,648,603 |
| en | 966,197 | 19,303,940 | ru | 432,437 | 7,342,472 |
| es | 412,476 | 7,044,972 | sv | 169,030 | 2,808,214 |
| fi | 113,118 | 1,960,636 | sw | 4,857 | 65,724 |
| fr | 595,879 | 10,164,216 | te | 11,005 | 170,664 |
| hi | 15,350 | 242,078 | th | 31,676 | 522,434 |
| id | 70,960 | 1,164,662 | tr | 71,294 | 1,175,276 |
| it | 376,417 | 6,421,850 | vi | 68,665 | 1,147,772 |
| ja | 423,884 | 7,338,308 | zh | 259,785 | 4,438,004 |
| Total | 5,934,091 | 103,680,905 | | | |
| Dataset | XQuAD | MLQA | TyDiQA | WikiAnn | CoNLL | SemEval16 | PAWS-X | XNLI |
|---------------|---------|--------|----------|-----------|---------|-------------|----------|--------|
| Query Length | 64 | 64 | 64 | 32 | 32 | 32 | 64 | 64 |
| Input Length | 384 | 384 | 384 | 192 | 192 | 192 | 192 | 192 |
| Batch Size | 8 | 8 | 8 | 16 | 16 | 32 | 16 | 32 |
| Learning Rate | 3e-5 | 3e-5 | 2e-5 | 1e-5 | 1e-5 | 2e-5 | 5e-5 | 3e-5 |
| Epoch | 3 | 3 | 10 | 10 | 10 | 20 | 10 | 3 |
Table 5: Hyper-parameters settings in fine-tuning XLU tasks.
| Task | Format | Example Input | Example Output |
|---|---|---|---|
| EQA (XSQuAD) | Ori. | Question: Who lost to the Broncos in the divisional round? Context: The Broncos defeated the Pittsburgh Steelers in the divisional round, 23–16, by scoring 11 points in the final three minutes of the game. | Answer: "Pittsburgh Steelers" |
| | PMR | [CLS] Who lost to the Broncos in the divisional round ? [SEP] [SEP] The Broncos defeated the Pittsburgh Steelers in the divisional round, 23–16 , by scoring 11 points in the final three minutes of the game . [SEP] | (17,18) - "Pittsburgh Steelers" |
| NER (CoNLL) | Ori. | Two goals in the last six minutes gave holders Japan an uninspiring 2-1 Asian Cup victory over Syria on Friday. | ("Japan", LOC); ("Syria", LOC); ("Asian Cup", MISC) |
| | PMR | [CLS] "ORG" . Organization entities are limited to named corporate, governmental, or other organizational entities. [SEP] [SEP] Two goals in the last six minutes gave holders Japan an uninspiring 2-1 Asian Cup victory over Syria on Friday . [SEP] | ∅ |
| | PMR | [CLS] "PER" . Person entities are named persons or family . [SEP] [SEP] Two goals in the last six minutes gave holders Japan an uninspiring 2-1 Asian Cup victory over Syria on Friday . [SEP] | ∅ |
| | PMR | [CLS] "LOC" . Location entities are the name of politically or geographically defined locations such as cities , countries . [SEP] [SEP] Two goals in the last six minutes gave holders Japan an uninspiring 2-1 Asian Cup victory over Syria on Friday . [SEP] | (32,32) - "Japan"; (40,40) - "Syria" |
| | PMR | [CLS] "MISC" . Examples of miscellaneous entities include events , nationalities , products and works of art . [SEP] [SEP] Two goals in the last six minutes gave holders Japan an uninspiring 2-1 Asian Cup victory over Syria on Friday . [SEP] | (34,35) - "Asian Cup" |
| ABSA (SemEval16) | Ori. | Nice ambience, but highly overrated place. | ("ambience", POS); ("place", NEG) |
| | PMR | [CLS] "POS" . For aspect terms of positive sentiment . [SEP] [SEP] Nice ambience , but highly overrated place . [SEP] | (13,13) - "ambience" |
| | PMR | [CLS] "NEG" . For aspect terms of negative sentiment . [SEP] [SEP] Nice ambience , but highly overrated place . [SEP] | (18,18) - "place" |
| | PMR | [CLS] "NEU" . For aspect terms of neutral sentiment . [SEP] [SEP] Nice ambience , but highly overrated place . [SEP] | ∅ |
| Sen. Pair Classification (PAWS-X) | Ori. | Hypothesis: The Tabaci River is a tributary of the River Leurda in Romania. Premise: The Leurda River is a tributary of the River Tabaci in Romania. | Contradiction |
| | PMR | [CLS] Contradiction . The hypothesis is a sentence with a contradictory meaning to the premise . [SEP] [SEP] Hypothesis : The Tabaci River is a tributary of the River Leurda in Romania . Premise : The Leurda River is a tributary of the River Tabaci in Romania . [SEP] | (0,0) - "[CLS]" |
| | PMR | [CLS] Entailment . The hypothesis is a sentence with a similar meaning as the premise . [SEP] [SEP] Hypothesis : The Tabaci River is a tributary of the River Leurda in Romania . Premise : The Leurda River is a tributary of the River Tabaci in Romania . [SEP] | ∅ |
Table 6: MRC examples of XLU tasks. We use English examples here for demonstration purposes. Ori. indicates
the original data format of these tasks.
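The label-to-query templates in Table 6 can be assembled mechanically; the sketch below shows the NER (CoNLL) case, reusing the label descriptions from the table (the helper itself is illustrative):

```python
LABEL_DESCRIPTIONS = {   # copied from the CoNLL rows of Table 6
    "ORG": "Organization entities are limited to named corporate, governmental, or other organizational entities.",
    "PER": "Person entities are named persons or family.",
    "LOC": "Location entities are the name of politically or geographically defined locations such as cities, countries.",
    "MISC": "Examples of miscellaneous entities include events, nationalities, products and works of art.",
}

def ner_to_mrc(sentence):
    """Sketch: one MRC-style input per label; gold entities become (start, end) span answers."""
    inputs = []
    for label, description in LABEL_DESCRIPTIONS.items():
        query = f'"{label}" . {description}'
        inputs.append(f"[CLS] {query} [SEP] [SEP] {sentence} [SEP]")
    return inputs
```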
| Model | en | ar | de | el | es | hi | ru | th | tr | vi | zh | Avg. |
|-----------------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------------------|-------------------------|--------|
| XLM-Rbase 82.2 / 72.0 | 65.5 / 49.9 | 73.9 / 59.7 | 71.2 / 56.3 | 76.3 / 59.4 | 66.4 / 52.0 | 73.7 / 58.9 | 64.7 / 54.6 | 67.0 / 52.8 | 73.3 / 54.7 | 65.0 / 55.9 70.8 / 56.9 | | |
| mPMRbase | 84.4 / 73.4 | 69.6 / 53.2 | 76.4 / 61.5 | 74.9 / 58.4 | 77.4 / 60.2 | 69.2 / 54.5 | 75.2 / 58.8 | 69.2 / 57.6 | 70.4 / 55.8 | 74.8 / 55.8 | 71.8 / 65.5 74.0 / 59.5 | |
| XLM-R | 86.5 / 75.6 | 72.4 / 54.8 | 79.3 / 63.0 | 79.2 / 61.6 | 82.0 / 62.9 | 76.1 / 59.1 | 79.0 / 62.9 | 72.2 / 59.8 | 75.4 / 60.8 | 79.7 / 60.8 | 68.2 / 58.2 77.3 / 61.7 | |
| mPMR | 87.6 / 76.5 | 75.9 / 60.0 | 81.5 / 65.0 | 80.8 / 63.9 | 82.8 / 65.1 | 76.5 / 60.3 | 80.9 / 65.3 | 75.5 / 65.5 | 76.7 / 61.3 | 81.5 / 62.2 | 71.5 / 63.4 79.2 / 64.4 | |
Table 7: XQuAD results (F1 / EM) for each language.
Model en ar de es hi vi zh Avg.
XLM-Rbase 79.3 / 67.2 55.4 / 38.1 62.0 / 49.1 66.8 / 50.2 59.4 / 44.8 66.1 / 46.7 61.8 / 39.5 64.4 / 47.9
mPMRbase 81.1 / 68.9 58.5 / 41.0 63.6 / 50.5 68.5 / 52.1 60.3 / 46.4 68.3 / 49.2 56.6 / 32.9 65.3 / 48.7
XLM-R 83.4 / 71.0 64.9 / 45.8 69.6 / 54.8 74.1 / 56.8 70.7 / 53.4 73.3 / 53.0 64.4 / 42.4 71.5 / 53.9 mPMR 84.0 / 71.4 66.4 / 47.0 70.3 / 56.2 74.5 / 57.1 71.4 / 54.1 74.7 / 54.4 70.5 / 47.3 73.1 / 55.4
Model en ar bn fi id ko ru sw te Avg.
XLM-Rbase 66.8 / 57.3 55.7 / 42.0 31.5 / 20.4 52.6 / 40.3 69.1 / 55.6 36.3 / 27.9 54.8 / 36.5 53.0 / 34.7 37.4 / 28.8 50.8 / 38.2
mPMRbase 71.1 / 61.6 66.3 / 52.6 56.5 / 41.6 65.5 / 53.1 73.9 / 63.7 50.4 / 38.8 64.4 / 37.9 57.4 / 41.1 65.3 / 50.4 63.4 / 49.0
XLM-R 71.3 / 60.7 69.3 / 52.3 66.2 / 53.1 64.3 / 51.3 76.5 / 62.5 58.3 / 46.7 64.7 / 43.4 68.6 / 53.1 67.3 / 41.1 67.4 / 51.6
mPMR 76.4 / 65.2 76.0 / 58.0 72.3 / 55.8 74.4 / 56.5 84.1 / 71.3 62.2 / 50.7 72.5 / 43.2 76.5 / 63.1 77.7 / 60.8 74.7 / 58.3
Model en af ar bg bn de el es et eu fa fi fr he hi hu id it ja jv
XLM-Rbase 84.2 75.3 47.3 79.0 66.3 77.5 75.3 78.0 69.6 56.0 38.1 70.4 81.4 50.8 67.9 72.4 51.0 79.6 19.6 63.9 mPMRbase 85.1 80.7 57.6 80.2 71.9 81.2 77.6 79.5 79.1 71.3 49.6 80.4 82.4 65.2 71.7 82.2 58.6 83.5 43.2 72.0
XLM-R 85.4 81.1 53.9 84.0 73.8 82.3 82.8 80.4 68.8 54.8 64.2 75.9 81.4 59.3 72.9 76.4 59.3 84.6 13.2 71.2
mPMR 86.0 81.7 56.1 85.9 79.6 82.3 82.3 75.5 82.7 69.6 75.2 84.1 82.0 66.5 75.9 84.0 59.9 86.1 49.1 72.4
ka kk ko ml mr ms my nl pt ru sw ta te th tl tr ur vi yo zh
XLM-Rbase 58.7 40.6 34.3 50.8 46.0 63.8 40.6 81.5 80.0 65.4 76.1 43.0 46.4 4.2 71.9 68.7 45.7 70.9 1.5 23.0
mPMRbase 72.2 45.1 52.9 62.4 59.4 68.1 57.4 83.7 81.5 71.8 77.3 50.5 57.4 3.0 74.2 80.3 55.7 75.2 31.6 49.9
XLM-R 59.9 41.7 41.3 56.8 58.2 76.7 29.6 86.1 85.2 72.2 77.6 52.3 51.6 7.1 78.8 70.9 64.0 80.0 27.2 22.4
mPMR 77.3 46.8 57.9 70.6 68.1 73.8 57.8 86.0 83.6 72.8 79.8 62.6 58.1 3.8 83.0 80.3 76.2 83.6 36.1 54.4
Model en de es nl Avg.
XLM-Rbase 91.3 71.0 78.7 75.7 79.2 mPMRbase 91.9 74.3 80.8 79.7 81.7
XLM-R 92.8 73.7 81.6 77.7 81.4
mPMR 93.5 75.0 85.0 83.1 84.1
Model en es fr nl ru tr Avg.
XLM-Rbase 76.5 65.4 55.6 61.2 56.1 45.4 60.0 mPMRbase 77.6 68.6 56.4 62.2 59.5 48.4 62.1
XLM-R 82.4 71.3 60.3 67.4 61.2 49.1 66.1
mPMR 82.8 71.9 64.7 67.4 66.9 55.7 68.2
Table 8: MLQA results (F1 / EM) for each language.
Table 12: SemEval16 results (F1 Score) for each language.
Table 13: PAWS-X accuracy scores (Acc.) for each language.
Table 9: TyDiQA-GoldP results (F1 / EM) for each language.
Table 10: WikiAnn results (F1 Score) for each language.
Table 11: CoNLL results (F1 Score) for each language.
| Model | en | de | es | fr | ja | ko | zh | Avg. |
|-----------|------|------|------|------|------|------|------|--------|
| XLM-Rbase | 94.3 | 87.7 | 89.1 | 88.7 | 77.0 | 76.6 | 81.3 | 85.0 |
| mPMRbase | 94.3 | 88.4 | 90.1 | 88.9 | 79.0 | 79.4 | 82.4 | 86.1 |
| XLM-R | 95.2 | 89.3 | 91.0 | 90.9 | 79.6 | 79.9 | 82.5 | 86.9 |
| mPMR | 95.2 | 90.6 | 90.3 | 91.3 | 81.2 | 82.9 | 84.6 | 88.0 |
| Model | en | ar | bg | de | el | es | fr | hi | ru | sw | th | tr | ur | vi | zh | Avg. |
|-----------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|--------|
| XLM-Rbase | 84.6 | 71.0 | 76.8 | 75.6 | 74.9 | 77.9 | 76.9 | 68.9 | 74.1 | 64.4 | 71.1 | 72.4 | 65.2 | 73.2 | 73.0 | 73.3 |
| mPMRbase | 84.2 | 71.5 | 77.2 | 75.5 | 75.5 | 78.6 | 76.9 | 69.5 | 74.7 | 62.5 | 71.4 | 71.6 | 65.5 | 74.3 | 74.0 | 73.6 |
| XLM-R | 88.2 | 77.0 | 81.7 | 81.2 | 81.2 | 84.2 | 81.7 | 74.9 | 78.9 | 70.8 | 75.7 | 77.4 | 70.6 | 78.0 | 77.7 | 78.6 |
| mPMR | 88.3 | 77.9 | 82.9 | 82.2 | 81.0 | 83.5 | 82.2 | 75.2 | 79.8 | 71.2 | 76.1 | 78.9 | 71.6 | 78.9 | 79.0 | 79.3 |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section Limitations A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract, Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3
✓ B1. Did you cite the creators of artifacts you used?
Section 3
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 3, Appendix A.1
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Appendix A.1, Appendix A.2
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix A.1
## C ✓ **Did You Run Computational Experiments?** Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 3, Appendix A.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix A.1
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 3, Appendix A.1
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
wang-etal-2023-mospc | {MOSPC}: {MOS} Prediction Based on Pairwise Comparison | https://aclanthology.org/2023.acl-short.132 | As a subjective metric to evaluate the quality of synthesized speech, Mean opinion score(MOS) usually requires multiple annotators to score the same speech. Such an annotation approach requires a lot of manpower and is also time-consuming. MOS prediction model for automatic evaluation can significantly reduce labor cost. In previous works, it is difficult to accurately rank the quality of speech when the MOS scores are close. However, in practical applications, it is more important to correctly rank the quality of synthesis systems or sentences than simply predicting MOS scores. Meanwhile, as each annotator scores multiple audios during annotation, the score is probably a relative value based on the first or the first few speech scores given by the annotator. Motivated by the above two points, we propose a general framework for MOS prediction based on pair comparison (MOSPC), and we utilize C-Mixup algorithm to enhance the generalization performance of MOSPC.The experiments on BVCC and VCC2018 show that our framework outperforms the baselines on most of the correlation coefficient metrics, especially on the metric KTAU related to quality ranking. And our framework also surpasses the strong baseline in ranking accuracy on each fine-grained segment. These results indicate that our framework contributes to improving the ranking accuracy of speech quality. | # Mospc: Mos Prediction Based On Pairwise Comparison
Kexin Wang, Yunlong Zhao, Qianqian Dong, Tom Ko, Mingxuan Wang ByteDance, China
{wkx, zhaoyunlong.123, dongqianqian, tom.ko, wangmingxuan.89}@bytedance.com
## Abstract
As a subjective metric to evaluate the quality of synthesized speech, Mean opinion score (MOS)
usually requires multiple annotators to score the same speech. Such an annotation approach requires a lot of manpower and is also timeconsuming. MOS prediction model for automatic evaluation can significantly reduce labor cost. In previous works, it is difficult to accurately rank the quality of speech when the MOS
scores are close. However, in practical applications, it is more important to correctly rank the quality of synthesis systems or sentences than simply predicting MOS scores. Meanwhile, as each annotator scores multiple audios during annotation, the score is probably a relative value based on the first or the first few speech scores given by the annotator. Motivated by the above two points, we propose a general framework for MOS prediction based on pair comparison (MOSPC), and we utilize *C-Mixup* algorithm to enhance the generalization performance of MOSPC. The experiments on BVCC
and VCC2018 show that our framework outperforms the baselines on most of the correlation coefficient metrics, especially on the metric KTAU related to quality ranking. And our framework also surpasses the strong baseline in ranking accuracy on each fine-grained segment.
These results indicate that our framework contributes to improving the ranking accuracy of speech quality.
## 1 Introduction
Speech quality evaluation metrics are designed to reflect the speech quality of synthesized speech.
Speech quality evaluation metrics include objective metrics (Kubichek, 1993; Kim, 2005; Malfait et al., 2006) and subjective metrics (Wester et al., 2016a).
MOS prediction is the task of constructing an automatic evaluation metric by fitting the subjective evaluation metric MOS. The training process of previous works mainly focus on predicting the MOS of a single speech. By reviewing the annotation process of MOS, we found that comparison may be a potential scoring strategy employed by some of the annotators. Specifically, in the dataset VCC2018, each annotator scored an average of 226 speech. As each annotator annotates multiple speech in succession, the scores given by some annotators may be relative scores after comparison
(e.g., the first or first few utterances scored by the annotator may be used as a benchmark). Moreover, compared with predicting the specific MOS
score values of the speech samples, ranking the quality of speech samples has more practical application value and is often more difficult when speech samples have close MOS scores. Many previous works (Cooper et al., 2022) have raised the problem of generalization, and the performance will be significantly degraded when facing the outof-distribution (OOD) problems. Motivated by the above points, we propose a MOS prediction model based on pairwise comparison (**MOSPC**). Our contributions can be summarized as follows:
- We propose a general framework for MOS
prediction based on pair comparison, which forces the model to pay more attention to correctly rank the quality of two speech samples.
To verify that MOSPC contributes to speech quality ranking, we test the ranking accuracy on the validation set on fine-grained MOS
score segments. Then we utilize the C-Mixup algorithm to enhance the performance of generalization on BVCC.
- Our proposed framework outperforms the baselines on BVCC and VCC2018 on most of the correlation coefficient metrics, especially on the metric KTAU related to quality ranking. And our framework surpasses the strong baseline in ranking accuracy on each fine-grained segment. These results indicate that our framework contributes to improving ranking accuracy. The model trained with 1547 VCC2018 and BVCC outperforms baselines on the OOD datasets VCC2016 and BC2019 in zero-shot experiments respectively. In addition, we analyze the performance of our model for fine-grained OOD categories, such as unseen-system, unseen-listener and unseenspeaker.
## 2 Related Work
A classic work in MOS prediction task is MOSNET (Lo et al., 2019), which adopts the model structure of CNN-BiLSTM and proposes a loss function combining frame-level loss and utterance-level loss. Due to the need for manual annotation, few data can be used in the MOS prediction task. To reduce data waste, MBNET (Leng et al., 2021) proposes a MOS predictor consisting of a meanNet and a biasNet. LDNET (Huang et al.,
2022) observed that MBNet removes biasNet at inference and only retains meanNet, which is inefficient. Therefore, LDNET improves MBNET
by adopting an encoder-decoder structure to reduce the waste of parameters. DDOS (Tseng et al.,
2022) proposes to eliminate the domain mismatch between self-supervised learning (ssl) model and MOS prediction data, and adds score distribution of each utterance to model learning. UTMOS (Saeki et al., 2022) is based on ensemble learning of strong and weak learners. Fusion-SSL (Yang et al., 2022)
uses late fusion, and fuses the results of 7 ssl models to predict MOS value. Cooper et al. (2022)
makes a analysis of the OOD problem of MOS
prediction. The OOD problems in MOS prediction mainly include unseen-system, unseen-listener, unseen-speaker in the test and validation sets.
Our proposed MOSPC adopts dynamic pairwise comparison. Compared with the previous methods (Lo et al., 2019; Leng et al., 2021; Huang et al.,
2022; Yang et al., 2022), our method pays more attention to correctly evaluating the relative quality of speech.
## 3 Method
In this section, we will introduce the overall structure of MOSPC and the implementation of pairwise comparison, as well as the C-Mixup algorithm used to enhance generalization performance.
## 3.1 Preliminary
Given a dataset D including N speech samples, denoted as D = {[x1, y1], [x2, y2], . . . , [xN , yN ]},
xi and yi denote the ith speech sample and its ground truth. We denote the kth ssl model as fk, k ∈ {1, 2*, . . . ,* 7}, then the predicted MOS of the kth ssl model for input xi can be represented as mki = fk(xi). F represents the fusion model, which consists of 7 ssl models and a fusion layer.
mi = F(xi) denotes the predicted MOS made by the fusion model.
## 3.2 Mospc 3.2.1 Fusion Model
Our model is based on Fusion-SSL (Yang et al., 2022). The overall model structure is shown in Figure 1. The fusion model mainly consists of 7 ssl models: wav2vec_small, wav2vec_large, hubert_base, wav2vec_large(lv60), *wavlm_base*, wavlm_base+, *wavlm_large* and a fusion layer.
The fusion layer is a fully connected layer. During inference, speech xiis fed to ssl model f1, f2*, . . . , f*7 separately, and the MOS values m1i, m2i*, . . . , m*7i are obtained. Then the MOS
values are concatenated and fed into the fusion layer to predict MOS value mi. During training, we leverage pairwise comparison to force the model to pay more attention to the relative quality of speech.
## 3.2.2 Training In Stages
Pair Comparison Our proposed training process is shown in Figure 2. We dynamically make pairs in each batch and constrain each speech sample to form at most two pairs in order to prevent overfitting. The speech samples xi and xj are input into the ssl model respectively, then MOS scores mki
| Model | VCC2018 utterance-level (MSE / LCC / SRCC / KTAU) | VCC2018 system-level (MSE / LCC / SRCC / KTAU) | BVCC utterance-level (MSE / LCC / SRCC / KTAU) | BVCC system-level (MSE / LCC / SRCC / KTAU) |
|---|---|---|---|---|
| MOSNET | 0.538 / 0.643 / 0.589 / - | 0.084 / 0.957 / 0.888 / - | 0.816 / 0.294 / 0.263 / - | 0.563 / 0.261 / 0.266 / - |
| LDNET | 0.441 / 0.664 / 0.626 / 0.465 | 0.022 / 0.978 / 0.932 / 0.825 | 0.338 / 0.774 / 0.773 / 0.582 | 0.139 / 0.896 / 0.893 / 0.714 |
| MBNET | 0.426 / 0.680 / 0.647 / - | 0.029 / 0.977 / 0.949 / - | 0.433 / 0.727 / 0.753 / 0.564 | 0.228 / 0.844 / 0.870 / 0.685 |
| Fusion-SSL | 0.359 / 0.740 / 0.711 / 0.542 | 0.018 / 0.991 / 0.984 / 0.914 | 0.156 / 0.902 / 0.901 / 0.735 | 0.051 / 0.960 / 0.962 / 0.848 |
| MOSPC | 0.352 / 0.748 / 0.721 / 0.551 | 0.020 / 0.993 / 0.988 / 0.938 | 0.148 / 0.906 / 0.906 / 0.742 | 0.054 / 0.960 / 0.962 / 0.841 |
Table 1: Results on VCC2018 and BVCC. The left side of the table shows the results of our proposed MOSPC and baselines on VCC2018. The right side of the table shows the results of our proposed MOSPC and baselines on BVCC.
and mkj are predicted, and the loss function L*pair* is calculated jointly. All 7 ssl models are trained in such a pair comparison manner. Loss function L*pair* consists of three parts: the relative ranking loss L*rank* and the L1 loss of two speech samples denoted by Ld1 and Ld2 respectively:
$L_{pair}=(1-\beta)*L_{rank}+\beta*(L_{d1}+L_{d2})$ (1)
where β is a hyperparameter. The model learns to predict MOS scores by optimizing Ld1 and Ld2, and learns to rank the two speech samples by optimizing L*rank* (Burges et al., 2005). Refer to Appendix A for more details of L*rank*.
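The exact form of L*rank* is given in Appendix A (not reproduced here); the sketch below assumes a RankNet-style binary cross-entropy over the difference of predicted scores (Burges et al., 2005) and uses β = 0.6 as in Section 5.1:

```python
import torch.nn.functional as F

def pair_loss(m_i, m_j, y_i, y_j, beta=0.6):
    """Sketch of Eq. (1): L_pair = (1 - beta) * L_rank + beta * (L_d1 + L_d2).

    m_i, m_j are predicted MOS tensors; y_i, y_j are the ground-truth MOS tensors.
    """
    # L1 regression losses on the two predicted MOS scores.
    l_d1 = F.l1_loss(m_i, y_i)
    l_d2 = F.l1_loss(m_j, y_j)
    # Assumed RankNet-style ranking loss: the pair target is 1 if y_i > y_j, else 0.
    target = (y_i > y_j).float()
    l_rank = F.binary_cross_entropy_with_logits(m_i - m_j, target)
    return (1 - beta) * l_rank + beta * (l_d1 + l_d2)
```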
C-Mixup We observed degradation in generalization performance in the experiments on the BVCC
dataset. Therefore, after experiments in Section 5.2 and 5.3, for each ssl model trained on BVCC,
we adopt C-Mixup (Yao et al., 2022; Cheng et al.,
2023) to enhance the generalization performance.
C-Mixup proportionally combines in-set samples in pairs to construct pseudo-out-of-set samples to improve the generalization performance of the model.
Refer to Appendix B for details of C-Mixup algorithm. To distinguish the model from the one trained without C-Mixup, we named the model trained with C-Mixup as **MOSPC-C**.
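Appendix B is not reproduced here; as a rough illustration, the generic C-Mixup recipe of Yao et al. (2022) samples mixing partners with a kernel over label distances and interpolates inputs and labels, with bandwidth 1.0 and α = 2.0 as in Section 5.1. The sketch below follows that generic recipe (assuming fixed-length input features), not necessarily the exact variant used for MOSPC-C:

```python
import numpy as np

def c_mixup_batch(x, y, bandwidth=1.0, alpha=2.0, rng=np.random):
    """Sketch of the generic C-Mixup recipe for a regression batch.

    x: fixed-length feature array of shape (n, d); y: MOS scores as a NumPy array of shape (n,).
    """
    n = len(y)
    x_mix, y_mix = [], []
    for i in range(n):
        # Sample a mixing partner with probability decaying in label distance.
        w = np.exp(-((y - y[i]) ** 2) / (2 * bandwidth ** 2))
        w[i] = 0.0
        j = rng.choice(n, p=w / w.sum())
        lam = rng.beta(alpha, alpha)
        x_mix.append(lam * x[i] + (1.0 - lam) * x[j])    # interpolated pseudo-out-of-set input
        y_mix.append(lam * y[i] + (1.0 - lam) * y[j])    # interpolated label
    return np.stack(x_mix), np.array(y_mix)
```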
## 4 Dataset
The datasets adopted in this paper include main track data and out-of-domain data. Main track data include VCC2018 and BVCC, and out-of-domain data include VCC2016, BC2019 and ASV2019.
See Appendix C for details of datasets.
## 5 Experiments And Discussion
In this section, we will compare the performance of MOSPC with the baselines (Lo et al., 2019; Leng et al., 2021; Huang et al., 2022) and strong baseline Fusion-SSL (Yang et al., 2022) on the datasets BVCC and VCC2018. We test the generalization performance on BC2019 and VCC2016. We also list the ranking accuracy of fine-grained MOS segments.
## 5.1 Experiment Settings
We leverage a fusion layer and 7 ssl models to form the overall model. Each ssl model was trained in the pair comparison manner with the SGD optimizer for 1000 epochs. We applied early stopping based on the L1 loss on the validation set with a patience of 20 epochs, and set the learning rate to 1e-4 and the batch size to 8. The hyperparameter β was set to 0.6.
After training on BVCC in the pair comparison manner, we also use the C-Mixup algorithm to enhance the generalization performance of the model.
When training with the C-Mixup algorithm, we set the bandwidth to 1.0 and α to 2.0. We implemented our models in Fairseq (Ott et al., 2019). All experiments were performed on 7 32GB GPUs.
## 5.2 Mos Prediction Results
The left side of Table 1 shows the results of MOSPC and the baselines on VCC2018. MOSPC outperforms the baselines on all utterance-level correlation coefficient metrics. At the system-level, MOSPC outperforms the baselines on all metrics except MSE, which is slightly higher by 0.002.
Notably, the system-level KTAU surpasses the baselines significantly. KTAU
is a correlation coefficient metric used to indicate the ranking correlation between the predicted value and the ground truth. These results indicate that our framework contributes to the ranking correctness improvement, which is in line with our motivation.
The right side of Table 1 shows the results of our proposed MOSPC and baselines on BVCC.
The results show that our model outperforms previous works on all correlation coefficient metrics at utterance-level, especially on KTAU. At the system-level our framework matches the strong baseline performance on LCC and SRCC. As there are unseen-system samples in the BVCC validation set, the performance of system-level will be affected by the samples from unseen systems. These results also imply that the pair-wise training may impair the generalization performance. To solve this problem, we adopted C-Mixup algorithm to improve the generalization performance of our model trained on BVCC.
## 5.3 **Ranking Accuracy On Fine-Grained Mos** Segments
To show that our proposed framework improves speech ranking accuracy, we analyzed the ranking accuracy of speech quality on fine-grained MOS score segments. As shown in Table 2, on BVCC and VCC2018, we divided the ground-truth range 1-5 into four segments with a score interval of 1. On each fine-grained MOS segment and on the overall segment 1-5, we calculated the ranking accuracy over each speech pair (xi, xj)
with ground truth |yi − yj | ∈ (0, 1]. This is because, from our motivation, we pay more attention to whether our proposed framework can accurately rank speech with different but close score values.
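A minimal sketch of how such segment-wise ranking accuracy can be computed is shown below; assigning a pair to a segment only when both of its ground-truth scores fall inside that segment is our assumption, since the paper does not spell out this detail.

```python
import itertools

def ranking_accuracy(preds, labels, segments=((1, 2), (2, 3), (3, 4), (4, 5), (1, 5))):
    """Pairwise ranking accuracy on fine-grained MOS segments (cf. Table 2).

    A pair (i, j) is evaluated only if 0 < |y_i - y_j| <= 1; here a pair is
    counted for a segment when both ground-truth scores lie inside it.
    """
    results = {}
    for lo, hi in segments:
        correct, total = 0, 0
        for i, j in itertools.combinations(range(len(labels)), 2):
            yi, yj = labels[i], labels[j]
            if not (0 < abs(yi - yj) <= 1):
                continue
            if not (lo <= yi <= hi and lo <= yj <= hi):
                continue
            total += 1
            # the pair is ranked correctly if the predicted order matches the ground truth
            correct += (preds[i] - preds[j]) * (yi - yj) > 0
        results[f"{lo}-{hi}"] = correct / total if total else float("nan")
    return results
```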
Table 2: Ranking accuracy on fine-grained MOS segments. "1-2","2-3","3-4" and "4-5" are the fine-grained segments and "1-5" is the overall segment.
| | 1-2 | 2-3 | 3-4 | 4-5 | 1-5 |
|-------------|-------|-------|-------|-------|-------|
| **BVCC** | | | | | |
| Fusion-SSL | 0.728 | 0.724 | 0.739 | 0.675 | 0.778 |
| MOSPC | 0.731 | 0.737 | 0.742 | 0.679 | 0.787 |
| **VCC2018** | | | | | |
| Fusion-SSL | 0.482 | 0.469 | 0.515 | 0.509 | 0.493 |
| MOSPC | 0.489 | 0.473 | 0.517 | 0.514 | 0.494 |
The top half of Table 2 shows the fine-grained MOS segments ranking results on the validation set of BVCC. The bottom half of Table 2 shows the fine-grained MOS segments ranking results on the validation set of VCC2018. The result shows that our proposed framework outperforms the strong baseline in ranking accuracy on each segment on both BVCC and VCC2018. These results indicate that our framework contributes to improving the ranking accuracy of speech samples with different but close MOS scores.
## 5.4 Ood Experiments
We first analyze the generalization performance of models trained with VCC2018 on VCC2016.
As shown in Table 3, since VCC2016 only has system-level labels, we only present the systemlevel results. Our proposed framework outperforms previous works in all metrics, and the improvement is also significant in the KTAU metric, which again proves that our proposed framework contributes to correctly ranking the relative quality of speech.
Table 3: Zero-shot experiment results on VCC2016.
| Models (VCC2016, system-level) | MSE | LCC | SRCC | KTAU |
|--------------------------------|-------|-------|-------|-------|
| MBNET | 0.207 | 0.931 | 0.906 | - |
| LDNET | 0.215 | 0.939 | 0.896 | 0.768 |
| Fusion-SSL | 0.209 | 0.971 | 0.889 | 0.768 |
| MOSPC | 0.121 | 0.983 | 0.935 | 0.832 |
As mentioned before, from the experiments on the BVCC validation set we found that the pairwise training method may lead to a decrease in generalization performance, so we leveraged the C-Mixup algorithm to improve the generalization performance on BVCC after the experiments in Sections 5.2 and 5.3. Table 4 lists the zero-shot results on BC2019. The zero-shot results indicate that after training with C-Mixup, the generalization performance improved significantly, and the robustness to the unseen-system and multi-language OOD
challenges is also improved.
Table 4: Zero-shot experiment results on BC2019.
MOSPC-C indicates the model trained with C-Mixup algorithm
| Models | LCC (utt.) | SRCC (utt.) | KTAU (utt.) | LCC (sys.) | SRCC (sys.) | KTAU (sys.) |
|------------|------------|-------------|-------------|------------|-------------|-------------|
| LDNET | 0.384 | 0.365 | 0.252 | 0.500 | 0.473 | 0.354 |
| DDOS | 0.678 | 0.694 | 0.502 | 0.766 | 0.797 | 0.637 |
| Fusion-SSL | 0.718 | 0.642 | 0.469 | 0.803 | 0.792 | 0.601 |
| MOSPC | 0.704 | 0.709 | 0.523 | 0.731 | 0.778 | 0.594 |
| MOSPC-C | 0.756 | 0.711 | 0.521 | 0.816 | 0.851 | 0.667 |
Following Cooper et al. (2022), we analyze the performance of our model on the fine-grained OOD categories of unseen-system, unseen-listener and unseen-speaker with ASV2019 and BC2019. We first adopted ASV2019 and BC2019, respectively, to fine-tune the model trained on BVCC. As shown in Table 5, we report the mean and standard deviation of the squared errors for the unseen categories at the utterance level. The results indicate that our
| Models | unseen-speaker (ASV2019) | unseen-system (ASV2019) | unseen-system (BC2019) | unseen-listener (ASV2019) |
|------------|--------------------------|-------------------------|------------------------|---------------------------|
| Fusion-ssl | 1.104±1.641 | 1.114±1.707 | 0.191±0.225 | 1.032±1.558 |
| MOSPC | 1.098±1.602 | 1.124±1.690 | 0.189±0.213 | 1.041±1.572 |
| MOSPC-C | 1.089±1.587 | 1.103±1.673 | 0.179±0.217 | 1.030±1.547 |
Table 5: Analysis of fine-grained OOD categories. Means and standard deviations of squared errors for the fine-grained OOD categories of unseen-speaker, unseen-system and unseen-listener are shown.
proposed method performs better on the unseen-listener category than on unseen-speaker and unseen-system.
## 6 Conclusion
This paper proposes a general framework for MOS
prediction based on pairwise comparisons (MOSPC) to solve the problem that it is difficult for MOS prediction models to correctly rank speech quality when the MOS scores are close. The main track experiment results show that MOSPC outperforms baselines on most of the correlation coefficient metrics, especially on the metric KTAU related to speech quality ranking. Moreover, MOSPC
surpasses the strong baseline in ranking accuracy on each fine-grained segment. These results indicate that training in a pair comparison manner contributes to improving ranking accuracy. We leverage C-Mixup algorithm to enhance the generalization performance. On the OOD datasets VCC2016 and BC2019, our method outperforms baselines on all metrics. We also analyze the performance on fine-grained OOD categories. Our method performs better for the unseen-listener OOD category than for the unseen-speaker and unseen-system OOD categories.
## 7 Limitation
MOSPC can improve ranking accuracy on each fine-grained MOS score segment, but at the same time, the training method based on pair comparison may impair the generalization performance. As there are unseen-systems in the BVCC validation set, the system-level results of BVCC are affected by the generalization performance degradation. We introduced the C-Mixup algorithm to enhance the generalization performance, which increased the complexity of the experiment to some extent.
## References
Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Greg Hullender.
2005. Learning to rank using gradient descent. In Proceedings of the 22nd international conference on Machine learning, pages 89–96.
Xuxin Cheng, Qianqian Dong, Fengpeng Yue, Tom Ko, Mingxuan Wang, and Yuexian Zou. 2023. M 3 st:
Mix at three levels for speech translation. In ICASSP
2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5. IEEE.
Erica Cooper, Wen-Chin Huang, Tomoki Toda, and Junichi Yamagishi. 2022. Generalization ability of mos prediction networks. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8442–8446.
IEEE.
Rohan Kumar Das, Tomi Kinnunen, Wen-Chin Huang, Zhenhua Ling, Junichi Yamagishi, Yi Zhao, Xiaohai Tian, and Tomoki Toda. 2020. Predictions of subjective ratings and spoofing assessments of voice conversion challenge 2020 submissions. *arXiv preprint* arXiv:2009.03554.
Wen-Chin Huang, Erica Cooper, Junichi Yamagishi, and Tomoki Toda. 2022. Ldnet: Unified listener dependent modeling in mos prediction for synthetic speech.
In *ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing*
(ICASSP), pages 896–900. IEEE.
Doh-Suk Kim. 2005. Anique: An auditory model for single-ended speech quality estimation. *IEEE Transactions on Speech and Audio Processing*, 13(5):821–
831.
Simon King, Robert AJ Clark, Catherine Mayo, and Vasilis Karaiskos. 2008. The blizzard challenge 2008.
Simon King and Vasilis Karaiskos. 2014. The blizzard challenge 2013.
Simon Kinga and Vasilis Karaiskosb. 2009. The blizzard challenge 2009. In *The Blizzard Challenge 2009* Workshop.
Simon Kinga and Vasilis Karaiskosb. 2010. The blizzard challenge 2010. In *The Blizzard Challenge 2010* Workshop.
Simon Kinga and Vasilis Karaiskosb. 2011. The blizzard challenge 2011. In The Blizzard Challenge 2011 Workshop.
Simon Kinga and Vasilis Karaiskosb. 2016. The blizzard challenge 2016. In The Blizzard Challenge 2011 Workshop.
Robert Kubichek. 1993. Mel-cepstral distance measure for objective speech quality assessment. In *Proceedings of IEEE pacific rim conference on communications computers and signal processing*, volume 1, pages 125–128. IEEE.
Yichong Leng, Xu Tan, Sheng Zhao, Frank Soong, Xiang-Yang Li, and Tao Qin. 2021. Mbnet: Mos prediction for synthesized speech with mean-bias network. In *ICASSP 2021-2021 IEEE International* Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 391–395. IEEE.
Chen-Chou Lo, Szu-Wei Fu, Wen-Chin Huang, Xin Wang, Junichi Yamagishi, Yu Tsao, and Hsin-Min Wang. 2019. Mosnet: Deep learning based objective assessment for voice conversion. arXiv preprint arXiv:1904.08352.
Jaime Lorenzo-Trueba, Junichi Yamagishi, Tomoki Toda, Daisuke Saito, Fernando Villavicencio, Tomi Kinnunen, and Zhenhua Ling. 2018a. The voice conversion challenge 2018: Promoting development of parallel and nonparallel methods. arXiv preprint arXiv:1804.04262.
Jaime Lorenzo-Trueba, Junichi Yamagishi, Tomoki Toda, Daisuke Saito, Fernando Villavicencio, Tomi Kinnunen, and Zhenhua Ling. 2018b. The voice conversion challenge 2018: Promoting development of parallel and nonparallel methods. arXiv preprint arXiv:1804.04262.
Ludovic Malfait, Jens Berger, and Martin Kastner. 2006.
P. 563—the itu-t standard for single-ended speech quality assessment. IEEE Transactions on Audio, Speech, and Language Processing, 14(6):1924–1934.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)*,
pages 48–53.
Takaaki Saeki, Detai Xin, Wataru Nakata, Tomoki Koriyama, Shinnosuke Takamichi, and Hiroshi Saruwatari. 2022. Utmos: Utokyo-sarulab system for voicemos challenge 2022. arXiv preprint arXiv:2204.02152.
Tomoki Toda, Ling-Hui Chen, Daisuke Saito, Fernando Villavicencio, Mirjam Wester, Zhizheng Wu, and Junichi Yamagishi. 2016. The voice conversion challenge 2016. In *Interspeech*, pages 1632–1636.
Massimiliano Todisco, Xin Wang, Ville Vestman, Md.
Sahidullah, Héctor Delgado, Andreas Nautsch, Junichi Yamagishi, Nicholas Evans, Tomi H Kinnunen, and Kong Aik Lee. 2019. ASVspoof 2019: future horizons in spoofed and fake audio detection. In Proc. Interspeech, pages 1008–1012.
Wei-Cheng Tseng, Wei-Tsung Kao, and Hung-yi Lee.
2022. Ddos: A mos prediction framework utilizing domain adaptive pre-training and distribution of opinion scores. *arXiv preprint arXiv:2204.03219*.
Xin Wang, Junichi Yamagishi, Massimiliano Todisco, Héctor Delgado, Andreas Nautsch, Nicholas Evans, Md Sahidullah, Ville Vestman, Tomi Kinnunen, Kong Aik Lee, Lauri Juvela, Paavo Alku, Yu-Huai Peng, Hsin-Te Hwang, Yu Tsao, Hsin-Min Wang, Sébastien Le Maguer, Markus Becker, Fergus Henderson, Rob Clark, Yu Zhang, Quan Wang, Ye Jia, Kai Onuma, Koji Mushika, Takashi Kaneda, Yuan Jiang, Li-Juan Liu, Yi-Chiao Wu, Wen-Chin Huang, Tomoki Toda, Kou Tanaka, Hirokazu Kameoka, Ingmar Steiner, Driss Matrouf, Jean-François Bonastre, Avashna Govender, Srikanth Ronanki, Jing-Xuan Zhang, and Zhen-Hua Ling. 2020. ASVspoof 2019:
a large-scale public database of synthesized, converted and replayed speech. *Computer Speech &*
Language, page 101114.
Shinji Watanabe, Takaaki Hori, Shigeki Karita, Tomoki Hayashi, Jiro Nishitoba, Yuya Unno, Nelson Enrique Yalta Soplin, Jahn Heymann, Matthew Wiesner, Nanxin Chen, et al. 2018. Espnet: End-to-end speech processing toolkit. *arXiv preprint arXiv:1804.00015*.
Mirjam Wester, Zhizheng Wu, and Junichi Yamagishi.
2016a. Analysis of the voice conversion challenge 2016 evaluation results. In *Interspeech*, pages 1637–
1641.
Mirjam Wester, Zhizheng Wu, and Junichi Yamagishi.
2016b. Analysis of the voice conversion challenge 2016 evaluation results. In *Interspeech*, pages 1637–
1641.
Zhengdong Yang, Wangjin Zhou, Chenhui Chu, Sheng Li, Raj Dabre, Raphael Rubino, and Yi Zhao. 2022.
Fusion of self-supervised learned models for mos prediction. *arXiv preprint arXiv:2204.04855*.
Huaxiu Yao, Yiping Wang, Linjun Zhang, James Zou, and Chelsea Finn. 2022. C-mixup: Improving generalization in regression. arXiv preprint arXiv:2210.05775.
Yi Zhao, Wen-Chin Huang, Xiaohai Tian, Junichi Yamagishi, Rohan Kumar Das, Tomi Kinnunen, Zhenhua Ling, and Tomoki Toda. 2020. Voice conversion challenge 2020: Intra-lingual semi-parallel and cross-lingual voice conversion. *arXiv preprint* arXiv:2008.12527.
## A Details Of The Relative Ranking Loss
L*rank* was introduced by rankNet(Burges et al.,
2005). L*rank* is similar in form to cross entropy loss:
$$L_{rank}=-L\log(P)-(1-L)\log(1-P)\tag{2}$$
where L*rank* maps the outputs mki, mkj into probability P via a logistic function:
$$P=\frac{e^{m_{ki}-m_{kj}}}{1+e^{m_{ki}-m_{kj}}}\tag{3}$$
The value of L depends on the ground truths of two speech samples.
$$L=\begin{cases}0,&y_{i}<y_{j}\\0.5,&y_{i}=y_{j}\\1,&y_{i}>y_{j}\end{cases}\tag{4}$$
## B C-Mixup
For ssl model fk and input speech sample (xi, yi),
we need to sample another instance (xj , yj ) from the training set. C-Mixup first constructs a sampling probability distribution based on a symmetric Gaussian kernel for each audio sample xi:
$$P((x_{j},y_{j})\mid(x_{i},y_{i}))\propto\exp\left(-\frac{d(i,j)}{2\sigma^{2}}\right)\tag{5}$$
where $d(i,j)=d(y_{i},y_{j})=\|y_{i}-y_{j}\|_{2}^{2}$ represents the distance between yi and yj, and σ represents the bandwidth, which is a hyperparameter. Subsequently, these conditional probabilities are normalized into a probability mass function that sums to one, and another sample is selected by sampling from this probability mass function. Figure 3 illustrates the training process of C-Mixup. Each ssl model in this work contains two parts: a feature extractor and an encoder. xi and xj are fed into the feature extractor respectively to obtain embeddings ei and ej. Then ei and ej are proportionally combined to construct the embedding êij of the pseudo-out-of-set sample:
$${\hat{e}}_{ij}=\lambda*e_{i}+(1-\lambda)*e_{j}\tag{6}$$
where λ ∼ Beta(α, α), and α is the parameter of the Beta distribution, set as a hyperparameter.
The remaining models take the pseudo-out-of-set embedding eˆij as input to predict MOS score mˆ ij ,
Figure 3: Illustration of the training process of C-Mixup.
and compute the L1 loss with yˆij . yˆij is constructed in the same way as eˆij :
$${\hat{y}}_{i j}=\lambda*y_{i}+(1-\lambda)*y_{j}\qquad\qquad(7)$$
Consistent with the main track, each ssl model is trained with the C-Mixup algorithm separately.
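A minimal NumPy sketch of one C-Mixup step is given below; `embeddings` and `labels` are assumed to be arrays over the training set, and excluding the anchor itself from the partner sampling is our assumption.

```python
import numpy as np

def c_mixup_pair(embeddings, labels, i, sigma=1.0, alpha=2.0, rng=np.random):
    """One C-Mixup step for anchor sample i (Eqs. 5-7), sketched with NumPy.

    embeddings: (N, d) array of feature-extractor embeddings e.
    labels:     (N,) array of MOS labels y.
    Returns a pseudo-out-of-set embedding/label pair built from sample i and a
    partner j drawn with probability proportional to exp(-||y_i - y_j||^2 / (2 sigma^2)).
    """
    y_i = labels[i]
    # label-distance based sampling distribution over candidate partners (Eq. 5)
    dist = (labels - y_i) ** 2
    probs = np.exp(-dist / (2.0 * sigma ** 2))
    probs[i] = 0.0                      # assumption: do not pair a sample with itself
    probs /= probs.sum()
    j = rng.choice(len(labels), p=probs)

    lam = rng.beta(alpha, alpha)        # mixing ratio lambda ~ Beta(alpha, alpha)
    e_mix = lam * embeddings[i] + (1.0 - lam) * embeddings[j]   # Eq. (6)
    y_mix = lam * y_i + (1.0 - lam) * labels[j]                 # Eq. (7)
    return e_mix, y_mix
```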
## C Details Of Dataset

## C.1 Main Track Data

## C.1.1 Vcc2018
Samples in VCC2018 were all sampled from Voice Conversion Challenge 2018(Lorenzo-Trueba et al.,
2018a), including 20580 English speech samples synthesized by 38 systems. A total of 267 professional annotators participated in the speech labeling. Each speech sample was scored by four annotators, and the four integer scores were averaged as the label. For the sake of comparison, we split VCC2018 into a training set of 13580 samples, a validation set of 3000 samples and a test set of 4000 samples.
## C.1.2 Bvcc
BVCC integrates data from multiple synthetic speech competitions, including Blizzard Challenge(King et al., 2008; Kinga and Karaiskosb, 2009, 2010, 2011; King and Karaiskos, 2014; Kinga and Karaiskosb, 2016), the Voice Conversion Challenge(Toda et al., 2016; Wester et al.,
2016b; Lorenzo-Trueba et al., 2018b; Zhao et al.,
2020; Das et al., 2020) and publicly-available samples from systems implemented in ESPnet(Watanabe et al., 2018). BVCC includes a total of 7106 English speech samples submitted by 187 systems. We split BVCC into training, validation and test sets with a rate of 70%, 15% and 15%.
Each speech was scored by eight annotators, and the eight integer scores were averaged as the label. Unlike VCC2018, BVCC has samples from unseen systems in its validation set.
## C.2 Out-Of-Domain Data

## C.2.1 Vcc2016
In order to compare with previous works, we adopt VCC2016 to test the OOD performance of models trained with VCC2018. VCC2016 includes 26028 speech samples synthesized by 20 systems.
VCC2016 has only system-level labels and no utterance-level labels.
## C.2.2 Bc2019
We adopt BC2019 to test the OOD performance of models trained with BVCC. Samples in BC2019 are all sampled from Blizzard Challenge 2019, and are Chinese TTS synthesized speech rated by Chinese native speakers. Since all samples of BVCC
are in English, BC2019 can be used as a cross-language OOD case to test the generalization performance of models trained with BVCC. BC2019 provides 136 labeled samples for training, 136 samples for validation, and an additional 540 unlabeled samples.
## C.3 Asv2019
Following Cooper et al. (2022), we utilize ASV2019 (Wang et al., 2020; Todisco et al., 2019) to analyze the performance of our model in the fine-grained OOD experiments. Samples in ASV2019 are all in English and are sampled from the human assessment results on the ASVspoof2019 database LA scenario. As the scores in the human assessment results are distributed from 0 to 9, we linearly project them to the range 1-5.
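A minimal sketch of one plausible reading of this projection is shown below; mapping the endpoints 0 to 1 and 9 to 5 is an assumption, as the paper only states that the 0-9 ratings are linearly projected to 1-5.

```python
def project_to_mos_scale(score: float, src_min: float = 0.0, src_max: float = 9.0) -> float:
    """Linearly map an ASV2019 rating in [0, 9] onto the MOS range [1, 5].

    The exact endpoint mapping is an assumption; only the linear projection
    to 1-5 is stated in the text.
    """
    return 1.0 + 4.0 * (score - src_min) / (src_max - src_min)
```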
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
section 7.
✓ A2. Did you discuss any potential risks of your work?
section 7 and section 3.2.2 C-Mixup.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
section 1.
✓ A4. Have you used AI writing assistants when working on this paper?
System: Youdao and Grammarly, Assistance purely with the language of the paper.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Section 5.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
section 5.1.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? section 5.1.
✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
To make a reasonable comparison with our baselines, we provide results of a single run.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)? section 3 and section 5.1.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
lin-etal-2023-li | {LI}-{RAGE}: Late Interaction Retrieval Augmented Generation with Explicit Signals for Open-Domain Table Question Answering | https://aclanthology.org/2023.acl-short.133 | Recent open-domain TableQA models are typically implemented as retriever-reader pipelines. The retriever component is usually a variant of the Dense Passage Retriever, which computes the similarities between questions and tables based on a single representation of each. These fixed vectors can be insufficient to capture fine-grained features of potentially very big tables with heterogeneous row/column information. We address this limitation by 1) applying late interaction models which enforce a finer-grained interaction between question and table embeddings at retrieval time. In addition, we 2) incorporate a joint training scheme of the retriever and reader with explicit table-level signals, and 3) embed a binary relevance token as a prefix to the answer generated by the reader, so we can determine at inference time whether the table used to answer the question is reliable and filter accordingly. The combined strategies set a new state-to-the-art performance on two public open-domain TableQA datasets. | # Li-Rage: Late Interaction Retrieval Augmented Generation With **Explicit** Signals For Open-Domain Table Question Answering
Weizhe Lin∗2**, Rexhina Blloshmi**1, Bill Byrne1,2, Adrià de Gispert1, **and Gonzalo Iglesias**1 1Amazon Alexa AI
2University of Cambridge [email protected] {blloshmi, willbyrn, agispert, gjii}@amazon.com
## Abstract
Recent open-domain TableQA models are typically implemented as retriever-reader pipelines.
The retriever component is usually a variant of the Dense Passage Retriever, which computes the similarities between questions and tables based on a single representation of each.
These fixed vectors can be insufficient to capture fine-grained features of potentially very big tables with heterogeneous row/column information. We address this limitation by 1) applying late interaction models which enforce a finer-grained interaction between question and table embeddings at retrieval time. In addition, we 2) incorporate a joint training scheme of the retriever and reader with explicit table-level signals, and 3) embed a binary relevance token as a prefix to the answer generated by the reader, so we can determine at inference time whether the table used to answer the question is reliable and filter accordingly. The combined strategies set a new state-of-the-art performance on two public open-domain TableQA datasets.
## 1 Introduction
Tabular data is ubiquitous on the Web. Opendomain Table Question Answering (TableQA), the task of answering questions grounded in tables, is increasingly attracting attention of both public and commercial research, for its value in real-world applications. Research TableQA pipelines are typically implemented with two components: a retriever and a reader. The retriever chooses a small set from the entire pool of table candidates, while the reader generates answers processing each table candidate. State-of-the-art implementations use transformer-based models for both components. In particular, the retriever is built with variants of Dense Passage Retriever (Karpukhin et al., 2020, DPR), which computes question-table similarity by using single vector representations of the question and the table. Retriever and reader can be trained
∗Work done as an intern at Amazon Alexa AI.
separately (Herzig et al., 2021) or jointly (Pan et al., 2022) via Retrieval Augmented Generation loss (Lewis et al., 2020b, RAG). We observe three limitations which we address in this paper.
First, a table can be very large and might contain heterogeneous information across rows/columns; encoding it into a fixed-size vector risks information loss, which can have an impact on QA quality. One way to alleviate this issue is to replace DPR with a Late Interaction (LI) model, which encodes text into token-level representations. In particular, we find ColBERT (Khattab and Zaharia, 2020) to be very effective, even if not pretrained for tables.
Second, RAG uses only an implicit signal to guide the retriever. Recently, Lin and Byrne (2022)
proposed RAGE loss (Retrieval Augmented Generation with Explicit Signals) for visual QA, which in our setting rewards the retriever with table-level signals from the reader model in joint training.
Third, we observe empirically that the reader does not always rank answers coming from the gold table at the top. As our reader is a sequence-tosequence model, we propose a simple modification to the training data: we prepend binary relevance tokens ('yes/no') to the answer itself. The reader learns to generate a first token indicating whether the table is relevant to the question or not.
Using these techniques, we build an end-to-end framework, LI-RAGE, and achieve state-of-the-art results on two benchmarks for open-domain TableQA, NQ-TABLES (Herzig et al., 2021) and E2E-WTQ (Pan et al., 2021).1

1We make our code available at: https://github.com/amazon-science/robust-tableqa

## 2 Related Work

While open-domain TableQA is yet a relatively unexplored problem, with only a few applications in the past couple of years, there has been extensive work on table retrieval and TableQA separately. In table retrieval, recent advances in machine learning have enabled extracting deep features for tables with Transformers (Vaswani et al.,
2017), by designing models to parse complex tabular structure (Herzig et al., 2021; Wang et al.,
2021), or by simply linearizing tables with interleaving tokens to preserve its structure (Pan et al.,
2022; Wang et al., 2022). In TableQA, until recently researchers assumed gold tables were given and focused on developing models that understood and answered questions over tables, i.e. the readers. Earlier models generated commands in logical forms (e.g. SQL queries) that were executable over tables (Yu et al., 2018; Lin et al., 2019; Xu et al.,
2018), while recent state-of-the-art models directly predict the answers from the input question and table by either classification (Herzig et al., 2020; Yang et al., 2022, TaPas) or autoregressive generation (Liu et al., 2022, TaPEx). Following these advances, in open-domain TableQA the best performing systems are based on a retriever-reader pipeline (Herzig et al., 2021; Pan et al., 2022).
Herzig et al. (2021, DTR) leverages TaPas (Herzig et al., 2020) to both initialize a DPR-like retriever and the reader. T-RAG (Pan et al., 2022) uses DPR
as retriever of rows/columns by decomposing the table and generates the answer via a sequence-tosequence reader (Lewis et al., 2020a), applying the RAG loss to refine the retriever with implicit signals during end-to-end TableQA fine-tuning. Unlike DTR and T-RAG, CLTR (Pan et al., 2021)
employs only retrieval of rows and columns and obtains the answer cell by intersecting the top-scored ones. In this work we focus mainly on the retriever, and unlike previous work that relies on single vector embeddings, we leverage late interaction retrievers (Khattab and Zaharia, 2020) to achieve a finer-grained interaction between questions and tables. In contrast to T-RAG and CLTR, we do not need to decompose the table into rows and columns, but retrieve a whole table from the corpus, ensuring that the reader is given all the relevant information.
In addition, we explore different techniques for explicitly refining the retriever during end-to-end TableQA achieving superior performance.
## 3 Methodology
Given a question q, the tasks are to find the *gold* table t∗ from a table corpus T, i.e. table retrieval
(§ 3.1), and to derive the answer denotations S (1 or more cells from the table), i.e. question answering over the retrieved tables (§ 3.2). We assume that labeled datasets consisting of triples {(q, S, t∗)} are available to us. We flatten the tables into sequences with interleaving special tokens that encode its structure (see Appendix A).
## 3.1 Table Retrieval
In order to exploit question-table similarity at a finer-grained level than when using DPR models, we leverage LI models to encode and retrieve tables for a question. We use ColBERT, which consists of a question encoder Fq and a table encoder Ft, to encode questions and tables at the *token level*:
$$\mathbf{Q}=\mathcal{F}_{q}(q)\in\mathcal{R}^{l_{q}\times d};\quad\mathbf{T}=\mathcal{F}_{t}(t)\in\mathcal{R}^{l_{t}\times d}\tag{1}$$
where lq and lt are input token lengths of q and t.
The relevance score accounts for the interactions between all question and table token embeddings:
$$r(q,t)=\sum_{i=1}^{l_{q}}\max_{j=1}^{l_{t}}\mathbf{Q}_{i}\mathbf{T}_{j}^{\top}\tag{2}$$
LI models extract multi-dimensional question/table embeddings and token-level similarity, as opposed to finding the similarity of single embeddings for the whole question/table in DPR, thus capturing a finer-grained interaction between them.
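A minimal PyTorch sketch of the relevance score in Eq. (2), i.e. the ColBERT-style MaxSim operator, is given below; the function name is ours.

```python
import torch

def li_relevance(Q: torch.Tensor, T: torch.Tensor) -> torch.Tensor:
    """Late-interaction relevance score r(q, t) of Eq. (2).

    Q: (l_q, d) token embeddings of the question.
    T: (l_t, d) token embeddings of the linearized table.
    """
    sim = Q @ T.transpose(0, 1)  # (l_q, l_t) token-to-token similarities
    # MaxSim: best-matching table token per question token, summed over the question
    return sim.max(dim=1).values.sum()
```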
To train the model we exploit the gold (positive)
table t∗for each question q, i.e. explicitly considering the table-level ground truth. We use in-batch negative sampling for training, per Karpukhin et al.
(2020). All documents in a training batch other than t∗are considered negative for q, and denoted as N(q). We train with the contrastive loss LCL:
$$-\sum_{(q,t^{*})}\log\frac{\exp\left(r(q,t^{*})\right)}{\exp\left(r(q,t^{*})\right)+\sum_{z\in N(q)}\exp\left(r(q,z)\right)}\tag{3}$$
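For illustration, a minimal PyTorch sketch of this in-batch contrastive objective is given below, assuming the (B, B) matrix of relevance scores r(q_i, t_j) has already been computed with Eq. (2).

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(scores: torch.Tensor) -> torch.Tensor:
    """Contrastive loss with in-batch negatives, a simplified sketch of Eq. (3).

    scores: (B, B) matrix whose entry (i, j) is r(q_i, t_j); the gold table of
    question i sits in column i, and all other columns act as negatives N(q_i).
    """
    targets = torch.arange(scores.size(0), device=scores.device)
    # row-wise cross entropy reproduces -log softmax at the gold column
    return F.cross_entropy(scores, targets)
```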
To this end, for each q, the retriever outputs the K top-scoring tables $\{t_k\}_{k=1}^{K}$. Finally, following RAG, we obtain their (approximate2) conditional probability pθ(·|q) with the retriever parameters θ:
$$p_{\theta}(t_{k}|q)={\frac{\exp(r(q,t_{k}))}{\sum_{j=1}^{K}\exp(r(q,t_{j}))}}\qquad(4)$$
## 3.2 Retrieval-Based Tableqa
For the TableQA task we make use of a sequence-to-sequence Transformer-based model that directly produces an answer for a given question and table.

2because we sum over the top-K tables instead of all tables, assuming their probabilities are small and irrelevant.
The TableQA model pϕ takes as input a sequence composed of the question q and each of the retrieved tables tk as described in §3.1, and generates an answer yk for each input table tk:
$$y_{k}=\operatorname*{argmax}_{y}p_{\phi}(y|q,t_{k})\tag{5}$$
Finally, the model returns the answer associated with the highest probability/confidence:
$${\widehat{y}},{\widehat{t}}=\operatorname*{argmax}_{y,t_{k}}p_{\phi}(y|q,t_{k})\tag{6}$$
## 3.3 Joint Training Of Retrieval And Tableqa
We train both modules jointly using a compositional loss (Lin and Byrne, 2022, RAGE), which considers signals from table relevance and answer prediction, as follows:
$$-\sum_{(q,S)}\left(\sum_{k=1}^{K}\log p_{\phi}(s_{k}^{*}|q,t_{k})+\sum_{k\in\mathcal{P}^{+}(q,S)}p_{\theta}(t_{k}|q)\right)\tag{7}$$
where s∗k is a concatenation of all comma-separated answers in S and P+(q, S) = {k : yk = s∗k ∧
tk = t∗} is a subset of the retrieved K tables, which contains those tables that satisfy (1) being a gold table relevant to answering the question;
(2) the answer generator successfully produces the correct answer from that table. The core idea is to leverage the signal from model prediction to decide which tables are beneficial to producing the correct answer. Their scores are dynamically adjusted during training, which tailors the retriever to better serve the answer generation.
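A minimal sketch of this objective for a single training question is given below; the tensor layout and the string comparison used to build P+ are our assumptions, and in practice the reader log-likelihoods come from teacher forcing over the retrieved tables.

```python
import torch

def rage_loss(gen_log_probs, retr_probs, is_gold_table, predictions, gold_answer):
    """RAGE objective of Eq. (7) for one question, sketched for clarity.

    gen_log_probs: (K,) tensor, log p_phi(s* | q, t_k) of the gold answer under each table.
    retr_probs:    (K,) tensor, p_theta(t_k | q) over the top-K retrieved tables (Eq. 4).
    is_gold_table: length-K list of bools, whether t_k is the gold table t*.
    predictions:   length-K list of decoded answer strings y_k.
    gold_answer:   reference answer string s*.
    """
    # reader term: likelihood of the gold answer for every retrieved table
    reader_term = gen_log_probs.sum()

    # P+: retrieved tables that are gold AND from which the reader produced the correct answer
    in_p_plus = torch.tensor([
        gold and pred == gold_answer for gold, pred in zip(is_gold_table, predictions)
    ])
    retriever_term = retr_probs[in_p_plus].sum()

    # both terms are maximized, hence the negative sign
    return -(reader_term + retriever_term)
```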
## 3.4 Learned Table Relevance
The answer generator is trained to produce s∗k for each input (*q, t*k) pair. Ideally, we would assume that the answer generated from the gold table t∗is also associated with the highest probability from the answer generator. However, it might happen that an answer derived from a non-gold retrieved table may achieve higher confidence than the answer derived from a gold retrieved table. We propose a simple yet effective approach to improve this process: we add a *binary relevance token* preceding s∗k as 'yes' if tk = t∗, 'no' otherwise. This design aims at guiding the model to prioritize reliable answer sources at training time. At generation time, if the leading generation of a (*q, t*k) pair is 'yes', we consider tk to be a more reliable answer source and prioritize it over other input tables—that generate
'no' instead—when selecting the final prediction.
We rely on the confidence scores if the leading token of all the candidates is 'no'.
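A minimal sketch of this selection rule at inference time is shown below, assuming each candidate is a (relevance_token, answer, confidence) triple produced by the reader.

```python
def select_answer(candidates):
    """Pick the final answer among the K (relevance_token, answer, confidence) candidates.

    Candidates whose generation starts with the 'yes' relevance token are treated
    as reliable sources; generator confidence breaks ties among them and is the
    fallback when every candidate starts with 'no'.
    """
    reliable = [c for c in candidates if c[0] == "yes"]
    pool = reliable if reliable else candidates
    _, answer, _ = max(pool, key=lambda c: c[2])
    return answer
```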
## 4 Experimental Setup
Datasets and metrics. We evaluate our system on two benchmarks, i.e. NQ-TABLES (Herzig et al., 2021) and E2E-WTQ (Pan et al., 2021).3 NQ-TABLES contains generally hard questions extracted from the NaturalQuestions (Kwiatkowski et al., 2019) dataset, comprising the questions that can be answered from tables rather than plain text.
For this benchmark, we evaluate the models using: Token F1, i.e. token-wise F1 score; and exact match (EM) or accuracy, i.e. whether predictions match the annotations.
E2E-WTQ contains look-up questions that require a cell selection operation and is a subset of WikiTableQuestions (Pasupat and Liang, 2015). In E2E-WTQ, the train/valid/test splits are the same as in WikiTableQuestions, with questions limited to those that do not require aggregations across multiple table cells. We evaluate models via accuracy4.
In addition, we report Recall@K for the retrieval performance in both, which measures whether the gold table is among the top-K retrieved tables.5 System configurations. For the table retrieval component, we conduct contrastive experiments using both DPR and LI. We first fine-tune the official pretrained DPR or ColBERTv2 model on each dataset before using them in the joint retrieverreader training. We do not train the TableQA model from scratch, instead we warm-start the training with TaPEx, a state-of-the-art pre-trained model for tabular data understanding based on BART (Lewis et al., 2020a). Since the E2E-WTQ is very small and not enough for learning a robust TableQA
model, we additionally fine-tune TaPEx on its superset, i.e. WikiTableQuestions. Note that no test samples are leaked due to this as the dataset splits of E2E-WTQ are the same as WikiTableQuestions.
We select the best checkpoints based on the validation set. We set K=5 since it shows the best balance between performance and latency for both RAG and RAGE. Training details, computational cost and software solution are provided in Appendix D.

3Dataset statistics are shown in Appendix B.

4Also named as Hit@1 in Pan et al. (2021, 2022).

5We do not report metrics such as P@K, N@K, MAP used by T-RAG and CLTR, which decompose tables and are thus incompatible with our setting (see Appendix C).
| Models | Token F1 (NQ-TABLES) | EM (NQ-TABLES) | Recall@K (NQ-TABLES) | Accuracy (E2E-WTQ) | Recall@K (E2E-WTQ) |
|-----------------------------------------|-------------|-----------|----------|----------|---------|
| DTR+hn (Herzig et al., 2021) | 47.70 | 37.69 | 81.13@10 | - | - |
| CLTR (Pan et al., 2021) | - | - | - | 46.75 | - |
| T-RAG (Pan et al., 2022) | 50.92 | 43.06 | 85.40@10 | 50.65 | - |
| RAG | 39.67 | 38.33 | 69.16@5 | 38.05 | 61.29@5 |
| DPR-RAGE | 49.68 | 43.02 | 84.35@5 | 48.79 | 59.68@5 |
| LI-RAGE | 54.17 | 46.15 | 87.90@5 | 62.10 | 81.85@5 |
| (w/o joint training) | 53.53 | 45.52 | 85.21@5 | 59.27 | 81.45@5 |
| (w/o relevance tokens) | 50.56 | 42.53 | 86.90@5 | 53.69 | 81.75@5 |
| (w/o joint training & relevance tokens) | 49.83 | 42.19 | 85.21@5 | 50.16 | 81.45@5 |
Table 1: End-to-end TableQA performance on NQ-TABLES and E2E-WTQ. Best performances are in **bold**.
| Models | K=1 (NQ-TABLES) | K=5 (NQ-TABLES) | K=10 (NQ-TABLES) | K=50 (NQ-TABLES) | K=1 (E2E-WTQ) | K=5 (E2E-WTQ) | K=10 (E2E-WTQ) | K=50 (E2E-WTQ) |
|----------------------|-------|-------|-------|-------|-------|-------|-------|-------|
| BM25 | 17.62 | 35.97 | 43.80 | 61.00 | 58.09 | 74.27 | 79.67 | 87.55 |
| DPR-RAGE | 58.29 | 84.35 | 90.72 | 97.08 | 33.61 | 59.68 | 66.80 | 88.38 |
| (w/o joint training) | 53.07 | 84.25 | 90.62 | 97.81 | 32.78 | 58.47 | 66.39 | 88.38 |
| LI-RAGE | 59.12 | 87.90 | 92.81 | 97.60 | 68.46 | 81.85 | 85.89 | 93.36 |
| (w/o joint training) | 53.75 | 85.21 | 90.10 | 97.71 | 66.13 | 81.45 | 84.27 | 93.55 |
Comparison systems. We compare with models from the literature, i.e. DTR, CLTR, **T-RAG** (see
§2), and **BM25**—sparse retrieval baseline. Moreover, we build the following model variants:
LI-RAGE: our main system that leverages ColBERT as retriever, TaPEx as answer generator, RAGE loss for joint training and the binary relevance token in output. We also ablate the system showing the effectiveness of each feature. When disabling joint training, i.e., for ablating the model, the retriever is not updated.
DPR-RAGE: similar to LI-RAGE, except for the retriever being a DPR model.
RAG: we train the RAG (Lewis et al., 2020b) in TableQA data, initializing the retriever and answer generator with our fine-tuned DPR and TaPEx, respectively. Different from DPR-RAGE, RAG does not produce the binary relevance token and updates the retriever only with the RAG loss, which is an implicit signal from the reader.
## 5 Results And Discussions 5.1 Main Results
As shown in Table 1, LI-RAGE achieves the best performance across the board on both datasets, with more than 3 points improvements in Token F1 and EM in NQ-TABLES, and 11.45 points in E2E-WTQ with respect to previously best reported results in the literature. We attribute these results to the high performance of the LI retriever. On NQ-TABLES it obtains the best recall rate (87.90%)
when only 5 tables are retrieved, as opposed to the previous models that achieve a lower recall rate with K = 10 tables, and also performs better when compared with RAG and DPR-RAGE, by a large margin.
Effects of Joint Training. Similar to the observation of Lin and Byrne (2022), joint training with RAGE improves over the frozen system on both retrieval and TableQA performance. As shown in Table 1, joint training improves the end-to-end TableQA performance on both datasets by ∼0.6-2.83%, and shows a superior retrieval ability especially on NQ-TABLES (85.21 to 87.90).
Effects of Binary Relevance Tokens. As shown in Table 1, removing the binary relevance tokens greatly reduces system performance, by around 3.6% Token F1 and EM in NQ-TABLES and 8.4% in E2E-WTQ accuracy.
Effects of LI. We report the retrieval performance in Table 2. LI-RAGE achieves the highest recall, outperforming BM25 in both datasets, and DPR
by ∼3% on NQ-TABLES and by over 20-30%
Recall@5/1 on E2E-WTQ. The large margin on E2E-WTQ is because it contains generally long tables with diverse information, and LI models prove beneficial in learning richer table representations.
## 5.2 Remarks Of Design Rationale
We tailor our solution for TableQA, with the specific design of two main components, i.e., adding a relevance token and modifying the RAGE loss.
Relevance token. In open-domain QA, openended questions may have multiple correct answers and can be answered by different passages. As a result, increasing the number of retrieved passages
(K) often improves the retrieval performance by enlarging the coverage of search. However, this is not the case for tables; in open-domain TableQA,
the question often has only one gold table and most of the questions focus on a particular cell in the gold table. In our experiments, increasing K decreased the performance when K > 5 since presenting more tables to the answer generator only increases confusion and chance of mistakes (overconfident on some wrongly retrieved tables). When using relevance tokens as per our design, increasing K does not adversely impact the performance since irrelevant tables are dropped. In addition, we also explored alternative strategies that leverage retrieval scores to determine document reliability.
The first strategy predicts the final answer from the table with the highest retrieval score. This setting achieves 41.04 EM on NQ-TABLES, which is even lower than our ablated LI-RAGE *w/o joint training*
& relevance tokens attaining 42.19 EM (see Table 1). A second strategy weights predictions from different tables with the corresponding retrieval score, i.e., by multiplying the retrieval score (from the retriever) with the answer confidence (from the answer generator) when using K=5. This again performs poorer than our ablated LI-RAGE w/o joint training & relevance tokens that uses only answer generator confidence, achieving 40.91 EM
on NQ-TABLES and 42.19 EM, respectively. In summary, relevance tokens work better than document retrieval scores or combination of retriever and reader scores.
RAGE loss. We modify the original RAGE
loss (Lin and Byrne, 2022) to adapt it to the domain of tables. In particular, we dropped the third term in the equation, which penalizes documents when they do not contain gold answers and also do not contribute to successful question-answering.
Enabling this term in the loss, penalizes K − 1 documents in most cases, which leads to collapsed performance of the retriever in joint training for TableQA. This is motivated by the same fact that gold tables are relatively sparse in TableQA and penalizing wrong documents leads to instability of training and quick retriever overfitting. Disabling this term instead, softens the RAGE loss by only awarding "good" tables and distinguishing good tables from bad ones, which improved the performance by around 1% EM on NQ-TABLES.
## 6 Conclusion
We introduce a novel open-domain TableQA framework, LI-RAGE, that leverages late interaction retrievers to enable finer-grained interaction between questions and tables. Additionally, LI-RAGE incorporates the RAGE loss and binary relevance tokens which enable significant improvements over the state-of-the-art in two challenging TableQA tasks.
## 7 Limitations
Our proposed system was tested on two opendomain TableQA datasets, with one of them (E2EWTQ) being relatively small compared to the other.
Also, the current open-domain TableQA datasets are limited to look-up questions. They do not cover more complicated questions that involve multiple cells and complex table operations, such as SUM/MAX/MIN/SUBTRACT in some questions of WikiSQL and WikiTableQuestion. Therefore, the effectiveness of our system should be further evaluated on more complicated datasets of larger scale in the future. Another limitation lies in the token length limit of modern Transformer models.
The best-achieving models typically accept up to 1024 tokens (e.g. BART, the base model of TaPEx).
This limitation becomes more obvious when tables grow longer and the information being sought go beyond the limit. We believe that, with better approaches addressing this limitation, our system can achieve better performance. The solution can be either applying sampling strategies to pick the rows and columns that are most relevant to answering the question, or increasing the capacity of future Transformer models.
## References
Jonathan Herzig, Thomas Müller, Syrine Krichene, and Julian Eisenschlos. 2021. Open domain question answering over tables via dense retrieval. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 512–519, Online. Association for Computational Linguistics.
Jonathan Herzig, Pawel Krzysztof Nowak, Thomas Müller, Francesco Piccinno, and Julian Eisenschlos.
2020. TaPas: Weakly supervised table parsing via pre-training. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 4320–4333, Online. Association for Computational Linguistics.
Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019.
Billion-scale similarity search with GPUs. *IEEE*
Transactions on Big Data, 7(3):535–547.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics.
Omar Khattab and Matei Zaharia. 2020. Colbert: Efficient and effective passage search via contextualized late interaction over bert. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR
'20, page 39–48, New York, NY, USA. Association for Computing Machinery.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. *Transactions of the Association for Computational Linguistics*, 7:452–466.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020b. Retrieval-augmented generation for knowledge-intensive nlp tasks. *Advances in Neural Information Processing Systems*, 33:9459–9474.
Kevin Lin, Ben Bogin, Mark Neumann, Jonathan Berant, and Matt Gardner. 2019. Grammarbased neural text-to-sql generation. *arXiv preprint* arXiv:1905.13326.
Weizhe Lin and Bill Byrne. 2022. Retrieval augmented visual question answering with outside knowledge.
In *Proceedings of the 2022 Conference on Empirical* Methods in Natural Language Processing. Association for Computational Linguistics.
Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, and Jian-Guang Lou. 2022.
TAPEX: Table pre-training via learning a neural SQL
executor. In International Conference on Learning Representations.
Feifei Pan, Mustafa Canim, Michael Glass, Alfio Gliozzo, and Peter Fox. 2021. CLTR: An end-to-end, transformer-based system for cell-level table retrieval and table question answering. In *Proceedings of the* 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, pages 202–209, Online. Association for Computational Linguistics.
Feifei Pan, Mustafa Canim, Michael Glass, Alfio Gliozzo, and James Hendler. 2022. End-to-end table question answering via retrieval-augmented generation. *arXiv preprint arXiv:2203.16714*.
Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1470–
1480, Beijing, China. Association for Computational Linguistics.
Keshav Santhanam, Omar Khattab, Christopher Potts, and Matei Zaharia. 2022a. Plaid: An efficient engine for late interaction retrieval. In *Proceedings* of the 31st ACM International Conference on Information Knowledge Management, CIKM '22, page 1747–1756, New York, NY, USA. Association for Computing Machinery.
Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, Christopher Potts, and Matei Zaharia. 2022b. ColBERTv2: Effective and efficient retrieval via lightweight late interaction. In *Proceedings of the* 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3715–3734, Seattle, United States. Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, page 6000–6010, Red Hook, NY,
USA. Curran Associates Inc.
Fei Wang, Kexuan Sun, Muhao Chen, Jay Pujara, and Pedro Szekely. 2021. Retrieving complex tables with multi-granular graph representation learning. In *Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information*
Retrieval, SIGIR '21, page 1472–1482, New York, NY, USA. Association for Computing Machinery.
Zhiruo Wang, Zhengbao Jiang, Eric Nyberg, and Graham Neubig. 2022. Table retrieval may not necessitate table-specific model design. In *Proceedings of* the Workshop on Structured and Unstructured Knowledge Integration (SUKI), pages 36–46, Seattle, USA.
Association for Computational Linguistics.
Xiaojun Xu, Chang Liu, and Dawn Song. 2018. SQLNet: Generating structured queries from natural language without reinforcement learning.
Jingfeng Yang, Aditya Gupta, Shyam Upadhyay, Luheng He, Rahul Goel, and Shachi Paul. 2022.
TableFormer: Robust transformer modeling for tabletext encoding. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 528–537, Dublin, Ireland. Association for Computational Linguistics.
Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3911–3921, Brussels, Belgium. Association for Computational Linguistics.
## A Table Linearization
In the retriever component, the input table is linearized into a sequence with separation tokens interleaving the table elements to make the input structure-aware, e.g. "<SOT> [table title]
<EOT> <BOC> mountain peak <SOC> elevation <EOC> <BOR> red slate mountain <SOR>
13,162 ft <EOR> <BOR> ...".
In the reader component, the TaPEx tokenizer linearizes the table with structure-aware separation, for example, "*[HEAD] mountain peak | elevation*
[ROW] 1 : red slate mountain | 13 , 162 ft [ROW]
2 ...".
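For illustration, a minimal sketch of both linearizations is given below; it only reproduces the formats shown above (the reader-side format is in practice produced by the TaPEx tokenizer itself).

```python
def linearize_for_retriever(title, header, rows):
    """Flatten a table into the retriever input format illustrated above."""
    columns = " <SOC> ".join(header)
    body = " ".join(
        "<BOR> " + " <SOR> ".join(str(cell) for cell in row) + " <EOR>" for row in rows
    )
    return f"<SOT> {title} <EOT> <BOC> {columns} <EOC> {body}"


def linearize_for_reader(header, rows):
    """Reproduce the TaPEx-style reader format illustrated above (normally done by the tokenizer)."""
    head = "[HEAD] " + " | ".join(header)
    body = " ".join(
        f"[ROW] {i} : " + " | ".join(str(cell) for cell in row)
        for i, row in enumerate(rows, start=1)
    )
    return f"{head} {body}"


# Example with the table from the excerpt above:
# linearize_for_retriever("[table title]", ["mountain peak", "elevation"],
#                         [["red slate mountain", "13,162 ft"]])
```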
## B Dataset Statistics

| Dataset | Train | Dev | Test | #Tables |
|-----------|---------|-------|--------|-----------|
| NQ-TABLES | 9,594 | 1,068 | 966 | 169,898 |
| E2E-WTQ | 851 | 124 | 241 | 2,108 |

Table 3: Dataset statistics.
Table 4: Hyperparameters for DPR and LI training.
| Parameter | Value |
|-------------------------|--------------------------------|
| Negative samples | 4 (per positive sample) |
| Total GPUs | 8 |
| Learning rate | 0.0001 |
| Optimizer | Adam |
| Batch size (per device) | 8 (DPR) / 6 (LI) |
| Grad. accum. steps | 4 |
| Training steps | 6000 (NQ-TABLES) 600 (E2E-WTQ) |
Table 5: Hyperparameters for LI-RAGE training.

| Parameter | Value (NQ-TABLES) | Value (E2E-WTQ) |
|--------------------|-------------------|-----------------|
| Warmup steps | 0 | 0 |
| Epochs | 20 | 15 |
| Reader LR | 0.00002 | 0.000015 |
| Retriever LR | 0.00001 | 0.00001 |
| LR decay | Linear | None |
| Optimizer | AdamW | AdamW |
| Total GPUs | 8 | 8 |
| Batch size | 1 (per device) | 1 (per device) |
| Grad. accum. steps | 4 | 4 |
| Weight decay | 0.01 | 0.01 |
| Label smoothing | 0.1 | 0.1 |

## C Cltr And T-Rag Evaluation
In these open-domain TableQA datasets, each question is associated with only one gold table. As a result, Precision@K in retrieval has a certain upper bound at 1K
. Therefore, evaluating the retriever with Recall@K is more reasonable in this case.
We confirmed with the authors of CLTR and TRAG that they decomposed tables into single rows and columns to form the table database. In evaluating their systems on the E2E-WTQ dataset, the authors reported some retrieval metrics including Precision@K (P@K) which goes beyond the 1K
limit (e.g. T-RAG achieved 0.7806 P@5). This is because they reported a hit for a retrieved row/column as long as it belongs to the gold table. With different setups for table corpus, the retrieval metrics of their systems are not directly comparable.
Therefore, we compare Recall@K with BM25 and DPR only, and compare the end-to-end TableQA accuracy with CLTR and T-RAG (which is called Hit@1 in their papers).
| Models | Training Speed (iter/sec) | Training Batch Size | Training Time (mins) | Inference Speed (sec/iter) | Inference Batch Size |
|------------|---------------------------|---------------------|----------------------|----------------------------|----------------------|
| DPR | 1.10 | 8 | 60 (NQ) / 10 (WTQ) | - | - |
| LI | 1.75 | 6 | 60 (NQ) / 10 (WTQ) | - | - |
| DPR-RAGE | 2.1 | 1 | 300 (NQ) / 35 (WTQ) | 1.22 | 4 |
| LI-RAGE | 0.74 | 1 | 450 (NQ) / 50 (WTQ) | 1.40 | 4 |

Table 6: Computational cost for DPR/LI retriever models and LI-RAGE and DPR-RAGE.
Table 6: Computational cost for DPR/LI retriever models and LI-RAGE and DPR-RAGE.

| Parameter          | Value          |
|--------------------|----------------|
| Warmup steps       | 1000           |
| Epochs             | 40             |
| Learning Rate      | 0.00002        |
| LR decay           | Linear         |
| Optimizer          | AdamW          |
| Total GPUs         | 8              |
| Batch size         | 1 (per device) |
| Grad. accum. steps | 4              |
| Weight decay       | 0.01           |
| Label smoothing    | 0.1            |
## D Technical Details

## D.1 Hyperparameters
The training hyperparameters are shown in Tables 4, 5, and 7. Hyperparameters were tuned based on validation performance.
DPR: The dimension of the extracted table/question embeddings is d = 768.
LI: The dimension of the extracted table embeddings is lt × d = lt × 128, where lt depends on the length of the input table. Following Santhanam et al. (2022b), the dimension of the extracted question embeddings is fixed to lq × d = 32 × 128. Questions with fewer than lq tokens are padded.
## D.2 Indexing And Dynamic Retrieval
DPR. Following Lewis et al. (2020b), one-dimensional table embeddings are pre-extracted with the DPR model that has been finetuned on the retrieval task. The FAISS system (Johnson et al., 2019) is used to index all table embeddings, which enables fast nearest-neighbour search with sub-linear time complexity. In training LI-RAGE, question embeddings are dynamically extracted from the retriever, and the tables with the highest scores are retrieved using the precomputed index.
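As a rough illustration of this workflow, the sketch below builds a FAISS index over pre-extracted table embeddings and retrieves the top tables for a batch of question embeddings. The variable names are ours, and an exact flat index is used for simplicity; approximate FAISS indexes provide the sub-linear search mentioned above.

```python
# Minimal sketch of DPR-style indexing and retrieval with FAISS, assuming
# pre-extracted 768-dimensional table embeddings (illustrative, not the
# authors' released code). Requires faiss and numpy to be installed.
import numpy as np
import faiss

d = 768
table_embeddings = np.random.rand(10000, d).astype("float32")  # extracted offline

index = faiss.IndexFlatIP(d)     # exact inner-product (dot-product) index
index.add(table_embeddings)      # index all tables once

# At training/inference time, question embeddings come from the current retriever.
question_embeddings = np.random.rand(8, d).astype("float32")
scores, table_ids = index.search(question_embeddings, 10)  # top-10 tables per question
print(table_ids.shape)  # (8, 10)
```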
LI. Khattab and Zaharia (2020) proposed the first version of ColBERT, and Santhanam et al. (2022b) introduced ColBERTv2, an enhanced version of ColBERT. Santhanam et al. (2022a) developed an efficient search engine, PLAID, for ColBERTv2, which significantly improved retrieval latency. We refer readers to the aforementioned papers for more details. We started from the official ColBERTv2 implementation and refactored the code base, integrating ColBERTv2 into our training framework so that fast, dynamic retrieval can be performed during end-to-end joint training.
## D.3 Computational Cost
In Table 6 we report the computational cost of the proposed models. The time spent training LI is not significantly higher than for DPR, since both models are trained with contrastive learning. However, we note that building the LI index takes around 5 minutes, whereas building the DPR index only takes 40 seconds.
In terms of joint training, the end-to-end training time of LI-RAGE is longer. This is due to (1) slightly slower dynamic retrieval during end-to-end training and (2) refining the retriever via larger multi-dimensional embeddings, compared to the one-dimensional embeddings used in DPR-RAGE. However, the inference speed is not affected much (from 1.22 sec/iteration to 1.40). This suggests that, when deployed in real applications, LI-RAGE does not bring a significant increase in computation.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 7

A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 And 4
✓ B1. Did you cite the creators of artifacts you used?
Section 3 and 4, and Appendix B and D
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. We do not distribute artifacts with this submission. Upon acceptance we will release code.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. We use public datasets from the literature.

B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix B
## C ✓ **Did You Run Computational Experiments?** Section 4
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
No response.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 5

C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
li-etal-2023-well | How Well Apply Simple {MLP} to Incomplete Utterance Rewriting? | https://aclanthology.org/2023.acl-short.134 | Incomplete utterance rewriting (IUR) aims to restore the incomplete utterance with sufficient context information for comprehension. This paper introduces a simple yet efficient IUR method. Different from prior studies, we first employ only one-layer \textbf{M}LP architecture to mine latent semantic information between joint utterances for \textbf{IUR} task (\textbf{MIUR}). After that, we conduct a joint feature matrix to predict the token type and thus restore the incomplete utterance. The well-designed network and simple architecture make our method significantly superior to existing methods in terms of quality and inference speedOur code is available at \url{https://github.com/IMU-MachineLearningSXD/MIUR}. |
## How Well Apply Simple MLP to Incomplete Utterance Rewriting?
Jiang Li1,2, Xiangdong Su1,2 ∗, Xinlan Ma1,2**, Guanglai Gao**1,2 1 College of Computer Science, Inner Mongolia University, Hohhot, China 2 National & Local Joint Engineering Research Center of Intelligent Information Processing Technology for Mongolian, Hohhot, China [email protected], [email protected], [email protected], [email protected]
## Abstract
Incomplete utterance rewriting (IUR) aims to restore the incomplete utterance with sufficient context information for comprehension. This paper introduces a simple yet efficient IUR
method. Different from prior studies, we first employ only one-layer MLP architecture to mine latent semantic information between joint utterances for IUR task (**MIUR**). After that, we conduct a joint feature matrix to predict the token type and thus restore the incomplete utterance. The well-designed network and simple architecture make our method significantly superior to existing methods in terms of quality and inference speed1.
## 1 Introduction
Multi-turn dialogue modeling is a research area focusing on developing systems that can engage in multiple conversation turns with humans. This type of modeling is often used in the field of humanmachine interaction to improve the ability of artificial intelligence systems to communicate with humans in a natural and intuitive way. One of the challenges of multi-turn dialogue modeling is to accurately understand and respond to the context and meaning of the conversation, as well as to handle incomplete or ambiguous utterances that may be used for brevity or to convey meaning. As shown in Table 1, the incomplete utterance u3 refers to the semantic of "新冠肺炎" (COVID-19) with "那" (that). The limited context provided by a single utterance, such as u3, can lead to referential ambiguity and semantic incompleteness in downstream applications like retrieval-based dialogue systems, as demonstrated in a study by Ni et al.
(2022). In addition, Su et al. (2019) has revealed that coreference and ellipsis are prevalent in more than 70% of utterances, particularly in pro-drop
∗Corresponding author 1Our code is available at https://github.com/
IMU-MachineLearningSXD/MIUR
| Turn | Utterance (Translation)                 |
|------|-----------------------------------------|
| u1   | 你知道新冠肺炎吗 (Do you know COVID-19)   |
| u2   | 是的,我知道 (Yes, I know)                |
| u3   | 那是什么 (What is that)                  |
| u′3  | 新冠肺炎是什么 (What is COVID-19)         |
Table 1: An example of incomplete utterance rewriting.
u1 and u2 denote the context utterances. u3 is the incomplete utterance. u′3 is the rewritten utterance.
languages like Chinese. These linguistic phenomena in conversation present a significant challenge for the development of practical conversational AI
systems.
To address this issue, recent works (Kumar and Joshi, 2016; Su et al., 2019; Pan et al., 2019; Xu et al., 2020) proposed the Incomplete Utterance Rewriting (IUR) task, which aims to transform an incomplete or context-dependent statement into a self-contained, semantically equivalent one that can be understood without any additional context. As shown in Table 1, the IUR task (u3 → u′3) makes downstream dialogue modeling more precise.
Although previous works achieve promising results, the speed of autoregressive generation remains a limiting factor. To improve the speed, Huang et al. (2021) fuse sequence labeling and non-autoregressive generation, predicting the missing elements in the incomplete utterance and the rewritten utterance. In addition, Liu et al. (2020) formulate IUR as a semantic segmentation task based on U-Net (Ronneberger et al., 2015) and achieve better performance at a faster speed. However, the above-mentioned models are still not simple enough.
In this paper, we propose a simple yet efficient
![1_image_0.png](1_image_0.png)
solution: our model first employs an MLP architecture to simultaneously mine the semantic associations between the context utterances and the incomplete utterance and to capture the attention information between them. After the MLP architecture, we obtain the joint feature maps and further construct the token-pair edit matrix. Finally, this matrix is edited according to the predicted edit-type tokens to generate the final rewritten utterance. Experiments show that our approach achieves better performance on several datasets across different domains and languages, with low resource costs and a much faster inference speed.
## 2 Methodology
In this section, we elaborate on our proposed approach. As shown in Figure 1, our method mainly consists of two modules: MLP backbone network and joint feature matrix. For a multi-turn dialogue utterances (u1, u2*, ..., u*t), we concatenate all the context utterances to produce an m-length word sequence c = (c1, c2*, ..., c*m) and employ a special mask [SEP] to separate different context utterances. Meanwhile, all the incomplete utterances are denoted as an n-length word sequence x = (x1, x2*, ..., x*n).
## 2.1 Mlp Backbone Network
We first concatenate the context utterances and the incomplete utterance to construct a joint (m + n)-length word sequence H = (c1, c2, ..., cm, x1, x2, ..., xn). Since pre-trained language models have been found to be highly effective in various natural language processing tasks, we employ BERT (Devlin et al., 2019) to initialize the word vector matrix H, where H ∈ R^((m+n)×768). The MLP backbone network contains two MLP blocks. Specifically, the first MLP block is responsible for mining the global semantic association information between the context utterances c and the incomplete utterance x. The second MLP block aims to learn a confidence level for each word embedding, which enables the model to focus on important word information; this is important for the follow-up edit-type classification into substitute, insert and none. Each MLP block contains two fully-connected layers and a nonlinearity applied independently. For clarity and simplicity, we exclude the transposition process, and the whole process can be represented as:
$$\begin{array}{l}{{\bf I}_{*,i}={\bf H}_{*,i}+{\bf W}_{2}\sigma({\bf W}_{1}L N({\bf H}_{*,i})),}}\\ {{{\bf K}_{j,*}={\bf I}_{j,*}+{\bf W}_{4}\sigma({\bf W}_{3}L N({\bf I}_{j,*})),}}\end{array}\quad(1)$$
where i = 1, 2, ..., 768, j = 1, 2, ..., m + n, and σ represents GELU (Hendrycks and Gimpel, 2016). In addition, the MLP backbone contains other standard architectural components: skip-connections (He et al., 2016) and LayerNorm (LN) (Ba et al., 2016).
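A minimal PyTorch sketch of the two MLP blocks in Equation (1) is given below; the hidden sizes and module names are our assumptions rather than the released configuration.

```python
# Minimal PyTorch sketch of the one-layer MLP backbone in Equation (1):
# the first block mixes information along the token dimension, the second
# along the feature dimension. Hidden sizes are illustrative assumptions.
import torch
import torch.nn as nn

class MLPBackbone(nn.Module):
    def __init__(self, seq_len, dim=768, hidden_tokens=256, hidden_dim=3072):
        super().__init__()
        self.norm1 = nn.LayerNorm(seq_len)   # LN over each column H[:, i]
        self.fc1 = nn.Linear(seq_len, hidden_tokens)
        self.fc2 = nn.Linear(hidden_tokens, seq_len)
        self.norm2 = nn.LayerNorm(dim)       # LN over each row I[j, :]
        self.fc3 = nn.Linear(dim, hidden_dim)
        self.fc4 = nn.Linear(hidden_dim, dim)
        self.act = nn.GELU()

    def forward(self, h):                    # h: (m + n, dim), e.g. BERT embeddings
        # First block: I[:, i] = H[:, i] + W2 * GELU(W1 * LN(H[:, i]))
        i_mat = h + self.fc2(self.act(self.fc1(self.norm1(h.t())))).t()
        # Second block: K[j, :] = I[j, :] + W4 * GELU(W3 * LN(I[j, :]))
        k_mat = i_mat + self.fc4(self.act(self.fc3(self.norm2(i_mat))))
        return k_mat

backbone = MLPBackbone(seq_len=64)
print(backbone(torch.randn(64, 768)).shape)  # torch.Size([64, 768])
```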
In contrast to the approach taken by Tolstikhin et al. (2021), who treated the word vector matrix H as an image and employed 1 × 1 convolution on 1568 non-overlapping image patches, we directly input the word vector matrix H into the MLP backbone network. Our operation avoids the loss of semantic spatial information resulting from 1×1 convolution.
Furthermore, since the number of words in each utterance varies, we utilize padding operation and copy mechanism (Gu et al., 2016; Zeng et al., 2018)
to maintain a consistent sequence length. It is worth noting that our approach employs a one-layer MLP
backbone network.
## 2.2 Joint Feature Matrix
To further capture the relevance between word embeddings, we employ three similarity functions: dot-product similarity (dot Sim.), cosine similarity (cos Sim.), and linear similarity (linear Sim.). The word-to-word relevance between each context-utterance word embedding Kcm and each incomplete-utterance word embedding Kxn is captured using a 3-dimensional joint feature matrix J(cm, xn), represented as follows:
$$\begin{array}{c}{{\mathbf{J}(c_{m},x_{n})=[\mathbf{K}_{c_{m}}\cdot\mathbf{K}_{x_{n}};\cos(\mathbf{K}_{c_{m}},\mathbf{K}_{x_{n}});}}\\ {{l i n e a r(\mathbf{K}_{c_{m}},\mathbf{K}_{x_{n}})].}}\end{array}\tag{2}$$
Finally, we employ BatchNorm (Ioffe and Szegedy, 2015) on the joint feature matrix J(cm, xn) to expedite and stabilize the training process. The batch statistics are obtained by computing the mean and variance of the batch activations, which captures global information. After applying the BatchNorm operation, the matrix J(cm, xn) is flattened, and each feature vector is mapped to one of three token types: Substitute, Insert, or None. This generates the token-pair edit matrix.
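The following sketch illustrates Equation (2) together with the edit-type classifier; the bilinear parameterization of the linear similarity and the layer sizes are assumptions on our part.

```python
# Sketch of the joint feature matrix in Equation (2) and the edit-type
# classifier. The bilinear form used for the "linear" similarity and the
# layer sizes are our assumptions, not the released configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointFeatureMatrix(nn.Module):
    def __init__(self, dim=768, num_types=3):    # Substitute / Insert / None
        super().__init__()
        self.linear_sim = nn.Bilinear(dim, dim, 1)   # learnable "linear" similarity
        self.bn = nn.BatchNorm2d(3)
        self.classifier = nn.Linear(3, num_types)

    def forward(self, k_context, k_incomplete):      # (m, dim), (n, dim)
        m, n = k_context.size(0), k_incomplete.size(0)
        c = k_context.unsqueeze(1).expand(m, n, -1)
        x = k_incomplete.unsqueeze(0).expand(m, n, -1)
        dot = (c * x).sum(-1)                         # dot-product similarity
        cos = F.cosine_similarity(c, x, dim=-1)       # cosine similarity
        lin = self.linear_sim(c.reshape(-1, c.size(-1)),
                              x.reshape(-1, x.size(-1))).view(m, n)
        j = torch.stack([dot, cos, lin], dim=0)       # (3, m, n) joint feature maps
        j = self.bn(j.unsqueeze(0)).squeeze(0)        # BatchNorm over the 3 maps
        return self.classifier(j.permute(1, 2, 0))    # (m, n, 3) token-pair edit logits

module = JointFeatureMatrix()
print(module(torch.randn(20, 768), torch.randn(7, 768)).shape)  # (20, 7, 3)
```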
## 2.3 Supervised Label
Prior to training our model in the supervised fashion, we need to create word-level labels through the following process to construct our training set.
Specifically, we first calculate the longest common subsequence (LCS) between the incomplete utterance and the rewritten utterance. Then, we align the incomplete utterance, the rewritten utterance, and the LCS using a greedy strategy. Finally, we identify the corresponding tokens in the rewritten utterance and mark them accordingly. Please refer to Algorithm 1 in Appendix A for a detailed description.
## 3 Experiments

## 3.1 Experimental Setup
Datasets We conduct the experiments on three IUR benchmarks from different domains and languages, including RESTORATION-200K (Pan et al., 2019), REWRITE (Su et al., 2019) and CANARD (Elgohary et al., 2019). The statistics of the datasets are shown in Appendix B.
Baselines We compare the performance of our method with the following baselines: (i)
Generation models need to generate rewritten utterances from scratch, including Seq2Seq model L-Gen (Bahdanau et al., 2015), the hybrid pointer generator network L-Ptr-Gen (See et al.,
2017), the basic transformer models T-Gen and T-Ptr-Gen (Vaswani et al., 2017), Syntactic (Kumar and Joshi, 2016), PAC (Pan et al., 2019), L-Ptr-λ and T-Ptr-λ (Su et al., 2019). The above models are limited by the speed of generation. (ii) **Structure** aware models contain RUN(Liu et al., 2020) and SARG (Huang et al., 2021).
For more information about other experimental setups, please see Appendix B.
## 3.2 Main Results
Table 2 shows the experimental results on RESTORATION-200K. Our proposed approach, MIUR, achieves competitive results compared to all previous State-of-the-Art methods as shown in Table 2. The results indicate MIUR can effectively mine the semantic information between utterances with two types of MLP architecture. Furthermore, we discovered that MIUR places more emphasis on rewriting precision (Pn) metrics. The first MLP
architecture captures global semantic associations between context utterances and incomplete utterance, while the second MLP architecture focuses more on significant word embedding information.
Our approach effectively combines two different MLPs and provides an effective guideline for the subsequent construction of the joint feature map matrix, leading our approach to concentrate more on essential word information and to pursue higher rewriting precision. Additionally, we achieve comparable Recalln results to the baselines. The experimental results of REWRITE and CANARD also come to the same conclusion, which can be found in Appendix C.
| Model       | P1       | R1       | F1       | P2       | R2       | F2       | P3       | R3       | F3       | B1       | B2       | R1       | R2       |
|-------------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| Syntactic   | 67.4     | 37.2     | 47.9     | 53.9     | 30.3     | 38.8     | 45.3     | 25.3     | 32.5     | 84.1     | 81.2     | 89.3     | 80.6     |
| L-Gen       | 65.5     | 40.8     | 50.3     | 52.2     | 32.6     | 40.1     | 43.6     | 27.0     | 33.4     | 84.9     | 81.7     | 88.8     | 80.3     |
| L-Ptr-Gen   | 66.6     | 40.4     | 50.3     | 54.0     | 33.1     | 41.1     | 45.9     | 28.1     | 34.9     | 84.7     | 81.7     | 89.0     | 80.9     |
| PAC         | 70.5     | 58.1     | 63.7     | 55.4     | 45.1     | 49.7     | 45.2     | 36.6     | 40.4     | 89.9     | 86.3     | 91.6     | 82.8     |
| T-Ptr-λ♥    | -        | -        | 51.0     | -        | -        | 40.4     | -        | -        | 33.3     | 90.3     | 87.4     | 90.1     | 83.0     |
| SARG♥       | -        | -        | 62.4     | -        | -        | 52.5     | -        | -        | 46.3     | 92.2     | 89.6     | 92.1     | **86.0** |
| RUN         | 73.2     | **64.6** | 68.6     | 59.5     | **53.0** | 56.0     | 50.7     | 45.1     | 47.7     | 92.3     | 89.6     | 92.4     | 85.1     |
| MIUR (Ours) | **76.4** | 63.7     | **69.5** | **62.7** | 52.7     | **57.3** | **54.3** | **45.9** | **49.7** | **93.0** | **90.1** | **92.6** | 85.7     |
Table 2: Experimental results on RESTORATION-200K. All results are taken from the original papers. Dashes: results are not reported in the corresponding literature. ♥: results are derived from (Huang et al., 2021).
## 3.3 Inference Speed
Table 3 presents a comparison of the inference speed of our model with the baselines. All models were implemented in PyTorch and run on a single NVIDIA V100. We can observe that the proposed MIUR achieves the fastest inference speed among the compared methods. Specifically, MIUR is 3.14 times faster than L-Gen (n_Beam=1). Moreover, compared with the second-ranked RUN, MIUR achieves a 20% improvement in inference speed. This enhanced performance can be attributed to the fact that our model employs only a one-layer MLP backbone to capture inter-utterance semantic information, without utilizing other modules. The simplified architecture thus contributes to the model's faster inference speed without compromising performance.
| Model                | Speedup    |
|----------------------|------------|
| L-Gen (n_Beam=1)     | 1.00 ×     |
| L-Ptr-Net (n_Beam=1) | 0.57 ×     |
| L-Ptr-Gen (n_Beam=1) | 0.93 ×     |
| T-Gen (n_Beam=1)     | 0.25 ×     |
| T-Ptr-Net (n_Beam=1) | 0.13 ×     |
| T-Ptr-Gen (n_Beam=1) | 0.14 ×     |
| SARG (n_Beam=1)      | 2.63 ×     |
| RUN                  | 2.61 ×     |
| MIUR (Ours)          | **3.14** × |
Table 3: The inference speed comparison between MIUR and baselines on RESTORATION-200K.
n_Beam stands for the beam size in beam search, not applicable for RUN and MIUR.
## 3.4 Ablation Study
To verify the effectiveness of the MLP architecture in our model, we conduct a thorough ablation study in Table 4. Notably, the EM and P2 metrics decrease significantly when the model does not use the MLP backbone architecture. The results again show that the MLP backbone can effectively mine latent semantic information between utterances and provide more precise guidance for the follow-up edit-type classification. In addition, using only one type of MLP architecture also leads to performance degradation, since the first MLP architecture mines the semantic associations between the context utterances and the incomplete utterance, while the second MLP architecture focuses on capturing attention information between utterances. Only with the full MLP structure can MIUR capture semantic information more accurately and to a wider extent.
Table 4: The ablation results on REWRITE dataset.
As mentioned in Section 2.1, we perform an ablation study on the two padding strategies used to ensure a consistent sequence length. Table 5 indicates that the model obtains a small performance improvement with the copy mechanism, which further increases the semantic interaction between utterances, but this operation limits inference speed. Given the tiny improvement from the copy mechanism, our model employs the zero-padding method.
| w/o MLP | MLP 1 | MLP 2 | EM   | P2   | R2   | F2   | B2   |
|---------|-------|-------|------|------|------|------|------|
|         | ✔     | ✔     | 67.7 | 86.1 | 78.6 | 82.2 | 91.2 |
|         | ✔     |       | 66.4 | 84.8 | 78.3 | 81.4 | 90.6 |
|         |       | ✔     | 66.6 | 85.4 | 78.1 | 81.6 | 90.7 |
| ✔       |       |       | 65.1 | 82.4 | 77.3 | 80.1 | 90.5 |
| Padding Strategy | EM | P2 | R2 | F2 | Speedup |
|--------------------|-------|-------|-------|-------|-----------|
| zero padding | 67.73 | 86.12 | 78.63 | 82.21 | 1.00 × |
| copy mechanism | 67.81 | 86.22 | 78.69 | 82.33 | 0.96 × |
Table 5: The ablation results on REWRITE dataset.
## 3.5 More Discussion For Mlp
To further investigate whether our proposed MLP
backbone can effectively mine the semantic associations between utterances, we visualize the word embeddings composed of the context utterances and the incomplete utterance in Figure 2. The yaxis represents our selection of 40 words consisting of the context utterances and the incomplete utterance. The x-axis represents the features of the first 100 dimensions of our intercepted word embeddings. It is not difficult to notice that word embeddings appear more distinctly characterized by vertical stripes after MLP backbone. Consequently, this further indicates that semantic information between words is more closely related, and our method can effectively learn the semantic relatedness between words after passing through the MLP network we designed.
![4_image_0.png](4_image_0.png)
## 4 Conclusion & Future Work
In this paper, we propose a simple yet effective IUR method. We utilize a one-layer MLP structure to mine the inter-utterance semantic information from different perspectives. This improves the ability to predict the correct token type between the incomplete utterance and the rewritten utterance. Because our model effectively applies MLP to the IUR task, our approach achieves significant results in terms of both performance and inference speed. This study represents the first preliminary exploration of the use of MLP for the IUR task. In the future, we will investigate extending our approach to other dialogue areas.
## Limitations
One limitation of current token-pair edit matrix based incomplete utterance rewriting models is that they are only able to select tokens that have appeared in the context utterances. Thus, these models, including our own, are unable to generate new words, such as conjunctions and prepositions, to improve metrics such as fluency. However, this can be addressed by incorporating an additional word dictionary as proposed by Liu et al. (2020) to improve fluency for out-of-vocabulary words (OOV).
In addition, we will consider combining generative models (GPT (Radford et al., 2019), T5 (Raffel et al., 2020) etc.) to assist in the recovery of the incomplete utterances in the future works.
## Acknowledgement
This work was funded by National Natural Science Foundation of China (Grant No. 61762069), Key Technology Research Program of Inner Mongolia Autonomous Region (Grant No. 2021GG0165),
Key R&D and Achievement Transformation Program of Inner Mongolia Autonomous Region
(Grant No. 2022YFHH0077), The Central Government Fund for Promoting Local Scientific and Technological Development (Grant No. 2022ZY0198),
Big Data Lab of Inner Mongolia Discipline Inspection and Supervision Committee (Grant No. 215005206043).
## References
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. *arXiv preprint* arXiv:1607.06450.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In *3rd International* Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of
deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Ahmed Elgohary, Denis Peskov, and Jordan BoydGraber. 2019. Can you unpack that? learning to rewrite questions-in-context. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 5918–5924, Hong Kong, China. Association for Computational Linguistics.
Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li.
2016. Incorporating copying mechanism in sequenceto-sequence learning. In *Proceedings of the 54th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1631–
1640.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In *Proceedings of the IEEE conference on* computer vision and pattern recognition, pages 770–
778.
Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (gelus). *arXiv preprint* arXiv:1606.08415.
Mengzuo Huang, Feng Li, Wuhe Zou, and Weidong Zhang. 2021. Sarg: A novel semi autoregressive generator for multi-turn incomplete utterance restoration.
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 13055–13063.
Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In *International conference on machine learning*, pages 448–456. PMLR.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In *3rd International Conference on Learning Representations,*
ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Vineet Kumar and Sachindra Joshi. 2016. Nonsentential question resolution using sequence to sequence learning. In *Proceedings of COLING 2016,*
the 26th International Conference on Computational Linguistics: Technical Papers, pages 2022–2031, Osaka, Japan. The COLING 2016 Organizing Committee.
Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81.
Qian Liu, Bei Chen, Jian-Guang Lou, Bin Zhou, and Dongmei Zhang. 2020. Incomplete utterance rewriting as semantic segmentation. In *Proceedings of the*
2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2846–2857, Online. Association for Computational Linguistics.
Jinjie Ni, Tom Young, Vlad Pandelea, Fuzhao Xue, and Erik Cambria. 2022. Recent advances in deep learning based dialogue systems: A systematic survey.
Artificial intelligence review, pages 1–101.
Zhufeng Pan, Kun Bai, Yan Wang, Lianqiang Zhou, and Xiaojiang Liu. 2019. Improving open-domain dialogue systems via multi-turn incomplete utterance restoration. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),
pages 1824–1833, Hong Kong, China. Association for Computational Linguistics.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311–318.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67.
Olaf Ronneberger, Philipp Fischer, and Thomas Brox.
2015. U-net: Convolutional networks for biomedical image segmentation. In *Medical Image Computing* and Computer-Assisted Intervention (MICCAI), volume 9351 of *LNCS*, pages 234–241. Springer. (available on arXiv:1505.04597 [cs.CV]).
Abigail See, Peter J. Liu, and Christopher D. Manning.
2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073–
1083, Vancouver, Canada. Association for Computational Linguistics.
Hui Su, Xiaoyu Shen, Rongzhi Zhang, Fei Sun, Pengwei Hu, Cheng Niu, and Jie Zhou. 2019. Improving multi-turn dialogue modelling with utterance ReWriter. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*,
pages 22–31, Florence, Italy. Association for Computational Linguistics.
Ilya O Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, et al. 2021. Mlp-mixer: An all-mlp architecture for vision. Advances in Neural Information Processing Systems, 34:24261–24272.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.
Kun Xu, Haochen Tan, Linfeng Song, Han Wu, Haisong Zhang, Linqi Song, and Dong Yu. 2020. Semantic Role Labeling Guided Multi-turn Dialogue ReWriter. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing (EMNLP),
pages 6632–6639, Online. Association for Computational Linguistics.
Xiangrong Zeng, Daojian Zeng, Shizhu He, Kang Liu, and Jun Zhao. 2018. Extracting relational facts by an end-to-end neural model with copy mechanism.
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 506–514.
## A Constructing Supervised Labels
We describe here the algorithm for building word-level supervised labels. Taking Table 1 as an example, U is "那是什么" (What is that) and U′ is "新冠肺炎是什么" (What is COVID-19). Their longest common subsequence (LCS) is "是什么" (What is). Hence, "那" (that) is marked as [DEL] in U and "新冠肺炎" (COVID-19) is marked as [ADD] in U′. Correspondingly, the edit type (supervised label) is *Substitute*.
## B Other Experimental Setups
Evaluation Following previous works, we apply BLEU-n (Bn) (Papineni et al., 2002), ROUGE-n (Rn) (Lin, 2004), EM (exact match), and Rewriting Precision-n, Recall-n and F-score-n (Pn, Rn, Fn) (Pan et al., 2019) as the automatic evaluation metrics.
Implementation Details We implement our proposed model in PyTorch. All experiments are run on a single NVIDIA Tesla V100. We use the Adam optimizer (Kingma and Ba, 2015) and employ grid search to find the best hyperparameters based on performance on the validation datasets. The learning rate is set to 1e−5 for all datasets. The best models are selected by early stopping on the validation datasets, and the maximum number of epochs is 100.
## C Additional Experimental Results
Table 7 and Table 8 show the experimental results on REWRITE and CANARD, respectively. Our method also achieves competitive results on all scores. The results again demonstrate the effectiveness of our model.

Algorithm 1: Construct Supervised Labels
Input: U: the incomplete utterance; U′: the rewritten utterance
Output: L: the supervised label
1  Compute the LCS between U and U′.
2  for wx ∈ U do
3      if wx ∉ LCS then
4          mark(wx) = [DEL]
5      end
6  end
7  for w′x ∈ U′ do
8      if w′x ∉ LCS then
9          mark(w′x) = [ADD]
10     end
11 end
12 The same mark is combined into one span.
13 Compare U and U′ at the span level.
14 for (sx, s′x) ∈ (U, U′) do
15     if sx = [DEL] and s′x = [ADD] then
16         L = Substitute
17     else
18         L = Insert
19     end
20 end
21 return L
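For illustration, a Python sketch of this labeling procedure is given below; it approximates the LCS alignment with difflib's SequenceMatcher and simplifies the span-merging step, so it is a sketch rather than the authors' exact implementation.

```python
# Sketch of Algorithm 1: derive word-level edit labels from the incomplete
# utterance U and the rewritten utterance U'. SequenceMatcher approximates
# the LCS alignment; span merging is simplified.
from difflib import SequenceMatcher

def supervised_labels(u, u_prime):
    """u, u_prime: lists of tokens. Returns [DEL]/[ADD] marks and the edit type."""
    matcher = SequenceMatcher(a=u, b=u_prime)
    lcs_a, lcs_b = set(), set()
    for block in matcher.get_matching_blocks():       # token positions on the LCS
        lcs_a.update(range(block.a, block.a + block.size))
        lcs_b.update(range(block.b, block.b + block.size))

    marks_u = ["[DEL]" if i not in lcs_a else None for i in range(len(u))]
    marks_u_prime = ["[ADD]" if j not in lcs_b else None for j in range(len(u_prime))]

    has_del = "[DEL]" in marks_u
    has_add = "[ADD]" in marks_u_prime
    label = "Substitute" if (has_del and has_add) else "Insert"
    return marks_u, marks_u_prime, label

# Example from Table 1: "那 是 什么" -> "新冠肺炎 是 什么"
print(supervised_labels(["那", "是", "什么"], ["新冠肺炎", "是", "什么"]))
# (['[DEL]', None, None], ['[ADD]', None, None], 'Substitute')
```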
|                 | RESTORATION-200K | REWRITE | CANARD  |
|-----------------|------------------|---------|---------|
| Language | Chinese | Chinese | English |
| # Train | 194K | 18K | 32K |
| # Dev | 5K | 2K | 4K |
| # Test | 5K | - | 6K |
| Avg. Con length | 25.8 | 17.7 | 85.4 |
| Avg. Inc length | 8.6 | 6.5 | 7.5 |
| Avg. Rew length | 12.4 | 10.5 | 11.6 |
Table 6: Statistics of the three datasets used in our experiments. "Avg" stands for average, "Con" for context utterance, "Inc" for incomplete utterance, and "Rew" for rewritten utterance.
| Model | EM | B2 | B4 | R2 | RL |
|-------------|------|------|------|------|------|
| L-Gen | 47.3 | 81.2 | 73.6 | 80.9 | 86.3 |
| L-Ptr-Gen | 50.5 | 82.9 | 75.4 | 83.8 | 87.8 |
| L-Ptr-Net | 51.5 | 82.7 | 75.5 | 84.0 | 88.2 |
| L-Ptr-λ | 42.3 | 82.9 | 73.8 | 81.1 | 84.1 |
| T-Gen | 35.4 | 72.7 | 62.5 | 74.5 | 82.9 |
| T-Ptr-Gen | 53.1 | 84.4 | 77.6 | 85.0 | 89.1 |
| T-Ptr-Net | 53.0 | 83.9 | 77.1 | 85.1 | 88.7 |
| T-Ptr-λ | 52.6 | 85.6 | 78.1 | 85.0 | 89.0 |
| RUN | 66.4 | 91.4 | 86.2 | 90.4 | 93.5 |
| MIUR (Ours) | 67.7 | 91.2 | 86.4 | 90.7 | 93.7 |
Table 7: Experimental results on REWRITE.
Table 8: Experimental results on CANARD.
| Model | B1 | B2 | B4 | R1 | R2 | RL |
|-------------|------|------|------|------|------|------|
| Copy | 52.4 | 46.7 | 37.8 | 72.7 | 54.9 | 68.5 |
| Rronoun Sub | 60.4 | 55.3 | 47.4 | 73.1 | 63.7 | 73.9 |
| L-Ptr-Gen | 67.2 | 60.3 | 50.2 | 78.9 | 62.9 | 74.9 |
| RUN | 70.5 | 61.2 | 49.1 | 79.1 | 61.2 | 74.7 |
| MIUR (Ours) | 71.3 | 63.4 | 51.7 | 81.6 | 64.5 | 77.4 |
## D Effect Of Batchnorm
To further explore the effect of BatchNorm in our model, we conducted controlled experiments on REWRITE. As shown in Figure 3, Figure 3(a) plots the training loss on the REWRITE dataset with and without BN, and Figure 3(b) shows the EM metric on the REWRITE validation set with and without BN. We can observe that incorporating BatchNorm after the construction of the joint feature matrix leads to faster convergence and enhances the model's ability to learn global semantic information efficiently.
![7_image_0.png](7_image_0.png)
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
5

A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 1
✗ B1. Did you cite the creators of artifacts you used?
While the paper itself does not include explicit citations to the creators of the artifacts used, the corresponding Git code repository's README.md file mentions the appropriate citations and attributions.
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
The code repository provided follows the Apache-2.0 license, which governs the terms and conditions for using and distributing the code.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
## C ✗ **Did You Run Computational Experiments?** Left Blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? No response.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
No response.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
No response.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
No response.
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
cassotti-etal-2023-xl | {XL}-{LEXEME}: {W}i{C} Pretrained Model for Cross-Lingual {LEX}ical s{EM}antic chang{E} | https://aclanthology.org/2023.acl-short.135 | The recent introduction of large-scale datasets for the WiC (Word in Context) task enables the creation of more reliable and meaningful contextualized word embeddings.However, most of the approaches to the WiC task use cross-encoders, which prevent the possibility of deriving comparable word embeddings.In this work, we introduce XL-LEXEME, a Lexical Semantic Change Detection model.XL-LEXEME extends SBERT, highlighting the target word in the sentence. We evaluate XL-LEXEME on the multilingual benchmarks for SemEval-2020 Task 1 - Lexical Semantic Change (LSC) Detection and the RuShiftEval shared task involving five languages: English, German, Swedish, Latin, and Russian.XL-LEXEME outperforms the state-of-the-art in English, German and Swedish with statistically significant differences from the baseline results and obtains state-of-the-art performance in the RuShiftEval shared task. | # Xl-Lexeme: Wic Pretrained Model For Cross-Lingual Lexical Semantic Change
Pierluigi Cassotti, Lucia Siciliani, Marco de Gemmis, Giovanni Semeraro and **Pierpaolo Basile**
University of Bari Aldo Moro
{firstname.lastname}@uniba.it
## Abstract
The recent introduction of large-scale datasets for the WiC (Word in Context) task enables the creation of more reliable and meaningful contextualized word embeddings. However, most of the approaches to the WiC task use crossencoders, which prevent the possibility of deriving comparable word embeddings. In this work, we introduce XL-LEXEME, a Lexical Semantic Change Detection model. XL-LEXEME
extends SBERT, highlighting the target word in the sentence. We evaluate XL-LEXEME on the multilingual benchmarks for SemEval-2020 Task 1 - Lexical Semantic Change (LSC) Detection and the RuShiftEval shared task involving five languages: English, German, Swedish, Latin, and Russian. XL-LEXEME outperforms the state-of-the-art in English, German and Swedish with statistically significant differences from the baseline results and obtains state-of-the-art performance in the RuShiftEval shared task.
## 1 Introduction And Motivation
Lexical Semantic Change (LSC) Detection is the task of automatically identifying words that change their meaning over time. The LSC Detection task implicitly aims to disambiguate synchronic word sense occurrences and then find differences in the word sense frequencies in different periods. Word Sense Disambiguation (WSD) is a longstudied task in Natural Language Processing (Navigli, 2009), which consists of associating the correct sense to a word occurring in a specific context.
WSD involves some crucial issues, such as relying on a fixed sense inventory. Fixed sense inventories ignore the diachronic aspect of language because they can miss older unused senses or be outdated and missing new senses.
The Word in Context task (WiC) (Pilehvar and Camacho-Collados, 2019) aims to overcome these issues. In this work, we train a model on the WiC
task and then use it to perform LSC Detection. In the WiC task, given the word w and two different contexts C1, C2, the systems have to determine whether the meaning of w is the same in the two contexts or not. Our approach is grounded on the assumption that models trained on the WiC
tasks are robust enough to transfer the knowledge learned in a synchronic setting to a diachronic one. We summarise the main contribution of this work as follows: (i) We propose a pre-trained biencoder model, called XL-LEXEME, on a largescale dataset for the WiC task, which allows us to obtain comparable lexical-based representations;
(ii) We assert the effectiveness of XL-LEXEME
despite the computational limitation compared to the cross-encoder architecture for the LSC Detection task; (iii) Experiments on the LSC Detection task show that XL-LEXEME outperforms state-ofthe-art LSC Detection models for English, German, Swedish, and Russian.
## 2 Related Work
LSC Detection systems can be categorized based on the distributional embeddings used to tackle the LSC Detection task. One category is represented by those approaches that adopt type-based
(i.e., static) embeddings. UWB (Prazák et al., 2020; Prazák et al., 2021) represents an example of this category of systems. First, it employs word2vec Skip-gram with Negative Sampling (Mikolov et al., 2013) to compute a semantic space for each corpus.
It uses techniques like the Canonical Correlation Analysis (Hardoon et al., 2004) and the Orthogonal Transformation (Hamilton et al., 2016) to align the abovementioned spaces. Therefore, the cosine similarity between the vectors representing the word in two different spaces is used to detect the semantic shift.
With the increasing use of contextualized word embeddings, numerous approaches employing BERT-based models have been developed for LSC Detection (Montanelli and Periti, 2023; Laicher et al., 2021). In TempoBERT (Rosin et al., 2022),
the authors exploit the concept of Masked Language Modeling (MLM), where the goal is to train a language model to predict a masked portion of text given the remaining part. In particular, they employ this technique to encode the concept of time into a BERT model. This is done by concatenating a specific token representing time to the text sequence. At inference time, TempoBERT can be used to predict the year of a sentence, masking the time reference, or to predict a masked token of the sentence conditioned by the time reference. In the same line of research, in Temporal Attention
(Rosin and Radinsky, 2022), the authors investigate the effect of modifying the model instead of the input sentence like in TempoBERT. This is done by extending the model's attention mechanism to consider the time when computing the weight of each word. The time dimension is encoded using a different query embedding matrix for each timestamp. Another kind of approach exploits the information coming from other tasks to perform LSC
Detection. GlossReader represents an example
(Rachinskiy and Arefyev, 2021), where a model based on XLM-R (Conneau et al., 2020b) is first trained on English SemCor (Miller et al., 1994) with glosses from WordNet 3.0 (Miller, 1992) to perform WSD. Exploiting the zero-shot cross-lingual characteristics of XLM-R, the authors used the same model to perform LSC Detection in the Russian language. With DeepMistake (Arefyev et al., 2021), the authors take advantage of the WiC task instead of WSD. They train a cross-encoder with XLM-R as the underlying Language Model on the MCL-WiC training and development set and fine-tune it on the RuSemShift dataset (Rodina and Kutuzov, 2020). DeepMistake, differently from XL-LEXEME, relies on the cross-encoder architecture and exploits only the MCL-WiC training dataset.
## 3 XL-LEXEME
Generally, for pairwise sentence similarity tasks, BERT models use a cross-encoder, in which the pairwise sequences are jointly encoded, and the overall vectors are used for the classification. However, in several tasks, the cross-encoder is not suitable since it cannot provide a distinct meaningful representation for each sentence. An approach to overcome this issue involves pooling the BERT output encoded vectors, which often results in worse performance. Sentence-BERT (SBERT) (Reimers and Gurevych, 2019) overcomes the limitation of cross-encoders using a Siamese Network, i.e., the weights of the underlying networks are shared.
SBERT encodes the two sequences separately in the BERT model exploiting the Siamese architecture. The sequence-level representation is obtained by averaging the output encoded vectors, which are directly compared using similarity measures such as cosine similarity.
Meanwhile, cross-encoders perform better since they are trained to benefit from attention over the whole input. In this work, we introduce XL-LEXEME,1 which mirrors models for pairwise sequence similarity tasks and adapts them to the WiC
task, giving prominence to the target word, i.e.
the word for which we want to detect the LSC.
The model takes as input two sequences s1 and s2.
The sequences are tokenized using a subword tokenizer, such as SentencePiece (Kudo and Richardson, 2018), and the special tokens <t> and </t> are used as target word delimiters (Xie et al., 2021):
$$\begin{array}{l}s_{1}=w_{1},...,\text{<t>},w_{i}^{t},...,w_{i+k}^{t},\text{</t>},...,w_{N}\\ s_{2}=w_{1},...,\text{<t>},w_{j}^{t},...,w_{j+p}^{t},\text{</t>},...,w_{M}\end{array}\tag{1}$$
where N and M represent the number of subwords of the sequences s1 and s2 respectively, while w^t_i, ..., w^t_{i+k} and w^t_j, ..., w^t_{j+p} are the subwords of the target words. In the following, we describe the baseline cross-encoder and XL-LEXEME, which is based on a bi-encoder. For the cross-encoder, the two input sequences are concatenated with the special token [SEP] into an overall sequence s = [CLS] s1 [SEP] s2 [SEP]. If the length of s, i.e. N + M + 3, is greater than the maximum sequence length λ, then the sequence s is cut such that the length of s1 and s2 is less than λ∗ =
(λ − 3)/2.
To comply with the maximum length, the left and right contexts of the sequence are truncated. For instance, s1 is truncated as follows:
$$s_{1}=w_{n_{0}},...,\text{<t>},w_{i}^{t},...,w_{i+k}^{t},\text{</t>},...,w_{n_{1}}\tag{2}$$
where n0 = max(0, i − 1 − (λ∗ − k − 2)/2) and n1 = min(N, i + k + 1 + (λ∗ − k − 2)/2). The truncated sequence has a length γ < λ.

1 The XL-LEXEME code is available on GitHub https://github.com/pierluigic/xl-lexeme. The XL-LEXEME model is available in the Hugging Face Model Hub https://huggingface.co/pierluigic/xl-lexeme.

The encoded representations of each subword (v1, v2, ..., vγ) are summed to get the encoded representation of the overall sequence, i.e. s^enc = Σ^γ_i v_i. Finally, the vector s^enc is used to compute the logits:
$$logit=\log\sigma(Ws^{enc})\tag{3}$$
where W ∈ R^(1×d). The model is trained to minimize the binary cross-entropy loss function.
XL-LEXEME is a bi-encoder that encodes the two input sequences into two distinct vector representations using a Siamese Network. Each sequence is tokenized and truncated according to the maximum length λ∗, using Equation (2). We thus obtain the new lengths γ1 and γ2. The vector representation is computed as the sum of the encoded subwords (v1, v2, ..., vγ), i.e. s^enc_1 = Σ^{γ1}_i v_i and s^enc_2 = Σ^{γ2}_j v_j.
XL-LEXEME is trained to minimize the Contrastive loss (Hadsell et al., 2006):
$$\ell={\frac{1}{2}}\left[y\cdot\delta^{2}+(1-y)\cdot\operatorname*{max}(0,m-\delta)^{2}\right]\quad{\mathrm{(4)}}$$
where we adopt a margin m = 0.5. We use as default distance δ the cosine distance between the encoded representations of s1 and s2, i.e.
δ = cos(s^enc_1, s^enc_2). The main advantage of XL-LEXEME over models based on the cross-encoder architecture is efficiency. The time cost can be directly derived from the different architectures of XL-LEXEME and the cross-encoder baseline. The self-attention time complexity O(N^2 · d) depends on the vector dimension d and the sequence length, which is N for the cross-encoder and N/2 for XL-LEXEME. For XL-LEXEME, the time complexity is thus reduced to O((N/2)^2 · 2d).
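The sketch below illustrates the bi-encoder objective (target marking, sum pooling, cosine distance, and the contrastive loss of Equation (4)). The encoder is replaced by random placeholders standing in for XLM-R outputs, and we assume the standard convention in which y = 1 marks a pair with the same meaning and δ is the cosine distance; this is a sketch, not the released implementation.

```python
# Sketch of the XL-LEXEME bi-encoder objective (Equations 1-4). Token
# embeddings are random placeholders for XLM-R outputs; only target marking,
# pooling, cosine distance and the contrastive loss are shown.
import torch
import torch.nn.functional as F

def mark_target(tokens, start, end):
    """Wrap the target word span with the <t> ... </t> delimiters."""
    return tokens[:start] + ["<t>"] + tokens[start:end] + ["</t>"] + tokens[end:]

def pool(token_embeddings):
    """s_enc: sum of the encoded subword vectors."""
    return token_embeddings.sum(dim=0)

def contrastive_loss(s1_enc, s2_enc, y, margin=0.5):
    """Equation (4), assuming delta is the cosine distance and y=1 means 'same meaning'."""
    delta = 1.0 - F.cosine_similarity(s1_enc, s2_enc, dim=0)
    return 0.5 * (y * delta**2 + (1 - y) * torch.clamp(margin - delta, min=0) ** 2)

tokens = mark_target("the children are happy because the holidays approach".split(), 1, 2)
enc1 = torch.randn(len(tokens), 1024)   # stand-in for XLM-R large token embeddings
enc2 = torch.randn(12, 1024)
print(contrastive_loss(pool(enc1), pool(enc2), y=torch.tensor(1.0)))
```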
## 4 Experimental Setting

## 4.1 Lexical Semantic Change Detection
SemEval-2020 Task 1: Unsupervised Lexical Semantic Change Detection (Schlechtweg et al.,
2020) is the first task on Unsupervised Lexical Semantic Change Detection in English, German, Swedish, and Latin languages. For each language, two corpora represent two different periods (T0, T1). Moreover, a set of target words, annotated using the DUREL framework (Schlechtweg et al.,
2018), are provided. SemEval-2020 Task 1 involves two subtasks. The binary classification task requires assigning a label (changed/stable) to each target word. The ranking task sorts the target words according to their degree of semantic change. In this work, we focus on Subtask 2, and for the sake of simplicity, we refer to SemEval-2020 Task 1 Subtask 2 as SemEval-2020 Task 1.
RuShiftEval, different from SemEval-2020 Task 1, involves three sub-corpora extracted from the Russian National Corpus spanning three periods. Models are evaluated on the resulting three test sets, namely RuShiftEval1 (pre-Soviet and Soviet), RuShiftEval2 (Soviet and post-Soviet),
and RuShiftEval3 (pre-Soviet and post-Soviet).
RuShiftEval provides participants with development data that can be used for tuning models.
RuShiftEval aims to corroborate if training data can improve LSC Detection models. The development data rely on the RuSemShift dataset (Rodina and Kutuzov, 2020), which includes two sets of 70 target words for the pre-Soviet to Soviet period and Soviet to post-Soviet period, respectively. The dataset also includes annotated pairwise sentences, which can be used for training the models.
## 4.2 Training Details
XL-LEXEME and the cross-encoder are trained using XLM-RoBERTa (XLM-R) (Conneau et al.,
2020a) large as the underlying Language Model2 and using an NVIDIA GeForce RTX 3090. As for training data, the model uses the training data of MCL-WiC (Martelli et al., 2021), AM2ICO (Liu et al., 2021), and XL-WiC datasets (Raganato et al.,
2020) merged with the randomly sampled 75% of the respective development data of each dataset.
The remaining 25% of the development data is used to fine-tune hyper-parameters. Moreover, we augment training data for the cross-encoder by swapping the order of sentences in the training set (Martelli et al., 2021).
We use AdamW optimizer and linear learning warm-up over the 10% of training data. We perform a grid search for the hyper-parameters optimization, tuning the learning rate in {1e-6, 2e-6, 5e-6, 1e-5, 2e-5} and the weight decay {0.0, 0.01}.
Table 3 (Appendix A) shows the selected hyperparameters. We sample 200 sentences containing the target word for each language and each period.
The sampling is repeated ten times, and the results are averaged over the ten iterations. We use the same methodology of Rachinskiy and Arefyev
(2021) for sampling sentences from the RuShiftEval corpora. We sample sentences in which we find an exact match with the target words, with no preprocessing of the SemEval dataset.

2 The XLM-R model is fine-tuned during training.

The LSC score is computed as the average distance between the vectors over the two different periods:
$$\mathrm{LSC}(s^{t_{0}},s^{t_{1}})={\frac{1}{N\cdot M}}\sum_{i=0}^{N}\sum_{j=0}^{M}\delta(s_{i}^{t_{0}},s_{j}^{t_{1}})\quad(5)$$
where δ is the distance measure, i.e. δ = 1 − log σ(W s^enc) for the cross-encoder baseline and δ = cos(s^enc_1, s^enc_2) for XL-LEXEME.
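As an illustration, the sketch below computes the LSC score of Equation (5) from two sets of sampled target-word vectors, using cosine distance as δ; the embeddings are random stand-ins for XL-LEXEME outputs, and the sample size matches the 200 sentences per period described above.

```python
# Sketch of the LSC score in Equation (5): the average pairwise distance
# between target-word representations sampled from the two time periods.
# Embeddings are random stand-ins for XL-LEXEME outputs.
import torch
import torch.nn.functional as F

def lsc_score(embs_t0, embs_t1):
    """embs_t0: (N, d), embs_t1: (M, d) target-word vectors from the two corpora."""
    sims = F.cosine_similarity(embs_t0.unsqueeze(1), embs_t1.unsqueeze(0), dim=-1)
    return (1.0 - sims).mean().item()   # average cosine distance over all N x M pairs

# e.g. 200 sampled occurrences per period; in practice this is repeated
# ten times and the scores are averaged.
print(lsc_score(torch.randn(200, 1024), torch.randn(200, 1024)))
```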
## 5 Results
Table 1 and Table 2 report the results on SemEval-2020 Task 1 Subtask 2 and on the RuShiftEval test set, respectively. The results of the best systems are in bold. XL-LEXEME achieves the best score for English, German, Swedish, RuShiftEval1, RuShiftEval2, and RuShiftEval3. XL-LEXEME
achieves a strong Spearman correlation for the English and Swedish languages and a solid correlation on the German dataset, obtaining a significant correlation (p < 0.001). XL-LEXEME obtains no significant results in the Latin language since the predicted scores for the target words are not correlated with the test set. Latin is underrepresented in the training data of XLM-R, and there are no similar languages in the WiC dataset that we use for training XL-LEXEME. Moreover, the Latin dataset is more challenging as it involves a first corpus written in ancient Latin, which differs in many aspects from modern Latin. For this reason, XL-LEXEME
could be ineffective in ancient languages and, in general, in languages that are not widely covered by the WiC dataset.
We report the statistical significance of the difference between the performance of XL-LEXEME
concerning the other models. The statistical significance of the difference is computed using Fisher's z-transformation (Press, 2002). XL-LEXEME obtains stronger correlations than the cross-encoder, but the differences are not significant. The correlations obtained on the English and the German datasets are significantly different (p < 0.05) for all the systems that participated in the SemEval2020 Task 1 but not for TempoBERT and Temporal Attention. On the other side, TempoBERT
and Temporal Attention obtain a Spearman correlation on English and German that is not statistically different from the systems on the SemEval-2020 Task 1 leaderboard. In the Swedish language, XL-LEXEME is the only model obtaining a correlation significantly different from the Count baseline results. XL-LEXEME is also effective in Swedish, although the WiC dataset does not cover this language. Presumably, Swedish benefits from the presence of other languages descending from Old Norse, namely Danish and Norwegian.
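For reference, the sketch below shows a common way to compare two independent correlations with Fisher's z-transformation; the correlation values and sample sizes in the example are illustrative, not an exact reproduction of the tests reported here.

```python
# Sketch of the significance test used above: Fisher's z-transformation for
# comparing two independent correlations computed over n target words
# (a common approximation, applied here to Spearman correlations).
import math
from scipy.stats import norm

def compare_correlations(r1, r2, n1, n2):
    """Two-sided p-value for the difference between two correlations."""
    z1, z2 = math.atanh(r1), math.atanh(r2)          # Fisher z-transform
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    return 2 * (1 - norm.cdf(abs(z)))

# Illustrative example with 37 English target words for both systems.
print(compare_correlations(0.757, 0.422, n1=37, n2=37))
```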
XL-LEXEME obtains competitive results for the Russian language in the RuShiftEval leaderboard. Contrary to XL-LEXEME, DeepMistake and GlossReader are fine-tuned on the RuSemShift dataset. The differences between XL-LEXEME and the best two systems in the leaderboard are not statistically significant. Moreover, Table 2 also shows the results of XL-LEXEME fine-tuned on RuSemShift. Although the fine-tuned model achieves the best correlation scores on the three datasets, the difference with respect to DeepMistake and GlossReader is not significant.
## 6 Conclusion
In this work, we introduced XL-LEXEME, a model for LSC Detection. XL-LEXEME is pre-trained on a large WiC dataset to mirror sentence-level encoders while focusing on specific words in context.
We evaluated our model on two Lexical Semantic Change Detection datasets: SemEval-2020 Task 1 and RuShiftEval. XL-LEXEME outperforms state-of-the-art models for LSC Detection on the English, German, Swedish, and Russian datasets, with significant differences from the baselines. XL-LEXEME's effectiveness and efficiency make it reliable for LSC Detection on large diachronic corpora.
## 7 Limitations
While the vector representations obtained using XL-LEXEME for different languages are potentially comparable, as they lie in the same geometric space, the evaluation of cross-lingual semantic change cannot be performed due to the lack of cross-lingual LSC Detection resources. The SemEval-2020 Task 1 datasets consist of small sets of target words, i.e., the number of target words for English, German, Latin, and Swedish is 37, 48, 40, and 31, respectively. The example of Latin highlights that XL-LEXEME can perform poorly on languages that are underrepresented in the training set of XLM-R and not covered by the WiC
dataset. Generally, at the moment it is not possible to state precisely how, and to what extent, XL-LEXEME's
| Lang. | UG_Student_Intern | Jiaxin & Jinan | cs2020 | UWB | Count baseline | Freq. baseline | TempoBERT | Temporal Attention | cross-encoder | XL-LEXEME |
|-------|-------------------|----------------|--------|-----|----------------|----------------|-----------|--------------------|---------------|-----------|
| EN    | 0.422  | 0.325  | 0.375  | 0.367  | 0.022  | -0.217 | 0.467 | †0.520 | †0.752 | 0.757  |
| DE    | 0.725  | 0.717  | 0.702  | 0.697  | 0.216  | 0.014  | -     | †0.763 | †0.837 | 0.877  |
| SV    | †0.547 | †0.588 | †0.536 | †0.604 | -0.022 | -0.150 | -     | -      | †0.680 | 0.754  |
| LA    | 0.412  | 0.440  | 0.399  | 0.254  | 0.359  | †0.020 | 0.512 | 0.565  | †0.016 | -0.056 |
| Avg.  | 0.527  | 0.518  | 0.503  | 0.481  | 0.144  | -0.083 | -     | -      | 0.571  | 0.583  |
Table 1: Results (Spearman correlation) on the SemEval-2020 Task 1 Subtask 2 test set. The symbol † indicates there is no statistical difference with the correlation obtained by XL-LEXEME.
| Dataset | GlossReader | DeepMistake | UWB | Baseline | cross-encoder | XL-LEXEME | XL-LEXEME (Fine-tuned) |
|---------|-------------|-------------|-----|----------|---------------|-----------|------------------------|
| RuShiftEval1 | †0.781 | †0.798 | 0.362 | 0.314 | †0.727 | 0.775 | 0.799 |
| RuShiftEval2 | †0.803 | †0.773 | 0.354 | 0.302 | †0.753 | 0.822 | 0.833 |
| RuShiftEval3 | †0.822 | †0.803 | 0.533 | 0.381 | †0.748 | 0.809 | 0.842 |
| Avg.         | 0.802  | 0.791  | 0.417 | 0.332 | 0.743  | 0.802 | 0.825 |
Table 2: Results (Spearman correlation) on the RuShiftEval test set. The symbol † indicates there is no statistical difference with the correlation obtained by XL-LEXEME.
performance is affected by the language distribution in the XLM-R training set and the WiC dataset.
## Acknowledgements
We acknowledge the support of the PNRR project FAIR - Future AI Research (PE00000013), Spoke 6 - Symbiotic AI (CUP H97G22000210007) under the NRRP MUR program funded by the NextGenerationEU.
This work has in part been funded by the research program Change is Key! supported by Riksbankens Jubileumsfond (under reference number M21-0021).
## References
Nikolay Arefyev, Daniil Homskiy, Maksim Fedoseev, Adis Davletov, Vitaly Protasov, and Alexander Panchenko. 2021. DeepMistake: Which Senses are Hard to Distinguish for a Word-in-Context Model. In Computational Linguistics and Intellectual Technologies - Papers from the Annual International Conference "Dialogue" 2021, volume 2021-June. Section 20.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020a. Unsupervised Cross-lingual Representation Learning at Scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 8440–8451. Association for Computational Linguistics.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal,
Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020b. Unsupervised Cross-lingual Representation Learning at Scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 8440–8451. Association for Computational Linguistics.
Raia Hadsell, Sumit Chopra, and Yann LeCun. 2006.
Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
(CVPR 2006), 17-22 June 2006, New York, NY, USA,
pages 1735–1742. IEEE Computer Society.
William L. Hamilton, Jure Leskovec, and Dan Jurafsky. 2016. Diachronic Word Embeddings Reveal Statistical Laws of Semantic Change. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 1489–1501, Berlin, Germany. Association for Computational Linguistics.
David R. Hardoon, Sandor Szedmak, and John ShaweTaylor. 2004. Canonical Correlation Analysis: An Overview with Application to Learning Methods.
Neural Computation, 16(12):2639–2664.
Taku Kudo and John Richardson. 2018. SentencePiece:
A simple and language independent subword tokenizer and detokenizer for Neural Text Processing.
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66–71, Brussels, Belgium.
Association for Computational Linguistics.
Severin Laicher, Sinan Kurtyigit, Dominik Schlechtweg, Jonas Kuhn, and Sabine Schulte im Walde. 2021. Explaining and improving BERT performance on lexical semantic change detection. In *Proceedings of*
the 16th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 192–202, Online. Association for Computational Linguistics.
Qianchu Liu, Edoardo Maria Ponti, Diana McCarthy, Ivan Vulic, and Anna Korhonen. 2021. AM2iCo:
Evaluating Word Meaning in Context across LowResource Languages with Adversarial Examples. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP
2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 7151–7162. Association for Computational Linguistics.
Federico Martelli, Najla Kalach, Gabriele Tola, and Roberto Navigli. 2021. SemEval-2021 Task 2: Multilingual and Cross-lingual Word-in-Context Disambiguation (MCL-WiC). In *Proceedings of the* 15th International Workshop on Semantic Evaluation
(SemEval-2021), pages 24–36, Online. Association for Computational Linguistics.
Tomás Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In 1st International Conference on Learning Representations, ICLR 2013, Workshop Track Proceedings.
George A. Miller. 1992. WordNet: A Lexical Database for English. In Speech and Natural Language: Proceedings of a Workshop Held at Harriman, New York, February 23-26, 1992.
George A. Miller, Martin Chodorow, Shari Landes, Claudia Leacock, and Robert G. Thomas. 1994. Using a Semantic Concordance for Sense Identification.
In *Human Language Technology, Proceedings of a* Workshop held at Plainsboro, New Jerey, USA, March 8-11, 1994. Morgan Kaufmann.
Stefano Montanelli and Francesco Periti. 2023. A survey on contextualised semantic shift detection. arXiv preprint arXiv:2304.01666.
Roberto Navigli. 2009. Word Sense Disambiguation: A
Survey. *ACM Comput. Surv.*, 41(2).
Mohammad Taher Pilehvar and José Camacho-Collados.
2019. WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 1267–
1273. Association for Computational Linguistics.
Ondrej Prazák, Pavel Pribán, and Stephen Taylor. 2021.
UWB@ RuShiftEval Measuring Semantic Difference as per-word Variation in Aligned Semantic Spaces.
In *Computational Linguistics and Intellectual Technologies - Papers from the Annual International Conference "Dialogue" 2021*, volume 2021-June. Section: 20.
Ondrej Prazák, Pavel Pribán, Stephen Taylor, and Jakub Sido. 2020. UWB at SemEval-2020 Task 1: Lexical Semantic Change Detection. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, SemEval@COLING2020, pages 246–254. International Committee for Computational Linguistics.
William H. Press. 2002. Numerical recipes in C++:
the art of scientific computing, 2nd Edition (C++ ed.,
print. is corrected to software version 2.10). Cambridge University Press.
Maxim Rachinskiy and Nikolay Arefyev. 2021. Zeroshot Crosslingual Transfer of a Gloss Language Model for Semantic Change Detection. In *Computational Linguistics and Intellectual Technologies -*
Papers from the Annual International Conference
"Dialogue" 2021, volume 2021-June. Section: 20.
Alessandro Raganato, Tommaso Pasini, José CamachoCollados, and Mohammad Taher Pilehvar. 2020. XLWiC: A Multilingual Benchmark for Evaluating Semantic Contextualization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 7193–7206. Association for Computational Linguistics.
Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence Embeddings using Siamese BERTNetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.
Julia Rodina and Andrey Kutuzov. 2020. RuSemShift:
a dataset of historical lexical semantic change in Russian. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 1037–
1047, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Guy D. Rosin, Ido Guy, and Kira Radinsky. 2022. Time Masking for Temporal Language Models. In WSDM
'22: The Fifteenth ACM International Conference on Web Search and Data Mining, Virtual Event / Tempe, AZ, USA, February 21 - 25, 2022, pages 833–841.
ACM.
Guy D. Rosin and Kira Radinsky. 2022. Temporal Attention for Language Models. *CoRR*,
abs/2202.02093.
Dominik Schlechtweg, Barbara McGillivray, Simon Hengchen, Haim Dubossarsky, and Nina Tahmasebi.
2020. SemEval-2020 Task 1: Unsupervised Lexical Semantic Change Detection. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, SemEval@COLING2020, pages 1–23. International Committee for Computational Linguistics.
Dominik Schlechtweg, Sabine Schulte im Walde, and Stefanie Eckmann. 2018. Diachronic Usage Relatedness (DURel): A Framework for the Annotation
of Lexical Semantic Change. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 169–174, New Orleans, Louisiana. Association for Computational Linguistics.
Shuyi Xie, Jian Ma, Haiqin Yang, Lianxin Jiang, Yang Mo, and Jianping Shen. 2021. PALI at SemEval2021 Task 2: Fine-Tune XLM-RoBERTa for Word in Context Disambiguation. In *Proceedings of the* 15th International Workshop on Semantic Evaluation
(SemEval-2021), pages 713–718, Online. Association for Computational Linguistics.
## A Hyper-Parameters
| Hyper-parameter | Value |
|-----------------------------------|----------|
| hidden act | gelu |
| hidden dropout prob | 0.1 |
| hidden size | 1024 |
| initializer range | 0.02 |
| intermediate size | 4096 |
| layer norm eps | 1e-05 |
| max position embeddings | 514 |
| num attention heads | 16 |
| num hidden layers | 24 |
| position embedding type | absolute |
| vocab size | 250004 |
| learning rate cross-encoder | 1e-05 |
| XL-LEXEME | 1e-05 |
| weight decay cross-encoder | 0.01 |
| XL-LEXEME | 0.00 |
| max sequence length cross-encoder | λ = 256 |
| XL-LEXEME | λ∗ = 128 |
Table 3: XL-LEXEME and cross-encoder hyperparameters.
## ACL 2023 Responsible NLP Checklist

## A. For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 7

A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 And Section 4
✓ B1. Did you cite the creators of artifacts you used?
References
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section 4 and References
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 3 and Section 4

B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 3 and Section 4
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 4
## C ✓ **Did You Run Computational Experiments?** Section 3 And Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 4 and Appendix A
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 3

## D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
mccarthy-dore-2023-theory | Theory-Grounded Computational Text Analysis | https://aclanthology.org/2023.acl-short.136 | In this position paper, we argue that computational text analysis lacks and requires organizing principles. A broad space separates its two constituent disciplines{---}natural language processing and social science{---}which has to date been sidestepped rather than filled by applying increasingly complex computational models to problems in social science research. We contrast descriptive and integrative findings, and our review of approximately 60 papers on computational text analysis reveals that those from *ACL venues are typically descriptive. The lack of theory began at the area{'}s inception and has over the decades, grown more important and challenging. A return to theoretically grounded research questions will propel the area from both theoretical and methodological points of view. | # Theory-Grounded Computational Text Analysis
Arya D. McCarthy*♦and **Giovanna Maria Dora Dore***♠
♦Center for Language and Speech Processing, Johns Hopkins University
♠Krieger School of Arts and Sciences, Johns Hopkins University
## Abstract
In this position paper, we argue that computational text analysis lacks and requires organizing principles. A broad space separates its two constituent disciplines—natural language processing and social science—which has to date been sidestepped rather than filled by applying increasingly complex computational models to problems in social science research. We contrast descriptive and integrative findings, and our review of approximately 60 papers on computational text analysis reveals that those from
*ACL venues are typically descriptive. The lack of theory began at the area's inception and has, over the decades, grown more important and challenging. A return to theoretically grounded research questions will propel the area from both theoretical and methodological points of view.
## 1 Introduction
Computational text analysis methods—an umbrella combining natural language processing with social science—are in a honeymoon period (Lazer and Radford, 2017; van Atteveldt and Peng, 2018). Today's social scientist might reach for the tools of computer science for their speed, scale, granularity, and consistency; for instance, natural language processing offers "to analyze signals ranging from simple lexical cues to word clusters to choices of syntactic structure" (Boydstun et al., 2014). The numerical outputs tell a story that is simple, easy to make sense of, and in that regard comforting.
Conversely, today's computer scientist may see the problems of social science as answerable by objectivity and reductionism, eschewing interpretation for quantitative analysis.
The conclusion of this reasoning, and the dominant stance in computational social science, is a reliance on machines alone to answer questions in the field, surrendering to their supposed objectivity
or impartiality. Can a machine's output go beyond descriptive catalogs of evidence, accelerating understanding of processes and motivations? From our experience, computers are nowhere near supplanting humans in interpreting social science results.1 An interdisciplinary inquiry must go farther than matching computational techniques to social science questions (O'Connor et al., 2011; Nguyen et al., 2020). It embraces synergistic methodology and connects the norms and standards of evidence from both. This means partnering computer science's preference for the structured, generalizable, and objective with the unstructured, critical, and contextual which the social sciences champion. This level of interdisciplinarity addresses the question raised by descriptive findings: *So what?*

*Equal contribution.
We see theory as the solution, empowering rather than shackling investigations. What this paper advocates is not one particular theory—certainly these are myriad, and "even subject matter which has been under intensive and prolonged study remains at the unsettled periphery of research" (Nagel, 1963).
Instead, we expand on our prior work (Dore and McCarthy, 2022) to clarify calls echoed for decades by computational and social science (McDermott, 1976; Jelinek, 2005; Hajič and Hajičová, 2007; Hofman et al., 2018; Lipton and Steinhardt, 2019; Baden et al., 2021). Underlying each, we find, is the urge to return to theory, which we espouse herein.
## 2 Description Vs. Integration
We contrast descriptive findings and theoretical analysis. An example of a descriptive finding is that an apple falls, or that it falls faster when pushed than dropped, or even that it falls at a particular rate estimated with some standard error by a complex interpolation. A theoretical analysis of the same phenomenon, credited to Newton, is that a fundamental force acts upon the apple, and that this same force governs the motion of the heavens. The theoretical analysis links the finding about the world critically to a broader body of knowledge and context.

1 See, e.g., Noam Chomsky's remark on GPT-3: "You can't go to a physics conference and say: I've got a great theory. It accounts for everything and is so simple it can be captured in two words: 'Anything goes.' All known and unknown laws of nature are accommodated. . . Of course, everything impossible is accommodated also. That's GPT-3." [link]
Despite advances in causal inference in NLP,
the descriptive is all that a machine can provide to the social sciences (Feder et al., 2021). Certainly the methods of computational text analysis have advanced since the General Inquirer (Stone and Hunt, 1963) and Mosteller and Wallace's statistical inference of text authorship (1963). But methods are means, not ends. They uncover more descriptive findings in data: the rate of an apple's fall, the topics of refugees' tweets (Walk et al., 2022), the space given to marginalized groups in textbooks
(Lucy et al., 2020), or patterns of state censorship
(Bamman et al., 2012; King et al., 2013).
The foils to descriptive findings are *integrative* findings (Hofman et al., 2021), which offer causal explanations that enable future predictions—a theory, or as a 'model' in the sense of the Standard Model, rather than of a statistical model. Integrative findings can either offer new theories or couch their explanations in existing theories—but the theory is essential either way.
## 3 We Don't Integrate
To contrast descriptive and integrative findings, we reviewed approximately 60 papers in computational text analysis published in *ACL venues. In Table 1, we describe several of these in terms of their descriptive or theory-grounded contributions.2 Descriptive papers may refer to social science theories or make generalizable claims, as when Demszky et al. (2019)
write, "The shooter's race appears to play a role in topic preference: if the shooter is white, Democrats become more likely to focus on shooter's identity," but they do not link the two to each other.
An excellent theory-grounded quantitative work is Nelson (2021); she confirms some of the most compelling features of identity theory, specifically that identities based on race were most distinguished by cultural discourse, whereas those based on gender by the domestic and the economic discourse.
Similarly, we conducted theory-grounded quantitative work to investigate the application of the protest paradigm and thematic framing in how western- and Hong Kong-based newspapers portray protests in Hong Kong (McCarthy et al., 2021; McCarthy and Dore, 2022). Generally, it remains challenging to find computational social science papers in *ACL venues that go beyond description and prediction, advancing theory. Why is this? We believe it stemmed from the field's "empirical turn".3

2 Following Lipton and Steinhardt (2019), we only describe papers by established researchers to "avoid singling out junior students. . . who lack the opportunity to reply symmetrically".

Few remember when the meetings of ACL offered a few dozen papers, all entrenched in formalisms and linguistic theories. Arguably, 1996 was a turning point when the founders of SIGDAT
held the first EMNLP at Penn under the auspices of the ACL.4 This gave a spotlight to the few but growing empiricists in the field and drew in more.
EMNLP began a half-decade of measurable reorganization of the field (Anderson et al., 2012). That EMNLP remains affiliated with ACL keeps the language-focused machine learning practitioners in our tent. The slow blurring of boundaries between each *ACL conference's expectations (Church, 2020) increases this unity. Both groups belong under this tent. But without a doubt, one group's voice is becoming less heard.
Publication venues within the ACL focus on methods over theory.5 Techniques are taken off the shelf without critical examination because these are "the best" (often "state of the art") for their purposes (Ethayarajh and Jurafsky, 2020). This widens the gap between theoretical and empirical work.6 Hopkins and King (2010) claim, "computer scientists may be interested in finding the needle in the haystack. . . social scientists are more commonly interested in characterizing the haystack"—evincing the value of broader context.7 Wallach (2018), quoting Hopkins and King, explains that the two groups
| Paper | Assessment |
|-------|------------|
| **Descriptive** | |
| Chang et al. (2009) | The article presents new quantitative methods to measure semantic meaning in inferred topics. The authors emphasize the qualitative relevance of their findings as it validates the use of topics for corpus exploration and information retrieval. However, their working hypothesis and empirical findings are not connected to the extremely relevant field of communication theory. |
| Bamman et al. (2012) | The article presents the first large-scale analysis of political content censorship in social media. The authors miss the opportunity to relate their hypothesis and findings to censorship theory, a natural theoretical context for the research, which would strengthen the relevance and generalizability of the findings. |
| Field et al. (2018) | The article discusses media manipulation in Russia in the context of agenda-setting and framing, the tools that Russian state-owned (or heavily influenced) media outlets use to distract public attention from domestic economic politics. The authors implicitly refer to propaganda theory and autocratic theory throughout the article even though their findings are not discussed in relation to these theories. |
| Demszky et al. (2019) | The article applies "a more comprehensive NLP framework to study linguistic aspects of polarization in social media". While the article implicitly refers to theories of social conformity and social conflict, the findings are not linked or discussed (either explicitly or implicitly) in relation to the theoretical frameworks that the authors touch on in their §1. |
| **Integrative** | |
| DiMaggio et al. (2013) | The article describes how topic models of newspaper articles help to study the politicization of government support for arts organizations and artists in the late 1980s in the US. The authors clearly define the theoretical context of their investigation and emphasize the relationship between theory and method throughout the paper. |
| Bamman et al. (2014) | The article validates an empirical model that "employs multiple effects to account for the influence of extra-linguistic information (such as author)" by testing specific parameters against a variety of theory-based hypotheses derived from writing styles theories of England between 1700 and 1899. |
| Nelson (2021) | The article argues that the full potential of machine learning can be better realized by "leveraging the epistemological alignment between machine learning and inductive research." The author empirically demonstrates this by anchoring in identity theory a word embedding model of first-person narratives of the nineteenth-century U.S. South. |
Table 1: Contrast between work in computational text analysis with descriptive findings versus integrative findings.
are interested in very different research questions, and that computational social science must be more than computer science with social data; it must strive for valid explanatory models. In the same vein, at ACL 2022, ACL fellow Eduard Hovy remarked that NLP must be more than "just machine learning on corpora".
Social scientists are also coming to terms with the meaning of computational techniques applied more often in social science (Bail, 2014; Biernacki, 2015; Lee and Martin, 2015; Spillman, 2015). The focus of the debates, however, is on which methods are best suited to extract meaning from text, without addressing any theoretical considerations related to the methods or whether a theoretical framework for those methods even exists. The discussions on whether computational methods make social science research more efficient, reliable, and reproducible overtake attempts at theory-building.
## 4 Moving Forward
We are not denying the value of computational approaches to analyzing text. Certainly, computing can be an instrumental approach for modeling and understanding social complexity. This does not mean that other approaches, such as historical, ethnographic, or mathematical, become irrelevant.
On the contrary, computational methods necessarily (whether awarely or not) rely on these earlier approaches to add value, in terms of improving our explanations and understanding (Radford and Joseph, 2020).
As we are a field that prioritizes methods, consider the seminal book on methods in science: Abbott (2004) taxonomizes scientific ways of knowing.
Its five broad categories are ethnography, historical narration, standard causal analysis, small-N
comparison, and formal modeling. We in NLP
myopically choose the third and fifth of these, ignoring the value of the others. But the broader point of *Methods of Discovery* is not methods. It is the research question. Any methodology should be grounded in the question, not incremental tweaks and reviewers' comfort (Church, 2020). This admits even qualitative or mixed-method approaches to text analysis.
The role of humans in scientific inquiry is nothing new. Using qualitative analysis to complement quantitative techniques has its roots in Achen and Snidal (1989)'s recommendation to use historical case studies as a complement to statistical research.8 Their plea was strengthened by Verba's work in the early 1990s (Verba et al., 1993, 1995; Verba, 1996)
and Tarrow (1995), who openly called for bridging qualitative and quantitative modes of research in social science. In doing so, they have enriched the field with critical methodological innovations
(Gerring, 2004), benefiting from the recognition that
"quantitative methods must augment humans, not replace them" (Grimmer and Stewart, 2013, 4).
The field can draw more from social science's rich tradition of inductive theory-building and interpretation to develop its theoretical approach—to prize either induction or deduction alone is a myth of scientific procedure (Thagard, 1988), but the melding of the two opens new doors. Rather than eschewing the complexity (a criticism leveled by Baden et al., 2021), it should put complexity at the center of its ontology on the basis that there are no immutable laws in social life or optimal solutions to social problems.
Skepticism can linger toward findings not drawn from the standard practices of one's own field; indeed, social science was long skeptical of computational contributions (Armstrong, 1967). We believe that this drives the hyperfocus on improving a few accepted methods instead of exploring more broadly.
If the doorway between disciplines is only narrowly open, this reflects a lack of appreciation for each field's ways of knowing. The disciplinary divide keeps computational researchers from embracing methods beyond standard causal analysis or *formal modeling*, so the interpreter-centric richness allowed by histories, ethnographies, and small-N
exploration are precluded.
## 5 Conclusion
We have explained the distinction between descriptive and theoretical findings as it pertains to computational text analysis. The bulk of work we found provided vast descriptive findings, often of high quality, but not giving back to questions of theory.
We offer several suggestions on how to 'push the pendulum back' by prioritizing theory-building or theory-affirming research questions and accepting whichever methods are best suited toward answering them—not only the familiar and entrenched ones.

8 Expertise plays a role as well (Shing et al., 2018), which is why Mechanical Turk doesn't fill the need for qualitative analysis. This is exemplified by Radford and Joseph (2020)'s observation of "non-expert annotators provid[ing] unreliable annotations, even after a discussion period".
We are not the first to advocate for a shift in the patterns of applying computational techniques to real-world problems. There is a steady drumbeat from voices in the field advocating careful approaches (Nagel, 1963; McDermott, 1976; Jelinek, 2005; Hajič and Hajičová, 2007; Hofman et al., 2018; Lipton and Steinhardt, 2019; Baden et al., 2021). What we see underlying all of these—those writing against 'mathiness' and speculation, advocating for clear evaluation over anecdotes, criticizing textual researchers' dilution of conceptual standards, highlighting work that ties linguistic information into complex models—is an unspoken, perhaps unrealized, call for a return to theory.
Not only do we aver that incorporating theory is essential; but also, other fields have strengthened themselves when espousing organizing principles beyond those of their progenitors. Behavioral economics is a success story here. It transcended the neat (but psychosocially stripped) mathematics it draws from to acknowledge deviations from rationality and blend economics with cognitive science
(Kahneman and Tversky, 1979; Thaler, 1980; Thaler and Sunstein, 2009).
For *scientific*—not simply *engineering*—
advances to arise from the *ACL community, authors and reviewers alike must resist the temptation toward incremental, 'safe' research and follow Church (2005): "Controversial papers are great; boring unobjectionable incremental papers are not." In reviewing new research, we should privilege not only work that presents new and unusual computational methods, but also interactions between computational and humanistic approaches to answering research questions. EMNLP was founded because of reviewing biases at ACL against groundbreaking methodological advances, and since then the two have homogenized; "EMNLP reviewing is no longer much of a differentiator" (Church, 2020).
We found that theoretically grounded findings in text analysis are often published in non-*ACL
venues (Table 1), but ACL sets the standard for work involving computational text analysis and NLP. Is there no home for groundbreaking integrative or interdisciplinary work in *ACL, such that a new venue is required? Or can we adapt our standards to invite deeper connections to theory and new ways of knowing?
## Acknowledgments
This publication was made possible in part by a grant from the American Political Science Association to A.D.M. and G.M.D.D. The statements made and views expressed are solely the responsibility of the authors. A.D.M. is supported by an Amazon Fellowship and a Frederick Jelinek Fellowship.
## Limitations
The key limitation of our work is that, when conducting the review of approximately 60 papers (by searching through the ACL Anthology for works in computational social science since 2010), we encountered a skewed distribution of descriptive versus integrative works. In fact, it was relatively simple to find descriptive works, and that section of Table 1 could have been much longer. We also recognize that, due to the mixed nature of our field, scientific and integrative findings are not the only goal—our 'big tent' includes engineers as well, who value gains in performance indicators. Finally, the fact that we have few examples of papers showing a return to theory raises the possibility that our central claim is misinterpreted in a normative way, as a mandate.
## References
Andrew Delano Abbott. 2004. Methods of discovery:
Heuristics for the social sciences (contemporary societies). WW Norton & Company.
Christopher H. Achen and Duncan Snidal. 1989. Rational deterrence theory and comparative case studies.
World Politics, 41(2):143–169.
Ashton Anderson, Dan Jurafsky, and Daniel A. McFarland. 2012. Towards a computational history of the ACL: 1980-2008. In Proceedings of the ACL-2012 Special Workshop on Rediscovering 50 Years of Discoveries, pages 13–21, Jeju Island, Korea. Association for Computational Linguistics.
J. Scott Armstrong. 1967. Derivation of theory by means of factor analysis or Tom Swift and his electric factor analysis machine. *The American Statistician*,
21(5):17–21.
Christian Baden, Christian Pipal, Martijn Schoonvelde, and Mariken A. C. G van der Velden. 2021. Three gaps in computational text analysis methods for social sciences: A research agenda. *Communication* Methods and Measures, 0(0):1–18.
Christopher A. Bail. 2014. The cultural environment:
measuring culture with big data. *Theory and Society*,
43(3):465–482.
David Bamman, Brendan O'Connor, and Noah Smith.
2012. Censorship and deletion practices in chinese social media. *First Monday*, 17(3).
David Bamman, Ted Underwood, and Noah A. Smith.
2014. A Bayesian mixed effects model of literary character. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics
(Volume 1: Long Papers), pages 370–379, Baltimore, Maryland. Association for Computational Linguistics.
Richard Biernacki. 2015. How to do things with historical texts. *American Journal of Cultural Sociology*,
3(3):311–352. Copyright - © Palgrave Macmillan, a division of Macmillan Publishers Ltd 2015; Last updated - 2018-09-25.
Amber E Boydstun, Dallas Card, Justin Gross, Paul Resnick, and Noah A Smith. 2014. Tracking the development of media frames within and across policy issues. Unpublished.
Jonathan Chang, Jordan Boyd-Graber, Sean Gerrish, Chong Wang, and David M. Blei. 2009. Reading tea leaves: How humans interpret topic models. In Proceedings of the 22nd International Conference on Neural Information Processing Systems, NIPS'09, page 288–296, Red Hook, NY, USA. Curran Associates Inc.
Kenneth Church. 2005. Last words: Reviewing the reviewers. *Computational Linguistics*, 31(4):575–
578.
Kenneth Ward Church. 2020. Emerging trends: Reviewing the reviewers (again). *Natural Language* Engineering, 26(2):245–257.
James S. Coleman. 1986. Social theory, social research, and a theory of action. *American Journal of Sociology*, 91(6):1309–1335.
Dorottya Demszky, Nikhil Garg, Rob Voigt, James Zou, Jesse Shapiro, Matthew Gentzkow, and Dan Jurafsky.
2019. Analyzing polarization in social media: Method and application to tweets on 21 mass shootings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2970–3005, Minneapolis, Minnesota. Association for Computational Linguistics.
Paul DiMaggio, Manish Nag, and David Blei. 2013.
Exploiting affinities between topic modeling and the sociological perspective on culture: Application to newspaper coverage of U.S. government arts funding. *Poetics*, 41(6):570–606. Topic Models and the Cultural Sciences.
Giovanna Maria Dora Dore and Arya D. McCarthy. 2022.
Learning to play with the machines in social science research: Bringing the theory back in. In *ICML*
2022 Workshop on Human-Machine Collaboration and Teaming, Baltimore, Maryland.
Kawin Ethayarajh and Dan Jurafsky. 2020. Utility is in the eye of the user: A critique of NLP leaderboards.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing (EMNLP),
pages 4846–4853, Online. Association for Computational Linguistics.
Amir Feder, Katherine A. Keith, Emaad Manzoor, Reid Pryzant, Dhanya Sridhar, Zach Wood-Doughty, Jacob Eisenstein, Justin Grimmer, Roi Reichart, Margaret E.
Roberts, Brandon M. Stewart, Victor Veitch, and Diyi Yang. 2021. Causal inference in natural language processing: Estimation, prediction, interpretation and beyond. *CoRR*, abs/2109.00725.
Anjalie Field, Doron Kliger, Shuly Wintner, Jennifer Pan, Dan Jurafsky, and Yulia Tsvetkov. 2018. Framing and agenda-setting in Russian news: a computational analysis of intricate political strategies. In *Proceedings of the 2018 Conference on Empirical Methods* in Natural Language Processing, pages 3570–3580, Brussels, Belgium. Association for Computational Linguistics.
John Gerring. 2004. What is a case study and what is it good for? *American Political Science Review*,
98(2):341–354.
Justin Grimmer and Brandon M. Stewart. 2013. Text as data: The promise and pitfalls of automatic content analysis methods for political texts. *Political Analysis*,
21(3):267–297.
Jan Hajič and Eva Hajičová. 2007. Some of our best friends are statisticians. In *Text, Speech and Dialogue*, pages 2–10, Berlin, Heidelberg. Springer Berlin Heidelberg.
C. A. R. Hoare and C. B. Jones. 1989. *Essays in Computing Science*. Prentice-Hall, Inc., USA.
Jake Hofman, Miro Dudík, and Daniel G. Goldstein.
2018. Perspective annotation for numerical representations. United States Patent Application.
Jake M Hofman, Duncan J Watts, Susan Athey, Filiz Garip, Thomas L Griffiths, Jon Kleinberg, Helen Margetts, Sendhil Mullainathan, Matthew J Salganik, Simine Vazire, et al. 2021. Integrating explanation and prediction in computational social science. *Nature*,
595(7866):181–188.
Daniel J. Hopkins and Gary King. 2010. A method of automated nonparametric content analysis for social science. *American Journal of Political Science*,
54(1):229–247.
Frederick Jelinek. 2005. Some of my best friends are linguists. *Language Resources and Evaluation*, 39(1):25–
34.
Daniel Kahneman and Amos Tversky. 1979. Prospect theory: An analysis of decision under risk. *Econometrica*, 47(2):263–291.
Gary King, Jennifer Pan, and Margaret E. Roberts. 2013.
How censorship in China allows government criticism but silences collective expression. American Political Science Review, 107(2 (May)):1–18.
David Lazer and Jason Radford. 2017. Data ex machina:
Introduction to big data. *Annual Review of Sociology*,
43(1):19–39.
Monica Lee and John Levi Martin. 2015. Coding, counting and cultural cartography. American Journal of Cultural Sociology, 3(1):1–33.
Zachary C. Lipton and Jacob Steinhardt. 2019. Troubling trends in machine learning scholarship: Some ML
papers suffer from flaws that could mislead the public and stymie future research. *Queue*, 17(1):45–77.
Li Lucy, Dorottya Demszky, Patricia Bromley, and Dan Jurafsky. 2020. Content analysis of textbooks via natural language processing: Findings on gender, race, and ethnicity in Texas U.S. history textbooks. AERA
Open, 6(3):2332858420940312.
Arya D. McCarthy and Giovanna Maria Dora Dore. 2022.
Hong Kong: Longitudinal and synchronic characterisations of protest news between 1998 and 2020. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 2891–2900, Marseille, France. European Language Resources Association.
Arya D. McCarthy, James Scharf, and Giovanna Maria Dora Dore. 2021. A mixed-methods analysis of western and Hong Kong–based reporting on the 2019–2020 protests. In Proceedings of the 5th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature, pages 178–188, Punta Cana, Dominican Republic (online). Association for Computational Linguistics.
Drew McDermott. 1976. Artificial intelligence meets natural stupidity. *SIGART Bull.*, (57):4–9.
Frederick Mosteller and David L. Wallace. 1963. Inference in an authorship problem. *Journal of the* American Statistical Association, 58(302):275–309.
Ernest Nagel. 1963. The structure of science: Problems in the logic of scientific explanation. *Mind*, 72(287).
Laura K. Nelson. 2021. Leveraging the alignment between machine learning and intersectionality: Using word embeddings to measure intersectional experiences of the nineteenth century U.S. South. *Poetics*,
88:101539. Measure Mohr Culture.
Dong Nguyen, Maria Liakata, Simon DeDeo, Jacob Eisenstein, David Mimno, Rebekah Tromble, and Jane Winters. 2020. How we do things with words:
Analyzing text as social and cultural data. *Frontiers* in Artificial Intelligence, 3.
Brendan O'Connor, David Bamman, and Noah A Smith.
2011. Computational text analysis for social science:
Model complexity and assumptions. In *Proc. of the* NIPS Workshop on Comptuational Social Science and the Wisdom of Crowds.
Jason Radford and Kenneth Joseph. 2020. Theory in, theory out: The uses of social theory in machine learning for social science. *Frontiers in Big Data*, 3.
William J Rapaport. 2005. Philosophy of computer science: An introductory course. *Teaching philosophy*,
28(4):319–341.
Stuart C. Shapiro. 2001. Computer science: The study of procedures. Technical report, Department of Computer Science and Engineering, University of Buffalo.
Han-Chin Shing, Suraj Nair, Ayah Zirikly, Meir Friedenberg, Hal Daumé III, and Philip Resnik. 2018. Expert, crowdsourced, and machine assessment of suicide risk via online postings. In Proceedings of the Fifth Workshop on Computational Linguistics and Clinical Psychology: From Keyboard to Clinic, pages 25–36, New Orleans, LA. Association for Computational Linguistics.
David A. Siegel. 2018. Analyzing computational models.
American Journal of Political Science, 62(3):745–759.
Lyn Spillman. 2015. Ghosts of straw men: A reply to Lee and Martin. American Journal of Cultural Sociology, 3(3):365–379.
Philip J. Stone and Earl B. Hunt. 1963. A computer approach to content analysis: Studies using the general inquirer system. In *Proceedings of the May 21-23,*
1963, Spring Joint Computer Conference, AFIPS '63
(Spring), page 241–256, New York, NY, USA. Association for Computing Machinery.
Sidney Tarrow. 1995. Bridging the quantitativequalitative divide in political science. *American Political Science Review*, 89(2):471–474.
Paul Thagard. 1988. *Computational Philosophy of Science*. MIT Press.
Richard Thaler. 1980. Judgement And Decision Making Under Uncertainty: What Economists Can Learn From Psychology. Risk Analysis in Agriculture: Research and Educational Developments, January 16-18, 1980, Tucson, Arizona 271572, Regional Research Projects > W-149: An Economic Evaluation of Managing Market Risks in Agriculture.
Richard H. Thaler and Cass R. Sunstein. 2009. *Nudge:*
Improving decisions about health, wealth, and happiness. Penguin.
Wouter van Atteveldt and Tai-Quan Peng. 2018. When communication meets computation: Opportunities, challenges, and pitfalls in computational communication science. *Communication Methods and Measures*,
12(2-3):81–92.
Sidney Verba. 1996. The citizen as respondent: Sample surveys and american democracy. American Political Science Review, 90(1):1–7. Presidential Address, American Political Science Association, 1995.
Sidney Verba, Kay Lehman Schlozman, Henry Brady, and Norman H. Nie. 1993. Citizen activity: Who participates? what do they say? The American Political Science Review, 87(2):303–318.
Sidney Verba, Kay Lehman Schlozman, and Henry E
Brady. 1995. Voice and equality: Civic voluntarism in American politics. Harvard University Press.
Erin Walk, Elizabeth Parker-Magyar, Kiran Garimella, Ahmet Akbiyik, and Fotini Christia. 2022. Social media narratives across platforms in conflict: Evidence from Syria. MIT Political Science Department Research Paper No. 2022-2, available at SSRN.
Hanna Wallach. 2018. Computational social science
=/ computer science + social data. *Commun. ACM*,
61(3):42–44.
## ACL 2023 Responsible NLP Checklist

## A. For Every Submission:
✓ A1. Did you describe the limitations of your work?
Unnumbered; appears on page 5.
✗ A2. Did you discuss any potential risks of your work?
This is a position paper.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Not applicable. Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Not applicable. Left blank.
## C ✗ **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? Not applicable. Left blank.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Not applicable. Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Not applicable. Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
martinez-lorenzo-etal-2023-amrs | {AMR}s Assemble! Learning to Ensemble with Autoregressive Models for {AMR} Parsing | https://aclanthology.org/2023.acl-short.137 | In this paper, we examine the current state-of-the-art in AMR parsing, which relies on ensemble strategies by merging multiple graph predictions. Our analysis reveals that the present models often violate AMR structural constraints. To address this issue, we develop a validation method, and show how ensemble models can exploit SMATCH metric weaknesses to obtain higher scores, but sometimes result in corrupted graphs. Additionally, we highlight the demanding need to compute the SMATCH score among all possible predictions. To overcome these challenges, we propose two novel ensemble strategies based on Transformer models, improving robustness to structural constraints, while also reducing the computational time. Our methods provide new insights for enhancing AMR parsers and metrics. Our code is available at [\url{https://www.github.com/babelscape/AMRs-Assemble}](\url{https://www.github.com/babelscape/AMRs-Assemble}). | # Amrs Assemble! Learning To Ensemble With Autoregressive Models For Amr Parsing
Abelardo Carlos Martínez Lorenzo1,2∗ **Pere-Lluís Huguet Cabot**1,2∗
Roberto Navigli2 1 Babelscape, Italy 2 Sapienza NLP Group, Sapienza University of Rome
{martinez,huguetcabot}@babelscape.com [email protected]
## Abstract
In this paper, we examine the current state-ofthe-art in AMR parsing, which relies on ensemble strategies by merging multiple graph predictions. Our analysis reveals that the present models often violate AMR structural constraints.
To address this issue, we develop a validation method, and show how ensemble models can exploit SMATCH metric weaknesses to obtain higher scores, but sometimes result in corrupted graphs. Additionally, we highlight the demanding need to compute the SMATCH
score among all possible predictions. To overcome these challenges, we propose two novel ensemble strategies based on Transformer models, improving robustness to structural constraints, while also reducing the computational time. Our methods provide new insights for enhancing AMR parsers and metrics. Our code is available at github.com/babelscape/AMRsAssemble.
## 1 Introduction
Semantic Parsing is the subfield of Natural Language Understanding (Navigli, 2018) that aims to encode the meaning of a sentence in a machine-interpretable structure. One of the formalisms that has gained the most attention is the Abstract Meaning Representation (Banarescu et al., 2013, AMR),
which embeds the semantics of a sentence in a directed acyclic graph. In AMR, concepts are represented by nodes, and semantic relations between concepts by edges (see Figure 1). AMR parsing has been applied to various areas of NLP, including Question Answering (Lim et al., 2020; Bonial et al.,
2020; Kapanipathi et al., 2021), Text Summarization (Hardy and Vlachos, 2018; Liao et al., 2018),
Information Extraction (Rao et al., 2017), and Machine Translation (Song et al., 2019), and has been extended to non-English languages (Anchiêta and Pardo, 2020; Blloshmi et al., 2020; Oral and Eryiğit, 2022; Navigli et al., 2022; Martínez Lorenzo et al., 2022).

∗ Equal contributions.

![0_image_0.png](0_image_0.png)
Current AMR parsing approaches are based on Transformer sequence-to-sequence (seq2seq) models (Bevilacqua et al., 2021, SPRING), which translate text into a linearized representation of the AMR graph. Recently, there have been some improvements through techniques such as pre-training on structural graph information (Bai et al., 2022),
incorporating shallow semantic information (Chen et al., 2022), modifying ancestor information during decoding (Yu and Gildea, 2022), and adding a structural graph prediction task during training
(Cheng et al., 2022). Nevertheless, in an attempt to push SMATCH (Cai and Knight, 2013) performance, there has been a recent trend towards ensemble models, which merge AMR graph predictions from multiple systems. Some examples include Graphene (Lam et al., 2021), a graph mining algorithm that searches for the largest common structure among the graph predictions, or the Maximum Bayes SMATCH Ensemble (Lee et al., 2022),
which introduces a Bayesian ensemble approach in order to create high-quality silver data. However, notwithstanding their higher performance, ensemble models are potentially more vulnerable to producing corrupted AMR graphs. For instance, Opitz and Frank (2022) highlighted that better SMATCH
scores do not always correlate with better parsing.
In this study, we conduct an investigation into the reasons why ensemble models improve their performance and in which cases they do so despite producing corrupted output. Our analysis reveals three significant drawbacks in these approaches: i)
ensemble systems do not consider structural constraints in AMR, treating AMR graphs as regular sets of triplets, ii) they rely on SMATCH, which does not impose AMR constraints, exacerbating the problem of corrupted AMR graphs produced by ensemble methods that prioritize a higher score over adherence to structural constraints, as is the case with Graphene, and *iii)* they are computationally expensive. Our findings highlight the need for more robust evaluation metrics that hold to the structural constraints of AMR.
In this paper, we present two novel ensemble strategies that address the above limitations of current approaches. In the first strategy, we follow previous *merging* methods, showing how to train a seq2seq model to combine different predictions by taking into account both the original sentence and predictions from multiple models. In our second approach, we propose using *selection* as the ensembling strategy, where we select the best graph instead of merging. Specifically, we base our method on the perplexity score of the model. Additionally, we propose a graph algorithm that checks the structural constraints in AMR graphs. Through these contributions, we aim to provide more robust and efficient solutions for ensembling AMRs.
## 2 Amrs Assemble!
The task of AMR parsing can be framed as a seq2seq task, where the input t = [x1, x2, ..., xm] is a sequence of m tokens and the output g = [g1, g2, ..., gn] is a linearized graph with n tokens. To illustrate, the linearized representation of the AMR graph in Figure 1 is as follows:

```
( z0 / schedule -01
    : ARG0 ( z1 / person
        : name ( z2 / name
            : op1 " Antonio "
            : op2 " Banderas "))
    : ARG1 ( z3 / premiere -01
        : ARG0 z1
        : ARG1 ( z4 / movie
            : poss z1 )
        : time ( z5 / date - entity
            : time "15:00")))
```
The goal of the seq2seq AMR parsing task is to learn a function that models the conditional probability:

$$p(g|t)=\prod_{i=1}^{n}p(g_{i}|g_{<i},t),\qquad\qquad(1)$$

where $g_{<i}$ are the tokens of the linearized graph g before step i.
In this work, we use LongT5 (Guo et al., 2022)
as the seq2seq model, which is specialized for long sequences, making it feasible to provide sentences and linearized graphs as input.
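As a concrete illustration, a minimal parsing call with a LongT5 checkpoint from the transformers library might look as follows; the checkpoint name and decoding settings are placeholders, and meaningful AMR output of course presupposes fine-tuning on AMR 3.0 as described in Section 2.2.

```python
# Hedged sketch: checkpoint name and decoding settings are placeholders, not the
# paper's released configuration.
from transformers import AutoTokenizer, LongT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")
model = LongT5ForConditionalGeneration.from_pretrained("google/long-t5-tglobal-base")

sentence = "Antonio Banderas will premiere his movie at 15:00."
inputs = tokenizer(sentence, return_tensors="pt")

# Autoregressive decoding of the linearized graph, token by token, as in Eq. (1).
output_ids = model.generate(**inputs, num_beams=5, max_length=512)
linearized_amr = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(linearized_amr)
```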
## 2.1 Pre-Training
To enhance the structure awareness of the language model in relation to AMR graphs and ensembling techniques, we extend the graph self-supervised pre-training method proposed by Bai et al. (2022, AMRBart). Formally, we denote a sentence as $t = [x_1, x_2, \ldots, x_m]$, a graph as $g = [g_1, g_2, \ldots, g_n]$, and a prediction by system s as $p_s = [p^s_1, p^s_2, \ldots, p^s_{l_s}]$. We follow the AMRBart noise function with a dynamic masking rate and denote the noisy text and graph as $\hat{t}$ and $\hat{g}$, respectively. Moreover, let $\bar{t}$, $\bar{g}$, and $\bar{p}$ be the empty text, graph and prediction, respectively. As shown in Table 1, our pre-training procedure includes tasks presented by AMRBart, such as: i) empty text graph denoising ($\bar{t}\hat{g}2g$), ii) text augmented graph denoising ($t\hat{g}2g$), and iii) noisy text augmented graph denoising ($\hat{t}\hat{g}2g$). Additionally, we introduce: iv) empty text multiple graph denoising ($\bar{t}\hat{g}_1\ldots\hat{g}_k2g$), where the target graph is generated using different graphs' masked versions, and v) noisy text augmented multiple graph denoising ($\hat{t}\hat{g}_1\ldots\hat{g}_k2g$), where we also include the masked sentence.
## 2.2 Fine-Tuning
Prediction Corpus To fine-tune ensemble systems we create a corpus of multiple predictions, starting from AMR 3.0 (LDC2020T02), which consists of 59,255 human-annotated sentence-graph pairs. We create five distinct train-test splits of this dataset in such a way that each test set is one fifth of the data and mutually exclusive. We train five separate models, based on Blloshmi et al. (2021), on the corresponding training sets and use each model to generate predictions for its respective test set. By combining all of the predicted test sets, we obtain a corpus of AMR predictions. However, to train an ensemble model, it is necessary to merge predictions from multiple models. Therefore, we
| Phase | Task | Input | Output |
|---|---|---|---|
| Pre-training | $\bar{t}\hat{g}2g$ | <s> [mask] <g> g1, ...[mask]..., gn </s> | <s> g1, g2, ..., gn </s> |
| | $t\hat{g}2g$ | <s> x1, x2, ..., xm <g> g1, ...[mask]..., gn </s> | <s> g1, g2, ..., gn </s> |
| | $\hat{t}\hat{g}2g$ | <s> x1, ...[mask]..., xm <g> g1, ...[mask]..., gn </s> | <s> g1, g2, ..., gn </s> |
| | $\bar{t}\hat{g}_1\ldots\hat{g}_k2g$ | <s> [mask] <g> g1, ...[mask]1..., gn <g> ... <g> g1, ...[mask]k..., gn </s> | <s> g1, g2, ..., gn </s> |
| | $\hat{t}\hat{g}_1\ldots\hat{g}_k2g$ | <s> x1, ...[mask]..., xm <g> g1, ...[mask]1..., gn <g> ... <g> g1, ...[mask]k..., gn </s> | <s> g1, g2, ..., gn </s> |
| Fine-tuning | $t\bar{p}_{1\ldots k}2g$ | <s> x1, x2, ..., xm <g> [mask] </s> | <s> g1, g2, ..., gn </s> |
| | $\bar{t}p_{1\ldots k}2g$ | <s> [mask] <g> p^1_1, p^1_2, ..., p^1_{l1} <g> ... <g> p^k_1, p^k_2, ..., p^k_{lk} </s> | <s> g1, g2, ..., gn </s> |
| | $tp_{1\ldots k}2g$ | <s> x1, x2, ..., xm <g> p^1_1, p^1_2, ..., p^1_{l1} <g> ... <g> p^k_1, p^k_2, ..., p^k_{lk} </s> | <s> g1, g2, ..., gn </s> |

Table 1: Input and output formats of the pre-training and fine-tuning tasks.
generate five distinct prediction corpora by repeating this process four additional times with different train-test split sets.
Strategy Having obtained a corpus comprising multiple AMR predictions, we design a set of tasks that fine-tune the model for ensembling. The first task is AMR parsing ($t\bar{p}_{1\ldots k}2g$), i.e., an AMR graph g is generated by using only a sentence t as input. In the second task, ensemble AMR predictions ($\bar{t}p_{1\ldots k}2g$), the model is provided with a random set of AMR predictions p without the corresponding sentence, so it is forced to use just graph information to ensemble. In the last task, ensemble AMR predictions using the sentence ($tp_{1\ldots k}2g$), the model is provided with both a random set of AMR predictions p and the original sentence t. To ensure that the model is able to learn to merge a variety of predictions, we randomly modify the samples by changing the order and number of predictions in each epoch. As a result of this process, we obtain a model that is able to effectively integrate information from multiple sources to generate high-quality AMR graphs, without relying on the expensive SMATCH metric as has been the case for previous ensemblers.
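A minimal sketch of how such fine-tuning samples could be assembled is given below; the separator tokens follow Table 1, while the sampling details are our own simplification of the "random order and number of predictions" procedure, not the authors' released code.

```python
import random

def build_ensemble_sample(sentence, predictions, gold_graph, use_sentence=True):
    """Build one (source, target) pair for the ensembling fine-tuning tasks.

    `predictions` holds linearized AMR graphs from different parsers; their
    number and order are re-sampled so the model learns to merge arbitrary
    subsets of candidates (this re-sampling is repeated every epoch).
    """
    k = random.randint(1, len(predictions))
    preds = random.sample(predictions, k=k)          # random subset, random order
    text_part = sentence if use_sentence else "[mask]"
    source = f"<s> {text_part} " + " ".join(f"<g> {p}" for p in preds) + " </s>"
    target = f"<s> {gold_graph} </s>"
    return source, target
```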
## 2.3 Assemble! Zero & Avg
Nevertheless, using large autoregressive models to generate AMR graphs can be computationally expensive. Therefore, we propose an alternative approach that is more effective than previous merging strategies. Our method selects the best graph from a set of predictions. To achieve this, we introduce two novel scoring functions, in which we provide each predicted graph to the decoder of a model and extract their perplexity score, which can be done with a single forward pass. In the first method
(Assemble!*zero*), we leverage our trained ensemble model by providing the sentence and all the predictions in order to extract their perplexities and select the smallest one, i.e., we select prediction $p_{s'}$, where:

$$s'=\operatorname*{argmin}_{s\in\{1,\ldots,l\}}\mathrm{perplexity}(tp_{1\ldots l}2p_{s}).$$

In the second method (Assemble!avg), instead of using our ensembler, we use each model that generated the predictions to extract the perplexity for all the candidates. The final output is the graph $p_{s'}$ with the lowest average perplexity score, where:

$$s'=\operatorname*{argmin}_{s\in\{1,\ldots,l\}}\frac{1}{l}\sum_{j\in\{1,\ldots,l\}}\mathrm{perplexity}_{j}(t2p_{s}).$$
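The two selectors can be sketched as follows, assuming seq2seq models whose forward pass returns the mean token-level negative log-likelihood; the input formatting with <g> separators mirrors Table 1 and is otherwise an assumption.

```python
import math
import torch

@torch.no_grad()
def perplexity(model, tokenizer, source, candidate):
    """Perplexity of a candidate linearized graph under a seq2seq model,
    obtained with a single teacher-forced forward pass."""
    enc = tokenizer(source, return_tensors="pt")
    labels = tokenizer(candidate, return_tensors="pt").input_ids
    loss = model(**enc, labels=labels).loss          # mean token-level NLL
    return math.exp(loss.item())

def assemble_zero(ensembler, tokenizer, sentence, candidates):
    # Assemble!zero: the trained ensembler conditions on the sentence plus all
    # candidates, and the candidate with the lowest perplexity is returned.
    source = f"<s> {sentence} " + " ".join(f"<g> {c}" for c in candidates) + " </s>"
    return min(candidates, key=lambda c: perplexity(ensembler, tokenizer, source, c))

def assemble_avg(parsers, tokenizers, sentence, candidates):
    # Assemble!avg: average the perplexity assigned by each individual parser.
    def avg_ppl(c):
        return sum(perplexity(m, t, sentence, c)
                   for m, t in zip(parsers, tokenizers)) / len(parsers)
    return min(candidates, key=avg_ppl)
```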
## 3 Experiments

## 3.1 Setup
Dataset We evaluate our model using AMR 3.0.
For pre-training, we use the same 200k silver data parsed by Bevilacqua et al. (2021, SPRING) from the Gigaword *(LDC2011T07)* corpus. For finetuning, we use the corpus described in Section 2.2.
Metric To evaluate our results, we employ the SMATCH metric, which quantifies the similarity between graphs by measuring the degree of overlap between their triplets, and SMATCH's breakdown metrics (see Appendix D). In addition, we validate our results using two novel AMR metrics: S2MATCH (Opitz et al., 2020) and WWLK
(Opitz et al., 2021), in its WWLK-k3e2n version introduced in Opitz et al. (2021).
Ensemble Baselines For our selection strategy, we use the system of Barzdins and Gosko
(2016) as a baseline, which calculates the average SMATCH score for a given graph in comparison to all the other candidates and selects the one with the highest score.
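For reference, this selection baseline (SMATCHavg in Table 2) reduces to the following sketch, where `smatch_f1` is an assumed helper returning the SMATCH F1 between two AMR strings (e.g., a thin wrapper around the `smatch` package).

```python
def select_by_average_smatch(candidates, smatch_f1):
    """Keep the candidate with the highest average SMATCH F1 against all other
    candidates; `smatch_f1(a, b)` is an assumed helper, not a fixed API."""
    def avg_score(i):
        others = [c for j, c in enumerate(candidates) if j != i]
        return sum(smatch_f1(candidates[i], o) for o in others) / len(others)
    return candidates[max(range(len(candidates)), key=avg_score)]
```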
Our baseline for merging is Graphene (Lam et al., 2021), an algorithm that identifies the graph with the most nodes and edges in common among
| Model | Time (s) | Corrupt. | SMATCH | S2MATCH | WWLK | Unlab. | NoWSD | Conc. | NER | Neg. | Wiki | Reent. | SRL |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| *Predictions* | | | | | | | | | | | | | |
| SPRING1 | - | 54 | 83.1 | 84.3 | 84.9 | 86.2 | 83.6 | 89.3 | 87.7 | 70.9 | 81.5 | 72.9 | 81.8 |
| SPRING2 | - | 52 | 82.7 | 83.9 | 82.1 | 85.9 | 83.2 | 89.0 | 87.5 | 72.6 | 80.2 | 73.0 | 81.4 |
| SPRING3 | - | 73 | 83.0 | 84.3 | 85.1 | 86.3 | 83.5 | 89.3 | 87.6 | 72.6 | 81.8 | 73.1 | 81.7 |
| SPRING4 | - | 33 | 82.8 | 84.0 | 84.1 | 86.0 | 83.3 | 88.9 | 87.3 | 71.7 | 81.5 | 72.8 | 81.4 |
| SPRING5 | - | 104 | 82.6 | 83.9 | 84.5 | 85.8 | 83.2 | 89.2 | 87.3 | 73.0 | 81.6 | 73.0 | 81.4 |
| Best*graph* | - | 51 | 86.5 | 87.5 | 88.0 | 89.0 | 86.9 | 91.7 | 89.9 | 76.5 | 83.8 | 77.7 | 85.4 |
| *Mergers* | | | | | | | | | | | | | |
| Graphene*base* | 810 | 374 | 83.6 | 84.8 | 84.9 | 86.6 | 84.1 | 89.8 | 88.0 | 73.5 | 81.2 | 72.3 | 82.4 |
| Graphene*SMATCH* | 11,884 | 260 | 83.8 | **85.0** | 85.0 | 86.9 | **84.4** | **89.9** | 88.1 | **73.8** | 81.3 | 73.7 | **82.6** |
| Assemble! | 431 | 6 | 83.8 | 85.0 | 85.2 | **87.0** | 84.3 | 89.7 | **88.3** | 72.9 | **81.7** | **74.2** | 82.3 |
| *Selectors* | | | | | | | | | | | | | |
| SMATCHavg | 493 | 51 | 83.7 | 85.0 | 85.3 | 86.8 | 84.2 | 89.7 | 88.1 | 73.3 | 82.0 | 73.9 | 82.4 |
| Assemble!*zero* | **256** | **13** | 83.9 | 85.1 | **85.4** | 87.1 | 84.4 | **89.9** | **88.3** | **74.0** | **82.2** | 74.3 | 82.5 |
| Assemble!avg | 635 | 22 | 84.1 | **85.3** | 84.4 | **87.2** | **84.6** | **89.9** | **88.3** | 73.3 | **82.2** | **74.6** | **82.8** |

Table 2: Inference time, number of corrupted graphs (out of 1898) and parsing scores (SMATCH, S2MATCH, WWLK and SMATCH breakdown metrics) on the AMR 3.0 test set.
different graphs. Specifically, given a pivot graph gi (where i = 1, 2*, ..., k*), Graphene collects votes from the other graphs for every existing vertex and existing/non-existing edges to correct gi. We use two variants of Graphene, i) Graphene*base*, where every input graph is chosen as a pivot graph once, and the best among the modified pivot graphs is chosen as the final prediction based on average support; and ii) Graphene*smatch*, which is similar to Graphene*base* but chooses the best modified pivot graph based on average SMATCH score, similar to Barzdins and Gosko (2016).
We do not compare our approach using Maximum Bayes SMATCH Ensemble (Lee et al., 2022),
as it is a technique for producing high-quality silver data by combining SMATCH-based ensembling techniques with ensemble distillation, and its code and data are not publicly available.
Our Models We simulate an ensemble of five models obtained by training SPRING on five different seeds, and apply each of these models to the test split of AMR 3.0. Assemble! and Assemble!*zero* rely on LongT5 (Guo et al., 2022)
and are trained as explained in Section 2.
## 3.2 Results
We present our results in Table 2. The *Predictions* block shows the performance of each individual system used for ensembling, which have an average SMATCH score of 82.8. The Best*graphs* row portrays the upper bound of the selection strategy, where the SMATCH score is calculated with an oracle that selects the graph with the highest SMATCH. This score is 3.4 points higher than the best predictions. The *Mergers* block presents the results of the ensembling strategies that combine predictions, where we observe that our model performs comparably to Graphene*smatch* but is 10 times faster.
Furthermore, the *Selector* block presents the results of the three different selection strategies, where the best graph is chosen out of a set of predictions. Our strategy outperforms SMATCHavg by 0.4 points while having a similar computation time. These results demonstrate the effectiveness of our proposed ensembling approaches and suggest that they may be an alternative to traditional merging methods.
## 3.3 Analysis
While our model is able to effectively ensemble graphs or select the most accurate one from a set of predictions in an efficient and competitive manner, it is important to note that a higher SMATCH score does not always equate to the best graph if the graph has structural issues. This is because the SMATCH metric simply views the graph as a set of triplets. For example, the AMR graph illustrated in Figure 2(a) is treated as the following triplets:

```
( empty , : root , z0 ) ^
( z0 , : instance , schedule -01) ^
( z0 , : ARG0 , z1 ) ^
( z1 , : instance , person ) ^
( z1 , : name , z2 ) ^
( z2 , : instance , name ) ^
( z2 , : op1 , " Antonio ") ^
( z2 , : op2 , " Banderas ") ^
( z0 , : ARG1 , z3 ) ^
( z3 , : instance , premiere -01) ^
( z3 , : ARG0 , z1 ) ^
( z3 , : ARG1 , z4 ) ^
( z4 , : instance , movie ) ^
( z4 , : poss , z1 ) ^
( z0 , : ARG3 , z5 ) ^
( z5 , : instance , date - entity ) ^
( z5 , : time , "15:00")
```
![3_image_0.png](3_image_0.png)
![3_image_1.png](3_image_1.png)

![4_image_0.png](4_image_0.png)

SMATCH calculates the degree of overlapping between two sets of triplets, but it does not consider the implicit AMR constraints. To address this problem, we develop an algorithm that checks some AMR violations in graphs: i) non-predicate nodes with :ARG relations, ii) predicate nodes with :op or :snt relations, iii) compositional issues in entity structures, and iv) compositional issues in connector structures. The *Corrupt.* column in Table 2 shows the number of graphs with structural problems out of 1898 graphs. This highlights the limitation of previous ensemblers, such as Graphene, which do not consider these structural constraints.
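A simplified version of such a validation step is sketched below; it only covers checks i) and ii), uses a heuristic PropBank-suffix test to detect predicates, and is not the authors' exact algorithm.

```python
import re

def check_amr_constraints(triples):
    """Flag structural violations i) and ii) on a graph given as
    (source, relation, target) triples. Predicates are detected via the
    PropBank-style sense suffix (e.g., schedule-01), which is a heuristic;
    the compositional checks iii) and iv) are omitted here."""
    concepts = {src: tgt for src, rel, tgt in triples if rel == ":instance"}
    is_predicate = {v: bool(re.search(r"-\d{2,3}$", c)) for v, c in concepts.items()}
    violations = []
    for src, rel, tgt in triples:
        if rel == ":instance" or rel.endswith("-of"):   # skip concepts and inverse roles
            continue
        if rel.startswith(":ARG") and not is_predicate.get(src, False):
            violations.append((src, rel, tgt, ":ARG relation on a non-predicate node"))
        if rel.startswith((":op", ":snt")) and is_predicate.get(src, False):
            violations.append((src, rel, tgt, ":op/:snt relation on a predicate node"))
    return violations

# Counting the graphs with a non-empty violation list yields numbers comparable
# to the Corrupt. column of Table 2 (under the simplifications noted above).
```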
Ensembling As demonstrated in the *Corrupt.*
column of Table 2, our ensemble method has a significantly lower number of graphs with structural issues (0.3%) as compared to Graphene*base* and Graphene*smatch* (13.7-19.7%). This is because previous ensemble models are only focused on achieving a higher SMATCH metric, interpreting the graphs just as a set of triplets. This leads to ensembled graphs with violations of AMR guidelines and semantic inconsistencies. Figure 2(d) shows the Graphene*smatch* generated graph, and Figures 2(b) and 2(c) the two predictions used for ensembling. The Graphene*smatch* graph presents multiple AMR violations that are not in its predictions, e.g.,
the *premiere* node is connected to the *movie* node with two different relations because Graphene cannot decide which is the correct edge (both relations have the same probability), and one of the relations is an argument relation (i.e., :ARG), which cannot be used with non-predicate nodes since their meaning is encoded in PropBank frames.
SMATCH Graphene results in Table 2 are competitive despite having a higher percentage of structural issues in the ensembled graphs. This discrepancy can be attributed to the inherent properties of the SMATCH metric, which penalizes missing triplets more than wrong triplets. For example, the ensembled graph of Figure 2(d) obtains a higher SMATCH score than the prediction of Figure 2(b),
since, in case of doubt, selecting both triplets (relations ARG1 and mod) from node *premiere* to node *movie* results in a higher score than selecting only the wrong triplet. This illustrates how current ensemble models exploit SMATCH weaknesses to attain higher scores. In contrast, our approaches provide competitive results while also being more robust to AMR constraints.
Furthermore, as highlighted in Opitz and Frank
(2022), the current scores of AMR parsers and ensemblers (around 0.83 and 0.84, respectively) are higher than the average annotator vs. consensus inter-annotator agreement reported in Banarescu et al. (2013) (0.83 and 0.79 in newswire and web text, respectively). Additionally, WWLK results in Table 2 show how SPRING3 predictions achieve comparable results to all ensemble models. Therefore, given the issues discussed above, the suitability of SMATCH for evaluating the model's performance beyond 0.83 has to be called into question.
## 4 Conclusion
In this paper, we leveraged self-supervised pretraining and a denoising autoencoder architecture to achieve strong results in merging AMR graph predictions. We also introduced two novel approaches for ensembling that select the best prediction from a set of candidates using simple and efficient perplexity score functions. These results suggest that the selection strategy is a promising alternative for ensembling, since it achieves competitive performance while being less expensive.
Furthermore, we developed an algorithm that checks the structural AMR constraints in parsing outputs. This allowed us to perform an analysis that revealed how previous ensemble models produce higher score graphs but exploit SMATCH
weaknesses that lead to increased structural issues.
Overall, our findings highlight the need for more robust evaluation metrics and ensemble models that are designed to adhere to the structural constraints.
## 5 Limitations
Our proposed ensemble approach for training the Transformer architecture has demonstrated promising results for the task of AMR ensembling. However, there are limitations that warrant further investigation in future research.
Our first limitation is the lack of generalization, as the approach was only evaluated on AMR parsing. Therefore, the application of an autoregressive ensembling model has not yet been tested on other Natural Language Processing tasks.
Moreover, in order to properly compare each ensemble system under the same conditions, we base all our experiments on the same underlying architecture, i.e., SPRING. These approaches should also be explored with more recent, better-performing parsers; however, this will require access to such systems.
Furthermore, the computational cost is also a limitation: even though our proposed merger method, Assemble!, is more efficient than previous ensemblers, it is still computationally expensive, particularly when we have to ensemble long graphs from multiple predictions. Moreover, as our Assemble! model is based on LongT5, it might be challenged when working with large datasets or when running experiments on resource-constrained systems. Therefore, we encourage the use of ensembling strategies focused on selecting the best graphs instead of merging.
Lastly, as our ensemble approach is based on Transformer, results can be difficult to interpret, as it can be challenging to understand how the generated graph has been ensembled by different predictions, leading to a lack of interpretability.
In summary, the proposed ensemble approach for training the Transformer architecture has shown promising results for the task of AMR ensembling and has the potential to be applied to other tasks, however, further research is necessary to address its limitations and improve performance.
## 6 Ethics Statement
Regarding the ethical and social implications of our approach for AMR ensembling, we do not believe it could have a negative impact. However, since ethical considerations are an important aspect of any research and development project, we will discuss a few ethical considerations here.
First, one potential concern is the use of Transformer-based models, which have been shown to perpetuate societal biases present in the data used for training. Our approach relies on the use of these models, and it is crucial to ensure that the data used for training is diverse and unbiased.
Second, it is important to consider the potential impact of the proposed ensemble strategies on marginalized communities. It is possible that these strategies may inadvertently perpetuate or amplify existing biases in the data used to train and test these systems. Therefore, it is important to ensure that the proposed ensemble strategies are tested on a diverse set of data and that any biases are identified and addressed.
In conclusion, the proposed ensemble strategies in this paper can potentially have positive impact on the field of AMR parsing, however, it is important to consider the ethical implications of this research and take steps to mitigate any potential negative consequences.
## Acknowledgments
The authors gratefully acknowledge the support of the European Union's Horizon 2020 research project Knowledge Graphs at Scale (KnowGraphs) under the Marie Skłodowska-Curie grant agreement No 860801.
The last author gratefully acknowledges the support of the PNRR MUR project PE0000013-FAIR. The authors sincerely thank Lorenzo Proietti and Stefano Perrella for their contribution to this project.
## References
Rafael Anchiêta and Thiago Pardo. 2020. Semantically inspired AMR alignment for the Portuguese language.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing (EMNLP),
pages 1595–1600, Online. Association for Computational Linguistics.
Xuefeng Bai, Yulong Chen, and Yue Zhang. 2022.
Graph pre-training for AMR parsing and generation.
In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 6001–6015, Dublin, Ireland.
Association for Computational Linguistics.
Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for sembanking. In *Proceedings of the 7th Linguistic*
Annotation Workshop and Interoperability with Discourse, pages 178–186, Sofia, Bulgaria. Association for Computational Linguistics.
Guntis Barzdins and Didzis Gosko. 2016. RIGA at SemEval-2016 task 8: Impact of Smatch extensions and character-level neural translation on AMR parsing accuracy. In *Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval2016)*, pages 1143–1147, San Diego, California. Association for Computational Linguistics.
Michele Bevilacqua, Rexhina Blloshmi, and Roberto Navigli. 2021. One SPRING to Rule Them Both:
Symmetric AMR semantic Parsing and Generation without a Complex Pipeline. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14):12564–12573.
Rexhina Blloshmi, Michele Bevilacqua, Edoardo Fabiano, Valentina Caruso, and Roberto Navigli. 2021.
SPRING Goes Online: End-to-End AMR Parsing and Generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 134–142, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Rexhina Blloshmi, Rocco Tripodi, and Roberto Navigli.
2020. XL-AMR: enabling cross-lingual AMR parsing with transfer learning techniques. In *Proceedings* of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 2487–2500. Association for Computational Linguistics.
Claire Bonial, Stephanie M. Lukin, David Doughty, Steven Hill, and Clare Voss. 2020. InfoForager:
Leveraging semantic search with AMR for COVID19 research. In *Proceedings of the Second International Workshop on Designing Meaning Representations*, pages 67–77, Barcelona Spain (online). Association for Computational Linguistics.
Shu Cai and Kevin Knight. 2013. Smatch: an evaluation metric for semantic feature structures. In *Proceedings of the 51st Annual Meeting of the Association* for Computational Linguistics (Volume 2: Short Papers), pages 748–752, Sofia, Bulgaria. Association for Computational Linguistics.
Liang Chen, Peiyi Wang, Runxin Xu, Tianyu Liu, Zhifang Sui, and Baobao Chang. 2022. ATP: AMRize then parse! enhancing AMR parsing with PseudoAMRs. In *Findings of the Association for Computational Linguistics: NAACL 2022*, pages 2482–2496, Seattle, United States. Association for Computational Linguistics.
Ziming Cheng, Zuchao Li, and Hai Zhao. 2022. BiBL:
AMR parsing and generation with bidirectional Bayesian learning. In *Proceedings of the 29th International Conference on Computational Linguistics*,
pages 5461–5475, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Marco Damonte, Shay B. Cohen, and Giorgio Satta.
2017. An incremental parser for Abstract Meaning Representation. In *Proceedings of the 15th Conference of the European Chapter of the Association* for Computational Linguistics: Volume 1, Long Papers, pages 536–546, Valencia, Spain. Association for Computational Linguistics.
Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, and Yinfei Yang.
2022. LongT5: Efficient text-to-text transformer for long sequences. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 724–
736, Seattle, United States. Association for Computational Linguistics.
Hardy Hardy and Andreas Vlachos. 2018. Guided neural language generation for abstractive summarization using Abstract Meaning Representation. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 768–773, Brussels, Belgium. Association for Computational Linguistics.
Pavan Kapanipathi, Ibrahim Abdelaziz, Srinivas Ravishankar, Salim Roukos, Alexander Gray, Ramón Fernandez Astudillo, Maria Chang, Cristina Cornelio, Saswati Dana, Achille Fokoue, Dinesh Garg, Alfio Gliozzo, Sairam Gurajada, Hima Karanam, Naweed Khan, Dinesh Khandelwal, Young-Suk Lee, Yunyao Li, Francois Luus, Ndivhuwo Makondo, Nandana Mihindukulasooriya, Tahira Naseem, Sumit Neelam, Lucian Popa, Revanth Gangi Reddy, Ryan Riegel, Gaetano Rossiello, Udit Sharma, G P Shrivatsa Bhargav, and Mo Yu. 2021. Leveraging Abstract Meaning Representation for knowledge base question answering. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 3884–3894, Online. Association for Computational Linguistics.
Hoang Thanh Lam, Gabriele Picco, Yufang Hou, YoungSuk Lee, Lam M. Nguyen, Dzung T. Phan, Vanessa López, and Ramon Fernandez Astudillo. 2021. Ensembling Graph Predictions for AMR Parsing.
Young-Suk Lee, Ramón Astudillo, Hoang Thanh Lam, Tahira Naseem, Radu Florian, and Salim Roukos.
2022. Maximum Bayes Smatch ensemble distillation for AMR parsing. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5379–5392, Seattle, United States. Association for Computational Linguistics.
Kexin Liao, Logan Lebanoff, and Fei Liu. 2018. Abstract Meaning Representation for multi-document summarization. In *Proceedings of the 27th International Conference on Computational Linguistics*,
pages 1178–1190, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Jungwoo Lim, Dongsuk Oh, Yoonna Jang, Kisu Yang, and Heuiseok Lim. 2020. I know what you asked:
Graph path learning using AMR for commonsense reasoning. In Proceedings of the 28th International Conference on Computational Linguistics, pages 2459–2471, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Abelardo Carlos Martínez Lorenzo, Marco Maru, and Roberto Navigli. 2022. Fully-Semantic Parsing and Generation: the BabelNet Meaning Representation.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1727–1741, Dublin, Ireland.
Association for Computational Linguistics.
Roberto Navigli. 2018. Natural language understanding:
Instructions for (present and future) use. In *Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July* 13-19, 2018, Stockholm, Sweden, pages 5697–5702.
ijcai.org.
Roberto Navigli, Rexhina Blloshmi, and Abelardo Carlos Martinez Lorenzo. 2022. BabelNet Meaning Representation: A Fully Semantic Formalism to Overcome Language Barriers. *Proceedings of the AAAI*
Conference on Artificial Intelligence, 36.
Juri Opitz, Angel Daza, and Anette Frank. 2021.
Weisfeiler-leman in the bamboo: Novel AMR graph metrics and a benchmark for AMR graph similarity.
Transactions of the Association for Computational Linguistics, 9:1425–1441.
Juri Opitz and Anette Frank. 2022. Better Smatch = better parser? AMR evaluation is not so simple anymore.
In Proceedings of the 3rd Workshop on Evaluation and Comparison of NLP Systems, pages 32–43, Online. Association for Computational Linguistics.
Juri Opitz, Letitia Parcalabescu, and Anette Frank. 2020.
AMR similarity metrics from principles. *Transactions of the Association for Computational Linguistics*, 8:522–538.
K. Elif Oral and Gülşen Eryiğit. 2022. AMR alignment for morphologically-rich and pro-drop languages. In
Association for Computational Linguistics.
Sudha Rao, Daniel Marcu, Kevin Knight, and Hal Daumé III. 2017. Biomedical event extraction using Abstract Meaning Representation. In *BioNLP 2017*,
pages 126–135, Vancouver, Canada,. Association for Computational Linguistics.
Linfeng Song, Daniel Gildea, Yue Zhang, Zhiguo Wang, and Jinsong Su. 2019. Semantic neural machine translation using AMR. *Transactions of the Association for Computational Linguistics*, 7:19–31.
Ledell Wu, Fabio Petroni, Martin Josifoski, Sebastian Riedel, and Luke Zettlemoyer. 2020. Scalable zeroshot entity linking with dense entity retrieval. In Proceedings of the 2020 Conference on Empirical
Methods in Natural Language Processing (EMNLP),
pages 6397–6407, Online. Association for Computational Linguistics.
Chen Yu and Daniel Gildea. 2022. Sequence-tosequence AMR parsing with ancestor information.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 571–577, Dublin, Ireland.
Association for Computational Linguistics.
## A Model Hyper-Parameters
Table 3 lists hyperparameters and search space for the experiments with SPRING models and our Assemble!. The masking probabilities of the pre-training tasks are: i) $\bar{t}\hat{g}2g$ – 0.35%, ii) $t\hat{g}2g$ – 0.35%, iii) $\hat{t}\hat{g}2g$ – from 0.15% to 0.85%, incrementing by epoch, iv) $\bar{t}\hat{g}_1\ldots\hat{g}_k2g$ – 0.55%, and v) $\hat{t}\hat{g}_1\ldots\hat{g}_k2g$ – 0.55%.
| Group | Parameter | Values |
|---------------------------------------------------------|--------------|-----------|
| Optimizer | Adafactor | |
| Batch size | 1 | |
| Dropout | 0.2 | |
| Attent. dropout | 0.0 | |
| Grad. accum. | 32 | |
| Weight decay | 0.01 | |
| LR | 0.0001 | |
| LR sched. | Inverse sqrt | |
| Beamsize | 5 | |
| Pre-training | Optimizer | Adafactor |
| Batch size | 1.0 | |
| Dropout | 0.1 | |
| Attent. dropout | 0.0 | |
| Grad. accum. | 32.0 | |
| Weight decay | 0.01 | |
| LR | 0.00001 | |
| LR | Constant | |
| Beamsize | 5 | |
| Fine-tuning | | |
Table 3: Final hyperparameters and search space for the experiments.
## B Hardware And Size Of The Model
We performed experiments on a single NVIDIA
3090 GPU with 64GB of RAM and Intel® Core™
i9-10900KF CPU. The total number of trainable parameters of SKD is 434,883,596. The pre-training phase on the silver data requires 168 hours, whereas fine-tuning requires 216 hours.
## C Blink
All systems from Table 2 use BLINK (Wu et al.,
2020) for wikification. For this purpose, we used the *blinkify.py* script from the SPRING repository.
## D Metric
To evaluate the predictions, we use the SMATCH
metric and the extra scores of Damonte et al.
(2017): i) Unlabel, compute on the predicted graphs after removing all edge labels, ii) No WSD,
compute while ignoring Propbank senses (e.g.,
duck-01 vs duck-02), *iii)* Wikification, F-score on the wikification (:wiki roles), iv) NER, F-score on the named entity recognition (:name roles), v)
Negations, F-score on the negation detection (:polarity roles), vi) Concepts, F-score on the concept identification task, *vii)* Reentrancy, computed on reentrant edges only, *viii)* Semantic Role Labeling
(SRL), computed on :ARG-i roles only.
## E Data
The AMR 3.0 data used in this paper is licensed under the *LDC User Agreement for Non-Members* for LDC subscribers, which can be found here.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 5
✓ A2. Did you discuss any potential risks of your work?
Section 6
✓ A3. Do the abstract and introduction summarize the paper's main claims?
In the abstract and in the introduction section.
✓ A4. Have you used AI writing assistants when working on this paper?
We used Grammarly to check the English of our paper.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2 And 3
✓ B1. Did you cite the creators of artifacts you used?
Yes, Section 1 and 3
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Appendix
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
The licenses are self-explanatory and discussed in the Appendix.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
The dataset has been widely used, and was already scrutinised for personal information before our use.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 2
## C ✓ **Did You Run Computational Experiments?** Section 3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix

The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 3
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
We compare with previous approaches and use their implementation (Section 3)
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
liu-etal-2023-molxpt | {M}ol{XPT}: Wrapping Molecules with Text for Generative Pre-training | https://aclanthology.org/2023.acl-short.138 | Generative pre-trained Transformer (GPT) has demonstrated its great success in natural language processing, and related techniques have been adapted into molecular modeling. Considering that text is the most important record for scientific discovery, in this paper, we propose MolXPT, a unified language model of text and molecules pre-trained on SMILES (a sequence representation of molecules) wrapped by text. Briefly, we detect the molecule names in each sequence and replace them with the corresponding SMILES. In this way, the SMILES could leverage the information from surrounding text, and vice versa. The above wrapped sequences, text sequences from PubMed and SMILES sequences from PubChem are all fed into a language model for pre-training. Experimental results demonstrate that MolXPT outperforms strong baselines of molecular property prediction on MoleculeNet, performs comparably to the best model in text-molecule translation while using less than half of its parameters, and enables zero-shot molecular generation without finetuning. |
## MolXPT: Wrapping Molecules with Text for Generative Pre-training
Zequn Liu1∗, Wei Zhang2∗, Yingce Xia3†, Lijun Wu3, Shufang Xie4, Tao Qin3, Ming Zhang1† and Tie-Yan Liu3
1 Peking University; 2 University of Science and Technology of China; 3 Microsoft Research AI4Science; 4 Renmin University of China
{zequnliu,mzhang_cs}@pku.edu.cn; [email protected]
{yingce.xia, lijunwu, taoqin, tyliu}@microsoft.com [email protected]
## Abstract
Generative pre-trained Transformer (GPT) has demonstrated its great success in natural language processing, and related techniques have been adapted into molecular modeling. Considering that text is the most important record for scientific discovery, in this paper, we propose MolXPT, a unified language model of text and molecules pre-trained on SMILES (a sequence representation of molecules) wrapped by text. Briefly, we detect the molecule names in each sequence and replace them with the corresponding SMILES. In this way, the SMILES
could leverage the information from surrounding text, and vice versa. The above wrapped sequences, text sequences from PubMed and SMILES sequences from PubChem are all fed into a language model for pre-training. Experimental results demonstrate that MolXPT outperforms strong baselines of molecular property prediction on MoleculeNet, performs comparably to the best model in text-molecule translation while using less than half of its parameters, and enables zero-shot molecular generation without finetuning.
## 1 Introduction
Generative pre-trained Transformer (GPT), like GPT-3 (Brown et al., 2020) and ChatGPT (OpenAI, 2022), have obtained great success in natural language processing. They usually have billions of parameters and are trained on large corpus (Taylor et al., 2022; Singhal et al., 2022). By witnessing their great power, people start transferring language models to chemical (Bagal et al., 2022) and biological domains (Ferruz et al., 2022). For example, a small molecule (e.g., an oral drug) can be represented using simplified molecular-input lineentry system (SMILES) (Weininger, 1988), which is a sequence obtained by traversing the molecular graph using depth-first-search and several rules for branching, aromaticity, etc. After serializing molecules, people pre-train language models on SMILES (Bagal et al., 2022; Tong et al., 2021; Frey et al., 2022) and obtain promising results for molecular generation.
Text is the most important record for molecular science and more generally, scientific discovery (Beltagy et al., 2019). It describes detailed properties of molecules, like how to synthesize the molecule (Feng et al., 2016), whether the molecule is toxic (Juurlink et al., 2003), etc. BioGPT (Luo et al., 2022) and PubMedGPT (Bolton et al., 2022)
are two language models trained on biomedical literature. Recently, a new trend is to jointly model SMILES and scientific text so as to obtain shared representations across the two modalities. MolT5 is a T5-like (Raffel et al., 2020) model, where several spans of the text/SMILES are masked in the encoder and they should be reconstructed in the decoder. Galactica (Taylor et al., 2022) is a GPTlike (Brown et al., 2020) model pre-trained on various types of inputs, like text, SMILES, protein sequences, etc. Although those models demonstrate progress in prediction and generation tasks, they do not explicitly leverage the relation between molecules and text. An intuition is that, in scientific literature, when a molecule name appears in a sentence, the surrounding context could be a description of the molecule. This should be useful information for joint training but is ignored in those models.
To leverage such relations, in this work, we propose a novel molecule-text language model
(MolXPT), which is trained on "wrapped" sequences: Given a sentence, we detect the molecular names with named entity recognition tools, and if any, replace them with the corresponding SMILES and obtain the "wrapped" sequence between SMILES and text. We pre-train a 24-layer MolXPT (with 350M parameters) on 8M wrapped sequences, as well as 30M SMILES from PubChem
![1_image_0.png](1_image_0.png)
(Kim et al., 2022) and 30M titles and abstracts from PubMed (a popular biomedical literature search engine).
After pre-training, we finetune MolXPT on MoleculeNet (a benchmark about molecular property prediction) (Wu et al., 2018) and molecule-text translation (Edwards et al., 2022) using promptbased finetuning. On MoleculeNet, MolXPT outperforms strong baselines with sophisticated design like GEM (Fang et al., 2022). On text-molecule translation, MolXPT performs comparably with the state-of-the-art model, MolT5-large (Edwards et al., 2022). MolT5-large has 800M parameters while MolXPT only uses 44% of its parameters.
We also verify that MolXPT has the zero-shot ability on text-to-molecule generation.
## 2 Our Method
MolXPT is a language model pre-trained on heterogeneous data including scientific text, SMILES
sequences, and "wrapped" sequences between SMILES and text. Due to the flexible input, we can finetune it for various text and molecular tasks.
The framework of MolXPT is in Figure 1.
## 2.1 Pre-Training Corpus
For scientific text, we use the titles and abstracts of 30M papers from PubMed1. For molecular SMILES, we randomly choose 30M molecules from PubChem2(Kim et al., 2022).
The wrapped sequences are constructed via a
"detect and replace" pipeline. We first use BERN2
(Sung et al., 2022), a widely used named entity recognition (NER) tool for biomedical purpose, to detect all mentions of molecules and link them to the entities in public knowledge bases like ChEBI
1https://ftp.ncbi.nlm.nih.gov/pubmed/
2https://pubchem.ncbi.nlm.nih.gov/
(Hastings et al., 2016). After that, we can retrieve the molecular SMILES of the matched entities. Finally, we replace the molecular mentions with their corresponding SMILES. An example is shown in the left panel of Figure 1. The wrapped sequences must contain at least one molecular SMILES. We eventually obtain 8M wrapped sequences in total.
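As an illustration of the "detect and replace" step, the sketch below assumes the NER and entity-linking stage has already produced character spans with linked SMILES; the span format, the helper name and the aspirin example are ours, not part of the released pipeline.

```python
def wrap_sentence(sentence, mentions):
    """Replace detected molecule mentions by their SMILES, producing a
    "wrapped" sequence with molecule boundary tokens.

    `mentions` is assumed to be a list of (start, end, smiles) character spans
    already produced by the NER + entity-linking step (BERN2 + ChEBI lookup)."""
    pieces, prev = [], 0
    for start, end, smiles in sorted(mentions):
        pieces.append(sentence[prev:start])
        pieces.append(f"<som> {smiles} <eom>")
        prev = end
    pieces.append(sentence[prev:])
    return "".join(pieces)

# Hypothetical example:
# wrap_sentence("Aspirin inhibits COX enzymes.",
#               [(0, 7, "CC(=O)OC1=CC=CC=C1C(=O)O")])
# -> '<som> CC(=O)OC1=CC=CC=C1C(=O)O <eom> inhibits COX enzymes.'
```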
Text and SMILES are tokenized separately. For text, we use byte-pair encoding (BPE) (Sennrich et al., 2016) to split the words into subwords.
The number of BPE merge operations is 40k. For SMILES sequences (including those in wrapped sequences), we tokenize them with the regular expression from Schwaller et al. (2018). For each SMILES sequence S, we add a start-of-molecule token ⟨som⟩ at the beginning of S and append an end-of-molecule token ⟨eom⟩ at the end of S.
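The pattern below follows the SMILES tokenization regex commonly attributed to Schwaller et al. (2018); treat the exact pattern and the wrapper function as an approximation rather than the paper's verbatim tokenizer.

```python
import re

# Pattern in the spirit of Schwaller et al. (2018); an approximation, not the
# exact tokenizer used by MolXPT.
SMILES_PATTERN = re.compile(
    r"(\[[^\]]+\]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\.|=|#|-|\+|\\|\/|:|~|@|\?|>|\*|\$|%[0-9]{2}|[0-9])"
)

def tokenize_smiles(smiles):
    """Split a SMILES string into chemically meaningful tokens and add the
    molecule boundary tokens used by MolXPT."""
    return ["<som>"] + SMILES_PATTERN.findall(smiles) + ["<eom>"]

# tokenize_smiles("CC(=O)O") -> ['<som>', 'C', 'C', '(', '=', 'O', ')', 'O', '<eom>']
```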
## 2.2 Model And Training
Model architecture: MolXPT has the same architecture as the GPT models (Radford et al., 2019).
Due to computational resource limitation, in this paper, we follow the GPT-2medium configuration with 24 layers, 1024 hidden size and 16 attention heads. The maximum length of input we can process is 2048 and the vocabulary size is 44536. In total, our model has 350M parameters.
Pre-training: The pre-training objective function of MolXPT is the negative log-likelihood. Mathematically, let $\mathcal{D} = \{x_i\}_i$ denote the collection of sequences of the three types of data, where $x_i = (s_{i,1}, s_{i,2}, \cdots, s_{i,n_i})$ is the i-th sequence with $n_i$ tokens. The training objective function is:

$$\operatorname*{min}-{\frac{1}{|\mathcal{D}|}}\sum_{i=1}^{|\mathcal{D}|}\sum_{j=1}^{n_{i}}\log P(s_{i,j}|s_{i,j-1},s_{i,j-2},\cdots,s_{i,1}).$$
The pre-training details are left in Appendix B.
| Dataset | BBBP | Tox21 | ClinTox | HIV | BACE | SIDER | Avg |
|--------------|------------|------------|------------|------------|------------|------------|-------|
| #Molecules | 2039 | 7831 | 1478 | 41127 | 1513 | 1478 | |
| G-Contextual | 70.3 ± 1.6 | 75.2 ± 0.3 | 59.9 ± 8.2 | 75.9 ± 0.9 | 79.2 ± 0.3 | 58.4 ± 0.6 | 69.8 |
| G-Motif | 66.4 ± 3.4 | 73.2 ± 0.8 | 77.8 ± 2.0 | 73.8 ± 1.4 | 73.4 ± 4.0 | 60.6 ± 1.1 | 70.9 |
| GROVERbase | 70.0 ± 0.1 | 74.3 ± 0.1 | 81.2 ± 3.0 | 62.5 ± 0.9 | 82.6 ± 0.7 | 64.8 ± 0.6 | 72.6 |
| GROVERlarge | 69.5 ± 0.1 | 73.5 ± 0.1 | 76.2 ± 3.7 | 68.2 ± 1.1 | 81.0 ± 1.4 | 65.4 ± 0.1 | 72.3 |
| GraphMVP | 72.4 ± 1.6 | 75.9 ± 0.5 | 79.1 ± 2.8 | 77.0 ± 1.2 | 81.2 ± 0.9 | 63.9 ± 1.2 | 74.9 |
| MGSSL | 70.5 ± 1.1 | 76.5 ± 0.3 | 80.7 ± 2.1 | 79.5 ± 1.1 | 79.7 ± 0.8 | 61.8 ± 0.8 | 74.8 |
| GEM | 72.4 ± 0.4 | 78.1 ± 0.1 | 90.1 ± 1.3 | 80.6 ± 0.9 | 85.6 ± 1.1 | 67.2 ± 0.4 | 79.0 |
| KV-PLM | 74.6 ± 0.9 | 72.7 ± 0.6 | - | 74.0 ± 1.2 | - | 61.5 ± 1.5 | - |
| Galactica | 66.1 | 68.9 | 82.6 | 74.5 | 61.7 | 63.2 | 69.5 |
| MoMu | 70.5 ± 2.0 | 75.6 ± 0.3 | 79.9 ± 4.1 | 76.2 ± 0.9 | 77.1 ± 1.4 | 60.5 ± 0.9 | 73.3 |
| MolXPT | 80.0 ± 0.5 | 77.1 ± 0.2 | 95.3 ± 0.2 | 78.1 ± 0.4 | 88.4 ± 1.0 | 71.7 ± 0.2 | 81.9 |
Prompt-based finetuning: MolXPT can be finetuned for downstream tasks about molecules and text. Adding classification or regression heads to pre-trained backbone models introduces the gap between pre-training and finetuning (Brown et al.,
2020; Chen et al., 2022; Gu et al., 2022). Therefore, we adopt prompt-based finetuning (Gao et al.,
2021) to unify different tasks into a sequence generation task, which is consistent with the pre-training objective. Briefly, given a task, we convert the input and output into text and/or SMILES sequences, equip the sequences with task-specific prompts and finetune using language modeling loss. Prompts for MoleculeNet and text-molecule translation are introduced in the Section 3.1 and 3.2 respectively.
Discussion: Some works also try to jointly model text and molecules. Zeng et al. (2022) propose KV-PLM, where SMILES sequences are appended after molecule names for pre-training. Su et al.
(2022) use contrastive learning between text and molecular graphs. Our MolXPT is a generative model while the above two models are not. Both of them are built upon SciBERT (Beltagy et al., 2019),
a BERT model (Devlin et al., 2019) for scientific literature. MolXPT is complementary to them.
## 3 Experiments
We evaluated MolXPT on two downstream tasks:
(1) molecular property prediction on MoleculeNet
(Wu et al., 2018), which is to predict whether the given molecule has specific properties; (2) the generation between text descriptions and molecules
(Edwards et al., 2022), where both molecules and text should be considered. In this section, we focus on introducing task definition, prompt design and results while leaving the detailed finetuning hyper-parameters in Appendix C.
## 3.1 Results On Moleculenet
MoleculeNet (Wu et al., 2018) is a widely-used benchmark for molecular modeling, which has more than 700k compounds for various properties. We choose six molecular classification tasks for evaluation, which are BBBP, Tox21, ClinTox, HIV, BACE and SIDER. Details are left in Appendix A. We follow GEM (Fang et al., 2022)
to split the data into training/validation/test sets based on the scaffold. For these tasks, the input is a SMILES and the output is a binary label.
Finetuning strategy: Previous molecular property prediction models mainly use SMILES sequences or molecular graphs as input, while we can use the "wrapped" sequences. For example, one task is to predict the blood-brain barrier penetration
(BBBP) of a molecule. Therefore, the prompt is
"*We can conclude that the BBB penetration of*⟨som⟩
⟨SMILES⟩ ⟨eom⟩ is ⟨tag⟩", where ⟨SMILES⟩ denotes the molecular SMILES, and ⟨tag⟩ denotes the classification result. For the BBBP task, we design
⟨tag⟩ as "true" or "false", indicating whether the compound can or cannot cross BBB.
Different tasks have different prompts (see Appendix C.1), but we put the tags at the last token of the prompt for all tasks. Let $(s_{i,1}, s_{i,2}, \cdots, s_{i,T_i})$ denote the i-th wrapped sequence for the downstream task with $T_i$ tokens, where $s_{i,T_i}$ is the tag of the sequence. Suppose there are N samples for finetuning. The finetuning strategy could be either
$$\operatorname*{min}-{\frac{1}{N}}\sum_{i=1}^{N}\log P(s_{i,T_{i}}|s_{i,<T_{i}}),\qquad(1)$$
indicating that we finetune the tags only, or
$$\operatorname*{min}-{\frac{1}{N}}\sum_{i=1}^{N}{\frac{1}{T_{i}}}\sum_{j=1}^{T_{i}}\log P(s_{i,j}|s_{i,<j}),\quad(2)$$
indicating that we finetune the full prompts. According to our exploration, Eqn.(1) achieves slightly better results and we use it for all tasks
(see Appendix C.4 for the results).
Let ptrue and pfalse denote the probabilities of tags "true" and "false" after encoding the prefix "*We can conclude that the BBB penetration* of ⟨som⟩ ⟨SMILES⟩ ⟨eom⟩ is". The probabilities that ⟨SMILES⟩ can and cannot cross blood-brain barrier are normalized as ptrue/(ptrue + pfalse) and pfalse/(ptrue + pfalse) respectively. The finetuning hyper-parameters are in Appendix C.2.
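To make the normalization of ptrue and pfalse concrete, here is a hedged sketch for the BBBP prompt; the handling of ⟨som⟩/⟨eom⟩ as plain strings and the assumption that "true"/"false" begin with a single subword are simplifications, not the released MolXPT interface.

```python
import torch

@torch.no_grad()
def bbbp_probability(model, tokenizer, smiles):
    """Normalized probability that a molecule crosses the blood-brain barrier,
    using the BBBP prompt above. Assumes the tags "true"/"false" start with a
    single subword and that <som>/<eom> are in the vocabulary (simplifications)."""
    prefix = f"We can conclude that the BBB penetration of <som> {smiles} <eom> is"
    ids = tokenizer(prefix, return_tensors="pt").input_ids
    next_token_logits = model(ids).logits[0, -1]     # next-token distribution
    probs = torch.softmax(next_token_logits, dim=-1)
    p_true = probs[tokenizer(" true").input_ids[0]].item()
    p_false = probs[tokenizer(" false").input_ids[0]].item()
    return p_true / (p_true + p_false)
```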
We compare MolXPT with two types of baselines: (1) pre-trained language model baselines including KV-PLM (Zeng et al., 2022),
Galactica (Taylor et al., 2022) and MoMu (Su et al., 2022). (2) pre-trained Graph Neural Network (GNN) baselines including GContextual (Rong et al., 2020), G-Motif (Rong et al., 2020), GROVERbase (Rong et al., 2020),
GROVERlarge (Rong et al., 2020), GraphMVP (Liu et al., 2022), MGSSL (Zhang et al., 2021) and GEM (Fang et al., 2022). The evaluation metric is the ROC-AUC score. The results are in Table 1.
MolXPT outperforms the GNN baselines pretrained on pure molecular data, indicating the effectiveness of pre-training with scientific text corpus. Compared with Galactica which also uses both SMILES and text for pre-training GPT-like model, MolXPT obtains better performance. Note that Galactica does not purposely build and train on the "wrapped" sequences, whose importance is demonstrated via our empirical results. A possible explanation of the superior performance is that the SMILES describes the component and structural information of molecules, while the text describes the general properties. They are complementary to each other, and joint training on them brings more effective representations.
## 3.2 Results On Text-Molecule Translation
We evaluated the performance of MolXPT on CheBI-20 (Edwards et al., 2021), a bidirectional text-molecule translation dataset. It consists of 33,010 molecule-description pairs. We use the data split provided by MolT5 (Edwards et al., 2022),
where the training, validation and test sets account 80%, 10% and 10% of total data. For molecule-totext generation, given a molecular SMILES S, the prompt is: "The description of ⟨som⟩ S ⟨eom⟩ is:
The molecule is", followed by the text description of S. For text-to-molecule generation, given a text description T, the prompt is: "T. The compound is ⟨som⟩", and the model will generate the molecular SMILES ended with ⟨eom⟩. We compare our method with MolT5 (Edwards et al., 2022).
For molecule-to-text generation, the results are evaluated by NLP metrics including BLEU (Papineni et al., 2002), Rouge (Lin, 2004) and METEOR (Banerjee and Lavie, 2005). "Text2mol" is a deep learning based metric proposed by Edwards et al. (2022) to measure the similarity of the text-molecule pairs. For text-to-molecule generation, we evaluate the following metrics: the proportion of the generated SMILES that exactly match the reference SMILES (denoted as "Exact"); the Tanimoto similarity of three types of fingerprints: MACCS (Durant et al., 2002), RDK
(Schneider et al., 2015) and Morgan (Rogers and Hahn, 2010); the FCD score (Preuer et al., 2018),
which measures the molecule distances by a pretrained model; the percentage of the valid generated SMILES. The results are reported in Table 2.
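The fingerprint-based metrics can be reproduced with RDKit roughly as follows; the Morgan radius and bit size are assumptions, since they are not stated here.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem, MACCSkeys

def fingerprint_similarities(smiles_pred, smiles_ref):
    """Tanimoto similarity of MACCS, RDK and Morgan fingerprints between a
    generated molecule and the reference; returns None for invalid SMILES."""
    pred = Chem.MolFromSmiles(smiles_pred)
    ref = Chem.MolFromSmiles(smiles_ref)
    if pred is None or ref is None:                  # invalid SMILES hurts Validity
        return None
    return {
        "MACCS": DataStructs.TanimotoSimilarity(
            MACCSkeys.GenMACCSKeys(pred), MACCSkeys.GenMACCSKeys(ref)),
        "RDK": DataStructs.TanimotoSimilarity(
            Chem.RDKFingerprint(pred), Chem.RDKFingerprint(ref)),
        "Morgan": DataStructs.TanimotoSimilarity(
            AllChem.GetMorganFingerprintAsBitVect(pred, 2, nBits=2048),
            AllChem.GetMorganFingerprintAsBitVect(ref, 2, nBits=2048)),
    }
```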
We observe that MolXPT achieves significantly better performance than MolT5-small and MolT5-base, and has comparable performance with MolT5-large. Note that MolT5-large has 800M
parameters while MolXPT only uses 44% of its parameters. For both tasks, our model performs the best on Text2Mol metric, indicating that MolXPT
captures the alignment between text and molecule better. We attribute it to the wrapped sequences, by which the model can learn the relation between molecule and text explicitly.
We further verify the zero-shot text-to-molecule generation ability of MolXPT. The pre-trained MolXPT takes the text as input and directly generates molecules without finetuning. The top-1 and top-5 fingerprint similarity is in Table 3. Indeed, compared with the full data setting, the performance drops, but still reasonable numbers. In addition, the zero-shot MolXPT successfully recovers 33 molecules based on the text (see Appendix D).
## 4 Conclusions And Future Work
We propose MolXPT, a generative model pretrained on scientific text, molecular SMILES and
| Molecule-to-text | BLEU-2 | BLEU-4 | Rouge-1 | Rouge-2 | Rouge-L | METEOR | Text2Mol |
|--------------------|----------|----------|-----------|-----------|-----------|-----------|------------|
| MolT5-small (77M) | 0.519 | 0.436 | 0.620 | 0.469 | 0.563 | 0.551 | 0.540 |
| MolT5-base (250M) | 0.540 | 0.457 | 0.634 | 0.485 | 0.578 | 0.569 | 0.547 |
| MolT5-Large (800M) | 0.594 | 0.508 | 0.654 | 0.510 | 0.594 | 0.614 | 0.582 |
| MolXPT (350M) | 0.594 | 0.505 | 0.660 | 0.511 | 0.597 | 0.626 | 0.594 |
| Text-to-molecule | Exact↑ | MACCS↑ | RDK↑ | Morgan↑ | FCD↓ | Text2mol↑ | Validity↑ |
| MolT5-small | 0.079 | 0.703 | 0.568 | 0.517 | 2.49 | 0.482 | 0.721 |
| MolT5-medium | 0.081 | 0.721 | 0.588 | 0.529 | 2.18 | 0.496 | 0.772 |
| MolT5-large | 0.311 | 0.834 | 0.746 | 0.684 | 1.20 | 0.554 | 0.905 |
| MolXPT | 0.215 | 0.859 | 0.757 | 0.667 | 0.45 | 0.578 | 0.983 |
| | MACCS | RDK | Morgan |
|---|-------|-----|--------|
| Zero-shot (Top-1) | 0.540 | 0.383 | 0.228 |
| Zero-shot (Top-5) | 0.580 | 0.423 | 0.423 |
| Full data (Top-1) | 0.841 | 0.746 | 0.660 |
Table 3: Zero-shot text-to-molecule generation.
their wrapped sequences. We train a 24-layer MolXPT with 350M parameters. By prompt-based finetuning, it improves strong baselines on MoleculeNet and achieves comparable results with the best model on molecule-text translation but using much fewer parameters.
For future work, first, we will train larger MolXPT to further verify the performances across different tasks and the zero-shot/in-context (Xie et al., 2022) learning ability. Second, how to further enhance the interaction between molecules and text (e.g., using contrastive learning to enhance consistency) should be studied. Third, how to effectively adapt MolXPT into other molecule and text tasks such as text-guided molecule optimization is another direction to explore.
## Limitations
One limitation of our method is that when training larger models, it requires more computation resources, whose cost is relatively high. However, after pre-training, we will release our models so that readers can directly use them without pre-training again.
## Broader Impacts
We provide a new generative model pre-trained on molecules and text. On one hand, the model can be used to speed up scientific discovery, such as molecule design and drug optimization. On the other hand, once the model is trained on clinical data (which also describes the usage of drug molecules), it might lead to leakage of personal information. We will enhance data filtering to anonymize all personal information, and will design new models to protect it.
## Acknowledgement
The authors Zequn Liu and Ming Zhang are partially supported by National Natural Science Foundation of China (NSFC Grant Number 62276002).
## References
Viraj Bagal, Rishal Aggarwal, P. K. Vinod, and U. Deva Priyakumar. 2022. Molgpt: Molecular generation using a transformer-decoder model. *Journal of Chemical Information and Modeling*, 62(9):2064–2076.
Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In *Proceedings* of the ACL workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pages 65–72.
Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615–
3620, Hong Kong, China. Association for Computational Linguistics.
Elliot Bolton, David Hall, Michihiro Yasunaga, Tony Lee, Chris Manning, and Percy Liang. 2022. PubMedGPT 2.7B.
Nathan Frey, Ryan Soklaski, Simon Axelrod, Siddharth Samsi, Rafael Gomez-Bombarelli, Connor Coley, and Vijay Gadepally. 2022. Neural scaling of deep chemical models. *ChemRxiv*.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In *Advances in Neural Information Processing Systems*,
volume 33, pages 1877–1901. Curran Associates, Inc.
Yulong Chen, Yang Liu, Li Dong, Shuohang Wang, Chenguang Zhu, Michael Zeng, and Yue Zhang.
2022. Adaprompt: Adaptive model training for prompt-based nlp. *arXiv preprint arXiv:2202.04824*.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Joseph L Durant, Burton A Leland, Douglas R Henry, and James G Nourse. 2002. Reoptimization of mdl keys for use in drug discovery. *Journal of chemical information and computer sciences*, 42(6):1273–
1280.
Carl Edwards, Tuan Lai, Kevin Ros, Garrett Honke, and Heng Ji. 2022. Translation between molecules and natural language. *arXiv preprint arXiv:2204.11817*.
Carl Edwards, ChengXiang Zhai, and Heng Ji. 2021.
Text2mol: Cross-modal molecule retrieval with natural language queries. In *Proceedings of the 2021* Conference on Empirical Methods in Natural Language Processing, pages 595–607.
Xiaomin Fang, Lihang Liu, Jieqiong Lei, Donglong He, Shanzhuo Zhang, Jingbo Zhou, Fan Wang, Hua Wu, and Haifeng Wang. 2022. Geometry-enhanced molecular representation learning for property prediction. *Nature Machine Intelligence*, 4(2):127–134.
Minghao Feng, Bingqing Tang, Steven H Liang, and Xuefeng Jiang. 2016. Sulfur containing scaffolds in drugs: synthesis and application in medicinal chemistry. *Current topics in medicinal chemistry*,
16(11):1200–1216.
Noelia Ferruz, Steffen Schmidt, and Birte Höcker. 2022.
Protgpt2 is a deep unsupervised language model for protein design. *Nature Communications*, 13(1):4348.
Tianyu Gao, Adam Fisch, and Danqi Chen. 2021.
Making pre-trained language models better few-shot learners. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 3816–3830.
Yuxian Gu, Xu Han, Zhiyuan Liu, and Minlie Huang.
2022. Ppt: Pre-trained prompt tuning for few-shot learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics
(Volume 1: Long Papers), pages 8410–8423.
Janna Hastings, Gareth Owen, Adriano Dekker, Marcus Ennis, Namrata Kale, Venkatesh Muthukrishnan, Steve Turner, Neil Swainston, Pedro Mendes, and Christoph Steinbeck. 2016. Chebi in 2016: Improved services and an expanding collection of metabolites.
Nucleic acids research, 44(D1):D1214–D1219.
David N Juurlink, Muhammad Mamdani, Alexander Kopp, Andreas Laupacis, and Donald A Redelmeier. 2003. Drug-drug interactions among elderly patients hospitalized for drug toxicity. *Jama*, 289(13):1652–
1658.
Sunghwan Kim, Jie Chen, Tiejun Cheng, Asta Gindulyte, Jia He, Siqian He, Qingliang Li, Benjamin A
Shoemaker, Paul A Thiessen, Bo Yu, Leonid Zaslavsky, Jian Zhang, and Evan E Bolton. 2022.
PubChem 2023 update. *Nucleic Acids Research*,
51(D1):D1373–D1380.
Diederik P Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In *ICLR (Poster)*.
Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In *Text summarization* branches out, pages 74–81.
Shengchao Liu, Hanchen Wang, Weiyang Liu, Joan Lasenby, Hongyu Guo, and Jian Tang. 2022. Pretraining molecular graph representation with 3d geometry. In International Conference on Learning Representations.
Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon, and Tie-Yan Liu. 2022.
BioGPT: generative pre-trained transformer for biomedical text generation and mining. Briefings in Bioinformatics, 23(6).
OpenAI. 2022. Chatgpt: Optimizing language models for dialogue. Technical blog.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL, pages 311–318.
Kristina Preuer, Philipp Renz, Thomas Unterthiner, Sepp Hochreiter, and Gunter Klambauer. 2018.
Frechet chemnet distance: a metric for generative models for molecules in drug discovery. *Journal* of chemical information and modeling, 58(9):1736–
1741.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI* blog, 1(8):9.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*,
21(140):1–67.
David Rogers and Mathew Hahn. 2010. Extendedconnectivity fingerprints. *Journal of chemical information and modeling*, 50(5):742–754.
Yu Rong, Yatao Bian, Tingyang Xu, Weiyang Xie, Ying Wei, Wenbing Huang, and Junzhou Huang.
2020. Self-supervised graph transformer on largescale molecular data. *Advances in Neural Information Processing Systems*, 33:12559–12571.
Nadine Schneider, Roger A Sayle, and Gregory A Landrum. 2015. Get your atoms in order: An opensource implementation of a novel and robust molecular canonicalization algorithm. *Journal of chemical* information and modeling, 55(10):2111–2120.
Philippe Schwaller, Theophile Gaudin, David Lanyi, Costas Bekas, and Teodoro Laino. 2018. "found in translation": predicting outcomes of complex organic chemistry reactions using neural sequence-tosequence models. *Chemical science*, 9(28):6091–
6098.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany. Association for Computational Linguistics.
Karan Singhal, Shekoofeh Azizi, Tao Tu, S Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen Pfohl, et al. 2022. Large language models encode clinical knowledge. *arXiv preprint arXiv:2212.13138*.
Bing Su, Dazhao Du, Zhao Yang, Yujie Zhou, Jiangmeng Li, Anyi Rao, Hao Sun, Zhiwu Lu, and JiRong Wen. 2022. A molecular multimodal foundation model associating molecule graphs with natural language. *arXiv preprint arXiv:2209.05481*.
Mujeen Sung, Minbyul Jeong, Yonghwa Choi, Donghyeon Kim, Jinhyuk Lee, and Jaewoo Kang.
2022. Bern2: an advanced neural biomedical named entity recognition and normalization tool. *arXiv* preprint arXiv:2201.02080.
Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. 2022.
Galactica: A large language model for science. arXiv preprint arXiv:2211.09085.
Xiaochu Tong, Xiaohong Liu, Xiaoqin Tan, Xutong Li, Jiaxin Jiang, Zhaoping Xiong, Tingyang Xu, Hualiang Jiang, Nan Qiao, and Mingyue Zheng. 2021. Generative models for de novo drug design. Journal of Medicinal Chemistry, 64(19):14011–14027.
David Weininger. 1988. Smiles, a chemical language and information system. 1. introduction to methodology and encoding rules. *Journal of chemical information and computer sciences*, 28(1):31–36.
Zhenqin Wu, Bharath Ramsundar, Evan N Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S Pappu, Karl Leswing, and Vijay Pande. 2018. Moleculenet:
a benchmark for molecular machine learning. *Chemical science*, 9(2):513–530.
Sang Michael Xie, Aditi Raghunathan, Percy Liang, and Tengyu Ma. 2022. An explanation of in-context learning as implicit bayesian inference. In *International Conference on Learning Representations*.
Zheni Zeng, Yuan Yao, Zhiyuan Liu, and Maosong Sun.
2022. A deep-learning system bridging molecule structure and biomedical text with comprehension comparable to human professionals. *Nature Communications*, 13(1):862.
Zaixi Zhang, Qi Liu, Hao Wang, Chengqiang Lu, and Chee-Kong Lee. 2021. Motif-based graph selfsupervised learning for molecular property prediction. *Advances in Neural Information Processing* Systems, 34:15870–15882.
## Appendix A Datasets And Baselines Of Moleculenet
We choose the following tasks of MoleculeNet for evaluation:
(1) BBBP contains compounds with binary labels on blood-brain barrier penetration.
(2) Tox21 is a dataset for predicting the human toxicity of compounds on 12 different targets.
(3) ClinTox contains drugs approved by the FDA
and those that have failed clinical trials for toxicity reasons.
(4) HIV aims to predict whether a drug can inhibit HIV replication.
(5) BACE describes binding results for a set of inhibitors of human β-secretase 1.
(6) SIDER has compounds used in marketed medicines with 27 categories of side effects.
We compare MolXPT with the following baselines:
(1) GROVER is a self-supervised pre-trained graph Transformer model. G-Contextual and G-Motif are two variants of it pre-trained with contextual property prediction task and motif prediction task.
(2) GraphMVP is a self-supervised pre-trained GNN model using both 2D topological structures and 3D geometric views of molecules.
(3) MGSSL leverages a retrosynthesis-based algorithm BRICS and additional rules to find the motifs and combines motif layers with atom layers.
(4) GEM is a geometry-enhanced pre-trained GNN
model.
(5) Galactica is a GPT-like model trained on a large scientific corpus and many natural sequences like SMILES. We report the result of Galactica-120B.
(6) KV-PLM is a BERT-like model where SMILES
sequences are appended after molecule names for pre-training.
(7) MoMu uses contrastive learning to jointly pretrain a BERT model for text and a GNN model for molecules.
## B Pre-Training Hyper-Parameters
MolXPT is pre-trained for 200k steps on eight A100 GPUs. The batch size is 2048 tokens per GPU. The gradients are accumulated for 16 steps before updating. We use the Adam (Kingma and Ba, 2015) optimizer. The peak learning rate is 0.0005 and the number of warm-up steps is 20000. The learning rate scheduler is an inverse square root decay scheduler. The dropout is 0.1.
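For reference, an inverse square root schedule with linear warm-up of this kind can be written as follows; this is a generic formulation (as used in common sequence-to-sequence toolkits) rather than the exact training code, with the peak learning rate and warm-up steps taken from the values above.

```python
def inverse_sqrt_lr(step: int, peak_lr: float = 5e-4, warmup_steps: int = 20000) -> float:
    """Linear warm-up to peak_lr, then decay proportional to 1/sqrt(step)."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * (warmup_steps ** 0.5) / (step ** 0.5)
```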
## C Finetuning Details Of Downstream Tasks

## C.1 Prompts For Finetuning MoleculeNet
(1) BBBP: "We can conclude that the BBB penetration of ⟨som⟩ ⟨SMILES⟩ ⟨eom⟩ is true/false."

(2) Tox21: "We can conclude that the ⟨som⟩ ⟨SMILES⟩ ⟨eom⟩ activity outcome on ⟨target⟩ is active/inactive.", where ⟨target⟩ refers to the corresponding receptor or enzyme for each subtask, e.g., the ⟨target⟩ of subtask "AR" is "Androgen Receptor".

(3) ClinTox: "We can conclude that the clinical trial toxicity of ⟨som⟩ ⟨SMILES⟩ ⟨eom⟩ is true/false." for subtask CT_TOX, and "We can conclude that the FDA approval status of ⟨som⟩ ⟨SMILES⟩ ⟨eom⟩ is true/false." for subtask FDA_APPROVED.

(4) HIV: "We can conclude that the screening result of ability to inhibit HIV replication of ⟨som⟩ ⟨SMILES⟩ ⟨eom⟩ is active/inactive."

(5) BACE: "We can conclude that the binding result on beta-secretase 1 of ⟨som⟩ ⟨SMILES⟩ ⟨eom⟩ is true/false."

(6) SIDER: "We can conclude that the ⟨som⟩ ⟨SMILES⟩ ⟨eom⟩ can bring about the side effect of ⟨side-effect⟩ is true/false.", where ⟨side-effect⟩ refers to the corresponding side effect for each subtask.
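As an illustration of how these prompts are assembled, the snippet below builds the BBBP prompt for a given SMILES string and label; the `<som>`/`<eom>` strings stand in for the ⟨som⟩/⟨eom⟩ special tokens, and the example molecule is hypothetical.

```python
def build_bbbp_prompt(smiles: str, label: bool) -> str:
    # Wrap the SMILES with start/end-of-molecule tags and append the label word.
    label_word = "true" if label else "false"
    return (f"We can conclude that the BBB penetration of "
            f"<som> {smiles} <eom> is {label_word}.")

print(build_bbbp_prompt("CC(=O)OC1=CC=CC=C1C(=O)O", label=True))  # hypothetical example
```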
## C.2 Details Of Finetuning MoleculeNet
We grid search the following hyper-parameters: learning rate in $\{3\times10^{-5}, 5\times10^{-5}\}$; dropout in $\{0.1, 0.3\}$; total epochs in $\{30, 50\}$. The model is selected according to validation performance.
## C.3 Details Of Finetuning Text-Molecule Generation
For text-molecule generation, MolXPT is finetuned for 100 steps on one P40 GPU with 1024 tokens and 16 accumulated steps per device; models are finetuned for 100 epochs. The learning rate is 0.0001 and the dropout rate is grid-searched over [0.1, 0.2, 0.3, 0.4, 0.5]. Setting the dropout rate to 0.4 and 0.5 achieves the best validation performance on molecule-to-text generation and text-to-molecule generation, respectively. We use the corresponding models for testing.
## C.4 MoleculeNet Finetuning Strategy Selection
We provide two finetuning strategies in Eqn. (1) and Eqn. (2). Their results are reported in Table 4. The two strategies perform similarly, with Eqn. (1) being slightly better.
## D Zero-Shot Text-To-Molecule Generation
Given $K$ generated molecules $\hat{m}_1, \hat{m}_2, \cdots, \hat{m}_K$ and the reference molecule $m$, the top-$K$ fingerprint similarity is
$$\operatorname*{max}_{i\in[K]}\operatorname{similarity}(m,{\hat{m}}_{i}).\tag{3}$$
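A minimal sketch of this top-K computation, using RDKit Morgan fingerprints as the similarity function (the fingerprint type and its radius/bit settings are illustrative assumptions):

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def topk_similarity(ref_smiles, generated_smiles):
    """Return the maximum Tanimoto similarity between the reference and K candidates."""
    fingerprint = lambda m: AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048)
    ref_fp = fingerprint(Chem.MolFromSmiles(ref_smiles))
    mols = [Chem.MolFromSmiles(s) for s in generated_smiles]
    sims = [DataStructs.TanimotoSimilarity(ref_fp, fingerprint(m)) for m in mols if m is not None]
    return max(sims) if sims else 0.0

print(topk_similarity("CCO", ["CCN", "CCO", "CCC"]))  # hypothetical example
```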
MolXPT generates 33 molecules that can exactly match the reference molecules without finetuning.
Figure 2 shows three of the cases.
![8_image_0.png](8_image_0.png)

![8_image_1.png](8_image_1.png)

Figure 2: Examples of zero-shot text-to-molecule generation. We randomly pick three cases in which MolXPT successfully generates the reference molecules without finetuning.
| Dataset | BBBP | Tox21 | ClinTox | HIV | BACE | SIDER | Avg |
|--------------------|------------|------------|------------|------------|------------|------------|-------|
| Dev (full prompt) | 98.8 ± 0.2 | 78.8 ± 0.1 | 98.8 ± 0.1 | 82.9 ± 1.0 | 78.4 ± 0.3 | 67.7 ± 0.7 | 84.2 |
| Dev (tags only) | 98.9 ± 0.3 | 78.8 ± 0.2 | 97.7 ± 0.1 | 85.3 ± 0.2 | 75.8 ± 0.8 | 69.4 ± 0.6 | 84.3 |
| Test (full prompt) | 78.1 ± 0.4 | 77.2 ± 0.1 | 93.4 ± 0.1 | 78.1 ± 0.9 | 87.9 ± 0.3 | 70.0 ± 0.2 | 80.8 |
| Test (tags only) | 80.0 ± 0.5 | 77.1 ± 0.2 | 95.3 ± 0.2 | 78.1 ± 0.4 | 88.4 ± 1.0 | 71.7 ± 0.2 | 81.9 |

Table 4: Comparison of different finetuning strategies on MoleculeNet. "Dev" and "Test" denote the validation set and test set respectively. "Full prompt" and "tags only" denote finetuning with full prompts (Eqn. (2)) or tags only (Eqn. (1)). The evaluation metric is ROC-AUC.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
The section called "Limitations".
✓ A2. Did you discuss any potential risks of your work?
The section called "Ethnics statement".
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Introduction (Section 1)
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Not applicable. Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Not applicable. Left blank.
## C ✓ **Did You Run Computational Experiments?** Section 2.2 And Appendix B
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 2.2 and Appendix B
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3.1, Section 3.2 and Appendix C
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 3.1, Section 3.2
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 3.2.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
luo-etal-2023-study | A Study on the Efficiency and Generalization of Light Hybrid Retrievers | https://aclanthology.org/2023.acl-short.139 | Hybrid retrievers can take advantage of both sparse and dense retrievers. Previous hybrid retrievers leverage indexing-heavy dense retrievers. In this work, we study {``}Is it possible to reduce the indexing memory of hybrid retrievers without sacrificing performance{''}? Driven by this question, we leverage an indexing-efficient dense retriever (i.e. DrBoost) and introduce a LITE retriever that further reduces the memory of DrBoost. LITE is jointly trained on contrastive learning and knowledge distillation from DrBoost. Then, we integrate BM25, a sparse retriever, with either LITE or DrBoost to form light hybrid retrievers. Our Hybrid-LITE retriever saves $13\times$ memory while maintaining 98.0{\%} performance of the hybrid retriever of BM25 and DPR. In addition, we study the generalization capacity of our light hybrid retrievers on out-of-domain dataset and a set of adversarial attacks datasets. Experiments showcase that light hybrid retrievers achieve better generalization performance than individual sparse and dense retrievers. Nevertheless, our analysis shows that there is a large room to improve the robustness of retrievers, suggesting a new research direction. | # A Study On The Efficiency And Generalization Of Light Hybrid Retrieversd
Man Luo 1∗ Shashank Jain2 Anchit Gupta2† **Arash Einolghozati**2†
Barlas Oguz2† Debojeet Chatterjee2† **Xilun Chen**2†
Chitta Baral1 **Peyman Heidari** 2 1 Arizona State University 2 Meta Reality Lab 1 {mluo26, chitta}@asu.edu 2{shajain, anchit, arashe, barlaso, debo}@fb.com 2{xilun, peymanheidari}@fb.com
## Abstract
Hybrid retrievers can take advantage of both sparse and dense retrievers. Previous hybrid retrievers leverage indexing-heavy dense retrievers. In this work, we study "Is it possible to reduce the indexing memory of hybrid retrievers without sacrificing performance?" Driven by this question, we leverage an indexing-efficient dense retriever (i.e. DrBoost) and introduce a LITE retriever that further reduces the memory of DrBoost. LITE is jointly trained on contrastive learning and knowledge distillation from DrBoost. Then, we integrate BM25, a sparse retriever, with either LITE or DrBoost to form light hybrid retrievers. Our Hybrid-LITE
retriever saves 13× memory while maintaining 98.0% performance of the hybrid retriever of BM25 and DPR. In addition, we study the generalization capacity of our light hybrid retrievers on out-of-domain dataset and a set of adversarial attacks datasets. Experiments showcase that light hybrid retrievers achieve better generalization performance than individual sparse and dense retrievers. Nevertheless, our analysis shows that there is a large room to improve the robustness of retrievers, suggesting a new research direction.
## 1 Introduction
The classical IR methods, such as BM25 (Robertson et al., 2009), produce sparse vectors for questions and documents based on bag-of-words approaches. Recent research has paid attention to building neural retrievers, which learn dense embeddings of the query and document in a semantic space (Karpukhin et al., 2020; Khattab and Zaharia, 2020). Sparse and dense retrievers have their pros and cons, and a hybrid of sparse and dense retrievers can take advantage of both worlds and achieve better performance than the individual sparse and dense retrievers. Therefore, hybrid retrievers are widely used in practice (Ma et al., 2021b; Chen et al., 2021).
![0_image_0.png](0_image_0.png)
Previous hybrid retrievers are composed of indexing-heavy dense retrievers (DR). In this work, we study the question "Is it possible to reduce the indexing memory of hybrid retrievers without sacrificing performance?" To answer this question, we reduce the memory by using a state-of-the-art indexing-efficient retriever, DrBoost (Lewis et al., 2021), a boosting retriever with multiple "weak" learners. Compared to DPR (Karpukhin et al., 2020), a representative DR, DrBoost reduces the indexing memory by 6 times while maintaining the performance. We introduce a LITE model that further reduces the memory of DrBoost and is jointly trained on the retrieval task via contrastive learning and knowledge distillation from DrBoost (see Figure 1). We then integrate BM25 with either LITE or DrBoost to form light hybrid retrievers (Hybrid-LITE and Hybrid-DrBoost) to assess whether light hybrid retrievers can achieve memory efficiency and sufficient performance.

We conduct experiments on the Natural Questions dataset (Kwiatkowski et al., 2019) and draw interesting results. First of all, the LITE retriever maintains 98.7% of the teacher model's performance and reduces its memory by 2 times. Second, our Hybrid-LITE saves more than 13× memory compared to Hybrid-DPR, while maintaining more than 98.0%
performance; and Hybrid-DrBoost reduces the indexing memory (8×) compared to Hybrid-DPR
and maintains at least 98.5% of the performance.
This shows that the light hybrid model can achieve sufficient performance while reducing the indexing memory significantly, which suggests the practical usage of light retrievers for memory-limited applications, such as on-device retrieval.
One important reason for using hybrid retrievers in real-world applications is the generalization.
Thus, we further study if reducing the indexing memory will hamper the generalization of light hybrid retrievers. Two prominent ideas have emerged to test generalization: out-of-domain (OOD) generalization and adversarial robustness (Gokhale et al.,
2022). We study OOD generalization of retrievers on EntityQuestion (Sciavolino et al., 2021).
To study the robustness, we leverage six techniques (Morris et al., 2020) to create adversarial attack testing sets based on the NQ dataset. Our experiments demonstrate that Hybrid-LITE and Hybrid-DrBoost achieve better generalization performance than their individual components. The study of robustness shows that hybrid retrievers are always better than sparse and dense retrievers. Nevertheless, all retrievers are vulnerable, suggesting room for improving the robustness of retrievers, and our datasets can aid future research.
## 2 Related Work
Hybrid Retriever integrates the sparse and dense retriever and ranks the documents by interpolating the relevance score from each retriever.
The most popular way to obtain the hybrid ranking is applying linear combination of the sparse/dense retriever scores (Karpukhin et al., 2020; Ma et al.,
2020; Luan et al., 2021; Ma et al., 2021a; Luo et al., 2022). Instead of using the scores, Chen et al. (2022) adopts Reciprocal Rank Fusion (Cormack et al., 2009) to obtain the final ranking by the ranking positions of each candidate retrieved by individual retriever. Arabzadeh et al. (2021) trains a classification model to select one of the retrieval strategies: sparse, dense or hybrid model.
Most of the hybrid models rely on heavy dense retrievers, and one exception is (Ma et al., 2021a),
where they use linear projection, PCA, and product quantization (Jegou et al., 2010) to compress the dense retriever component. Our hybrid retrievers use either DrBoost or our proposed LITE as the dense retrievers, which are more memory-efficient and achieve better performance than the methods used in (Ma et al., 2021a).
Indexing-Efficient Dense Retriever. Efficiency includes two dimensions: latency (Seo et al., 2019; Lee et al., 2021; Varshney et al., 2022) and memory. In this work, our primary focus is on memory, specifically the memory used for indexing. Most of the existing DRs are indexing-heavy (Karpukhin et al., 2020; Khattab and Zaharia, 2020; Luo, 2022). To improve indexing efficiency, there are mainly three types of techniques. The first is to use vector product quantization (Jegou et al., 2010). The second is to compress a high-dimensional dense vector into a low-dimensional dense vector, e.g., from 768 to 32 dimensions (Lewis et al., 2021; Ma et al., 2021a). The third is to use a binary vector (Yamada et al., 2021; Zhan et al., 2021). Our proposed method LITE (§3.2) reduces the indexing memory by joint training on the retrieval task and knowledge distillation from a teacher model.
Generalization of IR. Two main benchmarks have been proposed to study the OOD generalization of retrievers: BEIR (Thakur et al., 2021b) and EntityQuestion (Sciavolino et al., 2021). As shown by previous work (Thakur et al., 2021b; Chen et al., 2022), generalization is one major concern of DRs. To address this limitation, Wang et al. (2021)
proposed GPL, a domain adaptation technique to generate synthetic question-answer pairs in specific domains. A follow-up work Thakur et al. (2022) trains BPR and JPQ on the GPL synthetic data to achieve efficiency and generalization. Chen et al.
(2022) investigates a hybrid model in the OOD setting, yet different from us, they use a heavy DR
and do not concern the indexing memory. Most existing work studies OOD generalization, and much less attention has been paid to the robustness of retrievers (Penha et al., 2022; Zhuang and Zuccon, 2022; Chen et al.). To study robustness, Penha et al. (2022) identify four ways to change the syntax of queries but not their semantics. Our work is complementary to Penha et al. (2022): we leverage adversarial attack techniques (Morris et al., 2020) to create six different testing sets for the NQ dataset (Kwiatkowski et al., 2019).
## 3 Model
In this section, we first review DrBoost (Lewis et al., 2021), then present our model LITE, which further reduces the memory of DrBoost, and lastly describe the hybrid retrievers that integrate light dense retrievers (i.e., LITE and DrBoost) with BM25.

## 3.1 Review Of DrBoost

DrBoost is based on ensemble learning: a strong learner is formed by a sequence of weak learners, and each weak learner is trained to minimize the mistakes of the combination of the previous learners.
The weak learner has the similar architecture as DPR (Karpukhin et al., 2020) (review of DPR is given in Appendix A), but the output vectors are compressed to a much lower dimension by a linear regression layer W,
$$\mathrm{v}_{q}^{i}=\mathrm{W}_{q}\cdot\mathrm{V}_{q}^{i},\quad\mathrm{v}_{c}^{i}=\mathrm{W}_{c}\cdot\mathrm{V}_{c}^{i},\tag{1}$$

where $\mathrm{V}_{q/c}^{i}$ are the high-dimensional representations of the question/document given by the embeddings of the special token [CLS], and $\mathrm{v}_{q/c}^{i}$ are the lower-dimensional embeddings produced by the $i$-th weak learner. The final output representation of DrBoost is the concatenation of each weak learner's representation, as expressed by Eq. 2.
$$\mathbf{q}=[\mathbf{v}_{q}^{1},\ldots,\mathbf{v}_{q}^{n}],\quad\mathbf{c}=[\mathbf{v}_{c}^{1},\ldots,\mathbf{v}_{c}^{n}],\tag{2}$$
where n is the total number of weak learners in the DrBoost. The training objective of DrBoost is
$$\mathcal{L}_{con}=-\log\frac{e^{\mathrm{sim}(q,c^{+})}}{e^{\mathrm{sim}(q,c^{+})}+\sum_{j=1}^{n}e^{\mathrm{sim}(q,c_{j}^{-})}},\tag{3}$$

where $\mathrm{sim}(q,c)$ is the inner product.
## 3.2 LITE: Joint Training With Knowledge Distillation
Since DrBoost has $N$ encoders, computing query representations takes $N$ times as long as with a single encoder. To save latency, Lewis et al. (2021) train a student encoder that learns the $N$ embeddings from the teacher encoders. As a result, while the student model consists of only one encoder, it produces the same indexing memory as the teacher model. Here, we want to further reduce the student indexing memory. To achieve this, we introduce a LITE retriever (see Figure 1), which produces two embeddings for an input text: one with a smaller dimension ($\mathrm{v}_{q/c,s}$) for the retrieval task, and one with a larger dimension ($\mathrm{v}_{q/c,l}$) for learning knowledge from the $N$ teacher models. The small and large embeddings are obtained by compressing the [CLS] token embedding via separate linear regression layers, mathematically,

$$\mathrm{v}_{q/c,s}=\mathrm{W}_{q/c,s}\cdot\mathrm{V}_{q/c},\quad\mathrm{v}_{q/c,l}=\mathrm{W}_{q/c,l}\cdot\mathrm{V}_{q/c}.\tag{4}$$

$\mathrm{v}_{q/c,s}$ is optimized by the contrastive loss (Eq. 3), and $\mathrm{v}_{q/c,l}$ learns the teacher model embeddings.
The knowledge distillation (KD) loss is composed of three parts (Eq. 5): 1) the distance between student question embeddings and the teacher question embeddings, 2) the distance between student context embeddings and the teacher context embeddings, and 3) the distance between student question embeddings and the teacher positive context embeddings.
$$\mathcal{L}_{KD}=\|\mathbf{v}_{q,l}-\mathbf{q}\|^{2}+\|\mathbf{v}_{c,l}-\mathbf{c}\|^{2}+\|\mathbf{v}_{q,l}-\mathbf{c}^{+}\|^{2}.\tag{5}$$

The final objective of the student model is

$${\mathcal{L}}_{joint}={\mathcal{L}}_{con}+{\mathcal{L}}_{KD}.\tag{6}$$
In contrast to the distillation method in DrBoost, which solely learns the embeddings from the teacher model, LITE is simultaneously trained on both the retrieval task and the knowledge distillation task. At inference time, LITE only uses the retrieval embeddings ($\mathrm{v}_{c,s}$) to achieve indexing efficiency. It is also notable that LITE is a flexible training framework capable of incorporating most neural retrievers as its backbone model, even though our work relies solely on DrBoost.
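A minimal PyTorch-style sketch of the joint objective in Eq. (6) is shown below. It assumes in-batch negatives for the contrastive term, a single concatenated teacher embedding per question/context, and squared-L2 distances as in Eq. (5); it is an illustration rather than the released implementation.

```python
import torch
import torch.nn.functional as F

def lite_joint_loss(vq_s, vc_s, vq_l, vc_l, teacher_q, teacher_c):
    """vq_s/vc_s: small retrieval embeddings [B, d_s]; vq_l/vc_l: large distillation
    embeddings [B, d_l]; teacher_q/teacher_c: concatenated teacher embeddings [B, d_l].
    Positives are aligned by batch index; other in-batch contexts act as negatives."""
    # Contrastive loss (Eq. 3) with in-batch negatives on the small embeddings.
    scores = vq_s @ vc_s.t()                                  # [B, B] inner products
    labels = torch.arange(scores.size(0), device=scores.device)
    loss_con = F.cross_entropy(scores, labels)

    # Knowledge distillation loss (Eq. 5): match the teacher question and context
    # embeddings, and pull the student question toward the teacher positive context.
    loss_kd = ((vq_l - teacher_q) ** 2).sum(-1).mean() \
            + ((vc_l - teacher_c) ** 2).sum(-1).mean() \
            + ((vq_l - teacher_c) ** 2).sum(-1).mean()

    return loss_con + loss_kd                                 # Eq. (6)
```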
## 3.3 Memory Efficient Hybrid Model
Our hybrid models retrieve the final documents in a re-ranking manner. We first retrieve the top-k documents using BM25 and a dense retriever (DrBoost or LITE) separately. The document scores produced by these two retrievers are denoted by $S_{\mathrm{BM25}}$ and $S_{\mathrm{DR}}$ respectively. We apply MinMax normalization to the original scores to obtain $S'_{\mathrm{BM25}}$ and $S'_{\mathrm{DR}}$ ranging in [0, 1]. For each document, we get a new score for the final ranking:
$$S_{\mathrm{hybrid}}=w_{1}\times S_{\mathrm{BM25}}^{\prime}+w_{2}\times S_{\mathrm{DR}}^{\prime},\tag{7}$$
where $w_1$ and $w_2$ denote the weights of the BM25 and DrBoost scores respectively. In our experiments, we simply set equal weights (i.e., 0.5) for each method. If a document is not retrieved by one of the retrievers, its score for that retriever is 0.
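A minimal sketch of this interpolation, assuming each retriever returns a dictionary mapping document ids to raw scores for its top-k results (the function and variable names are illustrative):

```python
def min_max(scores):
    lo, hi = min(scores.values()), max(scores.values())
    return {doc: (s - lo) / (hi - lo) if hi > lo else 0.0 for doc, s in scores.items()}

def hybrid_rank(bm25_scores, dense_scores, w1=0.5, w2=0.5, k=100):
    """Combine the two retrievers' top-k results with Eq. (7) and re-rank."""
    s_bm25, s_dense = min_max(bm25_scores), min_max(dense_scores)
    docs = set(s_bm25) | set(s_dense)
    hybrid = {d: w1 * s_bm25.get(d, 0.0) + w2 * s_dense.get(d, 0.0) for d in docs}
    return sorted(hybrid, key=hybrid.get, reverse=True)[:k]

print(hybrid_rank({"d1": 12.0, "d2": 7.5}, {"d2": 81.3, "d3": 74.0}, k=3))
```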
![3_image_0.png](3_image_0.png)
## 4 Adversarial Attack Robustness Dataset
Adversarial attacks are used to assess a model's robustness: testing samples are obtained by small perturbations of the original samples, and such perturbations keep the label unchanged. To test the robustness of IR systems, we create 6 different adversarial attacks for NQ (Kwiatkowski et al., 2019). Each method is chosen because it does not change the original meaning of the query, so the relevant documents should be the same as the original relevant documents (see Figure 2).
The six methods include: *Char-Swap (CS):* augments words by swapping characters out for other characters; *Word Deletion (WD):* deletes a word randomly from the original query; *Synonym Replacement (SR):* replaces a word in the query with a synonym from WordNet (Miller, 1995); *Word-Order-Swap (WOS):* swaps the order of the words in the original query; *Synonym Insertion (SI):* inserts a synonym of a word from WordNet into the original query; *Back-Translation (BT):* translates the original query into a target language and translates it back to the source language. Figure 2 shows an example of each attacked instance.
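Attacks of this kind can be generated with the off-the-shelf augmenter recipes in TextAttack (Morris et al., 2020). The sketch below is illustrative and only covers the character-swap and synonym-replacement perturbations; the query is a made-up example, and the remaining transformations (deletion, word-order swap, insertion, back-translation) are configured analogously.

```python
from textattack.augmentation import CharSwapAugmenter, WordNetAugmenter

query = "who wrote the declaration of independence"   # hypothetical NQ-style query

char_swap = CharSwapAugmenter()   # Char-Swap (CS)
synonym = WordNetAugmenter()      # Synonym Replacement (SR)

print(char_swap.augment(query))   # query with characters swapped in some words
print(synonym.augment(query))     # query with a word replaced by a WordNet synonym
```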
## 5 Experiments And Results
Existing Methods. We include four existing methods in this work, DrBoost (Lewis et al.,
2021), DPR (Karpukhin et al., 2020), SPAR (Chen et al., 2021) and a heavy hybrid model BM25 +
2021), DPR (Karpukhin et al., 2020), SPAR (Chen et al., 2021) and a heavy hybrid model BM25 + DPR (Karpukhin et al., 2020). In Table 1, the performance of DrBoost is from the original paper and the performance of the other three methods is from Chen et al. (2021).
Our Baselines. Three baselines are presented: BM25, DPR32, and DrBoost-2. DPR32 refers to DPR with a linear projection layer that reduces the representation to 32 dimensions. DrBoost-2 takes DPR32 as the first weak learner, uses it to mine negative passages to train the next weak learner, and then combines the two models. We do not go beyond 2 weak learners because our goal is to achieve memory efficiency, and increasing the number of encoders in DrBoost yields a larger index.
Our Models. LITE and the three light hybrid models are presented. LITE is trained by the method we introduce in §3.2 with the distilled knowledge from DrBoost-2 teacher model. We present three hybrid models BM25 + LITE, BM25
+ DPR32, and BM25 + DrBoost-2, which are memory-efficient compared to existing methods.
Next we present the experiments and the findings.
## 5.1 Memory Efficiency And Performance
LITE achieves much better performance than DPR32 even though both use the same amount of memory. LITE also maintains more than 98% of the performance of its teacher (DrBoost-2) and, importantly, saves 2× indexing memory. Such results show the effectiveness of LITE.
Hybrid-LITE achieves better performance than DrBoost-2 while using less indexing memory. Hybrid-LITE also matches the performance of DrBoost in terms of R@100 (87.4 vs. 87.2) while using 3× less memory. Compared with Hybrid-DPR, Hybrid-LITE maintains 98.4% of the performance but uses 13× less memory. Compared with the SOTA model SPAR, Hybrid-LITE achieves 98.2% of the performance and uses 25× less memory.
Hybrid-DrBoost-2 achieves nearly the same performance as DrBoost, which contains 6 encoders. This shows that the effect of BM25 matches the capacity of 4 encoders in DrBoost. We also compare Hybrid-DrBoost-2 with BM25 + DPR or SPAR, where our model achieves almost 99% of the performance but uses 8× or 16× less memory.
## 5.2 Out-Of-Domain Generalization
We study the out-of-domain generalization of retrievers on EntityQuestion (Sciavolino et al., 2021), which consists of simple entity-centric questions that have been shown to be difficult for dense retrievers. We train the models on NQ and test on EQ.
| Method | Index-M (GB) | NQ R@20 | NQ R@100 | EntityQuestion R@20 | EntityQuestion R@100 |
|--------------------|-----------|-------|-------|-------|------|
| *Existing Method* | | | | | |
| DrBoost | 15.4/13.5 | 81.3 | 87.4 | 51.2 | 63.4 |
| DPR | 61.5 | 79.5 | 86.1 | 56.6 | 70.1 |
| BPR | 2 | 77.9 | 85.7 | - | - |
| BM25+DPR | 63.9 | 82.6 | 88.6 | 73.3 | 82.3 |
| SPAR | 123.0 | 83.6 | 88.8 | 74.0 | 82.0 |
| *Our Baseline* | | | | | |
| BM25 | 2.4 | 63.9 | 78.8 | 71.2 | 79.7 |
| DPR32 | 2.5 | 70.4 | 80.0 | 31.1 | 45.5 |
| DrBoost-2 | 5.1 | 77.3 | 84.5 | 41.3 | 54.2 |
| *Our Model* | | | | | |
| LITE | 2.5 | 75.1 | 83.4 | 35.0 | 48.1 |
| Hybrid-LITE | 4.9 | 79.9 | 87.2 | 71.5 | 80.8 |
| Hybrid-DPR32 | 4.9 | 77.7 | 86.2 | 70.8 | 80.5 |
| Hybrid-DrBoost-2 | 7.5 | 80.4 | 87.5 | 72.4 | 81.4 |
First of all, our experimental results show that the performance of DPR32, DrBoost-2, and LITE is much worse than that of BM25 on EQ. Nevertheless, our hybrid models improve over both the BM25 and dense retriever performance. Our light hybrid models achieve similar performance to Hybrid-DPR and SPAR, which demonstrates that our light hybrid retrievers exhibit good OOD generalization.
## 5.3 Adversarial Attack Robustness
The robustness is evaluated in terms of both performance (higher R@K means more robust) and the average drop w.r.t. the original performance on the NQ dataset (a smaller drop means more robust).

From Table 2, we observe that all models perform worse than their original performance on all adversarial attack sets, which shows that current retrievers are not robust enough. Interestingly, while it is expected that BM25 will be robust to the word-order-swap (WOS) attack, it is not straightforward that a dense retriever is also robust to this type of question. This shows that the order of the words in the question is not important for the dense retriever either. We also see that char-swap (CS) is the most difficult attack, which means that both types of retrievers might not perform well when there are typos in the questions.
Diving into the individual performance of each retriever, we see that some models are more robust than others. For example, LITE is more robust than DPR32. We also compare the hybrid model with the pure dense retriever counterparts (e.g. compare
| Method (R@100) | Ori | CS | WD | SR | WOS | SI | BT | Drop |
|------------|------|------|------|------|------|------|------|------|
| BM25 | 78.8 | 68.2 | 71.7 | 74.5 | 78.3 | 77.2 | 71.2 | 5.9 |
| DPR32 | 80.8 | 61.9 | 65.8 | 75.3 | 76.4 | 73.3 | 71.1 | 10.3 |
| LITE | 83.4 | 69.3 | 71.8 | 78.9 | 81.2 | 79.0 | 75.6 | 7.9 |
| DrBoost-2 | 84.5 | 71.6 | 80.1 | 74.7 | 82.6 | 80.4 | 77.9 | 7.8 |
| DPR768 | 86.1 | 74.8 | 78.9 | 82.5 | 85.0 | 83.4 | 80.3 | 5.5 |
| +DPR32 | 86.2 | 74.4 | 78.0 | 82.7 | 84.9 | 83.2 | 78.6 | 6.1 |
| +LITE | 87.2 | 76.5 | 78.0 | 83.7 | 86.6 | 85.4 | 80.8 | 5.1 |
| +DrBoost-2 | 87.5 | 77.7 | **84.6** | 81.0 | 86.7 | 85.9 | 81.9 | 5.2 |
| +DPR768 | **88.3** | **78.6** | 82.9 | **85.4** | **87.7** | **86.6** | **82.6** | **4.4** |
hybrid DrBoost-2 with DrBoost-2), and find that hybrid models are consistently more robust. This suggests that the hybrid model can mitigate the performance drop of both BM25 and the dense retriever.
## 6 Conclusion
To achieve indexing efficiency, in this work, we study light hybrid retrievers. We introduce LITE, which is jointly trained on the retrieval task via contrastive learning and knowledge distillation from a more capable teacher model that requires heavier indexing memory. While in this work we mainly take DrBoost as the teacher model, LITE is a flexible training framework that can incorporate most neural retrievers. Then, we integrate BM25 with LITE or DrBoost to form light hybrid retrievers. Our light hybrid models achieve sufficient performance and largely reduce the memory. We also study the generalization of retrievers and find that sparse, dense, and hybrid retrievers are all not robust enough, which opens up a new avenue for research.
## Limitation
The main limitation of this work is the technical novelty of the hybrid retriever. Hybrid-DrBoost is built on top of DrBoost and the interpolation of BM25 with DrBoost. However, we would like to point out that our study can serve as an important finding for real-life applications. Previous retrievers are built on top of indexing-heavy dense retrievers, such as DPR. This limits their applications where memory is a hard constraint, for example, on devices. Our study suggests that a light hybrid retriever can save memory but maintain sufficient performance.
## References
Negar Arabzadeh, Xinyi Yan, and Charles LA Clarke.
2021. Predicting efficiency/effectiveness trade-offs for dense vs. sparse retrieval strategy selection. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pages 2862–2866.
Tao Chen, Mingyang Zhang, Jing Lu, Michael Bendersky, and Marc Najork. 2022. Out-of-domain semantics to the rescue! zero-shot hybrid retrieval models.
In *European Conference on Information Retrieval*,
pages 95–110. Springer.
Xilun Chen, Kushal Lakhotia, Barlas Oguz, Anchit ˘
Gupta, Patrick Lewis, Stan Peshterliev, Yashar Mehdad, Sonal Gupta, and Wen-tau Yih. 2021.
Salient phrase aware dense retrieval: Can a dense retriever imitate a sparse one? *arXiv preprint* arXiv:2110.06918.
Xuanang Chen, Jian Luo, Ben He, Le Sun, and Yingfei Sun. Towards robust dense retrieval via local ranking alignment.
Gordon V Cormack, Charles LA Clarke, and Stefan Buettcher. 2009. Reciprocal rank fusion outperforms condorcet and individual rank learning methods. In Proceedings of the 32nd international ACM SIGIR
conference on Research and development in information retrieval, pages 758–759.
Tejas Gokhale, Swaroop Mishra, Man Luo, Bhavdeep Sachdeva, and Chitta Baral. 2022. Generalized but not robust? comparing the effects of data modification methods on out-of-domain generalization and adversarial robustness. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 2705–2718.
Herve Jegou, Matthijs Douze, and Cordelia Schmid.
2010. Product quantization for nearest neighbor search. IEEE transactions on pattern analysis and machine intelligence, 33(1):117–128.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781.
Omar Khattab and Matei Zaharia. 2020. Colbert: Efficient and effective passage search via contextualized late interaction over bert. In *Proceedings of the 43rd* International ACM SIGIR conference on research and development in Information Retrieval, pages 39–
48.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob
Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. *Transactions of the Association of Computational Linguistics*.
Jinhyuk Lee, Mujeen Sung, Jaewoo Kang, and Danqi Chen. 2021. Learning dense representations of phrases at scale. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6634–6647.
Patrick Lewis, Barlas Oguz, Wenhan Xiong, Fabio ˘
Petroni, Wen-tau Yih, and Sebastian Riedel. 2021. Boosted dense retriever. *arXiv preprint* arXiv:2112.07771.
Yi Luan, Jacob Eisenstein, Kristina Toutanova, and Michael Collins. 2021. Sparse, dense, and attentional representations for text retrieval. Transactions of the Association for Computational Linguistics, 9:329–
345.
Man Luo. 2022. Neural retriever and go beyond: A
thesis proposal. *arXiv preprint arXiv:2205.16005*.
Man Luo, Arindam Mitra, Tejas Gokhale, and Chitta Baral. 2022. Improving biomedical information retrieval with neural retrievers.
Ji Ma, Ivan Korotkov, Yinfei Yang, Keith Hall, and Ryan McDonald. 2020. Zero-shot neural passage retrieval via domain-targeted synthetic question generation.
arXiv preprint arXiv:2004.14503.
Xueguang Ma, Minghan Li, Kai Sun, Ji Xin, and Jimmy Lin. 2021a. Simple and effective unsupervised redundancy elimination to compress dense vectors for passage retrieval. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2854–2859.
Xueguang Ma, Kai Sun, Ronak Pradeep, and Jimmy Lin. 2021b. A replication study of dense passage retriever. *arXiv preprint arXiv:2104.05740*.
George A Miller. 1995. Wordnet: a lexical database for english. *Communications of the ACM*, 38(11):39–41.
John X. Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. 2020. Textattack: A framework for adversarial attacks, data augmentation, and adversarial training in nlp.
Gustavo Penha, Arthur Câmara, and Claudia Hauff.
2022. Evaluating the robustness of retrieval pipelines with query variation generators. In *European Conference on Information Retrieval*, pages 397–412.
Springer.
Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends® *in Information Retrieval*, 3(4):333–389.
Christopher Sciavolino, Zexuan Zhong, Jinhyuk Lee, and Danqi Chen. 2021. Simple entity-centric questions challenge dense retrievers. In *Proceedings of* the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6138–6148.
Minjoon Seo, Jinhyuk Lee, Tom Kwiatkowski, Ankur Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2019.
Real-time open-domain question answering with dense-sparse phrase index. In *Proceedings of the* 57th Annual Meeting of the Association for Computational Linguistics, pages 4430–4441.
Nandan Thakur, N. Reimers, Andreas Ruckl'e, Abhishek Srivastava, and Iryna Gurevych. 2021a. Beir:
A heterogenous benchmark for zero-shot evaluation of information retrieval models. *ArXiv*,
abs/2104.08663.
Nandan Thakur, Nils Reimers, and Jimmy Lin. 2022.
Domain adaptation for memory-efficient dense retrieval. *arXiv preprint arXiv:2205.11498*.
Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. 2021b. Beir:
A heterogeneous benchmark for zero-shot evaluation of information retrieval models. In *Thirty-fifth Conference on Neural Information Processing Systems* Datasets and Benchmarks Track (Round 2).
Neeraj Varshney, Man Luo, and Chitta Baral. 2022.
Can open-domain qa reader utilize external knowledge efficiently like humans? *arXiv preprint* arXiv:2211.12707.
Kexin Wang, Nandan Thakur, Nils Reimers, and Iryna Gurevych. 2021. Gpl: Generative pseudo labeling for unsupervised domain adaptation of dense retrieval.
arXiv preprint arXiv:2112.07577.
Ikuya Yamada, Akari Asai, and Hannaneh Hajishirzi.
2021. Efficient passage retrieval with hashing for open-domain question answering. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 2: Short Papers), pages 979–986.
Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Jiafeng Guo, Min Zhang, and Shaoping Ma. 2021. Jointly optimizing query encoder and product quantization to improve retrieval performance. In *Proceedings of the* 30th ACM International Conference on Information
& Knowledge Management, pages 2487–2496.
Shengyao Zhuang and Guido Zuccon. 2022. Characterbert and self-teaching for improving the robustness of dense retrievers on queries with typos. arXiv preprint arXiv:2204.00716.
## A Preliminary
BM25 (Robertson et al., 2009) is a bag-of-words ranking function that scores the query (Q) against a document (D) based on term frequency. The following equation is one of the most prominent instantiations of the function,
$$score(D,Q)=\sum_{i=1}^{n}\mathrm{IDF}(q_{i})\cdot\frac{f(q_{i},D)\cdot(k_{1}+1)}{f(q_{i},D)+k_{1}\cdot(1-b+b\cdot\frac{|D|}{avgdl})},\tag{8}$$
where $\mathrm{IDF}(q_{i})$ is the inverse document frequency of query term $q_{i}$, $f(q_{i},D)$ is the frequency of $q_{i}$ in document $D$, $|D|$ is the length of document $D$, and $avgdl$ is the average length of all documents in the corpus. In practice, $k_{1}\in[1.2,2.0]$ and $b=0.75$. BM25 is an unsupervised method that generalizes well across different domains (Thakur et al., 2021a).
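A small self-contained sketch of Eq. (8) over a toy tokenized corpus (using one common IDF variant; production systems rely on optimized implementations):

```python
import math
from collections import Counter

def bm25_score(query_tokens, doc_tokens, corpus, k1=1.2, b=0.75):
    """Score one tokenized document against a tokenized query following Eq. (8)."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    tf = Counter(doc_tokens)
    score = 0.0
    for q in query_tokens:
        df = sum(1 for d in corpus if q in d)
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)        # one common IDF variant
        f = tf[q]
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc_tokens) / avgdl))
    return score

corpus = [["dense", "passage", "retrieval"], ["sparse", "bag", "of", "words", "retrieval"]]
print(bm25_score(["sparse", "retrieval"], corpus[1], corpus))
```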
DPR (Dense Passage Retriever) involves two encoders: the question encoder $\mathrm{E}_{q}$ produces a dense vector representation $\mathrm{V}_{q}$ for an input question $q$, and the context encoder $\mathrm{E}_{c}$ produces a dense vector representation $\mathrm{V}_{c}$ for an input context $c$. Both encoders are BERT models, and the output vectors are the embeddings of the special token [CLS] in front of the input text (Eq. 9).
$$\mathrm{V}_{q}=\mathrm{E}_{q}(q)\,[\,\mathrm{CLS}\,]\,,\quad\mathrm{V}_{c}=\mathrm{E}_{c}(c)\,[\,\mathrm{CLS}\,]\,.\tag{9}$$
The score of $c$ w.r.t. $q$ is the inner product of their representations (Eq. 10).
$$\mathrm{sim}(q,c)=\mathrm{V}_{q}^{\top}\mathrm{V}_{c}.\qquad\qquad(10)$$
DPR uses a contrastive loss to optimize the model such that the score of the positive context $c^{+}$ is higher than the score of the negative contexts $c^{-}$. Mathematically, DPR optimizes the following objective function,

$${\mathcal{L}}_{con}=-\log\frac{e^{\mathrm{sim}(q,c^{+})}}{e^{\mathrm{sim}(q,c^{+})}+\sum_{j=1}^{n}e^{\mathrm{sim}(q,c_{j}^{-})}},\tag{11}$$
where $n$ is the number of negative contexts. For better representation learning, DPR uses BM25 to mine hard negative contexts and uses in-batch negative contexts to train the model.
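As an illustration of Eqs. (9) and (10), the sketch below encodes a question and a passage with a BERT encoder from Hugging Face Transformers and scores them with the inner product of their [CLS] embeddings. The checkpoint name and example texts are placeholders; a trained DPR system uses separately finetuned question and context encoders.

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-uncased"                      # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
encoder = AutoModel.from_pretrained(name)

def cls_embedding(text: str) -> torch.Tensor:
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return encoder(**inputs).last_hidden_state[:, 0]   # [CLS] embedding (Eq. 9)

q = cls_embedding("who wrote the play hamlet")             # hypothetical question
c = cls_embedding("Hamlet is a tragedy written by William Shakespeare.")
print(torch.matmul(q, c.t()).item())                       # sim(q, c) = V_q^T V_c (Eq. 10)
```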
![7_image_0.png](7_image_0.png)
## B Ablation Study
In this section, we conduct ablation studies to see the effects of the proposed methods; all models are trained and tested on the NQ dataset.
## B.1 Lite Can Improve Drboost
Recall that DPR32 is one encoder in DrBoost-2, and since LITE performs better than DPR32 (see Table 1), we ask the question: can LITE replace DPR32 to form a stronger DrBoost-2 model? To answer this question, we compare the performance of R-DrBoost-2 (i.e., DrBoost-2 with DPR32 replaced by LITE) with the original DrBoost-2. From Table 3, we observe that R-DrBoost-2 performs worse than DrBoost-2, indicating that the encoders in DrBoost indeed relate to and complement each other, and replacing one with an unrelated encoder degrades the performance.

Then we ask another question: can we train a weak learner that minimizes the error of LITE, and combine LITE with the new weak learner to form a stronger DrBoost (L-DrBoost-2)? Table 3 shows that L-DrBoost-2 is better than DrBoost-2, and hybrid L-DrBoost-2 is better than hybrid DrBoost-2 as well (81.0 vs. 80.4 on R@20). This indicates that starting with a stronger weak learner can yield a stronger DrBoost.
## B.2 Hybrid Model Consistently Improves The Drboost Performance.
We study six DrBoost models with 1-6 weak learners. In Figure 3, we see that the hybrid models consistently improve over the DrBoost performance, demonstrating that the results of BM25 and DrBoost complement each other and that combining the two models improves on their individual performance. We also see that the improvement is larger when the DrBoost is weaker, e.g., the hybrid model significantly improves DPR32.
![7_image_1.png](7_image_1.png)
| Model | Method | NQ R@20 | NQ R@100 |
|----------------|----------------|-------|-------|
| Hybrid(32*2) | Simple Sum | 79.03 | 84.63 |
| Hybrid(32*2) | Multiplication | 79.03 | 84.63 |
| Hybrid(32*2) | MinMax and Sum | 80.41 | 87.47 |
| Hybrid(32*6) | Simple Sum | 81.61 | 86.12 |
| Hybrid(32*6) | Multiplication | 81.19 | 86.12 |
| Hybrid(32*6) | MinMax and Sum | 81.52 | 88.28 |
## B.3 Different Hybrid Scores
In our hybrid model, besides the hybrid score we introduced in §3.3, we also study two different ways of combining the BM25 and DrBoost scores. Simple Summation adds the two scores together, and Multiplication multiplies the two scores. We compare the performance of two hybrid models, Hybrid-DrBoost-2 and Hybrid-DrBoost-6. Table 4 shows that the MinMax normalization performs the best (except that simple summation is slightly better in terms of R@20 for hybrid models with 6 weak learners).
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2
✓ B1. Did you cite the creators of artifacts you used?
section 2 and 3.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 3.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C ✓ **Did You Run Computational Experiments?** Left Blank.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Not applicable. Left blank.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
agnew-etal-2023-mechanical | The Mechanical Bard: An Interpretable Machine Learning Approach to {S}hakespearean Sonnet Generation | https://aclanthology.org/2023.acl-short.140 | We consider the automated generation of sonnets, a poetic form constrained according to meter, rhyme scheme, and length. Sonnets generally also use rhetorical figures, expressive language, and a consistent theme or narrative. Our constrained decoding approach allows for the generation of sonnets within preset poetic constraints, while using a relatively modest neural backbone. Human evaluation confirms that our approach produces Shakespearean sonnets that resemble human-authored sonnets, and which adhere to the genre{'}s defined constraints and contain lyrical language and literary devices. | # The Mechanical Bard: An Interpretable Machine Learning Approach To Shakespearean Sonnet Generation
Edwin Agnew∗, Michelle Qiu∗, Lily Zhu∗**, Sam Wiseman, Cynthia Rudin**
Duke University, Durham, NC
[email protected], [email protected], [email protected] [email protected], [email protected]
## Abstract
We consider the automated generation of sonnets, a poetic form constrained according to meter, rhyme scheme, and length. Sonnets generally also use rhetorical figures, expressive language, and a consistent theme or narrative. Our constrained decoding approach allows for the generation of sonnets within preset poetic constraints, while using a relatively modest neural backbone. Human evaluation confirms that our approach produces Shakespearean sonnets that resemble human-authored sonnets, and which adhere to the genre's defined constraints and contain lyrical language and literary devices.
## 1 Introduction
We consider the task of automatically generating Shakespearean sonnets, a popular poetic form with highly specific rhyme and meter constraints1. Each sonnet consists of three quatrains followed by a single couplet according to the rhyme scheme ABAB CDCD EFEF GG, and each line contains ten syllables with a stress pattern of iambic pentameter.
Rather than train a model to obey these constraints implicitly (which leads to enormous models that still do not obey the constraints), we opt to enforce them explicitly using a simple but novel approach to generation.
In particular, we use part-of-speech (POS) templates selected and edited from individual lines in Shakespeare's sonnets, with each template intended to offer a different combination of parts of speech and narrative directions. Associated thematic words are then selected and placed at the end of each template, and their rhyming pairs are chosen dynamically by a language model (e.g., GPT-2, Radford et al., 2019) and placed at the end of the corresponding lines according to the rhyme scheme.
*denotes equal contribution
1Our code is available at https://github.com/edwinagnew/Poetix_Sonnets
When all the lovers of this world are dead,
The sun of heaven on a golden day
To burn the earth's fire by the flame and spread
Where all the flowers of your fair days lay.
These are the blossoms that you take care of.
Why do you linger such a long delay?
Forgive the fluttered flower of meek love
Or who you have so long to love the day?

Figure 1: A sonnet generated with the theme "death".
The rest of the line is filled with related words that fit the specified POS and meter, leading to the end rhyme word. Figure 1 shows sample output.
Our use of these templates ensures sophisticated-seeming language and syntax that competing systems do not capture. Our approach provides excellent grammatical structure comparable to that of human-written poetry, all while using a relatively simple model and generation procedure.
We extensively evaluate the ability of our approach to generate *whole* sonnets (a setting often ignored by recent work in poetry generation) and find that our approach is preferred over strong baselines by both expert annotators (recruited from an academic English department) and by crowdworkers. As this research was conducted before the release of ChatGPT, we were not able to robustly compare our model's performance against this language model. However, we make several observations about the poetic quality of sonnets generated by ChatGPT.
## 2 Related Work
Early attempts at poetry generation relied mainly on rule-based methods (Gervás, 2000; Oliveira, 2012; Manurung et al., 2000; Veale, 2013). More recent automated poetry generation techniques, especially for sonnet generation, have relied on combinations of task-specific language models and rules. For instance, Ghazvininejad et al. (2016)'s Hafez uses a finite state acceptor to generate a large number of possible lines, the best of which are then selected with an RNN trained on song lyrics. Like our approach, they use rhyming dictionaries to find rhyming words and word embeddings to find topical words. Similarly, Benhardt et al. (2018) preselects rhyming words and generates lines backwards with a recurrent neural network (RNN). Also in this vein are Lau et al. (2018)'s Deepspeare, which consists of an LSTM language model, an iambic model, and a rhyming model, and the recent work of Van de Cruys (2020) and Wang et al. (2021).
Our approach distinguishes itself in using a general-purpose pretrained language model, but more importantly in its use of human-curated constraints and templates. These allow for generating high-quality poems with a very simple approach.
## 3 Methodology
The general idea of our approach is to take a pretrained language model (in this case GPT-2) and apply hard constraints to the generation procedure so that it can only output text satisfying various poetic constraints. These constraints can be broadly divided into *hard* constraints (e.g., number of syllables) and *soft* constraints (e.g., sounding poetic),
and our methodology can be separated similarly.
Our generation process is in Figure 2.
## 3.1 Pos Templates
The most important part of our method is the use of handcrafted grammar templates. Taking inspiration from existing sonnets, we created a list of about 120 templates that encode the part-of-speech structure of a line of poetry. Each template can generate an unbounded number of possible poetic lines. For example, the line "The beauty of life on a lonely sea" is represented by the template "THE NN OF NN ON A JJ NN." More sample templates are in Section A.1. Since the templates allow for considerable flexibility, obeying the templates does not alone suffice for poetry. For example, the same template could be used to write poetic lines with distinct meanings such as "The tree of anguish on a stormy night" or a nonsensical line like "The fork of ant on an unpacked transfer." A subset of these templates, with their stress patterns and example lines, is shown in Figure 8.
## 3.2 Strict Sonnet Constraints
The two most critical features of sonnets distinguishing them from other poetry forms are that they are written in iambic pentameter (i.e., each line has 10 syllables of alternating stress pattern), and they follow an ABAB CDCD EFEF GG rhyme scheme.
To detect iambic pentameter, we use the CMU Pronouncing Dictionary (CMU, 2019), which reveals how many syllables a word contains and the stress of each syllable. An unstressed syllable is represented as '0' and a stressed syllable as '1', and so the line "The beauty of life on a lonely sea" is represented as '0 10 1 0 1 0 10 1'. For simplicity, 1-syllable words can be designated as either 0 or 1.
Given a POS-tag for every word in our dictionary, we create a tree-like data structure that represents every possible meter for a given template.
Continuing the example, the first word could only be 'the', but the second word could be filled with a 1-syllable noun like 'tree', a 2-syllable noun like
'chaos' (10), or a 3-syllable noun like 'audio' (101),
and so on. Each choice affects the possible pronunciations of the next word as well as the number of remaining words in order to reach 10 syllables. The pronunciation dictionary ensures the last syllable of the last word on each line matches its partner.
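As an illustration, this kind of meter and rhyme checking can be sketched with the `pronouncing` package, a thin wrapper around the CMU Pronouncing Dictionary (a hedged sketch: the paper's own implementation may handle multiple pronunciations and one-syllable words differently):

```python
import pronouncing

IAMBIC_PENTAMETER = "0101010101"

def stress_pattern(word: str) -> str | None:
    """Return the stress string of a word, e.g. 'beauty' -> '10'."""
    phones = pronouncing.phones_for_word(word.lower())
    if not phones:
        return None
    # Treat secondary stress (2) as stressed (1) for simplicity.
    return pronouncing.stresses(phones[0]).replace("2", "1")

def fits_iambic_pentameter(line_words: list[str]) -> bool:
    """Check whether a candidate line scans as iambic pentameter."""
    pattern = ""
    for word in line_words:
        stresses = stress_pattern(word)
        if stresses is None:
            return False
        # One-syllable words may act as stressed or unstressed,
        # so let them take whatever the target pattern expects.
        if len(stresses) == 1 and len(pattern) < len(IAMBIC_PENTAMETER):
            stresses = IAMBIC_PENTAMETER[len(pattern)]
        pattern += stresses
    return pattern == IAMBIC_PENTAMETER

def end_rhyme(word_a: str, word_b: str) -> bool:
    """Check whether two line-final words rhyme according to the CMU dictionary."""
    return word_b.lower() in pronouncing.rhymes(word_a.lower())
```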
## 3.3 Language Model
We use a language model to generate individual sonnet lines, subject to the formal constraints outlined above. In particular, we first fine-tune GPT-2 (Radford et al., 2019) on a large corpus of over 15000 poems² and a smaller corpus of sonnets³.
We then use a constrained beam-search to generate each line, where only legal tokens (under the aforementioned constraints) can be generated at each step; this generation approach resembles previous constrained decoding techniques used in sonnet generation (Ghazvininejad et al., 2016), although our approach differs in the choice of model and direct enforcement of constraints. For a comparison of generation quality using a GPT-2 model that has not been fine-tuned, see Section 4.1.

²https://www.kaggle.com/datasets/johnhallman/completepoetryfoundationorg-dataset
³https://www.kaggle.com/datasets/michelleqiu/sonnets
## 3.4 Thematic Word Choice
To ensure the content of the poem fits the theme specified by the user, we provide an excerpt of a theme-appropriate poem as additional context to GPT-2 during generation. This additional poem is selected by finding a list of synonyms to the theme word using the WordNet synonym database (Miller, 1998) and then choosing lines from a poem corpus that contain at least one synonym. We also remove words from the vocabulary if they have less than 0.5 cosine similarity with the theme word, based on the corresponding FastText word embeddings (Bojanowski et al., 2017). This avoids having words like "algebra" in poems with themes like "forest."
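A minimal sketch of this selection and filtering step might look like the following (illustrative only; the vector file path, the per-word 0.5 threshold, and the corpus lookup are our assumptions based on the description above, and WordNet must first be fetched via nltk.download('wordnet')):

```python
from gensim.models import KeyedVectors
from nltk.corpus import wordnet as wn

# Pretrained FastText vectors in word2vec text format (hypothetical local path).
vectors = KeyedVectors.load_word2vec_format("cc.en.300.vec")

def theme_synonyms(theme: str) -> set[str]:
    """Collect WordNet lemma names for every synset of the theme word."""
    return {lemma.replace("_", " ").lower()
            for synset in wn.synsets(theme)
            for lemma in synset.lemma_names()}

def select_seed_lines(poem_lines: list[str], theme: str) -> list[str]:
    """Pick corpus lines that contain at least one synonym of the theme."""
    synonyms = theme_synonyms(theme)
    return [line for line in poem_lines
            if any(word.lower() in synonyms for word in line.split())]

def filter_vocabulary(vocab: list[str], theme: str, min_sim: float = 0.5) -> list[str]:
    """Drop words whose FastText cosine similarity to the theme is below min_sim."""
    return [word for word in vocab
            if word in vectors and theme in vectors
            and vectors.similarity(theme, word) >= min_sim]
```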
## 3.5 Generation Procedure
Having introduced our method's components, we now describe the generation procedure. A user inputs a theme word, a beam search parameter, b, and the number of templates sampled per line, k. A
seed is chosen with the above method. Then for each line, we sample k random templates. For each template, we generate the line using a modified beam search. Specifically, the beam search maintains b different hypotheses per line at all times.
For each hypothesis, we first mask out any tokens that violate our hard POS, meter, or rhyme constraints and select the b best next-tokens for each of the k templates. These b² new candidates are reranked according to our custom scoring function, and the top k × b proceed to the next stage. The constraint-filtering at each stage guarantees that the generated line will match the input template, while the beam search allows more flexible word choice than greedy word-filling for each POS. If none of the k×b generated lines score better than a specific threshold, then a new template is chosen and the line is generated again. Otherwise, line generation continues until the poem is completed.
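The loop over templates and hypotheses can be summarized in the following sketch (pseudocode-style Python; `model.top_tokens`, `legal_next_tokens`, and `score_fn` are hypothetical interfaces standing in for the GPT-2 wrapper, the POS/meter/rhyme mask, and the custom scoring function, and the template re-sampling on failure is omitted):

```python
import random

def generate_line(model, templates, k, b, score_fn, legal_next_tokens):
    """Template-constrained beam search for a single sonnet line (a sketch)."""
    candidates = []
    for template in random.sample(templates, k):
        beams = [([], 0.0)]                        # (tokens so far, log-probability)
        for slot in template:                      # one POS/meter slot at a time
            expanded = []
            for tokens, logp in beams:
                allowed = legal_next_tokens(tokens, slot)      # hard constraints
                for tok, tok_logp in model.top_tokens(tokens, allowed, n=b):
                    expanded.append((tokens + [tok], logp + tok_logp))
            # Rerank the expansions with the custom scoring function, keep the b best.
            beams = sorted(expanded, key=lambda h: score_fn(h[0]), reverse=True)[:b]
        candidates.extend(beams)
    best_tokens, _ = max(candidates, key=lambda h: score_fn(h[0]))
    return best_tokens
```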
## 3.6 Poetic Devices
To make the poems more poetic, we adjust our scoring function to weight lines with alliteration, penalties for repetition, and/or internal rhyme. Alliteration occurs when a line contains words starting with the same letter, repetition occurs when a word is present several times throughout a poem, and internal rhyme occurs when two words rhyme within the same line. To weight alliteration, when the first token of a new word is being generated, a list $\vec{A} = [a_1, a_2, \ldots, a_n]$ is generated where $a_i$ is the number of occurrences of the first letter of the $i$-th token in the current line. To weight and discourage repetition, a list $\vec{T} = [t_1, t_2, \ldots, t_n]$ is generated where $t_i$ is the number of occurrences of the $i$-th token in the poem, negated. To weight internal rhyme, a list $\vec{R} = [r_1, r_2, \ldots, r_n]$ is generated where $r_i = 1$ if the $i$-th token is part of a word that rhymes with any of the words in the current line generated so far, and $r_i = 0$ otherwise.

The final token distribution is then proportional to $\tilde{P} + \alpha_A \vec{A} + \alpha_T \vec{T} + \alpha_R \vec{R}$, where $\tilde{P}$ is the language model's next-token distribution, and $\alpha_A$, $\alpha_T$, and $\alpha_R$ are user-specified non-negative parameters, which represent the degree to which alliteration, repetition, and internal rhyme should be favored during generation.
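In code, this re-weighting amounts to adding scaled bonus vectors to the model's next-token distribution and renormalizing, roughly as below (a simplified sketch; the bonus vectors are assumed to be precomputed and vocabulary-sized):

```python
import torch

def reweight_next_token_distribution(probs: torch.Tensor,
                                     alliteration: torch.Tensor,
                                     repetition: torch.Tensor,
                                     internal_rhyme: torch.Tensor,
                                     alpha_a: float,
                                     alpha_t: float,
                                     alpha_r: float) -> torch.Tensor:
    """Bias the LM's next-token probabilities toward poetic devices.

    probs: the language model's next-token distribution (one entry per vocab token).
    alliteration / repetition / internal_rhyme: vocab-sized bonus vectors;
    the repetition counts are already negated so they act as a penalty.
    """
    scores = (probs
              + alpha_a * alliteration
              + alpha_t * repetition
              + alpha_r * internal_rhyme)
    scores = scores.clamp(min=0.0)        # keep the weights non-negative
    return scores / scores.sum()          # renormalize to a valid distribution
```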
## 3.7 Postprocessing
After a poem is completed and all 14 lines score above a fixed threshold, a small number of adjustments are made. These include fixing common mistakes made by GPT-2 like not capitalizing the word 'I' and not capitalizing following punctuation.
## 4 Experiments
We used human input to test our sonnets against both model-generated and human-written sonnets.
**Expert Evaluation**

| Category | Mean | p-value |
|---|---|---|
| PoeTryMe | | |
| Grammar | 4.50* | 1.71×10⁻⁴ |
| Emotion | 4.30* | 3.13×10⁻³ |
| Poetic | 4.30* | 3.13×10⁻³ |
| Human | 4.10* | 5.77×10⁻³ |
| Theme | 2.60 | 0.211286 |
| Benhardt et al. | | |
| Grammar | 3.83* | 0.03 |
| Emotion | 3.67* | 0.05 |
| Poetic | 3.75* | 0.04 |
| Human | 3.75* | 0.02 |
| Theme | 2.42 | 0.06 |
| Human-written poems | | |
| Grammar | 1.36 | 1.00×10⁻⁶ |
| Emotion | 1.4 | 5.00×10⁻⁶ |
| Poetic | 1.64 | 5.40×10⁻⁵ |
| Human | 1.36 | 1.00×10⁻⁶ |
| Theme | 1.57 | 7.70×10⁻⁵ |
**Amazon MTurk Evaluation**

| Category | Mean | p-value |
|---|---|---|
| PoeTryMe | | |
| Grammar | 3.66* | 2.00×10⁻⁶ |
| Emotion | 3.54* | 1.16×10⁻⁴ |
| Poetic | 3.55* | 3.70×10⁻⁵ |
| Human | 3.59* | 1.60×10⁻⁵ |
| Theme | 2.86 | 0.19 |
| Benhardt et al. | | |
| Grammar | 3.34* | 6.57×10⁻³ |
| Emotion | 3.16* | 0.12 |
| Poetic | 3.11* | 0.19 |
| Human | 3.06* | 0.33 |
| Theme | 2.77 | 0.06 |
| Human-written poems | | |
| Grammar | 3.13* | 0.14 |
| Emotion | 2.86 | 0.14 |
| Poetic | 2.91 | 0.24 |
| Human | 2.92 | 0.27 |
| Theme | 2.67 | 0.02 |
To test adherence to a theme throughout a sonnet, we desired baselines that generate whole sonnets with user-provided themes. This limited our competitors, as some generate discrete quatrains or generate without input themes (e.g., Deepspeare),
leaving only Benhardt et al. (2018) and PoeTryMe
(Oliveira, 2012); see Section A.2.
Furthermore, an evaluation of poetry quality is incomplete without human-written sonnets, selected from sonnets.org. Though these poems do not have an explicit theme, we selected poems that followed our five themes.
To optimally test our model, we conducted an internal analysis and selected k values sampled from 3, 5, or 7, b values sampled from 3, 5, or 7, and repetition penalty values sampled from 1.4, 1.6, or 1.8 that we concluded produced the highest quality sonnets. To evaluate adherence to theme, we generated poems with themes "death," "darkness,"
"forest," "love," and "wisdom."
In each test, respondents compared six randomly selected pairs of sonnets, with each of our sonnets displayed with a competing model/human-written sonnet generated with the same theme word. Respondents indicated which of the two sonnets performed better in categories of theme, poeticness, grammar, emotion, and likelihood of being humanwritten. Detailed instructions are in A.3.
## 4.1 Expert Evaluation
For an expert evaluation, we recruited six faculty members and students from an academic English department. Figures 3 and 5 show that we strongly outperform PoeTryMe in all categories but theme with high statistical significance (p<0.006), and we outperform Benhardt et al. in all poetic categories but theme and emotion with statistical significance
(p<0.05). Notably, while we outperform other computer-generated poems, respondents could still distinguish between our poems and human-written sonnets quite easily. See more in A.4.
## 4.2 Amazon Mturk Evaluation
Along with expert evaluation, we used Amazon MTurk services to assess poems on a larger scale. Figures 4 and 6 show our superior performance against competitors in several categories. As expected of most computer-generated work, our poems failed to outperform human-written poems.
However, we can only strongly conclude that the human-written poems are better in one category, theme. Our poems even outperformed humanwritten poems in grammar (albeit with low statistical significance), showing that our strictly constrained beam search generates high quality grammar. See more in A.5.
## 4.3 Ablative Evaluation
We also conducted ablative studies showing the efficacy of two key elements of our method: line templates and the fine-tuned GPT-2 language model.
We generated two sets of ablation poems: one with the fine-tuned GPT-2 and no templating, and one using the untrained GPT-2 model and templating.
We then used Amazon MTurk services to test each set against poems generated with both factors under the same criteria as previous experiments. From Figure 11, it is the combination of the fine-tuned model and templating that ensures higher quality sonnets than if only one factor is implemented. Our poems with both factors outperform both sets of ablative poems with varying statistical significance.
Specifically, providing templates is clearly the critical piece to generate poems of a high caliber. See more in A.6.
## 5 Conclusion
We propose a novel method for generating highquality poems that uses POS templating to determine a logical syntactical structure and rigorously
maintains constraints necessary for any sonnet. Our method is highly versatile, with poetic factors like alliteration, internal rhyme, repetition, and theme adjustable to ensure creative output. After extensive surveys conducted with expert evaluators and MTurk participants, our model's success over similar competitors is evident, though our model's poems, like those of most computer poetry generators, remain distinguishable from human written poems.
While we were unable to compare our model's performance to that of ChatGPT, our finetuned GPT-2 requires far less computing power than subsequent GPT models. Additionally, while we commenced this project's evaluation prior to the release of ChatGPT, after a preliminary qualitative evaluation, ChatGPT seems to produce very generic poetry (see A.7). Thus, for this particular application, our model may be a viable method that is more cost-effective and produces relatively high-quality sonnets.
## Limitations
Though our method produces full sonnets that are more impressive than all previous approaches, it is still not at the level of human-generated poetry.
It is not clear how to achieve this level, whether it would be using massive large language models, or through our general approach, which is to bend those models around an interpretable framework that knows the rules that sonnets obey. Certainly our approach requires a lot less data - even if one used all the sonnets that have ever been written to train a language model, it is unclear that the language model would learn the very specific rules required of sonnets. However, there may be other ways to obtain these constraints that have not yet been developed.
## Ethics Statement
As with all neural generation, there are concerns about misinformation and generating toxic text.
These concerns apply to some degree to poetry generation, although our rigidly constrained approach and limited vocabulary should mitigate this.
## References
John Benhardt, Peter Hase, Liuyi Zhu, and Cynthia Rudin. 2018. Shall I compare thee to a machinewritten sonnet? An approach to algorithmic sonnet generation.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. *Transactions of the association for computational linguistics*, 5:135–146.
Carnegie Mellon University CMU. 2019.
The CMU pronouncing dictionary.
http://www.speech.cs.cmu.edu/cgi-bin/cmudict, Internet.
Pablo Gervás. 2000. Wasp: Evaluation of different strategies for the automatic generation of spanish verse. In Proceedings of the AISB-00 Symposium on Creative & Cultural Aspects of AI, pages 93–100.
Marjan Ghazvininejad, Xing Shi, Yejin Choi, and Kevin Knight. 2016. Generating topical poetry. In *Proceedings of the 2016 Conference on Empirical Methods* in Natural Language Processing, pages 1183–1191.
Jey Han Lau, Trevor Cohn, Timothy Baldwin, Julian Brooke, and Adam Hammond. 2018. Deep-speare:
A joint neural model of poetic language, meter and rhyme. In *Proceedings of the 56th Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 1948–1958, Melbourne, Australia. Association for Computational Linguistics.
Ruli Manurung, Graeme Ritchie, and Henry Thompson.
2000. Towards a computational model of poetry generation. https://era.ed.ac.uk/handle/1842/3460.
George A Miller. 1998. WordNet: An electronic lexical database. MIT press.
Hugo Gonçalo Oliveira. 2012. Poetryme: a versatile platform for poetry generation. *Computational Creativity, Concept Invention, and General Intelligence*,
1:21.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
https://github.com/openai/gpt-2.
Tim Van de Cruys. 2020. Automatic poetry generation from prosaic text. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2471–2480, Online. Association for Computational Linguistics.
Tony Veale. 2013. Less rhyme, more reason:
Knowledge-based poetry generation with feeling, insight and wit. In Proceedings of the Fourth International Conference on Computational Creativity, ICCC 2013, Sidney, Australia, June 12-14, 2013, pages 152–159. computationalcreativity.net.
Jianyou Wang, Xiaoxuan Zhang, Yuren Zhou, Christopher Suh, and Cynthia Rudin. 2021. There once was a really bad poet, it was automated but you didn't know it.
## A Appendix

## A.1 Templating Mechanism
Figure 8 presents more examples of our templating mechanism. We combine an adapted version of the Penn Treebank Project's part of speech tags along with articles, conjunctions, prepositions, and other filler words to construct these templates. Additionally, we provide the stress pattern of the syllables to ensure that the constraint of iambic pentameter is met. However, outside of the pre-determined filler words, POS do not have to directly adhere to the given stress pattern in splitting up words. For instance, in the first template, the provided syllable stress indicates that the JJ tag (adjective) should have two syllables, while the final VB tag (verb)
should have only one syllable. However, the generated line ends with a monosyllabic adjective and a bisyllabic verb. As long as the stressing of the syllables aligns properly, each word can vary in its number of syllables. This is also visible in the fourth template example in Figure 8.
## A.2 Elaboration On Experimental Competitors
Benhardt et al. (2018), referred to as Benhardt et al., uses an RNN to preselect rhyming words and restrict different parts of speech to fit within the sonnet format. Oliveira (2012), referred to as CoPoetryMe, is a versatile platform using semantic and grammar templates to alter the type of poem, input words, and "surprise" factor generated.
## A.3 Experimental Procedure
For each pair of sonnets, respondents were asked to indicate whether Sonnet A or Sonnet B performed better based on factors such as adherence to the inputted theme, poeticness, grammatical correctness, ability to convey emotion, and likelihood of being written by a human. Available answer choices and their corresponding numeric scores from 1 to 5 were "Definitely A" (5), "Probably A" (4), "The same" (3), "Probably B" (2), and "Definitely B" (1).
Both our sonnet and the competing model-humansonnet had equal probability of being either sonnet A or sonnet B in each pair. To analyze this data, user inputs were translated into numeric scoring values corresponding to our model's sonnet being Sonnet A (i.e. if our sonnet is presented as B to the user, a response of "Definitely B" corresponds to a score of 5, "Probably B" corresponds to 4,
"Probably A" corresponds to 2, and "Definitely A"
corresponds to 1). Additionally, respondents were asked to answer sanity check questions to filter out respondents who answer illogically or who do not have a sufficient grasp of English grammar. This setup remained the same across all experiments, and an additional space was allocated for expert evaluators to leave qualitative comments on sonnet quality. Sample sonnet evaluation questions are visible in Figure 9.
After calculating the mean and standard deviation for scores across sonnets, we can immediately see whether our model performed better (an average score of > 3) or worse (an average score of
< 3) than the competitor in each respective category. We then performed a series of t-tests to establish these results' statistical significance. For factors that indicated our model performed better, we performed a right-tailed t-test (with the nullhypothesis as our model performed worse than the baseline), and we performed a left-tailed t-test for the remaining factors (with the null-hypothesis as our model performed better than the baseline).
## A.4 Expert Evaluation Analysis
In the expert evaluation, we emailed faculty at an American academic English department to recruit six faculty members and students to take our survey without payment. While we showed strong performance against the other computer-generated poems, we are consistently outperformed by humanwritten poems in all categories. Weaker performance on theme in experimental results may be explained by competitors' more frequent inclusion of the user-inputted theme word. For instance, in the expert evaluation, between two poems generated with the theme word "forest" (see Figure 10),
one survey respondent states, "Sonnet B repeats forest too much for my taste," subsequently giving our model a 5 in each of poeticness, grammar, emotion, and humanness, yet a 2 in theme.
## A.5 Amazon Mturk Analysis
In our evaluation using Amazon MTurk Services, we requested survey respondents from primarily English-speaking countries and with an approval rate of ≥ 95%. Crowdworkers were paid through the Amazon MTurk platform for this survey that on average took less than 30 minutes to complete. The questions and formatting remained the same as the expert evaluation, except no space was provided for qualitative feedback.
Based on Figure 4 there is enough statistical significance to conclude that our sonnets outperform PoeTryMe in poetic, grammar, emotion, and human categories (p<0.001). Against Benhardt et al., there is enough statistical significance to conclude that our sonnets perform better in grammar (p<0.001), and perform slightly better with weak statistical significance in emotion (p<0.15).
Against human-written sonnets, the p-values for poetic, emotion, and even human categories are too large to strongly reject the null hypothesis that our model performed better than the baseline. Additionally, while the p-value indicates that this value is not statistically significant, it is interesting to note that our poems on average scored better in the grammar category.
## A.6 Ablation Analysis
In our ablation analysis, we replicate the Amazon MTurk analysis yet replace the competitor/humanwritten sonnets with poems generated with either the fine-tuned GPT-2 model without templating or the GPT-2 model without fine-tuning and with templating. This lets us test the individual efficacy of each factor (templating and fine-tuning GPT-2) against our method implementing both. Against poems generated with the fine-tuned GPT-2 and no templating, our sonnets performed better across all categories, and we can strongly reject the null hypothesis that our model performed worse than the baseline (p<0.0001). Against the poems generated with the GPT-2 model without fine-tuning and with templates, we can conclude with high statistical significance (p<0.01) that we performed better in emotion, and conclude with weak statistical significance (p<0.10) that we performed better in grammar and theme. These results indicate that our method is successful due to its usage of both the fine-tuned GPT-2 model and templating.
## A.7 Chatgpt Qualitative Analysis
While we did not have time to extensively evaluate the quality of our sonnets against those of ChatGPT, after generating several sonnets to test ChatGPT's sonnet quality, it seems as though this language model generates relatively generic, non-cohesive sonnets even with different parameters. For instance, in Figure 7, both of the sonnets are unable to cohesively connect these three topics along a reasonable storyline. Additionally, Sonnet A in particular seems to dedicate a single stanza to each of the three topics passed in, hardly attempting to connect them. Of course, with more intensive prompt engineering, it is possible to generate a sonnet more tailored to one's preference. However, even this short analysis demonstrates there are clearly still strides to be made in the field of automatic poetry generation even with the advent of ChatGPT.
Sonnet A
In realms of words, where novels find their reign, My friend, a skeptic, shuns the written tales.
No pages turned, no fictional campaigns, Averse to narratives that time unveils.
Yet in his hand, a pear's sweet succulence, A juicy feast, a taste of nature's grace.
Each bite, a moment of pure indulgence, A palate's joy, a respite from life's race.
But beyond fruit's delight, his spirit soars, Within the bounds of a duelist's domain.
With sword in hand, he dances, parries, scores, A noble art where honor's stance remains.
Though novels may elude his curious gaze, Through pears and fencing, passions find their ways.
Sonnet B
In realms of love, where rom-coms dance their waltz, My friend, a rebel, strays from silver screens.
With flannel-clad allure, he stands, exalts, A soul untouched by scripted love's routines.
No tears shed for the tales of starry-eyed, His heart resists the charms of love's façade.
Yet stitches formed with yarn, his hands provide, Creating warmth, affection neatly clad.
Through threads of patience, vibrant patterns bloom, His nimble fingers weave a tapestry.
Each crochet stitch, a tale of friendship's loom, A testament to artistry set free.
Though rom-coms fall short of his embrace, In flannel and crochet, love finds its place.
| Template | Syllable Stress | Example Line |
|---------------------------------------|--------------------|--------------------------------------------------|
| Where all the NNS of PRPD$ JJ NNS VB. | 0 1 0 1 0 1 01 0 1 | "Where all the gods of their past lives dictate" |
| And it VBD ABNN to the NN | 0 1 0 10 1 0 101 | "And it seemed evil to the enterprise" |
| Between the VBG and the VBG NN | 01 0 10 1 0 10 1 | "Between the glistening and the dying muse" |
| A JJ NN from the JJ NN | 0 10 10 1 0 1 01 | "A little lightness from the earthy sky" |
| Upon PRPO, PRPD$ NN POS NN | 01 01 0 10 101 | "Upon you, your life's possibility" |
| Why VBC PRPS VBG such a JJ NN? | 0 1 0 10 1 0 101 0 | "Why do you squander such a precious thing?" |
| The NNS of ABNN, the NN on the NN | 0 1 0 1 0 10 1 0 1 | "The ghosts of death, the spirit on the earth" |
Figure 8: Template examples, their corresponding syllable stress in order to adhere to iambic pentameter, and a sample line generated using the template.
Sonnet A: Our Code

I was aghast to see the fireflies
Inflamed soothed toads, where there the dead boughs lay
And it seemed evil to the enterprise
The hag I had, the hag, the hog, the gray.
But I knew to my painless fireflies
And beauty was a kind and loving thing.
My life's light isle so longed on otherwise
So too my fireflies bloomed to my king.
Those eagles that with auburn hair flew oaks,
Beauty and beauty beamed within the air
Which made oasis overcomes to coax?
So too my hogs beheaded to my lair.
The windy night was in the mistletoe
And wept soiled toads in my dream's studio.

Sonnet B: PoeTryMe

forest some more and reforest a trip!
in deserts where heavenly woodlands clink
many, many, many clustered before
come: not in establishments of the floor
the fields of agony, the endless circumstance
findings to lies to interrupt your earth
with summation and set, triumph and agony
floors of horror forest before my eyes
those that study clustered plant are psychologists
taking over my ness a second forest
an' you've got to forest them
reforest on every forest, indeed, that
rainforests and grounds of forest coming to accord
floor of establishments and lilt of sing

Figure 10: Comparison of two sonnets generated with theme word "forest". Sonnet A was generated with our code, and Sonnet B was generated using PoeTryMe.
**Ablation Evaluation**

| Category | Mean (vs. fine-tuned GPT-2, no templates) | p-value | Mean (vs. untrained GPT-2, with templates) | p-value |
|---|---|---|---|---|
| Grammar | 3.51* | 5.10×10⁻⁵ | 3.21* | 0.06 |
| Emotion | 3.61* | 9.00×10⁻⁶ | 3.40* | 3.89×10⁻³ |
| Poetic | 3.61* | 4.00×10⁻⁶ | 3.09* | 0.29 |
| Human | 3.66* | 1.00×10⁻⁶ | 3.01* | 0.46 |
| Theme | 3.50* | 8.00×10⁻⁵ | 3.20* | 0.06 |
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations
✓ A2. Did you discuss any potential risks of your work?
Ethics
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract, 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
3, 4
✓ B1. Did you cite the creators of artifacts you used?
3.2, 3.3, References

B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Data used from publicly available sonnets/poems were assumed to be not subject to dispute.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
3.3
## C ✗ **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
No response.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
No response.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
No response.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
No response.
D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
4,4.1,4.2,4.3
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Appendix
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
A.5,A.6
✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
We do not believe having data on poetry evaluation raises any ethical issues.
✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
We do not believe having crowdworkers evaluate the same poems that were given to English professors raises any ethical issues.
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
A.6 |
diwan-etal-2023-use | When to Use Efficient Self Attention? Profiling Text, Speech and Image Transformer Variants | https://aclanthology.org/2023.acl-short.141 | We present the first unified study of the efficiency of self-attention-based Transformer variants spanning text, speech and vision. We identify input length thresholds (tipping points) at which efficient Transformer variants become more efficient than vanilla models, using a variety of efficiency metrics (latency, throughput, and memory). To conduct this analysis for speech, we introduce L-HuBERT, a novel local-attention variant of a self-supervised speech model. We observe that these thresholds are (a) much higher than typical dataset sequence lengths and (b) dependent on the metric and modality, showing that choosing the right model depends on modality, task type (long-form vs. typical context) and resource constraints (time vs. memory). By visualising the breakdown of the computational costs for transformer components, we also show that non-self-attention components exhibit significant computational costs. We release our profiling toolkit at \url{https://github.com/ajd12342/profiling-transformers} . | # When To Use Efficient Self Attention? Profiling Text, Speech And Image Transformer Variants
Anuj Diwan, Eunsol Choi, David Harwath Department of Computer Science The University of Texas at Austin
{anuj.diwan, eunsol, harwath}@utexas.edu
## Abstract
We present the first unified study of the efficiency of self-attention-based Transformer variants spanning text, speech and vision. We identify input length thresholds (*tipping points*) at which efficient Transformer variants become more efficient than vanilla models, using a variety of efficiency metrics (latency, throughput, and memory). To conduct this analysis for speech, we introduce L-HuBERT, a novel localattention variant of a self-supervised speech model. We observe that these thresholds are (a) much higher than typical dataset sequence lengths and (b) dependent on the metric and modality, showing that choosing the right model depends on modality, task type
(long-form vs. typical context) and resource constraints (time vs. memory). By visualising the breakdown of the computational costs for transformer components, we also show that non-self-attention components exhibit significant computational costs. We release our profiling toolkit at https://github.com/ajd12342/profiling-transformers.
## 1 Introduction And Related Work
Transformers (Vaswani et al., 2017) are widely adopted across NLP (Devlin et al., 2019; Brown et al., 2020), Speech Processing (Mohamed et al.,
2022) and Computer Vision (Dosovitskiy et al.,
2021). Studies have shown that scaling models up improves performance (Chowdhery et al.,
2022), making efficiency an important research topic. Many Transformer variants focus on improving the efficiency of self-attention, motivated by its asymptotic quadratic time/space complexity with respect to the input sequence length.1 While these Transformer variants are designed be asymptotically faster, in practice they may actually be slower, especially given modest input lengths that are typical of many tasks.
1We refer the readers to Tay et al. (2022) for a comprehensive overview of efficient Transformers.
Our paper presents two main analyses. First, we visualize the *layerwise* efficiency of such models to locate bottlenecks and attempt to answer the question *"is self-attention the true bottleneck?"* We find that in the non-asymptotic case, non-self-attention layers contribute significantly to the overall cost, especially for speech architectures due to the input waveform tokenizer in models like HuBERT (Hsu et al., 2021). Second, *when* should we use selfattention-based efficient Transformers? Comparing efficient variants with vanilla models at different input lengths, we find that this *tipping point* where efficient variants outperform vanilla architectures is much higher than typical input lengths of existing benchmarks across all modalities, calling into question the efficacy of using such efficient Transformers and requiring new benchmarks. We introduce a local-attention variant of a speech Transformer, HuBERT, to conduct this analysis. Together, our analyses suggest that current approaches that focus on improving self-attention might not be the most effective for improving efficiency.
## 2 Efficiency Metrics
Model efficiency is an umbrella term for a suite of efficiency metrics, which do not always correlate with, and sometimes contradict, each other (Dehghani et al., 2022). Further, different metrics are relevant to different end use-cases. To cover most use-cases, we evaluate a set of four metrics; two for computational time and two for memory usage:
Throughput: Number of examples processed per sec, given inputs of a given sequence length, using the maximum possible batch size for a given GPU.
Latency-Inference: Time (in ms) to run inference for 1 unbatched input of a given sequence length.
Max-Memory: The allocated GPU memory (MiB)
for processing 1 input of a given sequence length.
Parameter Count: Number of model parameters.
We profile models in all modalities in *training* mode and *inference* mode. For training, while Transformer architectures often use prediction heads with a larger output space (e.g., for text generation), we choose a lightweight binary classification head for profiling.
Layerwise Efficiency Metrics We also profile some metrics and models in a **layerwise** fashion to locate their efficiency bottlenecks. Our goal is twofold: a) provide an empirical approach to efficient model design, as an alternative to theoretical analyses or mental models (e.g. self-attention is O(n²)) and b) empirically answer the question "to what degree is self-attention the bottleneck?"
We identify important layer types (SelfAttention, Feedforward, etc.) and profile the Latency-Inference and Parameter Count metrics per-layer-type to obtain a fine-grained understanding of which layer types and indices (layer 0 vs 11)
contribute the most to model efficiency costs. We use param counts as a proxy for memory (profiling real layerwise memory usage is non-trivial due to Pytorch memory allocation intricacies). We profile the layers depicted in Figure 1; more details in Appendix E.
## 3 Local-Attention Speech Model
Efficient transformers (Xiong et al., 2021; Ma et al., 2021) have not received as much attention in Speech as they have in NLP and CV, perhaps due to two reasons. First, there is a relative lack of longcontext speech benchmarks as compared to those in NLP (LRA (Tay et al., 2021) and QuALITY (Pang et al., 2022)). Second, when performing speech
| Model | WER ↓ | WER (w/ FT) ↓ |
|---|---|---|
| HuBERT Base | 7.09 | 3.4 |
| L-HuBERT (window 32) | 21.06 | 8.52 |
| L-HuBERT (window 100) | 14.48 | 7.39 |
Table 1: WERs on the SUPERB ASR task.
tasks like automatic speech recognition (ASR), it is typical to segment a long speech signal into small individual utterances and perform ASR independently on each. For example, most Librispeech examples are less than 5 seconds. Many popular speech models like HuBERT (Hsu et al., 2021) tokenize the waveform at 50 tokens per second, implying that a typical utterance has only several hundred tokens; far below the number of tokens in long-context NLP tasks. Nevertheless, textless speech models (Lakhotia et al., 2021) are becoming more feasible, motivating the modelling of long speech utterances.
Local HuBERT Model To investigate the efficiency of the self-attention layer in speech models, we introduce the *Local HuBERT* model which replaces HuBERT's self-attention with the Longformer (Beltagy et al., 2020) sliding-window selfattention. In this attention mechanism, every token attends to tokens within a local window context, rather than the full token sequence. Our model is similar to the temporally windowed-attention Transformer acoustic model proposed by Alastruey et al. (2021) for speech translation; our approach differs by using the self-supervised HuBERT model as our basis, and we evaluate on ASR. Choosing the appropriate window size for the local attention context is key; we explore 32 and 100 token contexts, corresponding to 640 ms and 2 s, inspired by phone recognition models that typically incorporate similar context sizes (Peddinti et al., 2015; feng Yeh et al., 2019).
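To make the attention pattern concrete, a sliding-window mask of this kind can be sketched as below (illustrative only: Longformer's real implementation uses an efficient banded computation rather than a dense mask, and the exact window semantics may differ slightly):

```python
import torch

def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    """Boolean mask allowing position i to attend to j iff |i - j| <= window // 2.

    At HuBERT's 50 Hz frame rate, window=32 covers roughly 640 ms of audio
    and window=100 roughly 2 s.
    """
    idx = torch.arange(seq_len)
    return (idx[None, :] - idx[:, None]).abs() <= window // 2

# Example: mask dense attention scores before the softmax.
scores = torch.randn(1, 12, 250, 250)            # (batch, heads, frames, frames)
mask = sliding_window_mask(250, window=100)
scores = scores.masked_fill(~mask, float("-inf"))
attention = scores.softmax(dim=-1)
```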
ASR Results We initialize the L-HuBERT model with pretrained HuBERT Base weights (pretrained with full self-attention), and then replace selfattention with sliding-window self-attention; due to limited compute, we did not pretrain L-HuBERT
from scratch using sliding-window attention. We then evaluate L-HuBERT on Librispeech (Panayotov et al., 2015) ASR via the SUPERB (Yang et al.,
2021) benchmark under two settings; a) **Freeze**:
freezing the model and only training projection weights and b) **Finetune**: fully finetune the model.
We use the default S3PRL hyperparams (https://github.com/s3prl/s3prl), but we
| Model | Emb | Pos | SA | Interm | Output | Others |
|---|---|---|---|---|---|---|
| BERT | 23.8M | - | 29M | 28.3M | 28.3M | 0.6M |
| HuBERT | 4.2M | 5.1M | 29M | 28.3M | 28.3M | 0.2M |
| ViT | 0.6M | - | 29M | 28.3M | 28.3M | 0.6M |
Table 2: Layer-wise parameter counts. Emb: Input Embedding, Pos: Positional Emb. SA: Self-Attention, Interm: Intermediate.
train for 200k steps for Freeze and 104k steps for Finetune. Both models converge by 104k steps; we train Freeze for longer to eke out as much performance as possible, while we stop training Finetune due to limited compute.
We report Word Error Rate (WER) on Librispeech test-clean in Table 1; lower is better. In the frozen setting (middle column), we see a large WER increase over HuBERT; we hypothesize that this is due to the attention layer mismatch since we initialize L-HuBERT with HuBERT weights that were pretrained with full self attention, rather than pretraining L-HuBERT from scratch. However, in the finetuning setting, the gap between HuBERT Base and L-HuBERT narrows considerably and using a larger window size achieves better performance. As our L-HuBERT model is a reasonable architecture capable of moderate ASR performance, we can continue to study its computational efficiency (we profile the window-100 variant).
## 4 Methods And Implementation
We analyze the Base versions of the BERT (Devlin et al., 2019), Longformer (Beltagy et al., 2020)
and Nyströmformer (Xiong et al., 2021) models for text; the HuBERT (Hsu et al., 2021) and LHuBERT (Section 3) models for speech; and Vision Transformer (Dosovitskiy et al., 2021) and Swin Transformer (Liu et al., 2021) models for vision; BERT, HuBERT and ViT are standard Transformer encoder architectures. Longformer, L-HuBERT
and Swin use fixed-pattern self-attention while Nyströmformer uses approximate self-attention.
## 4.1 Sequence Length Ranges
We profile our models on a wide range of input sequence lengths to cover both avg. sequence lengths of commonly used contemporary datasets (Table 3) and typical sequence lengths of long-context tasks. Details about how we compute sequence lengths in Table 3 can be found in Appendix B. Most image datasets use images resized to 224 or 512 pixels.
Below, range(*a, b, c*) means a range from a to b in steps of c. Since there is no difference between synthetic and real inputs from a computational complexity standpoint, we use synthetic inputs to more easily control for their sequence lengths.
Text Modality The input is *'This is a sentence.'*
repeated n times, n ∈ range(10, 560, 10) i.e.
range(62, 3362, 60) tokens for all tokenizers.
Speech Modality The inputs have durations in range(1, 50, 0.5) sec i.e. range(50, 2500, 25) tokens for all featurizers (CNNs with 20 ms framerate). Our sampling strategy is in Appendix A.
Image Modality We use square inputs of dimension in range(32, 1024, 32) pixels by rescaling a fixed image. The \# tokens depend on featurizer patch size, which is different for different models.
## 4.2 Implementational Details
We profile time-based metrics (latency/throughput)
using PyTorch CUDA Events by executing 20 iterations sequentially. The first few iterations serve as GPU warm-start; thus, we report the average of the last 10. We record Max-Memory with
To profile throughput, we *approximate* the max batch size that fits on a single GPU using a linear estimator; more details in Appendix C. Finally, we profile the layerwise Latency-Inference metric using torchprof (Wong, 2020). We attach profiling hooks to modules of interest (e.g. Self-Attention, Embedding), giving us execution times of their forward() functions (other modules/functions are not profiled). We use the Huggingface (Wolf et al.,
2020) implementations of text and image models and fairseq (Ott et al., 2019) implementations for speech models; more details in Appendix D.
## 5 Profiling Results

## 5.1 Layerwise Profiling Results
Figure 2 shows the layerwise Latency-Inference for all 3 vanilla architectures in each modality. Figures for efficient models are in Appendix F. Color darkness represents the layer index (layer 0 is darkest).
Table 2 shows the layerwise param count.
Asymptotically, self-attention dominates the computation. However, since the average seq length for most text and speech tasks is less than 1000 tokens and most image datasets are used at
| Dataset | SST | MNLI | SQ | ON | CNN | HPQA | TQA | TEDL | LJS | VoxC | Libri | S-SQuAD | Spotify |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Modality | Text | Text | Text | Text | Text | Text | Text | Speech | Speech | Speech | Speech | Speech | Speech |
| # of tokens | 23 | 36 | 177 | 506 | 863 | 1,316 | 6,589 | 301 | 328 | 390 | 615 | 3080 | 101400 |
Table 3: Average token sequence lengths. Left to right: Stanford Sentiment Treebank, MultiNLI, SQuAD2.0, OntoNotes, CNN-DailyMail, HotpotQA, TriviaQA, TEDLIUM, LJSpeech, VoxCeleb Speaker Recognition, Librispeech, Spoken SQuAD, Spotify Podcasts.
a max dimension of 512, at these points, non-self-attention components take up 35%, 58.8% and 43.75% of the latency for NLP, speech and images. Additionally, parameter counts of SA are also comparable to Interm/Output layers. This shows that it is also important to direct efficiency efforts for other model components.
While the latency associated with embedding layers is minimal for BERT, they are sizable for HuBERT. HuBERT uses a CNN feature extractor with different strides and kernel sizes and consumes more time in the earlier CNN layers as opposed to later ones, as is visible in Figure 2, which shows darker shades i.e. earlier layers dominating the computation. Optimal efficiency strategies can thus differ across modalities, e.g. Wu et al. (2022) slims down this CNN feature extractor embedding layer.
On the other hand, embedding layers take up a lot of *parameters* in BERT; thus, it may be helpful to shrink the BERT embedding layer for memory purposes (as opposed to *latency* for HuBERT). Finally, analyzing Transformer variants (Appendix F), we see that self-attention in Longformer, Swin and LHuBERT encouragingly scales latency linearly, but with large overhead for smaller inputs.
## 5.2 Overall Profiling Results
Our profiling results are in Figures 3 and 4. Inference Throughput is in the Appendix at Figure 6, exhibiting similar trends as training Throughput.
Tipping Point Analysis We see that most variants are slower and more memory hungry than vanilla models for input lengths of typical-context tasks. We define the *tipping point* for each modality: the input length at which the variant becomes more efficient than the vanilla model. For text and speech, it is 1750 − 2000 tokens for inference latency and max-memory, greater than typical input lengths (Table 3). However, while the tipping point for training max-memory is ≈ 1500 tokens for text
(still a large number), it is ≈ 0 − 250 for speech, an encouraging result. For images, it is 500 − 700 pixels for all metrics apart from throughput. This is less reasonable for 224 pixel datasets but good for high resolution image datasets (512/1024). All variants are either worse or comparable than vanilla models across modalities for throughput.
We hypothesize that some efficient models suffer from additional overheads; while vanilla attention benefits from highly optimized matrix multiplication, windowed attention requires complex reshaping and preprocessing.
Choosing the Right Model Depends on Resource Constraints Our results show that the choice of the right model depends on resource constraints. Suppose that one is training models under a time constraint; then, throughput is the bottleneck and efficient models would not be a good fit. On the other hand, efficient models are useful for long-context, memory-constrained inference.
Local Attention and Excessive Padding The Longformer pads input lengths to be a multiple of 512 and Swin requires input dimension to be a multiple of 224. This slows shorter inputs down and results in extremely low performance (measured by all 3 metrics) as compared to vanilla models.
Comparing Parameter Counts The Longformer uses more parameters compared to vanilla BERT (148M vs. 109M) because it uses two sets of Q,K,V projection matrices for its global and local attention operations; sharing these may decrease its memory usage. For other modalities, efficient models do not incur more parameters.
## 6 Conclusion
We present an empirical efficiency analysis of vanilla Transformers and their self-attention-based efficient variants across modalities, metrics and input context sizes. We find substantial differences across modalities and metrics when analyzing the tipping point for efficient variants. Finally, the layerwise analysis finds that self-attention is not the only bottleneck. We recommend that all efficient model papers should report such cross-modal, layerwise profiling results on multiple efficiency metrics covering a variety of use-cases to provide a full picture of the benefits of the model.
## Limitations
We focus primarily on comparing model efficiencies using a variety of efficiency metrics and do not consider model performance; one can perform a more elaborate analysis of performance-efficiency tradeoffs, which we did not do here.
We only profile a total of seven models across three modalities while there are more efficient variants and vanilla Transformers proposed in the literature. While we choose our models to be as representative of each modality and efficiency technique as possible, we cannot extrapolate results to other model variants and other modalities. In particular, modalities like video and genomics and efficiency approaches like quantization would be interesting to profile, which we did not do.
## Acknowledgements
We thank the reviewers and the meta-reviewer of the ACL community for helpful feedback on the draft. This work was partially funded by a grant from UT Machine Learning Lab.
## References
Belen Alastruey, Gerard I. Gállego, and Marta R. Costa-jussà. 2021. Efficient Transformer for Direct Speech Translation.
Iz Beltagy, Matthew E. Peters, and Arman Cohan.
2020. Longformer: The Long-Document Transformer. *ArXiv preprint*, abs/2004.05150.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.
2020. Language Models are Few-Shot Learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob
Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. PaLM: Scaling Language Modeling with Pathways.
Ann Clifton, Sravana Reddy, Yongze Yu, Aasish Pappu, Rezvaneh Rezapour, Hamed Bonab, Maria Eskevich, Gareth Jones, Jussi Karlgren, Ben Carterette, and Rosie Jones. 2020. 100,000 Podcasts: A Spoken English Document Corpus. In *Proceedings of the 28th* International Conference on Computational Linguistics, pages 5903–5917, Barcelona, Spain (Online).
International Committee on Computational Linguistics.
Mostafa Dehghani, Yi Tay, Anurag Arnab, Lucas Beyer, and Ashish Vaswani. 2022. The Efficiency Misnomer. In *The Tenth International Conference on* Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
Ching-Feng Yeh, Jay Mahadeokar, Kaustubh Kalgaonkar, Yongqiang Wang, Duc Le, Mahaveer Jain, Kjell Schubert, Christian Fuegen, and Michael L. Seltzer. 2019. Transformer-transducer: End-to-end speech recognition with self-attention. *ArXiv*, abs/1910.12977.
Karl Moritz Hermann, Tomás Kociský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching Machines to Read and Comprehend. In *Advances in Neural Information* Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December
7-12, 2015, Montreal, Quebec, Canada, pages 1693–
1701.
François Hernandez, Vincent Nguyen, Sahar Ghannay, Natalia Tomashenko, and Yannick Estève. Ted-lium 3: Twice as much data and corpus repartition for experiments on speaker adaptation. In Speech and Computer, pages 198–208. Springer International Publishing.
Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. 2021. HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:3451–
3460.
Keith Ito and Linda Johnson. 2017. The LJ Speech Dataset. https://keithito.com/LJ-Speech-Dataset/.
Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In *Proceedings of the 55th Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611, Vancouver, Canada. Association for Computational Linguistics.
Kushal Lakhotia, Eugene Kharitonov, Wei-Ning Hsu, Yossi Adi, Adam Polyak, Benjamin Bolte, Tu-Anh Nguyen, Jade Copet, Alexei Baevski, Abdelrahman Mohamed, and Emmanuel Dupoux. 2021. On generative spoken language modeling from raw audio.
Transactions of the Association for Computational Linguistics, 9:1336–1354.
Chia-Hsuan Li, Szu-Lin Wu, Chi-Liang Liu, and Hungyi Lee. 2018. Spoken SQuAD: A Study of Mitigating the Impact of Speech Recognition Errors on Listening Comprehension. In *Interspeech 2018, 19th Annual Conference of the International Speech Communication Association, Hyderabad, India, 2-6 September 2018*, pages 3459–3463. ISCA.
Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. 2021.
Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. In *2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021,*
Montreal, QC, Canada, October 10-17, 2021, pages 9992–10002. IEEE.
Xuezhe Ma, Xiang Kong, Sinong Wang, Chunting Zhou, Jonathan May, Hao Ma, and Luke Zettlemoyer. 2021.
Luna: Linear Unified Nested Attention. In *NeurIPS*.
Abdelrahman Mohamed, Hung yi Lee, Lasse Borgholt, Jakob D. Havtorn, Joakim Edin, Christian Igel, Katrin Kirchhoff, Shang-Wen Li, Karen Livescu, Lars Maaloe, Tara N. Sainath, and Shinji Watanabe. 2022. Self-Supervised Speech Representation Learning: A
Review. *IEEE Journal of Selected Topics in Signal* Processing, 16(6):1179–1210.
Arsha Nagrani, Joon Son Chung, and Andrew Zisserman. 2017. VoxCeleb: A Large-Scale Speaker Identification Dataset. In *Interspeech 2017, 18th Annual* Conference of the International Speech Communication Association, Stockholm, Sweden, August 20-24, 2017, pages 2616–2620. ISCA.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)*,
pages 48–53, Minneapolis, Minnesota. Association for Computational Linguistics.
Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: An ASR
corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2015, South Brisbane, Queensland, Australia, April 19-24, 2015, pages 5206–5210. IEEE.
Richard Yuanzhe Pang, Alicia Parrish, Nitish Joshi, Nikita Nangia, Jason Phang, Angelica Chen, Vishakh Padmakumar, Johnny Ma, Jana Thompson, He He, and Samuel Bowman. 2022. Quality: Question answering with long input texts, yes! In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 5336–5358, Seattle, United States. Association for Computational Linguistics.
Vijayaditya Peddinti, Daniel Povey, and Sanjeev Khudanpur. 2015. A time delay neural network architecture for efficient modeling of long temporal contexts.
In *Proc. Interspeech 2015*, pages 3214–3218.
Sameer S. Pradhan and Nianwen Xue. 2009. OntoNotes:
The 90% solution. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume:
Tutorial Abstracts, pages 11–12, Boulder, Colorado.
Association for Computational Linguistics.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018.
Know what you don't know: Unanswerable questions for SQuAD. In *Proceedings of the 56th Annual* Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784–789, Melbourne, Australia. Association for Computational Linguistics.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank.
In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics.
Yi Tay, Mostafa Dehghani, Samira Abnar, Yikang Shen, Dara Bahri, Philip Pham, Jinfeng Rao, Liu Yang, Sebastian Ruder, and Donald Metzler. 2021. Long Range Arena : A Benchmark for Efficient Transformers. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
Yi Tay, Mostafa Dehghani, Dara Bahri, and Donald Metzler. 2022. Efficient Transformers: A Survey. In *ACM Comput. Surv.*, volume 55, New York, NY,
USA. Association for Computing Machinery.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All You Need. In *Advances in Neural Information Processing Systems 30: Annual Conference on Neural* Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008.
Adina Williams, Nikita Nangia, and Samuel Bowman.
2018. A broad-coverage challenge corpus for sentence understanding through inference. In *Proceedings of the 2018 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Alexander William Wong. 2020. torchprof. https://github.com/awwong1/torchprof.
Felix Wu, Kwangyoun Kim, Jing Pan, Kyu J. Han, Kilian Q. Weinberger, and Yoav Artzi. 2022.
Performance-Efficiency Trade-Offs in Unsupervised Pre-Training for Speech Recognition. In ICASSP
2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),
pages 7667–7671.
Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, and Vikas Singh. 2021. Nyströmformer: A Nyström-based Algorithm for Approximating Self-Attention. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI
2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 14138–14148. AAAI Press.
Shu-Wen Yang, Po-Han Chi, Yung-Sung Chuang, Cheng-I Jeff Lai, Kushal Lakhotia, Yist Y. Lin, Andy T. Liu, Jiatong Shi, Xuankai Chang, GuanTing Lin, Tzu-Hsien Huang, Wei-Cheng Tseng, Kotik Lee, Da-Rong Liu, Zili Huang, Shuyan Dong, Shang-Wen Li, Shinji Watanabe, Abdelrahman Mohamed, and Hung-yi Lee. 2021. SUPERB: Speech Processing Universal PERformance Benchmark. In Interspeech 2021, 22nd Annual Conference of the International Speech Communication Association, Brno, Czechia, 30 August - 3 September 2021, pages 1194–1198. ISCA.
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering.
In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 2369–2380, Brussels, Belgium. Association for Computational Linguistics.
Tyler Yep. 2020. torchinfo. https://github.com/TylerYep/torchinfo.
## A Sampling Speech Utterances For Profiling
To obtain speech inputs of length i seconds to i + 0.5 seconds for all i less than 12 seconds, we sample 5 speech utterances from the training set of the Librispeech dataset (Panayotov et al., 2015)
whose lengths fall within this range and compute aggregate metrics over these 5 utterances. Since the Librispeech dataset does not contain extremely long speech utterances, for i greater than 12 seconds we adopt a different approach to generate inputs. To generate such an input utterance of length between i and i + 0.5 seconds, we first sample 5 speech utterances from the Librispeech training set with lengths ranging from i/5 to (i + 0.5)/5 and concatenate them to obtain an utterance of length ranging from i to i + 0.5 as desired. We do this 5 times to get 5 different utterances and compute aggregate metrics over these 5 utterances.
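A rough sketch of this sampling procedure is shown below; the dictionary layout for utterances (with "duration" and "waveform" keys) is an assumption of the sketch rather than the loader we actually used.

```python
import random
import numpy as np

def sample_in_range(utterances, lo, hi, k=5):
    """Sample k utterances whose duration (seconds) lies in [lo, hi)."""
    pool = [u for u in utterances if lo <= u["duration"] < hi]
    return random.sample(pool, k)  # assumes the pool has at least k utterances

def profiling_inputs(utterances, i, k=5):
    """Build k inputs of duration in [i, i + 0.5) seconds, as described above."""
    if i < 12:
        return [u["waveform"] for u in sample_in_range(utterances, i, i + 0.5, k)]
    inputs = []
    for _ in range(k):
        # For long targets, concatenate 5 shorter clips of length in [i/5, (i + 0.5)/5).
        parts = sample_in_range(utterances, i / 5, (i + 0.5) / 5, 5)
        inputs.append(np.concatenate([p["waveform"] for p in parts]))
    return inputs
```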
## B Computing Token Lengths For Nlp And Speech Datasets
We compute average sequence token lengths for 7 NLP datasets and 6 speech datasets. For all speech datasets, we compute mean utterance durations and multiply the durations by 50 to obtain the number of tokens (the model frame rate is 20 ms, i.e., 50 tokens per second).
For TEDLIUM (Hernandez et al.), LJSpeech (Ito and Johnson, 2017), VoxCeleb Speaker Recognition Dataset (Nagrani et al., 2017) and Librispeech (Panayotov et al., 2015), we compute mean validation-set *utterance* durations; for Spoken SQuAD (Li et al., 2018), we report mean validation-set *paragraph* duration and for the Spotify English Podcasts dataset (Clifton et al., 2020),
we report mean *podcast* duration directly obtained from Clifton et al. (2020).
SST (Socher et al., 2013). We use test-set sentences. We use the HuggingFace BERTTokenizer.
MNLI (Williams et al., 2018). We use validation-matched-set examples by concatenating the premise and the hypothesis. We use the HuggingFace BERTTokenizer.
SQuAD2.0 (Rajpurkar et al., 2018). We use validation-set examples by concatenating the context and the question. We use the HuggingFace BERTTokenizer.
OntoNotes (Pradhan and Xue, 2009). We obtain this number from the Longformer (Beltagy et al.,
2020) paper.
CNN-Dailymail (Hermann et al., 2015). We use the 3.0.0 version of the dataset and use test-set articles. We use the HuggingFace BERTTokenizer.
HotpotQA (Yang et al., 2018). We obtain this number from the Longformer (Beltagy et al., 2020) paper.
TriviaQA (Joshi et al., 2017). We obtain this number from the Longformer (Beltagy et al., 2020)
paper.
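For concreteness, the computations above can be sketched as follows; the two toy sentences are placeholders rather than examples drawn from these datasets.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def mean_token_length(texts):
    """Mean BERT-token count over a list of strings (used for the NLP datasets)."""
    lengths = [len(tokenizer(t)["input_ids"]) for t in texts]
    return sum(lengths) / len(lengths)

def tokens_from_duration(duration_seconds):
    """Speech datasets: a 20 ms frame rate means 50 tokens per second of audio."""
    return duration_seconds * 50

print(mean_token_length(["a short sentence", "a slightly longer example sentence"]))
print(tokens_from_duration(9.2))  # a 9.2 s utterance corresponds to 460 tokens
```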
## C Implementing Throughput Profiling
To profile Throughput, we need to compute the max batch size that can fit on a single GPU. We *approximately* predict this using a linear estimator as follows. We first record the memory B reserved on the GPU after just loading the model. Next, we independently run batches of sizes 1 and 2 and record memory usages M1 and M2. We use an NVIDIA
Quadro RTX 8000 GPU with a maximum memory of 45000 MiB. Thus, assuming a linear relationship between batch size and memory consumption, we predict a maximum batch size of bsz = (45000 − B) / (M2 − M1).
In practice, this is an overestimate; we keep decreasing the batch size by a factor of 0.9 until no OOM errors occur and this is our final estimate.
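A simplified sketch of this estimator is given below; the measure_peak_mib callback (which is assumed to raise a RuntimeError on a CUDA out-of-memory error) is a stand-in for the actual measurement code.

```python
def estimate_max_batch_size(model_load_mib, measure_peak_mib,
                            gpu_capacity_mib=45000, shrink=0.9):
    """Linear estimate of the largest batch fitting on one GPU, then back off on OOM.

    `model_load_mib` is the memory B reserved after just loading the model;
    `measure_peak_mib(batch_size)` runs one batch and returns peak memory in MiB.
    """
    m1 = measure_peak_mib(1)
    m2 = measure_peak_mib(2)
    bsz = int((gpu_capacity_mib - model_load_mib) / (m2 - m1))  # linear prediction (an overestimate)
    while bsz > 1:
        try:
            measure_peak_mib(bsz)
            return bsz               # largest batch size that ran without OOM
        except RuntimeError:
            bsz = int(bsz * shrink)  # decrease by a factor of 0.9 and retry
    return 1
```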
## D Implementational Details For Models
We use the following HuggingFace configurations: bert-base-uncased for BERT,
allenai/longformer-base-4096 for Longformer, uw-madison/nystromformer-4096 for Nyströmformer, google/vit-base-patch16-224 for ViT and microsoft/swin-base-patch4-window7-224 for Swin. The BERT model natively supports a maximum of 512 tokens as input because it has 512 positional embeddings; we modify the positional embedding computation to allow an arbitrarily long input to be provided. The Longformer internally pads all input lengths to a multiple of 512. For Swin, we pad images to have an input dimension that is a multiple of 224; this is necessary due to the windowed attention mechanism in Swin. In fact, the Swin model natively supports only a 224 × 224 resolution; we make a small modification in order to support resolutions that are multiples of 224. We use the HuBERT Base model for both HuBERT and L-HuBERT.
## E Transformer Layer Types
Input Embedding Layer (red). Maps the input sequence into fixed-dimensional embeddings. This is a linear layer for text and a CNN featurizer for image/speech.

Positional Embedding Layer (fuchsia). For text and image models this is part of the input embedding layer. For speech models, this is a very wide convolution layer.

Self Attention Layer (blue). The multi-head self attention block, which computes self-attention outputs and maps the result to the model dimension.

Intermediate Layer (yellow). Linear layer of the feedforward block that maps the output from the Self Attention block into the 'feedforward dimension' (typically 4x the model dimension).

Output Layer (green). Second linear layer of the feedforward block, which maps the output from the Intermediate layer back to the model dimension.

Other Layers (black). Other modules (activations, layer normalizations, other linear layers, etc.) not covered by the above components.
## F Additional Profiling Analyses
We report layerwise profiling runs for efficient self-attention variants and inference-time throughput profiling runs for all variants in this section, at Figures 5 and 6.
![9_image_0.png](9_image_0.png)
![9_image_1.png](9_image_1.png)
![9_image_2.png](9_image_2.png)
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
The Limitations section
✓ A2. Did you discuss any potential risks of your work?
The Limitations section
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3, 4
✓ B1. Did you cite the creators of artifacts you used?
Section 3, 4
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not explicitly, since we use publicly available Huggingface and Fairseq models that are intended for research use
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
We use publicly available Huggingface and Fairseq models that are intended for research use.

B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 4 and 4.2, Appendices B,D
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
We only use datasets to profile models over different sequence lengths, but don't use the content of the dataset itself. Thus we report the relevant statistic i.e. dataset sequence length.
## C ✓ **Did You Run Computational Experiments?** Section 3, 4.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 4.1, 4.2, Appendix C.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4.2
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4.2
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 3, 4

## D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
cai-oconnor-2023-evaluating | Evaluating Zero-Shot Event Structures: Recommendations for Automatic Content Extraction ({ACE}) Annotations | https://aclanthology.org/2023.acl-short.142 | Zero-shot event extraction (EE) methods infer richly structured event records from text, based only on a minimal user specification and no training examples, which enables flexibility in exploring and developing applications. Most event extraction research uses the Automatic Content Extraction (ACE) annotated dataset to evaluate supervised EE methods, but can it be used to evaluate zero-shot and other low-supervision EE? We describe ACE{'}s event structures and identify significant ambiguities and issues in current evaluation practice, including (1) coreferent argument mentions, (2) conflicting argument head conventions, and (3) ignorance of modality and event class details. By sometimes mishandling these subtleties, current work may dramatically understate the actual performance of zero-shot and other low-supervision EE, considering up to 32{\%} of correctly identified arguments and 25{\%} of correctly ignored event mentions as false negatives. For each issue, we propose recommendations for future evaluations so the research community can better utilize ACE as an event evaluation resource. | # Evaluating Zero-Shot Event Structures: Recommendations For Automatic Content Extraction (Ace) Annotations
Erica Cai Brendan O'Connor University of Massachusetts Amherst
{ecai,brenocon}@cs.umass.edu
## Abstract
Zero-shot event extraction (EE) methods infer richly structured event records from text, based only on a minimal user specification and no training examples, which enables flexibility in exploring and developing applications.
Most event extraction research uses the Automatic Content Extraction (ACE) annotated dataset to evaluate *supervised* EE methods, but can it be used to evaluate *zero-shot* and other low-supervision EE? We describe ACE's event structures and identify significant ambiguities and issues in current evaluation practice, including (1) coreferent argument mentions, (2) conflicting argument head conventions, and (3) ignorance of modality and event class details. By sometimes mishandling these subtleties, current work may dramatically understate the actual performance of zero-shot and other lowsupervision EE, considering up to 32% of correctly identified arguments and 25% of correctly ignored event mentions as false negatives. For each issue, we propose recommendations for future evaluations so the research community can better utilize ACE as an event evaluation resource.
## 1 Introduction
Zero-shot event extraction (EE) methods infer richly structured instances of action or relationship occurrences from unstructured text data, based on a user-supplied natural language specification of the desired event—without annotated training examples (Du and Cardie, 2020; Liu et al., 2020; Li et al.,
2021; Lyu et al., 2021). The extracted structure is useful for many applications such as analyzing interactions between entities and performing more intelligent question answering (Gao et al., 2016; Liu et al., 2017a; Cao et al., 2020; Li et al., 2020b),
and the low resources required by zero-shot EE
methods further this practical advantage. We refer to the structure as an event, where each event could have an arbitrary structure as needed. Each structure contains information such as the participants involved, content, and location of the event.
To evaluate *supervised* EE methods, many works use the Automatic Content Extraction (ACE)
dataset—specifically, the Linguistic Data Consortium's *ACE 2005 Multilingual Training Corpus*
(Doddington et al., 2004),1 which includes English, Chinese, and Arabic documents and resulted from the U.S. federal government's ACE program.2 The ACE dataset stores information about entities, relations, and events from 598 (for English) documents in a rich structure; our focus is mostly on its events. ACE is frequently used for event extraction modeling and evaluation, and is often claimed to be the most widely used such dataset
(§3). While there are many somewhat similar structured semantic datasets, ACE still shines in having whole-document annotations (contra FrameNet; Baker et al., 2003; Baker and Sato, 2003; Fillmore et al., 2003), realistically non-lexical-specific event classes (contra PropBank (Palmer et al., 2005),
OntoNotes (Weischedel et al., 2017), and Semantic Dependencies (Oepen et al., 2014)), event modality
(contra PB, ON, SD), English data (contra Entities, Relations, and Events (ERE)),3 and specification of event arguments (contra Richer Event Description (RED); O'Gorman et al., 2016) that are simultaneously represented both as text spans (contra Abstract Meaning Representation (AMR); Banarescu et al., 2013), and discourse-level entities4 (§2). While ACE does not include RED's interesting causal and bridging event-event relations (see also Hovy et al., 2013), its core tasks related to entities and event arguments have important applications and are far from solved.
We investigate using the ACE dataset to evaluate zero-shot and other low-supervision EE methods, which are more real-world relevant than highly-supervised EE methods for requiring few if any annotations, but which may face certain evaluation challenges more severely.5 First, we identify issues related to how evaluations extract gold event argument annotations from ACE and to the possibly clashing use case of a zero-shot EE method versus the annotations in ACE. Evaluation of zero-shot EE methods is particularly sensitive to these issues since they lack knowledge of (sometimes arbitrary)
details in ACE event structures that are implicit in training examples—and their ignorance of them may be correct for many applications. Therefore, we present guidelines and methods to overcome these issues in English, which could in theory be adaptable to other languages, and quantify their potential impact.6
## 2 Structure Of Events And Entities In ACE
The Automatic Content Extraction (ACE) dataset stores annotations for entity, relation, and event structures for news, conversations, blog, and transcript textual data. We focus on the ACE event extraction task (Ahn, 2006), which takes a sentence as input and outputs a set of event tuples, which we attempt to precisely specify.
Events (Figure 1). Let T be the set of ACE event classes and, for each class t, let Rt be the set of roles its arguments can take. An event tuple has the form ⟨t, g, {a1..an}⟩ where

1. t ∈ T is the event class.
2. g is the span8 of the **event trigger**, a word that identifies or represents the event class.
3. {a1..an} is a (possibly empty) set of **event arguments** explicitly mentioned in the sentence, each with ai = ⟨a(r)i, a(s)i⟩: the **role** a(r) ∈ Rt, and argument span a(s).9
The full *event extraction* task is to output some number of event tuples from the sentence; research often examines subtasks to identify various subsets of t, g, a(r), a(s), such as event trigger classification or *event detection* (just (*g, t*)). Finally, the tuple has several additional semantic tags such as modality and tense (§4.3).
Entities (Figure 2). An event argument a(s) may also be a *mention* of an *entity*, a document-level object with its own type information and one or more coreferential mention spans throughout a document. For an argument span a(s), let C(a(s)) refer to the set of all its coreferential mentions.10 Additionally, ACE's <entity> data structure defines for each mention a **head** span (§4.2).

In the following example from ACE, a killing (LIFE.DIE) event has agent a(s) = "Iraq's Mukhabarat" (Figure 1); when cross-referencing the entity information C(a(s)), it turns out this argument is coreferentially mentioned three times in the sentence (Figure 2).
![1_image_0.png](1_image_0.png)
![2_image_0.png](2_image_0.png)
Figures 1 and 2: ACE event and entity annotations for the example sentence beginning "Earlier, from 1979 to 1983, he headed Iraq's Mukhabarat, or ...".
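To make this structure concrete, the illustrative Python dataclasses below mirror the event tuple ⟨t, g, {a1..an}⟩ and the coreference set C(a(s)); the class and field names are our own sketch, not the schema of the LDC release.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Span = Tuple[int, int]  # character offsets into the source document

@dataclass
class Entity:
    """A document-level entity; mention_spans plays the role of C(a(s))."""
    entity_id: str
    mention_spans: List[Span] = field(default_factory=list)
    head_spans: List[Span] = field(default_factory=list)

@dataclass
class Argument:
    role: str                        # a(r), e.g. "Agent" or "Place"
    span: Span                       # a(s), the single mention ACE annotates on the event
    entity: Optional[Entity] = None  # set when the argument is an entity mention

@dataclass
class EventMention:
    event_class: str                 # t, e.g. "LIFE.DIE"
    trigger_span: Span               # g
    arguments: List[Argument] = field(default_factory=list)
    modality: str = "Asserted"       # ACE tags each event as Asserted or Other (see Rec. 3)
```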
## 3 Review Of Using ACE To Evaluate EE
We reviewed 38 papers published from 2008 through 2022, cited in Li et al. (2022)'s survey of deep learning methods for event extraction, to examine how they use ACE to evaluate EE tasks
(Ji and Grishman, 2008; Liao and Grishman, 2010; Hong et al., 2011; Li et al., 2013; Nguyen and Grishman, 2015; Chen et al., 2015; Nguyen et al.,
2016; Yang and Mitchell, 2016; Nguyen and Grishman, 2016; Feng et al., 2016; Liu et al., 2016; Huang et al., 2016; Sha et al., 2016; Chen et al., 2017; Liu et al., 2017b; Zhao et al., 2018; Zeng et al., 2018; Hong et al., 2018; Liu et al., 2018; Huang et al., 2018; Liu et al., 2019; Zhang et al.,
2019b; Wang et al., 2019; Zhang et al., 2019a; Yang et al., 2019; Nguyen and Nguyen, 2019; Wadden et al., 2019; Chen et al., 2020; Du and Cardie, 2020; Liu et al., 2020; Li et al., 2020a; Lin et al., 2020; Li et al., 2021; Ahmad et al., 2021; Zhou et al.,
2021; Wang et al., 2021; Lu et al., 2021; Lyu et al.,
2021). Several state that ACE is the most popular dataset for evaluating EE methods (Li et al., 2022; Zhang et al., 2019b; Wang et al., 2019). While the ACE data release does not define a split, these papers, especially after 2011, settled on a shared train/development/test split (§A.6).
When considering event trigger and event argument identification, all papers require matching the gold standard's extent to be considered correct. For arguments, which are usually multiple tokens long, some works require matching the full argument extent a(s) while others only use its head extent. (Additional details in §A.6.)
The works that we analyzed identify several challenges with using ACE. Some event subtypes are very sparse; almost 60% of event types have fewer than 100 labeled samples, while three event types each have fewer than ten out of the 5042 samples over all English documents and 33 event classes
(Chen et al., 2017; Liu et al., 2017b, 2018). Second, the manually specified event schemas in ACE are hard to generalize to different domains, such as the WEAPON argument role (Huang et al., 2016).
Third, Ji and Grishman (2008) find that human annotators achieve only about 73% of the F1 score on the argument and trigger identification task and annotation quality continues to be questioned in debates about annotation guidelines (Lin et al., 2020).
In any case, ACE remains a widely used dataset for evaluation.
## 4 Recommendations For Using ACE To Evaluate Zero-Shot And Other Low-Supervision EE Methods

Recommendation 1: Coreference Invariant Argument Matching. *To evaluate correctness of event arguments using ACE, allow a match to any coreferent mention of the argument (c ∈ C(a(s))), not just the one mention in <event_argument_mention> (a(s)).* This (we believe) erroneous practice is widespread, and may consider up to 32% of correctly identified entity-type arguments as incorrect.
Problem. Although ACE stores event triggers and types as part of an *event mention*, it stores event arguments as part of both *event mention*s and entity, time, or *value mention*s. The *event mention argument* stores one reference to the argument (a(s)), even if multiple references exist (C(a(s))). Low-supervision EE methods cannot learn a training set's potentially superficial convention for which of multiple references to specify.
Issues in the Literature. Alarmingly, although ACE stores multiple gold references as entity mentions, they are often not used. We find a number of recent works, especially on zero-shot EE, that ignore them. Wadden et al. (2019)'s preprocessing code, which was used in several later works (Du and Cardie, 2020; Lin et al., 2020; Li et al., 2021; Lu et al., 2021; Lyu et al., 2021), does not gather multiple references to a(s) in an event tuple. While an unofficial update includes entity information, we identify further difficulties in §A.3. Independently, Zeng et al. (2018) acknowledge not applying coreference resolution, which contributes to a higher argument identification task error rate.

While we acknowledge that whether to model coreference is a complex question, using gold-standard coreference information at *evaluation* time is an independent issue and ought to be mandatory for any modeling approach. Even for a purely extent-prediction system, gold-standard coreference is necessary for correct evaluation.
![3_image_0.png](3_image_0.png)
Table 1: The percentage of arguments with a varying number of references to them (|C(a)|) in the same sentence, excluding duplicates, where pronouns are and are not counted as arguments. (More implementation details in §A.5.)
Findings. Table 1 shows that roughly 14.6% of arguments have multiple references within the same sentence when arguments are not pronouns, and roughly 31.7% do otherwise. In the worst case, an evaluation could consider all such arguments, even if correctly identified, as false negatives. Next, we investigate whether a pattern for choosing a(s) out of C(a(s)) exists. If multiple references exist, a(s) is the first reference to appear in the sentence 56.3% of the time. If one or more reference is a named entity, a(s) is also a named entity 60.7% of the time. (More details in §A.5.)

Given the alarming statistics in Table 1 and no observable pattern for choosing a(s) out of C(a(s)), we recommend extracting all possible references to an argument using the <entity> object, instead of relying only on <event mention argument>.
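In code, Recommendation 1 amounts to a one-line change in the matching criterion; the sketch below uses made-up character offsets for illustration.

```python
def argument_match(predicted_span, annotated_span, coreferent_spans):
    """Recommendation 1: a predicted argument is correct if it matches the annotated
    mention a(s) OR any other coreferent mention c in C(a(s)) from the <entity> object."""
    return predicted_span == annotated_span or predicted_span in set(coreferent_spans)

# Made-up character offsets for "Iraq's Mukhabarat" and two later coreferent mentions.
annotated = (34, 51)
corefs = [(34, 51), (120, 122), (168, 170)]
print(argument_match((120, 122), annotated, corefs))  # True; a naive extent match would say False
```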
Recommendation 2: Dual ACE and Automatic Head Selection. To evaluate correctness of an event argument using ACE, in addition to comparing its head against the head provided by ACE,
compare its head against the one selected by a Universal Dependency-based parser. We find that 8.1%
of heads that the English portion of ACE identifies are not consistent with a Universal Dependency SpaCy3 parser-based head finder (more details in
§A.4.1).
Problem and Literature. To determine correctness of an event argument, either compare it against a(s) in ACE or compare its head against a(s)'s head. Comparing against the entire a(s) is likely to yield false negatives because ACE argument spans can be very long, including the noun phrase's complements and even elaborate relative clauses (e.g.: "the women from Texas who heinously drowned her five kids, aged 6 months to 7 years, one by one, in her bathtub"). Thus most works we reviewed evaluate argument correctness by comparing its head with the heads of potential a(s)s. For zero-shot EE methods with no knowledge of argument constitutions, using the head seems especially appropriate.
Method and Findings. We investigate whether the head of a(s) that ACE specifies is consistent with the Universal Dependency (UD) (Nivre et al., 2020) definition of head, which we identify from spaCy3's UD parse as the token in the span that is an ancestor of the rest of the span (i.e., the span's subgraph's root); we additionally add a heuristic to address a frequent parse error in which the noun phrase head is analyzed as the relative clause's subject, and to extend the head to multiple tokens (as sometimes occurs in ACE's heads) when the head token is within a spaCy3-identified named entity. The discrepancies in Figure 3 suggest that ACE often does not follow the UD formalism. (Additional algorithmic details and discrepancies in §A.4.)
(Figure 3 content: 23.82% discrepancies when ACE considers a multi-word head, 5.41% otherwise; example extents include "African Americans", "Haifa university", "a number of soldiers", and "over a million of his own citizens".)
![3_image_1.png](3_image_1.png)
Figure 3: Examples of discrepancies between the head in ACE and the head identified by the UD-based algorithm, and percentages of such discrepancies when ACE
considers a multi-word head versus a single-word head. Each line contains an argument extent; the head by ACE is in red brackets and that by UD is in blue.
Next, we explore the feasibility of consistently reconstructing the exact head specified by ACE. Given clear inconsistencies in the way that ACE selects the head in Figure 3 (e.g., "Haifa university" and "Israeli army"), we conclude that ACE may not identify the argument head in a systematic or at least easily emulatable way, which may contribute to false negatives. To eliminate the inconsistency issue, we propose to use a UD-based algorithm to select heads from ACE argument extents for matching, *in addition* to the heads specified by ACE. The head from the UD-based parser is not always the most appropriate for a given argument extent (see the error analysis of parser behavior in §A.4.1), but our approach does avoid the *inconsistency* issue.
While we only applied our UD-based algorithm to English data, this head-matching approach may be adaptable to other languages with available UD parsers.
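A minimal sketch of this UD-based head selection, assuming the en_core_web_sm spaCy 3 pipeline, is shown below; it omits the relative-clause correction described above and is not our exact implementation.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # any spaCy 3 pipeline with a parser and NER

def ud_head(sentence, start_char, end_char):
    """Head of an argument extent as the UD root of the span, extended to a covering
    named entity when one exists."""
    doc = nlp(sentence)
    span = doc.char_span(start_char, end_char, alignment_mode="expand")
    if span is None:
        return None
    head = span.root                      # token dominating the rest of the span
    for ent in doc.ents:                  # allow a multi-token named-entity head
        if ent.start <= head.i < ent.end:
            return ent.text
    return head.text

print(ud_head("He headed Iraq's Mukhabarat, or intelligence service.", 10, 27))
```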
Recommendation 3: Analyze a Subset of ACE
Modalities or Event Classes. Consider a subset of annotated events as the ground truth event set to improve the evaluation of zero-shot EE methods that target a particular use case; e.g., sociopolitical analysis.
Problem and Literature. While greater flexibility enables zero-shot EE methods to be more practical, extracting structured data as events without requiring training examples, each practical application has a different objective. For example, social scientists and political forecasters may need to analyze historical events that actually happened in the past (Schrodt et al., 1994; O'Connor et al., 2013; Boschee et al., 2013; Halterman et al., 2021; Hanna, 2017; Hürriyetoğlu et al., 2021; Giorgi et al., 2021; Stoehr et al., 2021), such as in the widely-used ICEWS automatically generated events dataset
(Boschee et al., 2017). However, in other applications such as those on opinion or sentiment tasks
(Swamy et al., 2017), the aim of zero-shot EE methods may be benefited by hypothetical events.
Many aspects of modality have been explored in computational modeling, such as temporal semantics (Timebank (Pustejovsky et al., 2003)), factual versus uncertain or hypothetical status (Factbank
(Saurí and Pustejovsky, 2009), Pragbank (de Marneffe et al., 2012), (Diab et al., 2009; Prabhakaran et al., 2015; Stanovsky et al., 2017; Rudinger et al.,
2018; Yao et al., 2021; Lee et al., 2015)), and in literary domains (Litbank (Bamman et al., 2019, 2020)). ACE includes a simple modality label for each event instance as either ASSERTED to indicate an event instance that was referred to as a real occurrence, or OTHER for all others: non-grounded beliefs (e.g. rumors), hypotheticals, commands, threats, proposals, desires, promises, etc. In fact, for 25% of event instances in ACE, the modality tag label is OTHER. Yet, the 38 works that we explored in §3 which use ACE to evaluate EE methods do not include modality as part of the task definition.
We propose that future work could better use ACE
by predicting or analyzing subsets of modalities to more clearly support downstream applications.
Finally, modality is important since it may also interact with modeling (Cai and O'Connor, 2023). Zero-shot EE methods involving question-answering (QA) or text entailment (TE) models (Lyu et al., 2021) may enforce modality restrictions through the language in the query. For example, the past-tense question "did the police arrest someone?" (Halterman et al., 2021) asks for a reported occurrence that the police are arresting or have arrested someone, but not an intended or hypothetical arrest. Whether this matches user intent, and whether models respect or ignore the query's modality restrictions, are important avenues for future work; ACE data can aid such analysis.
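For example, restricting evaluation to ASSERTED events can be implemented with a filter such as the sketch below; the apf.xml element and attribute names reflect our reading of the ACE release and should be checked against one's own copy of the corpus.

```python
import xml.etree.ElementTree as ET

def asserted_event_mentions(apf_xml_path):
    """Keep only event mentions whose event is tagged MODALITY="Asserted"."""
    root = ET.parse(apf_xml_path).getroot()
    kept = []
    for event in root.iter("event"):
        if event.get("MODALITY") == "Asserted":
            for mention in event.iter("event_mention"):
                kept.append((event.get("TYPE"), event.get("SUBTYPE"), mention.get("ID")))
    return kept
```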
## 5 Conclusion
We explore how to use ACE, which is a gold standard dataset containing annotations of events from diverse text data in a rich structure, to evaluate zero-shot and other low-supervision EE methods by identifying issues that may more severely affect their evaluation. We particularly find difficulties with evaluating spans of events due to a lack of training data for zero-shot and low-supervision EE methods to learn superficial annotation quirks from.
However, we present methods to overcome these issues and demonstrate them on the English portion of ACE, noting that in principle they may be adaptable to any language. Ultimately, we advocate for using ACE to evaluate zero-shot and other lowsupervision EE methods after addressing the issues, and discuss the potential for using ACE in smarter ways to evaluate different types of EE methods in the future.
## Acknowledgments
We thank the UMass NLP group and anonymous reviewers for feedback. This work was supported by NSF CAREER 1845576. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
## References
1991. *Third Message Understanding Conference*
(MUC-3): Proceedings of a Conference Held in San Diego, California, May 21-23, 1991.
Jacqueline Aguilar, Charley Beller, Paul McNamee, Benjamin Van Durme, Stephanie Strassel, Zhiyi Song, and Joe Ellis. 2014. A comparison of the events and relations across ACE, ERE, TAC-KBP, and FrameNet annotation standards. In *Proceedings* of the Second Workshop on EVENTS: Definition, Detection, Coreference, and Representation, pages 45–
53, Baltimore, Maryland, USA. Association for Computational Linguistics.
Wasi Ahmad, Nanyun Peng, and Kai-Wei Chang. 2021.
Gate: Graph attention transformer encoder for crosslingual relation and event extraction. In *The ThirtyFifth AAAI Conference on Artificial Intelligence*
(AAAI-21).
David Ahn. 2006. The stages of event extraction. In Proceedings of the Workshop on Annotating and Reasoning about Time and Events, pages 1–8, Sydney, Australia. Association for Computational Linguistics.
Collin Baker, Charles Fillmore, and Beau Cronin. 2003.
The structure of the framenet database. International Journal of Lexicography, 16:281–296.
Collin F. Baker and Hiroaki Sato. 2003. The FrameNet data and software. In *The Companion Volume to the* Proceedings of 41st Annual Meeting of the Association for Computational Linguistics, pages 161–164, Sapporo, Japan. Association for Computational Linguistics.
David Bamman, Olivia Lewke, and Anya Mansoor.
2020. An annotated dataset of coreference in English literature. In *Proceedings of the Twelfth Language* Resources and Evaluation Conference, pages 44–54, Marseille, France. European Language Resources Association.
David Bamman, Sejal Popat, and Sheng Shen. 2019. An annotated dataset of literary entities. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long and Short Papers), pages 2138–2144, Minneapolis, Minnesota. Association for Computational Linguistics.
Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178–186, Sofia, Bulgaria. Association for Computational Linguistics.
Elizabeth Boschee, Jennifer Lautenschlager, Sean O'Brien, Steve Shellman, James Starz, and Michael Ward. 2017. ICEWS coded event data.
Elizabeth Boschee, Premkumar Natarajan, and Ralph Weischedel. 2013. Automatic extraction of events from open source text for predictive forecasting.
Handbook of Computational Approaches to Counterterrorism, page 51.
Erica Cai and Brendan O'Connor. 2023. A monte carlo language model pipeline for zero-shot sociopolitical event extraction.
Qingqing Cao, Harsh Trivedi, Aruna Balasubramanian, and Niranjan Balasubramanian. 2020. DeFormer: Decomposing pre-trained transformers for faster question answering. In *Proceedings of the 58th* Annual Meeting of the Association for Computational Linguistics, pages 4487–4497, Online. Association for Computational Linguistics.
Nathanael Chambers. 2013. Event schema induction with a probabilistic entity-driven model. In *Proceedings of the 2013 Conference on Empirical Methods* in Natural Language Processing, pages 1797–1807, Seattle, Washington, USA. Association for Computational Linguistics.
Yubo Chen, Shulin Liu, Xiang Zhang, Kang Liu, and Jun Zhao. 2017. Automatically labeled data generation for large scale event extraction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 409–419, Vancouver, Canada. Association for Computational Linguistics.
Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multipooling convolutional neural networks. In *Proceedings of the 53rd Annual Meeting of the Association* for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 167–176, Beijing, China. Association for Computational Linguistics.
Yunmo Chen, Tongfei Chen, Seth Ebner, Aaron Steven White, and Benjamin Van Durme. 2020. Reading the manual: Event extraction as definition comprehension. In Proceedings of the Fourth Workshop on Structured Prediction for NLP, pages 74–83, Online.
Association for Computational Linguistics.
Marie-Catherine de Marneffe, Christopher D. Manning, and Christopher Potts. 2012. Did it happen?
The pragmatic complexity of veridicality assessment.
Computational Linguistics, 38(2):301–333.
Mona Diab, Lori Levin, Teruko Mitamura, Owen Rambow, Vinodkumar Prabhakaran, and Weiwei Guo.
2009. Committed belief annotation and tagging. In Proceedings of the Third Linguistic Annotation Workshop (LAW III), pages 68–73, Suntec, Singapore. Association for Computational Linguistics.
George Doddington, Alexis Mitchell, Mark Przybocki, Lance Ramshaw, Stephanie Strassel, and Ralph Weischedel. 2004. The automatic content extraction (ACE) program - tasks, data, and evaluation. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04),
Lisbon, Portugal. European Language Resources Association (ELRA).
Xinya Du and Claire Cardie. 2020. Event extraction by answering (almost) natural questions. In *Proceedings* of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 671–683, Online. Association for Computational Linguistics.
Xiaocheng Feng, Lifu Huang, Duyu Tang, Heng Ji, Bing Qin, and Ting Liu. 2016. A language-independent neural network for event detection. In *Proceedings* of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers),
pages 66–71, Berlin, Germany. Association for Computational Linguistics.
Charles J. Fillmore, Christopher R. Johnson, and Miriam R. L. Petruck. 2003. Background to framenet.
International Journal of Lexicography, 16:235–250.
Li Gao, Jia Wu, Zhi Qiao, Chuan Zhou, Hong Yang, and Yue Hu. 2016. Collaborative social group influence for event recommendation. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, CIKM '16, page 1941–1944, New York, NY, USA. Association for Computing Machinery.
Salvatore Giorgi, Vanni Zavarella, Hristo Tanev, Nicolas Stefanovitch, Sy Hwang, Hansi Hettiarachchi, Tharindu Ranasinghe, Vivek Kalyan, Paul Tan, Shaun Tan, Martin Andrews, Tiancheng Hu, Niklas Stoehr, Francesco Ignazio Re, Daniel Vegh, Dennis Atzenhofer, Brenda Curtis, and Ali Hürriyetoğlu. 2021.
Discovering black lives matter events in the United States: Shared task 3, CASE 2021. In *Proceedings of* the 4th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2021), pages 218–227, Online. Association for Computational Linguistics.
Aria Haghighi and Dan Klein. 2009. Simple coreference resolution with rich syntactic and semantic features. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1152–1161, Singapore. Association for Computational Linguistics.
Andrew Halterman, Katherine Keith, Sheikh Sarwar, and Brendan O'Connor. 2021. Corpus-level evaluation for event QA: The IndiaPoliceEvents corpus covering the 2002 Gujarat violence. In Findings of the Association for Computational Linguistics: ACLIJCNLP 2021, pages 4240–4253, Online. Association for Computational Linguistics.
Alex Hanna. 2017. MPEDS: Automating the generation of protest event data.
Yu Hong, Jianfeng Zhang, Bin Ma, Jianmin Yao, Guodong Zhou, and Qiaoming Zhu. 2011. Using cross-entity inference to improve event extraction.
In *Proceedings of the 49th Annual Meeting of the* Association for Computational Linguistics: Human Language Technologies, pages 1127–1136, Portland, Oregon, USA. Association for Computational Linguistics.
Yu Hong, Wenxuan Zhou, Jingli Zhang, Guodong Zhou, and Qiaoming Zhu. 2018. Self-regulation: Employing a generative adversarial network to improve event detection. In *Proceedings of the 56th Annual Meeting* of the Association for Computational Linguistics (Volume 1: Long Papers), pages 515–526, Melbourne, Australia. Association for Computational Linguistics.
Eduard Hovy, Teruko Mitamura, Felisa Verdejo, Jun Araki, and Andrew Philpot. 2013. Events are not simple: Identity, non-identity, and quasi-identity. In Workshop on Events: Definition, Detection, Coreference, and Representation, pages 21–28, Atlanta, Georgia. Association for Computational Linguistics.
Lifu Huang, Taylor Cassidy, Xiaocheng Feng, Heng Ji, Clare R. Voss, Jiawei Han, and Avirup Sil. 2016.
Liberal event extraction and event schema induction.
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 258–268, Berlin, Germany.
Association for Computational Linguistics.
Lifu Huang, Heng Ji, Kyunghyun Cho, Ido Dagan, Sebastian Riedel, and Clare Voss. 2018. Zero-shot transfer learning for event extraction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 2160–2170, Melbourne, Australia. Association for Computational Linguistics.
Ali Hürriyetoğlu, Osman Mutlu, Erdem Yörük,
Farhana Ferdousi Liza, Ritesh Kumar, and Shyam Ratan. 2021. Multilingual protest news detection -
shared task 1, CASE 2021. In *Proceedings of the 4th* Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text
(CASE 2021), pages 79–91, Online. Association for Computational Linguistics.
Heng Ji and Ralph Grishman. 2008. Refining event extraction through cross-document inference. In *Proceedings of ACL-08: HLT*, pages 254–262, Columbus, Ohio. Association for Computational Linguistics.
Kenton Lee, Yoav Artzi, Yejin Choi, and Luke Zettlemoyer. 2015. Event detection and factuality assessment with non-expert supervision. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1643–1648, Lisbon, Portugal. Association for Computational Linguistics.
Fayuan Li, Weihua Peng, Yuguang Chen, Quan Wang, Lu Pan, Yajuan Lyu, and Yong Zhu. 2020a. Event extraction as multi-turn question answering. In Findings of the Association for Computational Linguistics:
EMNLP 2020, pages 829–838, Online. Association for Computational Linguistics.
Manling Li, Alireza Zareian, Ying Lin, Xiaoman Pan, Spencer Whitehead, Brian Chen, Bo Wu, Heng Ji, Shih-Fu Chang, Clare Voss, Daniel Napierski, and Marjorie Freedman. 2020b. GAIA: A fine-grained multimedia knowledge extraction system. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations*, pages 77–86, Online. Association for Computational Linguistics.
Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In *Proceedings of the 51st Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 73–82, Sofia, Bulgaria.
Association for Computational Linguistics.
Qian Li, Hao Peng, Jianxin Li, Yiming Hei, Rui Sun, Jiawei Sheng, Shu Guo, Lihong Wang, and Philip S.
Yu. 2022. A survey on deep learning event extraction:
approaches and applications. *IEEE Transactions on* Neural Networks and Learning Systems.
Sha Li, Heng Ji, and Jiawei Han. 2021. Document-level event argument extraction by conditional generation.
In *Proceedings of the 2021 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 894–908, Online. Association for Computational Linguistics.
Shasha Liao and Ralph Grishman. 2010. Using document level cross-event inference to improve event extraction. In *Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics*,
pages 789–797, Uppsala, Sweden. Association for Computational Linguistics.
Ying Lin, Heng Ji, Fei Huang, and Lingfei Wu. 2020.
A joint neural model for information extraction with global features. In *Proceedings of the 58th Annual* Meeting of the Association for Computational Linguistics, pages 7999–8009, Online. Association for Computational Linguistics.
Chun-Yi Liu, Chuan Zhou, Jia Wu, Hongtao Xie, Yue Hu, and Li Guo. 2017a. Cpmf: A collective pairwise matrix factorization model for upcoming event recommendation. In *2017 International Joint Conference on Neural Networks (IJCNN)*, pages 1532–
1539.
Jian Liu, Yubo Chen, Kang Liu, Wei Bi, and Xiaojiang Liu. 2020. Event extraction as machine reading comprehension. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1641–1651, Online. Association for Computational Linguistics.
Jian Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2018.
Event detection via gated multilingual attention mechanism. In *Proceedings of the Thirty-Second* AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI'18/IAAI'18/EAAI'18. AAAI Press.
Jian Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2019.
Neural cross-lingual event detection with minimal parallel resources. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language* Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 738–748, Hong Kong, China. Association for Computational Linguistics.
Shulin Liu, Yubo Chen, Shizhu He, Kang Liu, and Jun Zhao. 2016. Leveraging FrameNet to improve automatic event detection. In *Proceedings of the 54th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2134–
2143, Berlin, Germany. Association for Computational Linguistics.
Shulin Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2017b.
Exploiting argument information to improve event detection via supervised attention mechanisms. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 1789–1798, Vancouver, Canada.
Association for Computational Linguistics.
Yaojie Lu, Hongyu Lin, Jin Xu, Xianpei Han, Jialong Tang, Annan Li, Le Sun, Meng Liao, and Shaoyi Chen. 2021. Text2Event: Controllable sequence-tostructure generation for end-to-end event extraction.
In *Proceedings of the 59th Annual Meeting of the* Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2795–2806, Online. Association for Computational Linguistics.
Qing Lyu, Hongming Zhang, Elior Sulem, and Dan Roth. 2021. Zero-shot event extraction via transfer learning: Challenges and insights. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 2: Short Papers), pages 322–332, Online.
Association for Computational Linguistics.
Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016. Joint event extraction via recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 300–309, San Diego, California.
Association for Computational Linguistics.
Thien Huu Nguyen and Ralph Grishman. 2015. Event detection and domain adaptation with convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 365–371, Beijing, China. Association for Computational Linguistics.
Thien Huu Nguyen and Ralph Grishman. 2016. Modeling skip-grams for event detection with convolutional neural networks. In *Proceedings of the 2016 Conference on Empirical Methods in Natural Language* Processing, pages 886–891, Austin, Texas. Association for Computational Linguistics.
Trung Minh Nguyen and Thien Huu Nguyen. 2019. One for all: Neural joint modeling of entities and events.
In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence and Thirty-First Innovative Applications of Artificial Intelligence Conference and Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI'19/IAAI'19/EAAI'19.
AAAI Press.
Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Jan Hajič, Christopher D. Manning, Sampo
Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal Dependencies v2:
An evergrowing multilingual treebank collection. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 4034–4043, Marseille, France. European Language Resources Association.
Brendan O'Connor, Brandon M. Stewart, and Noah A.
Smith. 2013. Learning to extract international relations from political context. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1094–1104, Sofia, Bulgaria. Association for Computational Linguistics.
Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Dan Flickinger, Jan Hajič, Angelina Ivanova, and Yi Zhang. 2014. SemEval 2014 task 8: Broad-coverage semantic dependency parsing. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 63–72, Dublin, Ireland. Association for Computational Linguistics.
Tim O'Gorman, Kristin Wright-Bettner, and Martha Palmer. 2016. Richer event description: Integrating event coreference with temporal, causal and bridging annotation. In Proceedings of the 2nd Workshop on Computing News Storylines (CNS 2016), pages 47–
56, Austin, Texas. Association for Computational Linguistics.
Martha Palmer, Daniel Gildea, and Paul Kingsbury.
2005. The proposition bank: An annotated corpus of semantic roles. *Comput. Linguist.*, 31(1):71–106.
Vinodkumar Prabhakaran, Tomas By, Julia Hirschberg, Owen Rambow, Samira Shaikh, Tomek Strzalkowski, Jennifer Tracey, Michael Arrigo, Rupayan Basu, Micah Clark, Adam Dalton, Mona Diab, Louise Guthrie, Anna Prokofieva, Stephanie Strassel, Gregory Werner, Yorick Wilks, and Janyce Wiebe. 2015.
A new dataset and evaluation for belief/factuality.
In *Proceedings of the Fourth Joint Conference on* Lexical and Computational Semantics, pages 82–91, Denver, Colorado. Association for Computational Linguistics.
James Pustejovsky, Patrick Hanks, Roser Saurí, Andrew See, Rob Gaizauskas, Andrea Setzer, Dragomir Radev, Beth Sundheim, David Day, Lisa Ferro, and Marcia Lazo. 2003. The timebank corpus. *Proceedings of Corpus Linguistics*.
Rachel Rudinger, Aaron Steven White, and Benjamin Van Durme. 2018. Neural models of factuality. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 731–744, New Orleans, Louisiana. Association for Computational Linguistics.
Roser Saurí and James Pustejovsky. 2009. FactBank:
a corpus annotated with event factuality. Language resources and evaluation, 43(3):227.
Philip A. Schrodt, Shannon G. Davis, and Judith L.
Weddle. 1994. KEDS - a program for the machine coding of event data. *Social Science Computer Review*, 12(4):561 –587.
Lei Sha, Jing Liu, Chin-Yew Lin, Sujian Li, Baobao Chang, and Zhifang Sui. 2016. RBPB:
Regularization-based pattern balancing method for event extraction. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1224–1234, Berlin, Germany. Association for Computational Linguistics.
Zhiyi Song, Ann Bies, Stephanie Strassel, Tom Riese, Justin Mott, Joe Ellis, Jonathan Wright, Seth Kulick, Neville Ryant, and Xiaoyi Ma. 2015. From light to rich ERE: Annotation of entities, relations, and events. In Proceedings of the The 3rd Workshop on EVENTS: Definition, Detection, Coreference, and Representation, pages 89–98, Denver, Colorado. Association for Computational Linguistics.
Gabriel Stanovsky, Judith Eckle-Kohler, Yevgeniy Puzikov, Ido Dagan, and Iryna Gurevych. 2017. Integrating deep linguistic features in factuality prediction over unified datasets. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 352–357, Vancouver, Canada. Association for Computational Linguistics.
Niklas Stoehr, Lucas Torroba Hennigen, Samin Ahbab, Robert West, and Ryan Cotterell. 2021. Classifying dyads for militarized conflict analysis. In *Proceedings of the 2021 Conference on Empirical Methods* in Natural Language Processing, pages 7775–7784, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Sandesh Swamy, Alan Ritter, and Marie-Catherine de Marneffe. 2017. "i have a feeling trump will win..................": Forecasting winners and losers from user predictions on Twitter. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1583–1592, Copenhagen, Denmark. Association for Computational Linguistics.
David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. 2019. Entity, relation, and event extraction with contextualized span representations.
In *Proceedings of the 2019 Conference on Empirical* Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5784–
5789, Hong Kong, China. Association for Computational Linguistics.
Xiaozhi Wang, Ziqi Wang, Xu Han, Zhiyuan Liu, Juanzi Li, Peng Li, Maosong Sun, Jie Zhou, and Xiang Ren.
2019. HMEAE: Hierarchical modular event argument extraction. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference* on Natural Language Processing (EMNLP-IJCNLP),
pages 5777–5783, Hong Kong, China. Association for Computational Linguistics.
Ziqi Wang, Xiaozhi Wang, Xu Han, Yankai Lin, Lei Hou, Zhiyuan Liu, Peng Li, Juanzi Li, and Jie Zhou.
2021. CLEVE: Contrastive Pre-training for Event Extraction. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 6283–6297, Online. Association for Computational Linguistics.
Ralph M. Weischedel, Eduard H. Hovy, Mitchell P. Marcus, and Martha Palmer. 2017. OntoNotes: A large training corpus for enhanced processing.
Bishan Yang and Tom M. Mitchell. 2016. Joint extraction of events and entities within a document context.
In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 289–299, San Diego, California. Association for Computational Linguistics.
Sen Yang, Dawei Feng, Linbo Qiao, Zhigang Kan, and Dongsheng Li. 2019. Exploring pre-trained language models for event extraction and generation. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 5284–
5294, Florence, Italy. Association for Computational Linguistics.
Jiarui Yao, Haoling Qiu, Jin Zhao, Bonan Min, and Nianwen Xue. 2021. Factuality assessment as modal dependency parsing. In *Proceedings of the 59th Annual Meeting of the Association for Computational* Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 1540–1550, Online. Association for Computational Linguistics.
Ying Zeng, Yansong Feng, Rong Ma, Zheng Wang, Rui Yan, Chongde Shi, and Dongyan Zhao. 2018.
Scale up event extraction learning via automatic training data generation. In Proceedings of the ThirtySecond AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI'18/IAAI'18/EAAI'18. AAAI Press.
Junchi Zhang, Yanxia Qin, Yue Zhang, Mengchi Liu, and Donghong Ji. 2019a. Extracting entities and events as a single task using a transition-based neural model. In *International Joint Conference on Artificial Intelligence*.
Tongtao Zhang, Heng Ji, and Avirup Sil. 2019b. Joint Entity and Event Extraction with Generative Adversarial Imitation Learning. *Data Intelligence*, 1(2):99–
120.
Yue Zhao, Xiaolong Jin, Yuanzhuo Wang, and Xueqi Cheng. 2018. Document embedding enhanced event detection with hierarchical and supervised attention. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 414–419, Melbourne, Australia. Association for Computational Linguistics.

Yang Zhou, Yubo Chen, Jun Zhao, Yin Wu, Jiexin Xu, and Jinlong Li. 2021. What the role is vs. what plays the role: Semi-supervised event argument extraction via dual question answering. In AAAI Conference on Artificial Intelligence.

## A Appendix

## A.1 Limitations

This work identifies specific issues and provides solutions to them. Recommendations 1 and 3 have solutions that could completely eliminate the issues they address. The method that we introduce for Recommendation 2 eliminates inconsistency in selecting the head of an argument extent; however, more ways of selecting the head may exist. Future work could explore additional ways of selecting the head in order to further reduce the chance that a correctly identified argument is considered incorrectly identified.

## A.2 Risks
The risks are the same as the risks for event extraction and information extraction. While a large literature, portions of which we reference, exists on ACE event extraction, less attention has been paid to its ethical and social implications. Sociopolitical events, which ACE often focuses on, may be of great interest to social scientists (e.g. the CASE workshop) as well as having government and military intelligence utility (presumably, an original motivation of the ACE program: while its original websites11 and papers (Doddington et al., 2004) do not appear to explicitly specify a funding agency, they cite the earlier Message Understanding Conference (MUC) as its predecessor, whose proceedings explicitly cite DARPA as a sponsor (muc, 1991)). See, for example, Li et al. (2020b)'s ethical discussion of dual use issues for their partially ACE-based multimodal tracking/surveillance system.

11https://www.ldc.upenn.edu/collaborations/past-projects/ace and http://web.archive.org/web/20080303183132/https://www.nist.gov/speech/tests/ace/
## A.3 Issues With The Current Literature For Identifying Arguments
In Section 4, we identified that several recent works since 2018, including some on zero-shot EE, do not
evaluate the correctness of an argument by comparing it against all possible references to the argument within a sentence. We discuss more details about such works.
Wadden et al. (2019) state that "the ACE data set lacks coreference annotations," and the original released code12 does not consider evaluating an argument against multiple references to the same argument. (As we note, ACE does in fact include significant coreference annotations.) Later, a third party added a software option to include clusters of entity spans, where a cluster contains spans of references referring to the same entity throughout a document, along with the event information. However, with this option, coreference resolution is still difficult because neither the entity information nor the event argument information in the pre-processed data includes an ID. While the pre-processed data includes entity and event argument spans, the spans may not completely match so mapping an event mention argument to an entity mention to check for multiple references using the pre-processed data becomes very difficult. Another third party also added code to gather coreference information corresponding to each event, but in the Github repository, one of Wadden et al. (2019)'s original authors states that both of these additions are unofficial.
We examine code bases of several works that design their pre-processing code similarly to Wadden et al. (2019) and find that they also do not collect all possible references to arguments from ACE (Du and Cardie, 2020; Lin et al., 2020; Lyu et al., 2021; Lu et al., 2021; Li et al., 2021). The Du and Cardie (2020) pre-processing code is most similar to the Wadden et al. (2019) pre-processing code, and the evaluation code does not compare arguments extracted by an EE method with ACE
annotated references. Lin et al. (2020) and Lyu et al. (2021) state that they follow Wadden et al.'s pre-processing code and release their code bases. Although their code differs more from Wadden et al. (2019)'s than Du and Cardie (2020)'s does, it still does not gather multiple gold references for the same argument. Lyu et al.
(2021) mention that some errors in the evaluation are attributable to this coreference issue. Further, Li et al. (2021) and Lu et al. (2021) both state that they follow Wadden et al. (2019)'s pre-processing and their respective code bases reflect this. Li et al.
(2021) additionally state that they do not need to perform coreference resolution.
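To make the intended comparison concrete, the following is a hedged sketch of a coreference-aware correctness check; the data structures are assumed for illustration and do not correspond to any particular code base discussed above.

```python
# Illustrative sketch: a predicted argument counts as correctly identified if its
# span matches ANY same-sentence mention of the gold argument's entity, not only
# the single annotated mention. Data structures here are assumed for illustration.
from typing import Dict, List, Tuple

Span = Tuple[int, int]

def argument_correctly_identified(
    pred_span: Span,
    gold_entity_id: str,
    sentence_mentions: Dict[str, List[Span]],
) -> bool:
    # sentence_mentions maps an entity ID to all of its mention spans within
    # the sentence that contains the event mention.
    return pred_span in sentence_mentions.get(gold_entity_id, [])
```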
## A.4 Exploration Into The ACE Head And UD-Based Head
We discuss the algorithm for identifying the UDbased head from the argument extent, and then show examples of the head that ACE identifies versus the head that the UD-based algorithm extracts.
## A.4.1 Algorithm
The algorithm identifies the head of an argument extent in a way that is consistent with the Universal Dependencies (UD) definition of head, but has slight modifications to suit the interpretation that a head could be an entire named entity and to work around well-known types of misparses by the UD parser. The first step of the algorithm is to apply a tokenizer to the argument extent such that hyphens and apostrophes do not break words apart. Next, it uses SpaCy3 to construct a list of named entities that excludes the date, time, ordinal, and cardinal entity types. It then finds the lowest common ancestor (LCA) of the argument extent. If the LCA is not within a named entity of the argument extent, it is selected as the head. Otherwise, the named entity that the LCA is a substring of is selected as the head.
The algorithm additionally handles two special cases that could complicate the UD selection of the appropriate head. If a null relativizer exists in an event argument, the UD parser may select a verb as the head. For example, in: "at least seven journalists killed covering the conflict", the parser selects "killed" as the head, which is incorrect. In addition, if a relative pronoun exists in an event argument, as in: "leader of the Iraq arms program who defected for a time", the UD parser may select the relativizer, "who", as the head. To work around these cases, the algorithm considers the argument extent to end after the first instance of a verb or relativizer pronoun that occurs after a noun (after a noun to avoid mis-identifying heads for cases such as: "these battered buildings").
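A minimal sketch of this head-selection procedure is shown below, assuming spaCy with the en_core_web_sm model; Span.root is used as a stand-in for the lowest common ancestor, the custom tokenizer rules for hyphens and apostrophes are omitted, and the exact boundary handling for the verb/relativizer workaround is an assumption rather than the precise implementation.

```python
# Illustrative sketch only, not the exact implementation used in our analysis.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed
EXCLUDED_ENT_TYPES = {"DATE", "TIME", "ORDINAL", "CARDINAL"}
RELATIVIZER_TAGS = {"WDT", "WP", "WP$", "WRB"}

def ud_based_head(extent: str) -> str:
    doc = nlp(extent)
    # Truncate the extent before the first verb or relativizer that follows a noun,
    # to work around null relativizers and relative pronouns (assumed boundary).
    cutoff, seen_noun = len(doc), False
    for tok in doc:
        if tok.pos_ in ("NOUN", "PROPN"):
            seen_noun = True
        elif seen_noun and (tok.pos_ == "VERB" or tok.tag_ in RELATIVIZER_TAGS):
            cutoff = tok.i
            break
    span = doc[:cutoff]
    # Named entities, excluding date/time/ordinal/cardinal types.
    named_entities = [e for e in span.ents if e.label_ not in EXCLUDED_ENT_TYPES]
    head = span.root  # stand-in for the lowest common ancestor of the extent
    # If the head token falls inside a kept named entity, return the whole entity.
    for ent in named_entities:
        if ent.start <= head.i < ent.end:
            return ent.text
    return head.text

# Expected to return "leader" rather than the relativizer "who".
print(ud_based_head("leader of the Iraq arms program who defected for a time"))
```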
We run the algorithm over all of the argument extents in ACE that are not of the form "[x] and/or [y]", since ACE makes an exception and extracts two heads ([x] and [y]) from such extents, and find three mistakes out of a sample of 300. In the rare single-word cases where a mistake occurs, the argument span usually contains a noun compound with spaces (most such noun compounds do not indicate a mistake), and none of these spans contain null relativizers.
## A.4.2 Contradictions
We show surprising discrepancies between the head that ACE identifies and the head that the UD-based algorithm identifies with respect to an argument extent below. Similar to the examples in Figure 3 of the main paper, the head that ACE identifies is in red brackets (rendered here with the subscript ACE) and the head that the UD-based algorithm identifies is in blue brackets (rendered here with the subscript AUT).
the [Houston [Center]ACE]AUT
[Wall [street]AUT ]ACE
[aol time [warnerings]AUT ]ACE
[f-14 [aircraft]ACE]AUT
another [half-[brother]ACE]AUT of saddam hussein
[neither]AUT of the [women]ACE
the [[Office]ACE of the President]AUT
the [[president]AUT ]ACE-elect of the American Medical Association
several [[parts]ACE]AUT of southern Iraq
[hundreds]AUT of [civilians]ACE in East Timor
a [[warren]ACE]AUT of cells
[thousands]AUT of U.S. [troops]ACE
the [[Shah]ACE of Iran]AUT
the [U.S. Army [7th Cavalry]ACE]AUT
[American [Marines]ACE]AUT
two [U.S. [Marines]ACE]AUT killed in combat
21-year-old [Marine Corporal [Randall Kent Rosacker]ACE]AUT
[delma [banks]AUT ]ACE
the [national youth and student peace [coalition]AUT ]ACE
[persian [gulf]AUT ]ACE
the [center]ACE of the second largest city in iraq, [basra]AUT
the [urbuinano [island]AUT ]ACE
the [catholic [church]ACE]AUT in phoenix, arizona
two very strong - [militant groups]ACE
British [Desert [Rats]AUT ]ACE
the [Alfred P. Murrah federal [building]AUT ]ACE
his [ex-[wife]ACE]AUT
[tight [ends]AUT ]ACE
[9]AUT [more]ACE
[19]AUT [more]ACE
[second-[graders]ACE]AUT
## A.5 ACE Experiment Details
To extract statistics about coreference, we modify Wadden et al.'s pre-processing code. In the analysis, we omit one document due to pre-processing issues and consider only entities (not times and values) as arguments, which is consistent with most of the literature that we reviewed.
From the results in Table 2, we observe that the selected event mention argument does not seem to follow a specific pattern: it does not consistently prefer being a named entity, nor is it consistently the first of the references to appear in the sentence.
| If multiple non-duplicate refs exist in the same sentence, the percent that: | Excl. Pron. | Incl. Pron. |
|------------------------------------------------------------------------------------------|---------|---------|
| the event arg is a named entity, given ≥ 1 reference is a named entity | 67.63 | 60.73 |
| the event arg is not a named entity, given ≥ 1 reference is a named entity | 32.37 | 39.27 |
| the event arg is the first of those references in the sentence | 47.90 | 56.32 |
| the event arg is not the first of those references in the sentence | 52.10 | 43.68 |
| the event arg is not a relativizer pronoun, given ≥ 1 reference is a relativizer pronoun | n/a | 80.63 |
| the event arg is a relativizer pronoun, given ≥ 1 reference is a relativizer pronoun | n/a | 19.37 |
| the event arg is not a different pronoun, given ≥ 1 reference is a different pronoun | n/a | 67.46 |
| the event arg is a different pronoun, given ≥ 1 reference is a different pronoun | n/a | 32.54 |
Table 2: Percentage information about the event mention argument in the case that multiple non-duplicate references (≥ 2) to the same entity exist *in the same sentence*. Relativizer pronouns include "who", "which", etc., while different pronouns include "he", "her", etc. We extract these numbers both for the case where arguments can be pronouns and for the case where they cannot.
## A.6 Literature Review Details
To aim toward fair comparison among EE methods, works use ACE to evaluate them in three general ways. Only the earliest papers (Ji and Grishman, 2008; Liao and Grishman, 2010; Hong et al., 2011)
use the first split (A), where the evaluation uses all of the text data and 33 separate event subclasses, ignoring the event classes, and where the test set contains 40 newswire texts, the development set contains 10 newswire texts, and the rest of the texts belong to the training set. The second split (B) is an improvement upon the first, with the only difference of using *30 randomly selected* texts in the development set. A zero-shot evaluation of this split variety ignores the training set. A third split variety (C) is for a specific application of event extraction which focuses more on the generalization ability across different domains; in this split, the source domain is news, half of bc is the development set, and the remaining data makes up the test set. Three papers that we reviewed use split
(A) (Ji and Grishman, 2008; Liao and Grishman, 2010; Hong et al., 2011), at least 28 papers use split (B) (Li et al., 2013; Nguyen and Grishman, 2015; Chen et al., 2015; Nguyen et al., 2016; Yang and Mitchell, 2016; Nguyen and Grishman, 2016; Feng et al., 2016; Liu et al., 2016; Huang et al.,
2016; Sha et al., 2016; Chen et al., 2017; Liu et al.,
2017b; Zhao et al., 2018; Liu et al., 2018, 2019; Zhang et al., 2019b; Wang et al., 2019; Zhang et al.,
2019a; Yang et al., 2019; Nguyen and Nguyen, 2019; Wadden et al., 2019; Liu et al., 2020; Li et al., 2020a; Ahmad et al., 2021; Lu et al., 2021; Lyu et al., 2021; Wang et al., 2021; Zhou et al.,
2021), some for few-shot or zero-shot evaluations use a different, contrived split (e.g. Huang et al.
(2018)) and others use both split (B) and a different split (e.g. Du and Cardie (2020)).
In addition, most works use the evaluation criteria that 1. an *event trigger is considered correct* when its offsets match a gold trigger and its event class is correct, and 2. an *argument is considered correct* when its offsets and event class match a gold argument and its event role is correct. However, these criteria do not include many more details and are not stated in formal notation, allowing discrepancies in the way that different works implement them.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section A.1 of the Appendix
✓ A2. Did you discuss any potential risks of your work?
Section A.2 of the Appendix
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1 of the main paper
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
We use the Automatic Content Extraction (ACE) dataset, introducing it in Sections 1 and 2, and using it in Section 4.
✓ B1. Did you cite the creators of artifacts you used?
Sections 1 and 2 of the main paper and A.2 of the Appendix
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section 1 and A.2 of the Appendix
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Sections 1, 2, and 3 of the main paper
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Section A.2 of the Appendix
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Sections 1 and 2 of the main paper
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
We discuss them in Sections 1, 2, and 4 of the main paper and discuss more details in the Appendix.
## C ✓ **Did you run computational experiments?**
Section 4

C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used?
Not applicable. Left blank.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4 of the main paper and Sections A.4 and A.5 of the Appendix
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4 of the main paper and Section A.5 of the Appendix
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4 of the main paper and Section A.5 of the Appendix
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
lu-etal-2023-event | Event Extraction as Question Generation and Answering | https://aclanthology.org/2023.acl-short.143 | Recent work on Event Extraction has reframed the task as Question Answering (QA), with promising results. The advantage of this approach is that it addresses the error propagation issue found in traditional token-based classification approaches by directly predicting event arguments without extracting candidates first. However, the questions are typically based on fixed templates and they rarely leverage contextual information such as relevant arguments. In addition, prior QA-based approaches have difficulty handling cases where there are multiple arguments for the same role. In this paper, we propose QGA-EE, which enables a Question Generation (QG) model to generate questions that incorporate rich contextual information instead of using fixed templates. We also propose dynamic templates to assist the training of QG model. Experiments show that QGA-EE outperforms all prior single-task-based models on the ACE05 English dataset. | # Event Extraction As Question Generation And Answering
Di Lu, Shihao Ran, Joel Tetreault, Alejandro Jaimes
Dataminr Inc.
{dlu,sran,jtetreault,ajaimes}@dataminr.com
## Abstract
Recent work on Event Extraction has reframed the task as Question Answering (QA), with promising results. The advantage of this approach is that it addresses the error propagation issue found in traditional token-based classification approaches by directly predicting event arguments without extracting candidates first.
However, the questions are typically based on fixed templates and they rarely leverage contextual information such as relevant arguments. In addition, prior QA-based approaches have difficulty handling cases where there are multiple arguments for the same role. In this paper, we propose QGA-EE, which enables a Question Generation (QG) model to generate questions that incorporate rich contextual information instead of using fixed templates. We also propose dynamic templates to assist the training of QG
model. Experiments show that QGA-EE outperforms all prior single-task-based models on the ACE05 English dataset.1
## 1 Introduction
Event Extraction (EE) aims to extract core information elements (e.g. who, what, where, when) from text, and is a very important task in Natural Language Processing (NLP). It provides inputs to downstream applications such as Summarization (Filatova and Hatzivassiloglou, 2004), Knowledge Base Population (Ji and Grishman, 2011), and Recommendation (Lu et al., 2016).
Previous work (Li et al., 2013; Nguyen et al.,
2016; Sha et al., 2018) is typically based on a pipeline approach, which first identifies the event trigger word/phrase and argument candidates, and then applies a classifier to the pair-wise features to classify the roles of the candidates. Unfortunately, errors tend to propagate down the pipeline.
1Our code is available at https://github.com/dataminr-ai/Event-Extraction-as-QuestionGeneration-and-Answering for research purposes.

![0_image_0.png](0_image_0.png)

Figure 1: An event mention example from ACE. An ACE Conflict.Attack event with *pummeled* as trigger word and three event arguments: *coalition* (Attacker), *jets* (Instrument) and *hills* (Place).

Recently, some approaches have formulated EE
as a Question Answering (QA) problem (Du and Cardie, 2020; Li et al., 2020; Lyu et al., 2021) to mitigate the issue, in which questions for each argument role are manually defined by templates. For example, extracting the Attack argument from the Conflict.Attack event in the sentence 'That's because coalition fighter jets pummeled this Iraqi position on the hills above Chamchamal and Iraqi troops made a hasty retreat.' is reframed as answering the question *'Who was the attacking agent?'*
These approaches have shown promising results, but template-based questions are limiting: since the templates are built manually, they are fixed and rarely include contextual information (i.e., specific to the inputs), except for trigger words in some work (Du and Cardie, 2020). Formulating good questions, however, has been shown to improve performance for standard QA tasks (Rajpurkar et al., 2018). For QA-based EE, a question that incorporates richer contextual information such as other event arguments could yield better results (e.g.
'Who used jets in the attack in *hills?'* in Figure 1).
In this paper, we propose QGA-EE, which consists of 1) a QG model for generating a context-aware question conditioned on a target argument role and 2) a QA model for answering the context-aware question to extract the event argument. We also design dynamic templates to generate the gold context-aware questions for QG model training.
To the best of our knowledge, this is the first QA-based EE work that utilizes dynamic templates and focuses on generating context-aware questions.
Li et al. (2020) also propose a model to generate questions that incorporate contextual information for both event trigger and arguments. However, our work has two main advantages. First, in Li et al. (2020) the question only incorporates the contextual information at the ontology level (e.g.
argument role, event type). In our work, the generated questions incorporate contextual information at an event mention-level. For example, the question generated by our model includes the real event argument rather than just the argument role (e.g.
'hills' vs 'Place'). Second, the questions in their work are generated by filling in the templates, but our templates are dynamic and used to train the QG model which can automatically generate the optimal question given a specific event mention and the concerned argument role.
Experimental results show that QGA-EE outperforms all of the single-task-based models on the Automatic Content Extraction (ACE) 2005 English dataset (Doddington et al., 2004) and even achieves competitive performance with state-of-the-art joint IE models.
## 2 Model
Figure 1 shows the overall framework of QGA-EE.
It focuses on Event Argument Extraction (EAE)
only, but can be paired with any event trigger tagger to perform end-to-end EE. In Section 4, we pair it with a standard sequence labeling trigger tagger to evaluate its end-to-end EE performance.
## 2.1 Question Generation Model
Previous QA-based EE work (Du and Cardie, 2020)
fills in pre-designed templates with trigger information to generate the input questions to the QA
model. However, missing contextual information in the questions is a bottleneck for the performance of the QA model.
QGA-EE uses a QG model to generate context-aware questions conditioned on the input sentence and target role, which is based on a sequence-to-sequence architecture (e.g., BART (Lewis et al., 2020) or T5 (Raffel et al., 2020)). In order to train the QG model, we design **Dynamic Templates**
for each role in the ACE ontology.2 We design multiple templates for each role, and each of them includes different combinations of other argument roles.
Who was the attacking agent?
Who attacked [Target]? Who used [Instrument] in the attack? Who made the attack in [Place]? Who attacked [Target] using [Instrument]?
Who attacked [Target] in [Place]?
Who used [Instrument] in the attack in [Place]? Who attacked [Target] using [Instrument] in [Place]?
Table 1: Dynamic templates for Attacker role in Conflict.Attack event with different combinations of known argument roles based on ACE ontology.
For example, the Conflict.Attack event in ACE has four predefined argument roles:
Attacker, Target, Instrument and Place.
3 For the Attacker role, we exhaustively design eight templates using all of the possible combinations of the other roles included in the question (Table 1).
When the model fills in the templates given a specific event mention, it is common that some of the predefined argument roles do not exist in the event mention. Thus the model only keeps the templates that contain the slots for argument roles appearing in the event mention. For the example in Figure 1, the Target role is not mentioned. So we ignore all of the templates that contain the [Target]
slot, and we obtain four candidate questions for the Attacker role with corresponding arguments filled in: (1)*Who was the attacking agent?* (2) Who used jets in the attack? (3) *Who made the attack in hills?*
(4) *Who used jets in the attack in hills?*
To train a QG model to generate the questions that cover as many contextual information as possible, we use the question that contains the most contextual arguments as the ground truth. For the example in Figure 1, we choose the question *'Who* used jets in the attack in hills?', because it contains two arguments: *'jets'* and *'hills'*, the other three candidate questions listed above contain one or zero arguments. If more than one candidate question contains the most contextual arguments, we then pick the first one. The input and output examples for the QG model are as follows:
2https://www.ldc.upenn.edu/sites/www.ldc.upenn.edu/files/english-events-guidelinesv5.4.3.pdf

3We follow the experimental setting of prior work, which excludes all the Value and Timex. Thus the argument roles such as Time are not included.
Input: role: attacker context: That's because coalition fighter jets * pummeled * this Iraqi position on the hills above Chamchamal and Iraqi troops made a hasty retreat.
Output: Who used jets in the attack in hills?
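For illustration, inference with a fine-tuned QG checkpoint could look like the following sketch, assuming the Hugging Face transformers API; the checkpoint path and generation settings are placeholders rather than our released configuration.

```python
# Illustrative only: "qga-ee-qg-t5" is a placeholder for a fine-tuned QG checkpoint.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("qga-ee-qg-t5")
model = T5ForConditionalGeneration.from_pretrained("qga-ee-qg-t5")

source = (
    "role: attacker context: That's because coalition fighter jets * pummeled * "
    "this Iraqi position on the hills above Chamchamal and Iraqi troops made a hasty retreat."
)
inputs = tokenizer(source, return_tensors="pt", truncation=True)
question_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(question_ids[0], skip_special_tokens=True))
# e.g. "Who used jets in the attack in hills?"
```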
## 2.2 Question Answering Model
Different from prior QA-based EE work that adopts an encoder-only architecture and predicts the offsets of the event arguments (Chen et al., 2019; Du and Cardie, 2020; Li et al., 2020; Liu et al., 2020; Feng et al., 2020; Lyu et al., 2021; Zhou et al., 2021), our QA model is based on a sequence-to-sequence architecture (e.g., BART, T5) and generates the answer string directly. This enables prediction of multiple event arguments that are associated with the same role. Li et al. (2021) also adopt a generation model, but their input template is fixed. Examples of input and output are as follows:
Input: question: Who was harmed in * injured
* event? context: Injured Russian diplomats and a convoy of America's Kurdish comrades in arms were among unintended victims caught in crossfire and friendly fire Sunday.
Output: *diplomats; convoy; victims* </s>
**Post-processing** We split the output into a list of candidates (by ';'), and retrieve the arguments with offsets by exactly matching against the original sentence. We dynamically change the start position for searching to preserve the order of the retrieved event arguments. If an argument candidate cannot be matched with the original sentence, we discard it. Unlike the QG model, we use all of the possible questions as inputs during training for data augmentation purposes, and the size of the training data increases from 15,426 to 20,681.
But in the testing phase, we use the single question generated by the QG model for each argument role.
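A minimal sketch of this post-processing step is given below; the exact policy for advancing the search start position is an assumption (here it moves past each matched candidate).

```python
from typing import List, Tuple

def retrieve_arguments(answer: str, sentence: str) -> List[Tuple[str, int, int]]:
    """Split the generated answer on ';' and map each candidate to character offsets."""
    results, search_start = [], 0
    for candidate in answer.split(";"):
        candidate = candidate.strip()
        if not candidate:
            continue
        pos = sentence.find(candidate, search_start)
        if pos == -1:
            continue  # discard candidates that cannot be matched exactly
        results.append((candidate, pos, pos + len(candidate)))
        search_start = pos + len(candidate)  # preserve left-to-right order
    return results

sentence = ("Injured Russian diplomats and a convoy of America's Kurdish comrades in arms "
            "were among unintended victims caught in crossfire and friendly fire Sunday.")
print(retrieve_arguments("diplomats; convoy; victims", sentence))
```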
## 3 Experimental Setup

## 3.1 Dataset And Evaluation Metrics
We conduct the experiments on the ACE 2005 English corpora, which has 33 event types and 22 argument roles. It contains 599 documents collected from newswire, weblogs, broadcast conversations, and broadcast news. More specifically, we follow the pre-processing steps in Wadden et al. (2019),4 and evaluate our models on the resulting ACE05-E dataset.
4https://github.com/dwadden/dygiepp

For evaluation, we use the same criteria as prior work (Li et al., 2013): An **event trigger** is correctly identified if its offsets exactly match a reference. It is correctly classified if both its offsets and event type match a reference. An **event argument** is correctly identified (Arg-I) if its offsets and event type match a reference in the ground truth. It is correctly classified (Arg-C) if all of its offsets, event type, and argument role match a reference.
## 3.2 Compared Baselines
Model Variants. To evaluate the generalizability of our approach, we evaluate two QGA-EE variants:
QGA-EE*BART* and **QGA-EE**T5, which use BART
and T5 as backbones respectively.
We compare the proposed models against SOTA
EE models. **BERT QA** (Du and Cardie, 2020) use BERT as the encoder and predict the positions of the argument directly with role-driven questions.
TANL (Paolini et al., 2021) transfers input sentences into augmented natural language sentences for structured prediction. **TEXT2EVENT** (Lu et al., 2021) is a sequence-to-structure network for event extraction.5 Ma et al. (2020) utilizes dependency parses as additional features. **BART-Gen** (Li et al., 2021) is a BART-based generation model proposed for document-level event extraction.
We also compare with joint IE models trained on all of the ACE annotations which include entities, relations, and events. They benefit from additional information from other tasks and usually achieve better performance than the models trained on a single task. It is not fair to directly compare our model with the joint models since they incorporate more information beyond the standard EE training sets, but we still list their scores as a reference. **DYGIE++** (Wadden et al., 2019) is a BERT-based model that models span representations with within-sentence and cross-sentence context. **ONEIE** (Lin et al., 2020) leverages global features. **FourIE** (Nguyen et al., 2021) and GraphIE (Van Nguyen et al., 2022) are Graph Convolutional Networks-based models and **AMRIE** (Zhang and Ji, 2021) utilizes AMR (Banarescu et al., 2013) parser.
## 3.3 Implementation Details
5DEGREE (Hsu et al., 2022) is not included because it is not evaluated on all of the argument roles.

We conduct all of the experiments on a single V100 GPU. For finetuning, we use the Adafactor (Shazeer and Stern, 2018) optimizer with a learning rate of 1 ∗ 10−4, weight decay of 1 ∗ 10−5, and clip threshold of 1.0. We train the model for 20 epochs. Further details such as hyperparameters and data statistics for model training and evaluation are in Appendix C.
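For reference, the optimizer setup corresponds roughly to the sketch below, assuming the Hugging Face Adafactor implementation; the backbone name is a placeholder, and a fixed learning rate requires disabling Adafactor's relative-step schedule.

```python
from transformers import T5ForConditionalGeneration
from transformers.optimization import Adafactor

model = T5ForConditionalGeneration.from_pretrained("t5-large")  # placeholder backbone
optimizer = Adafactor(
    model.parameters(),
    lr=1e-4,                 # learning rate of 1e-4
    weight_decay=1e-5,       # weight decay of 1e-5
    clip_threshold=1.0,      # clip threshold of 1.0
    scale_parameter=False,   # assumption: plain fixed-LR Adafactor
    relative_step=False,
    warmup_init=False,
)
```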
## 4 Results

## 4.1 Event Argument Extraction Performance
Table 2 shows the performance of QGA-EE
models on ACE05-E test set with gold triggers.6 Both QGA-EE variants outperform all other approaches, and using T5 as backbone provides an improvement of 2.5% over BART. The improvement over the prior QA-based models BERT_QA
shows that generation-based QA models are more effective than position-based QA models for EE.
QGA-EE*BART* outperforms the BART-based baseline BART-Gen and QGA-EET5 outperforms the T5-based baseline TANL, which demonstrates the effectiveness of our models with different backbones. Our models even outperform the joint IE
models DYGIE++ and ONEIE, which leverage additional information from entities and relations.
| Model | Arg-I | Arg-C |
|---------------------------------|---------|------|
| BERT_QA (Du and Cardie, 2020) | 68.2 | 65.4 |
| TANL+ (Paolini et al., 2021) | 65.9 | 61.0 |
| Ma et al. (2020) | - | 62.1 |
| BART-Gen (Li et al., 2021) | 69.9 | 66.7 |
| DYGIE++∗+ (Wadden et al., 2019) | 66.2 | 60.7 |
| ONEIE∗+ (Lin et al., 2020) | 73.2 | 69.3 |
| QGA-EEBART (ours) | 72.4 | 70.3 |
| QGA-EET 5 (ours) | 75.0 | 72.8 |

Table 2: Event Extraction Results on ACE05-E test data (F1, %) with gold triggers. ∗ models are trained with additional entity and relation data. + numbers are reported from Hsu et al. (2022), and others are from the original papers.
## 4.2 Event Extraction Performance
We also evaluate our models on ACE05-E in a more "real world" fashion with *predicted* triggers extracted by an ALBERT-based (Lan et al., 2019)
sequence-labeling model (Table 3).7 Similar to the performance on gold triggers, QGA-EE benefits more from the T5 backbone on predicted triggers.
Both QGA-EE variants outperform all the EE-task-centered baselines by more than 1% on Arg-C.
We also include the scores from SOTA joint IE
models, DYGIE++, ONEIE, FourIE, AMR-IE and GraphIE, as reference. But, as stated earlier, it is not fair to compare our models directly with them, as they benefit from being trained with all of the annotations from entities, relations, and events. Also it should be noted that their trigger labeling models have more complicated architectures and thus perform significantly better than the sequence-labeling based tagger we use (F1 75.4% from FourIE and F1 74.7% from OneIE). This further boosts the end-to-end EE performance.
| Model | Arg-I | Arg-C |
|------------------------------------|--------|--------|
| BERT_QA (Du and Cardie, 2020) | 54.1 | 53.1 |
| TANL (Paolini et al., 2021) | 50.1 | 47.6 |
| TEXT2EVENT (Lu et al., 2021) | - | 53.8 |
| Ma et al. (2020) | 56.7 | 54.3 |
| BART-Gen (Li et al., 2021) | - | 53.7 |
| DYGIE++∗ (Wadden et al., 2019) | 54.1 | 51.4 |
| ONEIE∗ (Lin et al., 2020) | 59.2 | 56.8 |
| FourIE∗ (Nguyen et al., 2021) | 60.7 | 58.0 |
| AMR-IE∗ (Zhang and Ji, 2021) | 60.9 | 58.6 |
| GraphIE∗ (Van Nguyen et al., 2022) | - | 59.4 |
| QGA-EEBART (ours) | 57.1 | 55.6 |
| QGA-EET 5 (ours) | 59.8 | 57.9 |

Table 3: Event Extraction Results on ACE05-E test data (F1, %) with predicted triggers. ∗ models are trained with additional entity and relation data. All numbers of baselines are reported from the original papers.
## 4.3 Ablation Study
Table 4 shows the ablation study of the QGA-EET5 model on the ACE05 test set with gold triggers. By replacing the QG model with simple context-unaware templates, the F1 score decreases by 1.65%. It demonstrates that the context-aware questions generated by our QG component enhance the end-to-end event argument extraction performance. Additionally, the generation-based QA
model deals with multi-argument situations better and provides an improvement of 4.24%.
| Model | Arg-I | Arg-C |
|-----------------------------------------|---------|-------|
| QGA-EET 5 | 75.04 | 72.78 |
| - w/o pretrained QG | 73.57 | 71.13 |
| - w/o pretrained QG & multi-arg support | 69.61 | 66.89 |

Table 4: Ablation study with gold triggers on ACE05-E test set (F1, %).

## 4.4 Impact Of Data Augmentation
As we mentioned in Section 2.2, the size of the training data increases from 15,426 to 20,681 as a benefit of our proposed dynamic templates. To evaluate the contribution of the data augmentation, we evaluate the performance of QGA-EE on ACE05 test data with partial training data (with gold triggers). With 40% of the training examples after data augmentation (8,272), QGA-EE achieves a F1 score of 71.42% on ACE05-E test set with gold triggers. It outperforms all of the baselines in Table 2, which demonstrates the effectiveness of our proposed model.
| Setting | Arg-I | Arg-C |
|-----------------------------------|---------|-------|
| QGA-EET 5 with 100% training data | 75.04 | 72.78 |
| QGA-EET 5 with 80% training data | 73.86 | 71.64 |
| QGA-EET 5 with 60% training data | 73.15 | 71.63 |
| QGA-EET 5 with 40% training data | 73.47 | 71.42 |
| QGA-EET 5 with 20% training data | 71.15 | 69.13 |
## 4.5 Analysis And Discussion
The average length of the questions generated by QGA-EET5 is 10.5 tokens, compared with 6.7 in Du and Cardie (2020). They contain more context. For example, QGA-EE generates *'Who was attacked by mob in state?'* for the Target role in 'At least three members of a family in India's northeastern state of Tripura were [hacked*Conflict.Attack*]
to death by a tribal mob for allegedly practicing witchcraft, police said Thursday.' It incorporates Attacker ('mob') and Place ('state') information.
We categorize the errors into four groups:
1. Bad question generated by the QG model.
For example, QGA-EE generates *'What did* state buy in * sell * event?' for the Artifact role in '... that the Stalinist state had developed nuclear weapons and hinted it may sell or use them, depending on US actions.'. It should have been 'What did state sell in * sell
* event?' and this introduces an error to the QA model.
2. Errors resulting from a mismatch of the QA
output result. QGA-EE may retrieve wrong offsets if a target candidate matches with multiple text strings in the original sentence.
For example, QGA-EE matches the candidate *'Welch'* with the first mention in 'He also wants to subpoena all documents maintained in Jane Beasley Welch's personnel file by Shearman; Sterling, a prestigious corporate law firm where she worked before she
[marriedLife.Marry] *Welch.'*, where the correct one is the second mention.
3. Errors resulting from missing entity coreference. For example, QGA-EE identifies *'Jacques Chirac'* as the Entity of
the Contact.Phone-Write event in 'French President Jacques Chirac received only a reserved response when he tried to mend fences by placing a telephone call Tuesday to Bush.'.
But *'he'* is the ground truth and refers to
'Jacques Chirac'.
4. Predictions not explicitly mentioned. For example, in *'Kelly, the US assistant secretary* for East Asia and Pacific Affairs, arrived in Seoul from Beijing Friday to brief Yoon, the foreign minister.', QGA-EE infers *'Seoul'* as the Place of the Contact.Meet event, but it is not explicitly mentioned in the context, thus not covered by the gold annotations.
We manually analyzed a subset of the errors from the test set (50 examples), and show the portion of each category of error in Figure 2.
## 5 Conclusion
In this paper, we present QGA-EE, a novel sequence-to-sequence based framework for EE,
which utilizes a QG model to generate context-aware questions as inputs to a QA model for EAE.
Our model naturally supports the cases in which multiple event arguments play the same role within a specific event mention. We conduct experiments on the ACE05-E dataset and the proposed model outperforms all of the single-task-based models and achieves competitive results with state-of-the-art joint IE models. In the future, we plan to utilize the extensibility of the QA framework to incorporate knowledge from semi-structured event-relevant data such as Wikipedia Infoboxes. We also plan to extend our approach to multilingual EE and joint IE.
## Limitations
The design of the dynamic templates requires knowledge of the event ontology and is time-consuming. The authors of the paper spent 30 hours designing the exclusive templates that cover all of the possible argument combinations for each argument role in the ACE ontology. With a more complicated ontology, a much larger amount of time is required.
Another limitation of our approach is the offset retrieval method. If one sentence contains multiple mentions of the same entities, or even multiple text strings that have the same spellings but refer to different entities, the QGA-EE model always retrieves the position where the mention appears for the first time in the sentence as the offset of the extracted target. It may be improved by asking the model to generate contextual text as a position reference.
## Acknowledgements
We thank our colleague Aoife Cahill and the anonymous reviewers for their constructive comments and suggestions.
## References
Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In *Proceedings of the 7th linguistic annotation workshop and interoperability with* discourse, pages 178–186.
Yunmo Chen, Tongfei Chen, Seth Ebner, Aaron Steven White, and Benjamin Van Durme. 2019. Reading the manual: Event extraction as definition comprehension. *arXiv preprint arXiv:1912.01586*.
George Doddington, Alexis Mitchell, Mark Przybocki, Lance Ramshaw, Stephanie Strassel, and Ralph Weischedel. 2004. The automatic content extraction (ACE) program - tasks, data, and evaluation. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04).
Xinya Du and Claire Cardie. 2020. Event extraction by answering (almost) natural questions. In *Proceedings* of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP).
Rui Feng, Jie Yuan, and Chao Zhang. 2020. Probing and fine-tuning reading comprehension models for few-shot event extraction. arXiv preprint arXiv:2010.11325.
Elena Filatova and Vasileios Hatzivassiloglou. 2004.
Event-based extractive summarization. In *Text Summarization Branches Out*.
I-Hung Hsu, Kuan-Hao Huang, Elizabeth Boschee, Scott Miller, Prem Natarajan, Kai-Wei Chang, and Nanyun Peng. 2022. DEGREE: A data-efficient generation-based event extraction model. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
Heng Ji and Ralph Grishman. 2011. Knowledge base population: Successful approaches and challenges.
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut.
2019. Albert: A lite bert for self-supervised learning of language representations. *arXiv preprint* arXiv:1909.11942.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*.
Fayuan Li, Weihua Peng, Yuguang Chen, Quan Wang, Lu Pan, Yajuan Lyu, and Yong Zhu. 2020. Event extraction as multi-turn question answering. In *Findings of the Association for Computational Linguistics:*
EMNLP 2020.
Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features.
In *Proceedings of the 51st Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers).
Sha Li, Heng Ji, and Jiawei Han. 2021. Document-level event argument extraction by conditional generation.
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In *Text summarization* branches out, pages 74–81.
Ying Lin, Heng Ji, Fei Huang, and Lingfei Wu. 2020.
A joint neural model for information extraction with global features. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.
Jian Liu, Yubo Chen, Kang Liu, Wei Bi, and Xiaojiang Liu. 2020. Event extraction as machine reading comprehension. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP).
Di Lu, Clare Voss, Fangbo Tao, Xiang Ren, Rachel Guan, Rostyslav Korolov, Tongtao Zhang, Dongang Wang, Hongzhi Li, Taylor Cassidy, Heng Ji, Shih-fu Chang, Jiawei Han, William Wallace, James Hendler, Mei Si, and Lance Kaplan. 2016. Cross-media event extraction and recommendation. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics:
Demonstrations.
Yaojie Lu, Hongyu Lin, Jin Xu, Xianpei Han, Jialong Tang, Annan Li, Le Sun, Meng Liao, and Shaoyi Chen. 2021. Text2Event: Controllable sequence-to-structure generation for end-to-end event extraction. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)*.
Qing Lyu, Hongming Zhang, Elior Sulem, and Dan Roth. 2021. Zero-shot event extraction via transfer learning: Challenges and insights. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 2: Short Papers).
Jie Ma, Shuai Wang, Rishita Anubhai, Miguel Ballesteros, and Yaser Al-Onaizan. 2020. Resource-enhanced neural model for event argument extraction.
In *Findings of the Association for Computational* Linguistics: EMNLP 2020.
Minh Van Nguyen, Viet Dac Lai, and Thien Huu Nguyen. 2021. Cross-task instance representation interactions and label dependencies for joint information extraction with graph convolutional networks.
In *Proceedings of the 2021 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies.
Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016. Joint event extraction via recurrent neural networks. In *Proceedings of the 2016 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
Giovanni Paolini, Ben Athiwaratkun, Jason Krone, Jie Ma, Alessandro Achille, Rishita Anubhai, Cicero Nogueira dos Santos, Bing Xiang, and Stefano Soatto. 2021. Structured prediction as translation between augmented natural languages. In 9th International Conference on Learning Representations, ICLR 2021.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*,
21(140):1–67.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018.
Know what you don't know: Unanswerable questions for SQuAD. In *Proceedings of the 56th Annual*
Meeting of the Association for Computational Linguistics (Volume 2: Short Papers).
Lei Sha, Feng Qian, Baobao Chang, and Zhifang Sui.
2018. Jointly extracting event triggers and arguments by dependency-bridge rnn and tensor-based argument interaction. In *Proceedings of the AAAI Conference* on Artificial Intelligence.
Noam Shazeer and Mitchell Stern. 2018. Adafactor:
Adaptive learning rates with sublinear memory cost.
In *International Conference on Machine Learning*,
pages 4596–4604. PMLR.
Minh Van Nguyen, Bonan Min, Franck Dernoncourt, and Thien Nguyen. 2022. Joint extraction of entities, relations, and events via modeling inter-instance and inter-label dependencies. In *Proceedings of the 2022* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4363–4374.
David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. 2019. Entity, relation, and event extraction with contextualized span representations.
In *Proceedings of the 2019 Conference on Empirical* Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations.
Zixuan Zhang and Heng Ji. 2021. Abstract meaning representation guided graph encoding and decoding for joint information extraction. In *Proceedings of* the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 39–49.
Yang Zhou, Yubo Chen, Jun Zhao, Yin Wu, Jiexin Xu, and Jinlong Li. 2021. What the role is vs. what plays the role: Semi-supervised event argument extraction via dual question answering. In Proceedings of the AAAI conference on artificial intelligence, volume 35, pages 14638–14646.
## A ACE05-E Data Preprocessing
We follow the preprocessing steps in Wadden et al. (2019) to preprocess the ACE2005 corpus. More specifically, we use the preprocessing script at https://github.com/dwadden/dygiepp. In addition, we retrieve the character positions of the event triggers and arguments, because T5 uses a SentencePiece tokenizer and therefore does not operate on the original token boundaries. Table 6 shows the statistics of the ACE05-E dataset.
| Split | #Sents | #Events | #Arguments |
|---------|----------|-----------|--------------|
| Train | 17,172 | 4,202 | 4,859 |
| Dev | 923 | 450 | 605 |
| Test | 832 | 403 | 576 |
Table 6: Data statistics of the ACE05-E dataset.
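As a minimal illustration of this offset-recovery step (assuming whitespace-joined tokens and inclusive token-level span indices, as in the DyGIE++ output; the function name is ours), character positions can be computed as follows:

```python
# Minimal sketch: recover character offsets for token-level spans.
# Assumes whitespace-joined tokens; the field layout is an assumption, not the authors' exact format.
from typing import List, Tuple

def token_span_to_char_span(tokens: List[str], start_tok: int, end_tok: int) -> Tuple[int, int]:
    """Map an inclusive token span to (char_start, char_end) in the whitespace-joined sentence."""
    char_start = sum(len(t) + 1 for t in tokens[:start_tok])          # +1 for each joining space
    char_end = char_start + len(" ".join(tokens[start_tok:end_tok + 1]))
    return char_start, char_end

tokens = ["Russian", "diplomats", "were", "injured", "."]
text = " ".join(tokens)
s, e = token_span_to_char_span(tokens, 3, 3)   # the trigger "injured"
assert text[s:e] == "injured"
```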
## B Complete Dynamic Templates for ACE Ontology
Table 12 shows the complete list of templates with different combinations of known argument roles for each ACE event argument role.
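To make the template mechanism concrete, the sketch below (our illustration, not the authors' code) selects the question for the Target role of Conflict.Attack given the set of already-known argument roles and substitutes their surface forms; the template strings are taken from Table 12, and the place-related combinations are omitted for brevity:

```python
# Minimal sketch of dynamic template selection for Conflict.Attack / Target.
# Keys are frozensets of known roles; values are the corresponding questions from Table 12.
TEMPLATES = {
    frozenset(): "Who was the target of the attack?",
    frozenset({"Attacker"}): "Who was attacked by [Attacker]?",
    frozenset({"Instrument"}): "Who was attacked with [Instrument]?",
    frozenset({"Attacker", "Instrument"}): "Who was attacked by [Attacker] using [Instrument]?",
}

def build_question(known_args: dict) -> str:
    """Pick the template matching the known roles and substitute their fillers."""
    template = TEMPLATES[frozenset(known_args)]
    for role, filler in known_args.items():
        template = template.replace(f"[{role}]", filler)
    return template

print(build_question({"Attacker": "the rebels"}))
# -> "Who was attacked by the rebels?"
```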
## C Implementation Details
We use the Huggingface Transformers library (Wolf et al., 2020) to load the model checkpoints.
## C.1 Event Trigger Labeling Model
Table 7: Hyperparameters for Event Trigger Labeling Model training.

| Hyperparameter | Value |
|-----------------------------|-------|
| Learning rate | 3e-5 |
| Learning rate decay | 1e-5 |
| Epochs | 20 |
| Batch size | 4 |
| Gradient accumulation steps | 4 |
We implemented an ALBERT-based sequence labeling model for event trigger detection: a softmax classifier is applied on top of the ALBERT encoder to predict BIO-schema event labels. We fine-tune the albert-xxlarge-v2 checkpoint provided by Huggingface (https://huggingface.co/albert-xxlarge-v2) with the Adam optimizer, using a clip threshold of 1.0 and a warmup proportion of 0.1. Table 7 shows the hyperparameters used to train the event trigger labeling model.
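A minimal sketch of this architecture using the Huggingface token classification head is shown below (not the authors' exact code; the size of the BIO label inventory is a placeholder):

```python
# Minimal sketch: ALBERT encoder with a token classification (softmax) head for BIO trigger labeling.
# The label inventory (B-/I- tags for 33 ACE event subtypes plus "O") is a placeholder.
import torch
from transformers import AlbertTokenizerFast, AlbertForTokenClassification

num_labels = 2 * 33 + 1
tokenizer = AlbertTokenizerFast.from_pretrained("albert-xxlarge-v2")
model = AlbertForTokenClassification.from_pretrained("albert-xxlarge-v2", num_labels=num_labels)

sentence = "Injured Russian diplomats were among the unintended victims."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits              # (1, seq_len, num_labels)
probs = torch.softmax(logits, dim=-1)            # per-subword distribution over BIO event labels
predicted_tags = probs.argmax(dim=-1)            # tag ids (untrained here, for illustration only)
```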
## C.2 QG Model
When generating the ground truth for QG model training, we use the basic template (e.g., 'Who was the attacking agent?') without incorporating any arguments if the target event role does not exist in the event mention, and we do not restrict the QG model to generating only verbs that appear in the templates. These questions are preserved for training the QA model.
We fine-tune the T5-large checkpoint provided by Huggingface (https://huggingface.co/t5-large) with the Adafactor optimizer, using a clip threshold of 1.0 and a warmup proportion of 0.1. Table 8 shows the hyperparameters used to train the QG model, and Table 9 shows the number of examples used to train and evaluate it.
Table 8: Hyperparameters for QG Model training.

| Hyperparameter | Value |
|-----------------------------|-------|
| Learning rate | 1e-4 |
| Learning rate decay | 1e-5 |
| Epochs | 20 |
| Batch size | 2 |
| Gradient accumulation steps | 32 |
| Number of beams | 4 |
| Length penalty | 0.0 |
Table 9: Number of examples used to train and evaluate the QG and QA models.
| | Train | Dev | Test |
|----------|--------|-------|-------|
| QG Model | 15,785 | 1,767 | 1,434 |
| QA Model | 20,681 | 1,713 | 1,391 |
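The sketch below illustrates this fine-tuning setup with the hyperparameters from Table 8 (it is not the authors' training script; the input/target serialization is illustrative only, and the table's "learning rate decay" is assumed to be Adafactor weight decay):

```python
# Minimal sketch: fine-tuning T5-large for question generation with Adafactor.
# Hyperparameters follow Table 8; the (context, question) format is illustrative, not the paper's.
from transformers import T5TokenizerFast, T5ForConditionalGeneration
from transformers.optimization import Adafactor

tokenizer = T5TokenizerFast.from_pretrained("t5-large")
model = T5ForConditionalGeneration.from_pretrained("t5-large")
optimizer = Adafactor(model.parameters(), lr=1e-4, weight_decay=1e-5, clip_threshold=1.0,
                      relative_step=False, scale_parameter=False, warmup_init=False)

# Toy training pair; the authors' exact serialization of the event context is not shown here.
train_pairs = [
    ("context: Injured Russian diplomats were among the victims. trigger: injured role: Victim",
     "Who was harmed in the injured event?"),
]

accumulation_steps = 32  # Table 8: gradient accumulation steps
model.train()
for step, (src, tgt) in enumerate(train_pairs):
    batch = tokenizer(src, return_tensors="pt", truncation=True)
    labels = tokenizer(tgt, return_tensors="pt", truncation=True).input_ids
    loss = model(**batch, labels=labels).loss / accumulation_steps
    loss.backward()
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```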
## C.3 QA Model
Table 10: Hyperparameters for QA Model training.

| Hyperparameter | Value |
|-----------------------------|-------|
| Learning rate | 2e-4 |
| Learning rate decay | 1e-5 |
| Epochs | 20 |
| Batch size | 2 |
| Gradient accumulation steps | 32 |
| Number of beams | 4 |
| Length penalty | -2.5 |
For QA model training, we use the Adafactor optimizer with a learning rate of 2e-4, a weight decay of 1e-5, and a clip threshold of 1.0. We set the relative_step, scale_parameter, and warmup_init parameters to False. For the learning rate scheduler, we set the warmup proportion to 0.1.
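A minimal sketch of this optimizer and scheduler configuration (the linear-warmup schedule and the total step count are assumptions; the remaining values are taken from this section):

```python
# Minimal sketch: Adafactor + warmup schedule for the QA model.
# Only lr, weight decay, clip threshold, the three boolean flags, and the 0.1 warmup proportion
# come from the paper; the linear schedule and step count are placeholders.
from transformers import T5ForConditionalGeneration, get_linear_schedule_with_warmup
from transformers.optimization import Adafactor

qa_model = T5ForConditionalGeneration.from_pretrained("t5-large")
num_training_steps = 10_000                      # placeholder; depends on dataset size and epochs

optimizer = Adafactor(
    qa_model.parameters(),
    lr=2e-4,
    weight_decay=1e-5,
    clip_threshold=1.0,
    relative_step=False,                         # use the fixed, externally supplied learning rate
    scale_parameter=False,
    warmup_init=False,
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.1 * num_training_steps),   # warmup proportion 0.1
    num_training_steps=num_training_steps,
)
```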
If there are no event arguments for the argument role, the output is empty, as in the following example. We include such examples to train the QA model. Table 9 shows the number of examples used to train and evaluate the QA model.
Input: question: What device was used to inflict the harm in * injured * event? context: Injured Russian diplomats and a convoy of America's Kurdish comrades in arms were among unintended victims caught in crossfire and friendly fire Sunday.
Output: </s>
In postprocessing, we dynamically advance the start position of the string search so that the order of the retrieved event arguments is preserved.
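A minimal sketch of this postprocessing step (our illustration; it assumes the generated answers are located in the context by string search, and the function name is ours):

```python
# Minimal sketch: locate generated answers in the context while preserving their order.
# After each match, the search start is advanced past the matched span.
from typing import List, Tuple

def locate_arguments(context: str, answers: List[str]) -> List[Tuple[int, int]]:
    spans, start = [], 0
    for answer in answers:
        pos = context.find(answer, start)      # search only from the current offset onward
        if pos == -1:                          # answer not found; skip it
            continue
        spans.append((pos, pos + len(answer)))
        start = pos + len(answer)              # dynamically move the start position forward
    return spans

context = "Injured Russian diplomats and a convoy were among unintended victims."
print(locate_arguments(context, ["Russian diplomats", "a convoy"]))
# -> [(8, 25), (30, 38)]
```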
## D Experiment Details
All scores reported in the paper are based on a single run with a fixed random seed of 42.
## D.1 Event Trigger Labeling Model
Table 11 shows the performance of the event trigger labeling model on the ACE05-E test set.
| Metric | Trigger Identification | Trigger Classification |
|--------|------------------------|------------------------|
| P | 72.52 | 69.59 |
| R | 79.9 | 76.67 |
| F1 | 76.03 | 72.96 |
Table 11: Performance of our event trigger labeling model on ACE05-E test data (%).
## D.2 QG Model
We use ROUGE-1 (Lin, 2004) as the evaluation metric for QG model training; the score on the ACE05-E test set is 0.892.
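For reference, ROUGE-1 can be computed with the `rouge_score` package as in the sketch below (whether the authors used this exact implementation is not stated; the strings are toy examples):

```python
# Minimal sketch: ROUGE-1 F1 between a reference question and a generated question.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)
reference = "Who was harmed in the injured event?"
generated = "Who was harmed in the attack?"
score = scorer.score(reference, generated)["rouge1"]
print(round(score.fmeasure, 3))   # unigram-overlap F1
```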
| Person | - | Who was born? | | | | | |
|----------------------|-----------------------------------------------------------------|------------------------------------------------|------------------------------------|----------|-------|--------------|------|
| Place | Who was born in [Place]? | | | | | | |
| Life.Be-Born | Place | - | Where did the birth take place? | | | | |
| Person | Where was [Person] born? | | | | | | |
| Person | - | Who was married? | | | | | |
| Place | Who was married in [Place]? | | | | | | |
| Life.Marry | Place | - | Where did the marriage take place? | | | | |
| Person | Where was [Person] married? | | | | | | |
| Person | - | Who was divorced? | | | | | |
| Place | Who was divorced in [Place]? | | | | | | |
| Life.Divorce | Place | - | Where did the divorce take place? | | | | |
| Person | Where was [Person] divorced? | | | | | | |
| - | Who enacted the harm? | | | | | | |
| Victim | Who harmed [Victim]? | | | | | | |
| Instrument | Who enacted the harm using [Instrument]? | | | | | | |
| Place | Who enacted the harm in [Place]? | | | | | | |
| Victim, Instrument | Who harmed [Victim] using [Instrument]? | | | | | | |
| Victim, Place | Who harmed [Victim] in [Place]? | | | | | | |
| Instrument, Place | Who enacted the harm using [Instrument] in [Place]? | | | | | | |
| Victim, | Instrument, | Who harmed [Victim] using [Instrument] in | | | | | |
| Place | [Place]? | | | | | | |
| Agent | - | Who was harmed? | | | | | |
| Agent | Who was harmed by [Agent]? | | | | | | |
| Instrument | Who was harmed with [Instrument]? | | | | | | |
| Place | Who was harmed in [Place]? | | | | | | |
| Agent, Instrument | Who was harmed by [Agent] with [Instrument]? | | | | | | |
| Agent, Place | Who was harmed by [Agent] in [Place]? | | | | | | |
| Instrument, Place | Who was harmed with [Instrument] in [Place]? | | | | | | |
| Agent, | Instrument, | Who was harmed by [Agent] with [Instrument] in | | | | | |
| Place | [Place]? | | | | | | |
| Victim | | | | | | | |
| Life.Injure | - | What device was used to inflict the harm? | | | | | |
| Agent | What device was used by [Agent] to inflict the harm? | | | | | | |
| Victim | What device was used to harm [Victim]? | | | | | | |
| Place | What device was used to inflict the harm in [Place]? | | | | | | |
| Agent, Victim | What device was used by [Agent] to harm [Victim]? | | | | | | |
| Agent, Place | What device was used by [Agent] to inflict the harm in [Place]? | | | | | | |
| Victim, Place | What device was used to harm [Victim] in [Place]? | | | | | | |
| Agent, Victim, Place | What device was used by [Agent] to harm [Victim] in [Place]? | | | | | | |
| Instrument | - | Where did the injuring take place? | | | | | |
| Agent | Where did [Agent] enact the harm? | | | | | | |
| Victim | Where was [Victim] harmed? | | | | | | |
| Instrument | Where was [Instrument] used to inflict the harm? | | | | | | |
| Agent, Victim | Where did [Agent] harm [Victim]? | | | | | | |
| Agent, Instrument | Where | did | [Agent] | enact | the | harm | with |
| [Instrument]? | | | | | | | |
| Victim, Instrument | Where was [Victim] harmed with [Instrument]? | | | | | | |
| Agent, | Victim, | Where | did | [Agent] | harm | [Victim] | with |
| Instrument | [Instrument]? | | | | | | |
| Place | - | Who was the killer? | | | | | |
| Victim | Who killed [Victim]? | | | | | | |
| Instrument | Who killed others using [Instrument]? | | | | | | |
| Place | Who killed others in [Place]? | | | | | | |
| Victim, Instrument | Who killed [Victim] using [Instrument]? | | | | | | |
| Victim, Place | Who killed [Victim] in [Place]? | | | | | | |
| Instrument, Place | Who killed others using [Instrument] in [Place]? | | | | | | |
| Victim, | Instrument, | Who | killed | [Victim] | using | [Instrument] | in |
| Place | [Place]? | | | | | | |
| Agent | - | Who was killed? | | | | | |
| Agent | Who was killed by [Agent]? | | | | | | |
| Instrument | Who was killed with [Instrument]? | | | | | | |
| Place | Who was killed in [Place]? | | | | | | |
| Agent, Instrument | Who was killed by [Agent] with [Instrument]? | | | | | | |
| Agent, Place | Who was killed by [Agent] in [Place]? | | | | | | |
| Instrument, Place | Who was killed with [Instrument] in [Place]? | | | | | | |
| Agent, | Instrument, | Who was killed by [Agent] with [Instrument] in | | | | | |
| Place | [Place]? | | | | | | |
| Victim | | | | | | | |
| Life.Die | | | | | | | |
(The following blocks of Table 12 list templates for the Instrument and Place roles of Life.Die, followed by the Agent and Artifact roles of Movement.Transport.)
| - | What device was used to kill? | | | | | | |
|-----------------------------|------------------------------------------------------------|-------------------------------------------------|-------------|-------------|---------------|----------|------|
| Agent | What device did [Agent] use to kill others? | | | | | | |
| Victim | What device was used to kill [Victim]? | | | | | | |
| Place | What device was used to kill others in [Place]? | | | | | | |
| Agent, Victim | What device did [Agent] use to kill [Victim]? | | | | | | |
| Agent, Place | What device did [Agent] use to kill others in [Place]? | | | | | | |
| Victim, Place | What device was used to kill [Victim] in [Place]? | | | | | | |
| Agent, Victim, Place | What device did [Agent] use to kill [Victim] in [Place]? | | | | | | |
| - | Where did the death take place? | | | | | | |
| Agent | Where did [Agent] kill others? | | | | | | |
| Victim | Where was [Victim] killed? | | | | | | |
| Instrument | Where were people killed with [Instrument]? | | | | | | |
| Agent, Victim | Where did [Agent] kill [Victim]? | | | | | | |
| Agent, Instrument | Where did [Agent] kill others with [Instrument]? | | | | | | |
| Victim, Instrument | Where was [Victim] killed with [Instrument]? | | | | | | |
| Agent, | Victim, | Where | did | [Agent] | kill | [Victim] | with |
| Instrument | [Instrument]? | | | | | | |
| - | Who is responsible for the transport event? | | | | | | |
| Artifact | Who transported [Artifact]? | | | | | | |
| Vehicle | Who transported artifact using [Vehicle]? | | | | | | |
| Origin | Who transported artifact from [Origin]? | | | | | | |
| Destination | Who transported artifact to [Destination]? | | | | | | |
| Artifact, Vehicle | Who transported [Artifact] using [Vehicle]? | | | | | | |
| Artifact, Origin | Who transported [Artifact] from [Origin]? | | | | | | |
| Artifact, | Who transported [Artifact] to [Destination]? | | | | | | |
| Destination Vehicle, Origin | Who transported artifact from [Origin] using [Vehicle]? | | | | | | |
| Vehicle, Destination | Who transported artifact to [Destination] using [Vehicle]? | | | | | | |
| Origin, Destination | Who | transported | artifact | from | [Origin] | to | |
| [Destination]? | | | | | | | |
| Artifact, | Vehicle, | Who transported [Artifact] from [Origin] using | | | | | |
| Origin | [Vehicle]? | | | | | | |
| Artifact, | Vehicle, | Who transported [Artifact] to [Destination] using [Vehicle]? | | | | | |
| Destination Artifact, | Origin, | Who transported [Artifact] from [Origin] to | | | | | |
| Destination | [Destination]? | | | | | | |
| Vehicle, | Origin, | Who | transported | artifact | from | [Origin] | to |
| Destination | [Destination] using [Vehicle]? | | | | | | |
| Artifact, | Vehicle, | Who transported [Artifact] from [Origin] to | | | | | |
| Origin, Destination | [Destination] using [Vehicle]? | | | | | | |
| - | Who was transported? | | | | | | |
| Agent | Who was transported by [Agent]? | | | | | | |
| Vehicle | Who was transported with [Vehicle]? | | | | | | |
| Origin | Who was transported from [Origin]? | | | | | | |
| Destination | Who was transported to [Destination]? | | | | | | |
| Agent, Vehicle | Who was transported by [Agent] with [Vehicle]? | | | | | | |
| Agent, Origin | Who was transported from [Origin] by [Agent]? | | | | | | |
| Agent, Destination | Who was transported to [Destination] by [Agent]? | | | | | | |
| Vehicle, Origin | Who | was | transported | from | [Origin] | with | |
| [Vehicle]? | | | | | | | |
| Vehicle, Destination | Who | was | transported | to | [Destination] | with | |
| [Vehicle]? | | | | | | | |
| Origin, Destination | Who | was | transported | from | [Origin] | to | |
| [Destination]? | | | | | | | |
| Agent, | Vehicle, | Who was transported from [Origin] by [Agent] | | | | | |
| Origin | with [Vehicle]? | | | | | | |
| Agent, | Vehicle, | Who was transported to [Destination] by [Agent] | | | | | |
| Destination | with [Vehicle]? | | | | | | |
| Agent, | Origin, | Who | was | transported | from | [Origin] | to |
| Destination | [Destination] by [Agent]? | | | | | | |
| Vehicle, | Origin, | Who | was | transported | from | [Origin] | to |
| Destination | [Destination] with [Vehicle]? | | | | | | |
| Agent, | Vehicle, | Who | was | transported | from | [Origin] | to |
| Origin, Destination | [Destination] by [Agent] with [Vehicle]? | | | | | | |
| - | What vehicle was used for transporting? | | | | | | | |
|-------------------------------|------------------------------------------------------------------------|----------------------------------------------------|------------|-------------|-------------|--------------|------|----|
| Agent | What vehicle did [Agent] use for transporting? | | | | | | | |
| Artifact | What vehicle was used for transporting [Artifact]? | | | | | | | |
| Origin | What vehicle was used for transporting from [Origin]? | | | | | | | |
| Destination | What | vehicle | was | used | for | transporting | to | |
| [Destination]? | | | | | | | | |
| Agent, Artifact | What vehicle did [Agent] use for transporting [Artifact]? | | | | | | | |
| Agent, Origin | What vehicle did [Agent] use for transporting from [Origin]? | | | | | | | |
| Agent, Destination | What vehicle did [Agent] use for transporting to [Destination]? | | | | | | | |
| Artifact, Origin | What vehicle was used for transporting [Artifact] from [Origin]? | | | | | | | |
| Artifact, | What vehicle was used for transporting [Artifact] | | | | | | | |
| Destination | to [Destination]? | | | | | | | |
| Origin, Destination | What vehicle was used for transporting from [Origin] to [Destination]? | | | | | | | |
| Agent, | Artifact, | What vehicle did [Agent] use for transporting | | | | | | |
| Origin | [Artifact] from [Origin]? | | | | | | | |
| Agent, | Artifact, | What vehicle did [Agent] use for transporting | | | | | | |
| Destination | [Artifact] to [Destination]? | | | | | | | |
| Agent, | Origin, | What vehicle did [Agent] use for transporting from | | | | | | |
| Destination | [Origin] to [Destination]? | | | | | | | |
| Artifact, | Origin, | What vehicle was used for transporting [Artifact] | | | | | | |
| Destination | from [Origin] to [Destination]? | | | | | | | |
| Agent, | Artifact, | What vehicle did [Agent] use for transporting | | | | | | |
| Origin, Destination | [Artifact] from [Origin] to [Destination]? | | | | | | | |
| Vehicle | | | | | | | | |
| Movement. Transport | - | Where did the transporting originate? | | | | | | |
| Agent | Where did [Agent] transport artifact from? | | | | | | | |
| Artifact | Where was [Artifact] transported from? | | | | | | | |
| Vehicle | Where | was | artifact | transported | from | with | | |
| [Vehicle]? | | | | | | | | |
| Destination | Where | was | artifact | transported | from | to | | |
| [Destination]? | | | | | | | | |
| Agent, Artifact | Where did [Agent] transport [Artifact] from? | | | | | | | |
| Agent, Vehicle | Where did [Agent] transport artifact from with [Vehicle]? | | | | | | | |
| Agent, Destination | Where | did | [Agent] | transport | artifact | from | to | |
| [Destination]? | | | | | | | | |
| Artifact, Vehicle | Where was [Artifact] transported from with [Vehicle]? | | | | | | | |
| Artifact, | Where | was | [Artifact] | transported | from | to | | |
| Destination | [Destination]? | | | | | | | |
| Vehicle, Destination | Where | was | artifact | transported | from | to | | |
| [Destination] with [Vehicle]? | | | | | | | | |
| Agent, | Artifact, | Where did [Agent] transport [Artifact] from with | | | | | | |
| Vehicle | [Vehicle]? | | | | | | | |
| Agent, | Artifact, | Where did [Agent] transport [Artifact] from to | | | | | | |
| Destination | [Destination]? | | | | | | | |
| Agent, | Vehicle, | Where | did | [Agent] | transport | artifact | from | to |
| Destination | [Destination] with [Vehicle]? | | | | | | | |
| Artifact, | Vehicle, | Where | was | [Artifact] | transported | from | to | |
| Destination | [Destination] with [Vehicle]? | | | | | | | |
| Agent, | Artifact, | Where did [Agent] transport [Artifact] from to | | | | | | |
| Vehicle, Destination | [Destination] with [Vehicle]? | | | | | | | |
| - | Where was the transporting directed? | | | | | | | |
| Agent | Where did [Agent] transport artifact to? | | | | | | | |
| Artifact | Where was [Artifact] transported to? | | | | | | | |
| Vehicle | Where was artifact transported to with [Vehicle]? | | | | | | | |
| Origin | Where was artifact transported to from [Origin]? | | | | | | | |
| Agent, Artifact | Where did [Agent] transport [Artifact] to? | | | | | | | |
| Agent, Vehicle | Where | did | [Agent] | transport | artifact | to | with | |
| [Vehicle]? | | | | | | | | |
| Agent, Origin | Where | did | [Agent] | transport | artifact | to | from | |
| [Origin]? | | | | | | | | |
| Origin | | | | | | | | |
| Artifact, Vehicle | Where | was | [Artifact] | transported | to | with | | |
|----------------------------------|-----------------------------------------------------------------|-------------------------------------------------|--------------|---------------|-------------|------------|------|------|
| [Vehicle]? | | | | | | | | |
| Artifact, Origin | Where | was | [Artifact] | transported | to | from | | |
| [Origin]? | | | | | | | | |
| Vehicle, Origin | Where was artifact transported to from [Origin] with [Vehicle]? | | | | | | | |
| Agent, | Artifact, | Where did [Agent] transport [Artifact] to with | | | | | | |
| Vehicle | [Vehicle]? | | | | | | | |
| Agent, | Artifact, | Where did [Agent] transport [Artifact] to from | | | | | | |
| Origin | [Origin]? | | | | | | | |
| Agent, | Vehicle, | Where | did | [Agent] | transport | artifact | to | from |
| Origin | [Origin] with [Vehicle]? | | | | | | | |
| Artifact, | Vehicle, | Where | was | [Artifact] | transported | to | from | |
| Origin | [Origin] with [Vehicle]? | | | | | | | |
| Agent, | Artifact, | Where did [Agent] transport [Artifact] to from | | | | | | |
| Vehicle, Origin | [Origin] with [Vehicle]? | | | | | | | |
| Destination | - | Who is the buying agent? | | | | | | |
| Seller | Who bought things from [Seller]? | | | | | | | |
| Beneficiary | Who bought things for [Beneficiary]? | | | | | | | |
| Artifact | Who bought [Artifact]? | | | | | | | |
| Place | Who bought things in [Place]? | | | | | | | |
| Seller, Beneficiary | Who | bought | things | from | [Seller] | for | | |
| [Beneficiary]? | | | | | | | | |
| Seller, Artifact | Who bought [Artifact] from [Seller]? | | | | | | | |
| Seller, Place | Who bought things from [Seller] in [Place]? | | | | | | | |
| Beneficiary, | Who bought [Artifact] for [Beneficiary]? | | | | | | | |
| Artifact | | | | | | | | |
| Buyer | Beneficiary, Place | Who bought things for [Beneficiary] in [Place]? | | | | | | |
| Artifact, Place | Who bought [Artifact] in [Place]? | | | | | | | |
| Seller, Beneficiary, | Who | bought | [Artifact] | from | [Seller] | for | | |
| Artifact | [Beneficiary]? | | | | | | | |
| Seller, Beneficiary, | Who | bought | things | from | [Seller] | for | | |
| Place | [Beneficiary] in [Place]? | | | | | | | |
| Seller, | Artifact, | Who bought [Artifact] from [Seller] in [Place]? | | | | | | |
| Place Beneficiary, | Who bought [Artifact] for [Beneficiary] in | | | | | | | |
| Artifact, Place | [Place]? | | | | | | | |
| Seller, Beneficiary, | Who | bought | [Artifact] | from | [Seller] | for | | |
| Artifact, Place | [Beneficiary] in [Place]? | | | | | | | |
| - | Who is the selling agent? | | | | | | | |
| Buyer | Who sold things to [Buyer]? | | | | | | | |
| Beneficiary | Who did buyer buy things from for [Beneficiary]? | | | | | | | |
| Artifact | Who sold [Artifact]? | | | | | | | |
| Place | Who sold things in [Place]? | | | | | | | |
| Buyer, Beneficiary | Who | did | [Buyer] | buy | things | from | for | |
| [Beneficiary]? | | | | | | | | |
| Buyer, Artifact | Who sold [Artifact] to [Buyer]? | | | | | | | |
| Buyer, Place | Who sold things to [Buyer] in [Place]? | | | | | | | |
| Beneficiary, | Who | did | buyer | buy | [Artifact] | from | for | |
| Artifact | [Beneficiary]? | | | | | | | |
| Beneficiary, Place | Who did buyer buy things from for [Beneficiary] in [Place]? | | | | | | | |
| Artifact, Place | Who sold [Artifact] in [Place]? | | | | | | | |
| Buyer, | Beneficiary, | Who | did | [Buyer] | buy | [Artifact] | from | for |
| Artifact | [Beneficiary]? | | | | | | | |
| Buyer, | Beneficiary, | Who | did | [Buyer] | buy | things | from | for |
| Place | [Beneficiary] in [Place]? | | | | | | | |
| Buyer, | Artifact, | Who sold [Artifact] to [Buyer] in [Place]? | | | | | | |
| Place | | | | | | | | |
| Seller | | | | | | | | |
| Transaction. Transfer -Ownership | Beneficiary, | Who | did | buyer | buy | [Artifact] | from | for |
| Artifact, Place | [Beneficiary] in [Place]? | | | | | | | |
| Buyer, | Beneficiary, | Who | did | [Buyer] | buy | [Artifact] | from | for |
| Artifact, Place | [Beneficiary] in [Place]? | | | | | | | |
| - | Who benefits from the transaction? | | | | | | | |
|-----------------------|-------------------------------------------------------------------|--------------------------------------------------|------------------------------------------------|---------|------------|----------|---------------|-----|
| Buyer | Who did [Buyer] buy things for? | | | | | | | |
| Seller | Who did buyer buy things from [Seller] for? | | | | | | | |
| Artifact | Who did buyer buy [Artifact] for? | | | | | | | |
| Place | Who did buyer buy things for in [Place]? | | | | | | | |
| Buyer, Seller | Who did [Buyer] buy things from [Seller] for? | | | | | | | |
| Buyer, Artifact | Who did [Buyer] buy [Artifact] for? | | | | | | | |
| Buyer, Place | Who did [Buyer] buy things for in [Place]? | | | | | | | |
| Seller, Artifact | Who did buyer buy [Artifact] from [Seller] for? | | | | | | | |
| Seller, Place | Who did buyer buy things from [Seller] for in [Place]? | | | | | | | |
| Artifact, Place | Who did buyer buy [Artifact] for in [Place]? | | | | | | | |
| Buyer, | Seller, | Who did [Buyer] buy [Artifact] from [Seller] | | | | | | |
| Artifact | for? | | | | | | | |
| Buyer, Seller, Place | Who did [Buyer] buy things from [Seller] for in [Place]? | | | | | | | |
| Buyer, | Artifact, | Who did [Buyer] buy [Artifact] for in [Place]? | | | | | | |
| Place | | | | | | | | |
| Beneficiary | Seller, | Artifact, | Who did buyer buy [Artifact] from [Seller] for | | | | | |
| Place | in [Place]? | | | | | | | |
| Buyer, | Seller, | Who did [Buyer] buy [Artifact] from [Seller] for | | | | | | |
| Artifact, Place | in [Place]? | | | | | | | |
| - | What was bought? | | | | | | | |
| Buyer | What did [Buyer] buy? | | | | | | | |
| Seller | What did [Seller] sell? | | | | | | | |
| Beneficiary | What was bought for [Beneficiary]? | | | | | | | |
| Place | What was bought in [Place]? | | | | | | | |
| Buyer, Seller | What did [Buyer] buy from [Seller]? | | | | | | | |
| Buyer, Beneficiary | What did [Buyer] buy for [Beneficiary]? | | | | | | | |
| Buyer, Place | What did [Buyer] buy in [Place]? | | | | | | | |
| Seller, Beneficiary | What | did | buyer | buy | from | [Seller] | for | |
| [Beneficiary]? | | | | | | | | |
| Seller, Place | What did [Seller] sell in [Place]? | | | | | | | |
| Beneficiary, Place | What was bought for [Beneficiary] in [Place]? | | | | | | | |
| Buyer, | Seller, | What | did | [Buyer] | buy | from | [Seller] | for |
| Beneficiary | [Beneficiary]? | | | | | | | |
| Buyer, Seller, Place | What did [Buyer] buy from [Seller] in [Place]? | | | | | | | |
| Buyer, | Beneficiary, | What | did | [Buyer] | buy | for | [Beneficiary] | in |
| Place | [Place]? | | | | | | | |
| Seller, Beneficiary, | What | did | buyer | buy | from | [Seller] | for | |
| Place | [Beneficiary] in [Place]? | | | | | | | |
| Buyer, | Seller, | What | did | [Buyer] | buy | from | [Seller] | for |
| Beneficiary, Place | [Beneficiary] in [Place]? | | | | | | | |
| Artifact | - | Where did the sale take place? | | | | | | |
| Buyer | Where did [Buyer] buy things? | | | | | | | |
| Seller | Where did [Seller] sell things? | | | | | | | |
| Beneficiary | Where did buyer buy things for [Beneficiary]? | | | | | | | |
| Artifact | Where did buyer buy [Artifact]? | | | | | | | |
| Buyer, Seller | Where did [Buyer] buy things from [Seller]? | | | | | | | |
| Buyer, Beneficiary | Where did [Buyer] buy things for [Beneficiary]? | | | | | | | |
| Buyer, Artifact | Where did [Buyer] buy [Artifact]? | | | | | | | |
| Seller, Beneficiary | Where did buyer buy things for [Beneficiary] from [Seller]? | | | | | | | |
| Seller, Artifact | Where did buyer buy [Artifact] from [Seller]? | | | | | | | |
| Beneficiary, | Where | did | buyer | buy | [Artifact] | for | | |
| Artifact | [Beneficiary]? | | | | | | | |
| Buyer, | Seller, | Where did [Buyer] buy things from [Seller] for | | | | | | |
| Beneficiary | [Beneficiary]? | | | | | | | |
| Buyer, | Seller, | Where did [Buyer] buy [Artifact] from [Seller]? | | | | | | |
| Artifact | | | | | | | | |
| Place | Buyer, | Beneficiary, | Where | did | [Buyer] | buy | [Artifact] | for |
| Artifact | [Beneficiary]? | | | | | | | |
| Seller, Beneficiary, | Where | did | buyer | buy | [Artifact] | for | | |
| Artifact | [Beneficiary] from [Seller]? | | | | | | | |
| Buyer, | Seller, | | | | | | | |
| Beneficiary, Artifact | Where did [Buyer] buy [Artifact] from [Seller] for [Beneficiary]? | | | | | | | |
| - | Who gave money to others? | | | | | |
|-----------------------------|-----------------------------------------------------------|---------------------------------------------------|----------------------------------------|------------|-------------|-----|
| Recipient | Who gave money to [Recipient]? | | | | | |
| Beneficiary | Who gave money to others for [Beneficiary]? | | | | | |
| Place | Who gave money to others in [Place]? | | | | | |
| Recipient, | Who | gave | money | to | [Recipient] | for |
| Beneficiary | [Beneficiary]? | | | | | |
| Recipient, Place | Who gave money to [Recipient] in [Place]? | | | | | |
| Beneficiary, Place | Who gave money to others for [Beneficiary] in [Place]? | | | | | |
| Recipient, | Who | gave | money | to | [Recipient] | for |
| Beneficiary, Place | [Beneficiary] in [Place]? | | | | | |
| Giver | - | Who was given money? | | | | |
| Giver | Who did [Giver] give money to? | | | | | |
| Beneficiary | Who was given money for [Beneficiary]? | | | | | |
| Place | Who was given money in [Place]? | | | | | |
| Giver, Beneficiary | Who did [Giver] give money to for [Beneficiary]? | | | | | |
| Giver, Place | Who did [Giver] give money to in [Place]? | | | | | |
| Beneficiary, Place | Who was given money for [Beneficiary] in [Place]? | | | | | |
| Giver, | Beneficiary, | Who did [Giver] give money to for [Beneficiary] | | | | |
| Place | in [Place]? | | | | | |
| Recipient | | | | | | |
| Transaction. Transfer-Money | - | Who benefited from the transfer? | | | | |
| Giver | Who did [Giver] give money for? | | | | | |
| Recipient | Who was [Recipient] given money for? | | | | | |
| Place | Who benefited from the transfer in [Place]? | | | | | |
| Giver, Recipient | Who did [Giver] give money to [Recipient] for? | | | | | |
| Giver, Place | Who did [Giver] give money for in [Place]? | | | | | |
| Recipient, Place | Who was [Recipient] given money for in [Place]? | | | | | |
| Giver, | Recipient, | Who did [Giver] give money to [Recipient] for in | | | | |
| Place | [Place]? | | | | | |
| Beneficiary | - | Where was the amount transferred? | | | | |
| Giver | Where did [Giver] give money to others? | | | | | |
| Recipient | Where was [Recipient] given money? | | | | | |
| Beneficiary | Where did giver give money for [Beneficiary]? | | | | | |
| Giver, Recipient | Where did [Giver] give money to [Recipient]? | | | | | |
| Giver, Beneficiary | Where did [Giver] give money to others for [Beneficiary]? | | | | | |
| Recipient, | Where | was | [Recipient] | given | money | for |
| Beneficiary | [Beneficiary]? | | | | | |
| Giver, | Recipient, | Where did [Giver] give money to [Recipient] for | | | | |
| Beneficiary | [Beneficiary]? | | | | | |
| Place | - | Who started the organization? | | | | |
| Agent | Org | Who started [Org]? | | | | |
| Place | Who started the organization in [Place]? | | | | | |
| Org, Place | Who started [Org] in [Place]? | | | | | |
| - | What organization was started? | | | | | |
| Agent | What organization was started by [Agent]? | | | | | |
| Org | Place | What organization was started in [Place]? | | | | |
| Agent, Place | What organization | was started | by | [Agent] in | | |
| [Place]? | | | | | | |
| Business. Start-Org | - | Where was the organization started? | | | | |
| Agent | Where was the organization started by [Agent]? | | | | | |
| Place | Org | Where was [Org] started? | | | | |
| Agent, Org | Where was [Org] started by [Agent]? | | | | | |
| Business. | Org | - | What organization was merged? | | | |
| Merge-Org Business. | Org | - | What organization declared bankruptcy? | | | |
| Declare- | Place | What organization declared bankruptcy in [Place]? | | | | |
| Bankruptcy | Place | - | Where was the bankruptcy declared? | | | |
| Org | Where did [Org] declare the bankruptcy? | | | | | |
| Org | - | What organization was ended? | | | | |
| Business. | Place | What organization was ended in [Place]? | | | | |
| Place | - | Where was the organization ended? | | | | |
| End-Org | Org | Where was [Org] ended? | | | | |
| - | Who was the attacking agent? | | | | | | |
|---------------------------|--------------------------------------------------------------|----------------------------------------------|-----------------------------------------|------------|------------|----------|-------|
| Target | Who attacked [Target]? | | | | | | |
| Instrument | Who used [Instrument] in the attack? | | | | | | |
| Place | Who made the attack in [Place]? | | | | | | |
| Target, Instrument | Who attacked [Target] using [Instrument]? | | | | | | |
| Target, Place | Who attacked [Target] in [Place]? | | | | | | |
| Instrument, Place | Who used [Instrument] in the attack in [Place]? | | | | | | |
| Target, | Instrument, | Who attacked [Target] using [Instrument] in | | | | | |
| Place | [Place]? | | | | | | |
| Attacker | - | Who was the target of the attack? | | | | | |
| Attacker | Who was attacked by [Attacker]? | | | | | | |
| Instrument | Who was attacked with [Instrument]? | | | | | | |
| Place | Who was the target of the attack in [Place]? | | | | | | |
| Attacker, Instrument | Who | was | attacked | by | [Attacker] | using | |
| [Instrument]? | | | | | | | |
| Attacker, Place | Who was attacked by [Attacker] in [Place]? | | | | | | |
| Instrument, Place | Who was attacked with [Instrument] in [Place]? | | | | | | |
| Attacker, | Who | was | attacked | by | [Attacker] | using | |
| Instrument, Place | [Instrument] in [Place]? | | | | | | |
| Target | | | | | | | |
| Conflict. Attack | - | What instrument was used in the attack? | | | | | |
| Attacker | What instrument did [Attacker] use in the attack? | | | | | | |
| Target | What instrument was used to attack [Target]? | | | | | | |
| Place | What instrument was used in the attack in [Place]? | | | | | | |
| Attacker, Target | What instrument did [Attacker] use to attack [Target]? | | | | | | |
| Attacker, Place | What instrument did [Attacker] use in the attack in [Place]? | | | | | | |
| Target, Place | What instrument was used to attack [Target] in [Place]? | | | | | | |
| Attacker, | Target, | What instrument did [Attacker] use to attack | | | | | |
| Place | [Target] in [Place]? | | | | | | |
| Instrument | - | Where did the attack take place? | | | | | |
| Attacker | Where did [Attacker] make an attack? | | | | | | |
| Target | Where was [Target] attacked? | | | | | | |
| Instrument | Where was [Instrument] used in the attack? | | | | | | |
| Attacker, Target | Where did [Attacker] attack [Target]? | | | | | | |
| Attacker, Instrument | Where did [Attacker] use [Instrument] to make an attack? | | | | | | |
| Target, Instrument | Where was [Instrument] used to attack [Target]? | | | | | | |
| Attacker, | Target, | Where | did | [Attacker] | attack | [Target] | using |
| Instrument | [Instrument]? | | | | | | |
| Place | | | | | | | |
| Conflict. | Entity | - | Who demonstrated? | | | | |
| Place | Who demonstrated in [Place]? | | | | | | |
| Demonstrate | Place | - | Where did the demonstration take place? | | | | |
| Entity | Where did [Entity] demonstrate? | | | | | | |
| Entity | - | Who met with others? | | | | | |
| Place | Who met others in [Place]? | | | | | | |
| Contact.Meet | Place | - | Where did the meeting take place? | | | | |
| Entity | Where did [Entity] meet others? | | | | | | |
| Contact. | Entity | - | Who communicated with others? | | | | |
| Phone-Write | - | Who is the employee? | | | | | |
| Entity | Who was hired by [Entity]? | | | | | | |
| Person | Place | Who was hired in [Place]? | | | | | |
| Entity, Place | Who was hired by [Entity] in [Place]? | | | | | | |
| Personnel. Start-Position | - | Who is the employer? | | | | | |
| Person | Who hired [Person]? | | | | | | |
| Entity | Place | Who hired employee in [Place]? | | | | | |
| Person, Place | Who hired [Person] in [Place]? | | | | | | |
| - | Where did the employment relationship begin? | | | | | | |
| Person | Where was [Person] hired? | | | | | | |
| Place | Entity | Where did [Entity] hire employee? | | | | | |
| Person, Entity | Where did [Entity] hire [Person]? | | | | | | |
| - | Who ended the position? | | | | | | |
|-------------------------------|-------------------------------------------|--------------------------------------------|------------------------------|-------|-------|-------------|----|
| Entity | Who was fired by [Entity]? | | | | | | |
| Person | Place | Who ended the position in [Place]? | | | | | |
| Entity, Place | Who was fired by [Entity] in [Place]? | | | | | | |
| - | Who fired employee? | | | | | | |
| Person | Who fired [Person]? | | | | | | |
| Entity | Place | Who fired employee in [Place]? | | | | | |
| Person, Place | Who fired [Person] in [Place]? | | | | | | |
| Personnel. End-Position | - | Where did the employment relationship end? | | | | | |
| Person | Where did [Person] end the position? | | | | | | |
| Place | Entity | Where did [Entity] fire employee? | | | | | |
| Person, Entity | Where did [Entity] fire [Person]? | | | | | | |
| Person | - | Who was nominated? | | | | | |
| Personnel. | Agent | Who was nominated by [Agent]? | | | | | |
| Nominate | Agent | - | Who is the nominating agent? | | | | |
| Person | Who nominated [Person]? | | | | | | |
| - | Who was elected? | | | | | | |
| Agent | Who was elected by [Agent]? | | | | | | |
| Person | Place | Who was elected in [Place]? | | | | | |
| Agent, Place | Who was elected by [Agent] in [Place]? | | | | | | |
| - | Who was the voting agent? | | | | | | |
| Person | Who elected [Person]? | | | | | | |
| Agent | Place | Who elected person in [Place]? | | | | | |
| Person, Place | Who elected [Person] in [Place]? | | | | | | |
| Personnel. Elect | - | Where did the election take place? | | | | | |
| Person | Where was [Person] elected? | | | | | | |
| Place | Agent | Where did [Agent] elect person? | | | | | |
| Person, Agent | Where did [Agent] elect [Person]? | | | | | | |
| - | Who was arrested? | | | | | | |
| Agent | Who was arrested by [Agent]? | | | | | | |
| Person | Place | Who was arrested in [Place]? | | | | | |
| Agent, Place | Who was arrested by [Agent] in [Place]? | | | | | | |
| - | Who made the arrest? | | | | | | |
| Person | Who arrested [Person]? | | | | | | |
| Agent | Place | Who made the arrest in [Place]? | | | | | |
| Person, Place | Who arrested [Person] in [Place]? | | | | | | |
| Justice. Arrest-Jail | - | Where did the arrest take place? | | | | | |
| Person | Where was [Person] arrested? | | | | | | |
| Place | Agent | Where did [Agent] arrest person? | | | | | |
| Person, Agent | Where did [Agent] arrest [Person]? | | | | | | |
| - | Who was released? | | | | | | |
| Entity | Who was released by [Entity]? | | | | | | |
| Person | Place | Who was released in [Place]? | | | | | |
| Entity, Place | Who was released by [Entity] in [Place]? | | | | | | |
| Justice. Release-Parole | - | Who released the person? | | | | | |
| Person | Who released [Person]? | | | | | | |
| Entity | Place | Who released the person in [Place]? | | | | | |
| Person, Place | Who released [Person] in [Place]? | | | | | | |
| - | Where did the release take place? | | | | | | |
| Person | Where was [Person] released? | | | | | | |
| Place | Entity | Where did [Entity] release person? | | | | | |
| Person, Entity | Where did [Entity] release [Person]? | | | | | | |
| - | Who was on trial? | | | | | | |
| Prosecutor | Who | was | on | trial | being | prosecuted | by |
| [Prosecutor]? | | | | | | | |
| Adjudicator | Who | was | on | trial | being | adjudicated | by |
| [Adjudicator]? | | | | | | | |
| Place | Who was on trial in [Place]? | | | | | | |
| Prosecutor, | Who was tried by [Prosecutor] with being adjudicated by [Adjudicator]? | | | | | | |
| Adjudicator Prosecutor, Place | Who was tried by [Prosecutor] in [Place]? | | | | | | |
| Adjudicator, Place | Who | was | on | trial | being | adjudicated | by |
| [Adjudicator] in [Place]? | | | | | | | |
| Prosecutor, | Who was tried by [Prosecutor] with being adjudicated by [Adjudicator] in [Place]? | | | | | | |
| Adjudicator, Place | | | | | | | |
| Defendant | | | | | | | |
(The following blocks of Table 12 list templates for the Prosecutor, Adjudicator, and Place roles of Justice.Trial-Hearing, followed by the Defendant and Prosecutor roles of Justice.Charge-Indict.)
| - | Who tried defendant? | | | | | |
|------------------------------------------------|----------------------------------------------------------------------------|-------|-------------|-------|-------------|----|
| Defendant | Who tried [Defendant]? | | | | | |
| Adjudicator | Who tried the defendant being adjudicated by [Adjudicator]? | | | | | |
| Place | Who tried defendant in [Place]? | | | | | |
| Defendant, | Who | tried | [Defendant] | being | adjudicated | by |
| Adjudicator | [Adjudicator]? | | | | | |
| Defendant, Place | Who tried [Defendant] in [Place]? | | | | | |
| Adjudicator, Place | Who tried the defendant being adjudicated by [Adjudicator] in [Place]? | | | | | |
| Defendant, | Who | tried | [Defendant] | being | adjudicated | by |
| Adjudicator, Place | [Adjudicator] in [Place]? | | | | | |
| - | Who adjudicated the trial? | | | | | |
| Defendant | Who adjudicated the trial [Defendant] was on? | | | | | |
| Prosecutor | Who adjudicated the trial being prosecuted by [Prosecutor]? | | | | | |
| Place | Who adjudicated the trial in [Place]? | | | | | |
| Defendant, | Who adjudicated the trial [Defendant] was on being | | | | | |
| Prosecutor | prosecuted by [Prosecutor]? | | | | | |
| Defendant, Place | Who adjudicated the trial [Defendant] was on in [Place]? | | | | | |
| Prosecutor, Place | Who adjudicated the trial being prosecuted by [Prosecutor] in [Place]? | | | | | |
| Defendant, | Who adjudicated the trial [Defendant] was on being | | | | | |
| Prosecutor, Place | prosecuted by [Prosecutor] in [Place]? | | | | | |
| - | Where did the trial take place? | | | | | |
| Defendant | Where was [Defendant] tried? | | | | | |
| Prosecutor | Where did [Prosecutor] try the defendant? | | | | | |
| Adjudicator | Where did [Adjudicator] adjudicate the trial? | | | | | |
| Defendant, | Where did [Prosecutor] try [Defendant]? | | | | | |
| Prosecutor Defendant, | Where did [Adjudicator] adjudicate the trial | | | | | |
| Adjudicator | [Defendant] was on? | | | | | |
| Prosecutor, | Where did [Prosecutor] try the defendant with being adjudicated by [Adjudicator]? | | | | | |
| Adjudicator Defendant, Prosecutor, Adjudicator | Where did [Prosecutor] try [Defendant] with being adjudicated by [Adjudicator]? | | | | | |
| - | Who was indicated for crime? | | | | | |
| Prosecutor | Who was indicated for crime by [Prosecutor]? | | | | | |
| Adjudicator | Who was indicated for crime being adjudicated by [Adjudicator]? | | | | | |
| Place | Who was indicated for crime in [Place]? | | | | | |
| Prosecutor, | Who was indicated for crime by [Prosecutor] being | | | | | |
| Adjudicator | adjudicated by [Adjudicator]? | | | | | |
| Prosecutor, Place | Who was indicated for crime by [Prosecutor] in [Place]? | | | | | |
| Adjudicator, Place | Who was indicated for crime being adjudicated by [Adjudicator] in [Place]? | | | | | |
| Prosecutor, | Who was indicated for crime by [Prosecutor] being | | | | | |
| Adjudicator, Place | adjudicated by [Adjudicator] in [Place]? | | | | | |
| - | Who executed the indictment? | | | | | |
| Defendant | Who indicated [Defendant] for crime? | | | | | |
| Adjudicator | Who executed the indictment being adjudicated by [Adjudicator]? | | | | | |
| Place | Who executed the indictment in [Place]? | | | | | |
| Defendant, | Who indicated [Defendant] for crime being adjudicated by [Adjudicator]? | | | | | |
| Adjudicator Defendant, Place | Who indicated [Defendant] for crime in [Place]? | | | | | |
| Adjudicator, Place | Who executed the indictment being adjudicated by [Adjudicator] in [Place]? | | | | | |
| Defendant, | Who indicated [Defendant] for crime being adjudicated by [Adjudicator] in [Place]? | | | | | |
| Adjudicator, Place | | | | | | |
| - | Who adjudicated the indictment? | | | | | |
|------------------------------------------------|-------------------------------------------------------------------------------------------|--------------------------------------|----------------------------|------------|-------------|----|
| Defendant | Who adjudicated the indictment [Defendant] was charged in? | | | | | |
| Prosecutor | Who | adjudicated | the | indictment | executed | by |
| [Prosecutor]? | | | | | | |
| Place | Who adjudicated the indictment in [Place]? | | | | | |
| Defendant, | Who adjudicated the indictment [Defendant] was | | | | | |
| Prosecutor | charged in by [Prosecutor]? | | | | | |
| Defendant, Place | Who adjudicated the indictment [Defendant] was charged in in [Place]? | | | | | |
| Prosecutor, Place | Who | adjudicated | the | indictment | executed | by |
| [Prosecutor] in [Place]? | | | | | | |
| Defendant, | Who adjudicated the indictment [Defendant] was | | | | | |
| Prosecutor, Place | charged in by [Prosecutor] in [Place]? | | | | | |
| Adjudicator | | | | | | |
| Justice. Charge-Indict | - | Where did the indictment take place? | | | | |
| Defendant | Where was [Defendant] indicated? | | | | | |
| Prosecutor | Where did [Prosecutor] execute the indictment? | | | | | |
| Adjudicator | Where did [Adjudicator] adjudicate the indictment? | | | | | |
| Defendant, | Where did [Prosecutor] indicate [Defendant] for | | | | | |
| Prosecutor | crime? | | | | | |
| Defendant, | Where was [Defendant] indicated for crime being | | | | | |
| Adjudicator | adjudicated by [Adjudicator]? | | | | | |
| Prosecutor, | Where did [Prosecutor] execute the indictment being adjudicated by [Adjudicator]? | | | | | |
| Adjudicator Defendant, Prosecutor, Adjudicator | | | | | | |
| Place | Where did [Prosecutor] indicate [Defendant] for crime being adjudicated by [Adjudicator]? | | | | | |
| - | Who sued defendant? | | | | | |
| Defendant | Who sued [Defendant]? | | | | | |
| Adjudicator | Who | sued | defendant | being | adjudicated | by |
| [Adjudicator]? | | | | | | |
| Place | Who sued defendant in [Place]? | | | | | |
| Defendant, | Who | sued | [Defendant] | being | adjudicated | by |
| Adjudicator | [Adjudicator]? | | | | | |
| Defendant, Place | Who sued [Defendant] in [Place]? | | | | | |
| Adjudicator, Place | Who | sued | defendant | being | adjudicated | by |
| [Adjudicator] in [Place]? | | | | | | |
| Defendant, | Who | sued | [Defendant] | being | adjudicated | by |
| Adjudicator, Place | [Adjudicator] in [Place]? | | | | | |
| Plaintiff | - | Who was sued? | | | | |
| Plaintiff | Who was sued by [Plaintiff]? | | | | | |
| Adjudicator | Who was sued for crime being adjudicated by [Adjudicator]? | | | | | |
| Place | Who was sued in [Place]? | | | | | |
| Plaintiff, | Who was sued by [Plaintiff] for crime being adjudicated by [Adjudicator]? | | | | | |
| Adjudicator Plaintiff, Place | Who was sued by [Plaintiff] in [Place]? | | | | | |
| Adjudicator, Place | Who was sued for crime being adjudicated by [Adjudicator] in [Place]? | | | | | |
| Plaintiff, | Who was sued by [Plaintiff] for crime being adjudicated by [Adjudicator] in [Place]? | | | | | |
| Adjudicator, Place | | | | | | |
| Justice.Sue | Defendant | - | Who adjudicated the suing? | | | |
| Plaintiff | Who adjudicated the suing made by [Plaintiff]? | | | | | |
| Defendant | Who adjudicated the suing against [Defendant]? | | | | | |
| Place | Who adjudicated the suing in [Place]? | | | | | |
| Plaintiff, Defendant | Who adjudicated the suing against [Defendant] made by [Plaintiff]? | | | | | |
| Plaintiff, Place | Who adjudicated the suing made by [Plaintiff] in [Place]? | | | | | |
| Defendant, Place | Who adjudicated the suing against [Defendant] in [Place]? | | | | | |
| Plaintiff, | Who adjudicated the suing against [Defendant] | | | | | |
| Defendant, Place | made by [Plaintiff] in [Place]? | | | | | |
| - | Where did the suit take place? | | | | | |
| Plaintiff | Where did [Plaintiff] sue defendant? | | | | | |
| Defendant | Where was [Defendant] sued? | | | | | |
| Adjudicator | Where did [Adjudicator] adjudicate the suing? | | | | | |
| Adjudicator | | | | | | |
| Plaintiff, Defendant | Where did [Plaintiff] sue [Defendant]? | | | | | |
|-----------------------------------|----------------------------------------------------------|-----------------------------------------------------------|--------|------|----------|----|
| Place | Plaintiff, | Where did [Plaintiff] sue defendant being adjudicated by [Adjudicator]? | | | | |
| Adjudicator Defendant, | Where was [Defendant] sued being adjudicated by | | | | | |
| Adjudicator | [Adjudicator]? | | | | | |
| Plaintiff, Defendant, Adjudicator | Where did [Plaintiff] sue [Defendant] being adjudicated by [Adjudicator]? | | | | | |
| - | Who was convicted for crime? | | | | | |
| Adjudicator | Who was convicted by [Adjudicator] for crime? | | | | | |
| Defendant | Place | Who was convicted for crime in [Place]? | | | | |
| Adjudicator, Place | Who was convicted by [Adjudicator] for crime in [Place]? | | | | | |
| Justice. Convict | - | Who convicted defendant for crime? | | | | |
| Defendant | Who convicted [Defendant] for crime? | | | | | |
| Adjudicator | Place | Who convicted defendant for crime in [Place]? | | | | |
| Defendant, Place | Who convicted [Defendant] for crime in [Place]? | | | | | |
| - | Where did the conviction take place? | | | | | |
| Defendant | Where was [Defendant] convicted for crime? | | | | | |
| Place | Adjudicator | Where did [Adjudicator] convict the defendant for crime? | | | | |
| Defendant, | Where did [Adjudicator] convict [Defendant] for | | | | | |
| Adjudicator | crime? | | | | | |
| - | Who was sentenced for crime? | | | | | |
| Adjudicator | Who was sentenced by [Adjudicator] for crime? | | | | | |
| Defendant | Place | Who was sentenced for crime in [Place]? | | | | |
| Adjudicator, Place | Who was sentenced by [Adjudicator] for crime in [Place]? | | | | | |
| Justice. Sentence | - | Who sentenced the defendant for crime? | | | | |
| Defendant | Who sentenced [Defendant] for crime? | | | | | |
| Adjudicator | Place | Who sentenced the defendant for crime in [Place]? | | | | |
| Defendant, Place | Who sentenced [Defendant] for crime in [Place]? | | | | | |
| - | Where did the sentencing take place? | | | | | |
| Defendant | Where was [Defendant] sentenced for crime? | | | | | |
| Place | Adjudicator | Where did [Adjudicator] sentence the defendant for crime? | | | | |
| Defendant, | Where did [Adjudicator] sentence [Defendant] | | | | | |
| Adjudicator | for crime? | | | | | |
| - | Who was fined for crime? | | | | | |
| Adjudicator | Who was fined by [Adjudicator] for crime? | | | | | |
| Entity | Place | Who was fined for crime in [Place]? | | | | |
| Adjudicator, Place | Who was fined by [Adjudicator] for crime in [Place]? | | | | | |
| - | Who fined the entity for crime? | | | | | |
| Entity | Who fined [Entity] for crime? | | | | | |
| Adjudicator | Place | Who fined the entity for crime in [Place]? | | | | |
| Entity, Place | Who fined [Entity] for crime in [Place]? | | | | | |
| Justice.Fine | - | Where did the fining take place? | | | | |
| Entity | Where was [Entity] fined for crime? | | | | | |
| Place | Adjudicator | Where did [Adjudicator] fine the entity for crime? | | | | |
| Entity, Adjudicator | Where did [Adjudicator] fine [Entity] for crime? | | | | | |
| - | Who was executed for crime? | | | | | |
| Agent | Who was executed by [Agent] for crime? | | | | | |
| Person | Place | Who was executed for crime in [Place]? | | | | |
| Agent, Place | Who was executed by [Agent] for crime in [Place]? | | | | | |
| - | Who executed person for crime? | | | | | |
| Person | Who executed [Person] for crime? | | | | | |
| Agent | Place | Who executed person for crime in [Place]? | | | | |
| Person, Place | Who executed [Person] for crime in [Place]? | | | | | |
| Justice. Execute | - | Where did the execution take place? | | | | |
| Person | Where was [Person] executed for crime? | | | | | |
| Place | Agent | Where did [Agent] execute person for crime? | | | | |
| Person, Agent | Where did [Agent] execute [Person] for crime? | | | | | |
| - | Who extradited person? | | | | | |
| Destination | Who extradited person to [Destination]? | | | | | |
| Agent | Origin | Who extradited person from [Origin]? | | | | |
| Destination, Origin | Who | extradited | person | from | [Origin] | to |
| [Destination]? | | | | | | |
Table 12: Complete templates for argument roles in the ACE ontology (continued below).

(The remaining blocks of Table 12 cover the Justice.Extradite, Justice.Acquit, Justice.Pardon, and Justice.Appeal event types.)
| - | Where was the person extradited to? | | | | | | |
|--------------------|------------------------------------------------------------|--------------------------------------|-------------|-----------|------------|--------|------|
| Agent | Where did [Agent] extradite person to? | | | | | | |
| Origin | Where was the person extradited to from [Origin]? | | | | | | |
| Agent, Origin | Where | did | [Agent] | extradite | person | to | from |
| [Origin]? | | | | | | | |
| - | Where was the person extradited from? | | | | | | |
| Agent | Where did [Agent] extradite person from? | | | | | | |
| Destination | Where | was | the | person | extradited | from | to |
| [Destination]? | | | | | | | |
| Agent, Destination | Where | did | [Agent] | extradite | person | from | to |
| [Destination]? | | | | | | | |
| Defendant | - | Who was acquited of crime? | | | | | |
| Adjudicator | Who was acquited of crime by [Adjudicator]? | | | | | | |
| Adjudicator | - | Who acquited the defendant of crime? | | | | | |
| Defendant | Who acquited [Defendant] of crime? | | | | | | |
| - | Who was pardoned for crime? | | | | | | |
| Adjudicator | Who was pardoned by [Adjudicator] for crime? | | | | | | |
| Place | Who was pardoned for crime in [Place]? | | | | | | |
| Adjudicator, Place | Who was pardoned by [Adjudicator] for crime in [Place]? | | | | | | |
| - | Who pardoned defendant for crime? | | | | | | |
| Defendant | Who pardoned [Defendant] for crime? | | | | | | |
| Place | Who pardoned defendant for crime in [Place]? | | | | | | |
| Defendant, Place | Who pardoned [Defendant] for crime in [Place]? | | | | | | |
| - | Where did the pardon take place? | | | | | | |
| Defendant | Where was [Defendant] pardoned for crime? | | | | | | |
| Adjudicator | Where did [Adjudicator] pardon the defendant for crime? | | | | | | |
| Defendant, | Where did [Adjudicator] pardon [Defendant] for | | | | | | |
| Adjudicator | crime? | | | | | | |
| - | Who made the appeal? | | | | | | |
| Adjudicator | Who made the appeal to [Adjudicator]? | | | | | | |
| Place | Who made the appeal in [Place]? | | | | | | |
| Adjudicator, Place | Who made the appeal to [Adjudicator] in [Place]? | | | | | | |
| - | Who adjudicated the appeal? | | | | | | |
| Defendant | Who adjudicated the appeal made by [Defendant]? | | | | | | |
| Place | Who adjudicated the appeal in [Place]? | | | | | | |
| Defendant, Place | Who adjudicated the appeal made by [Defendant] in [Place]? | | | | | | |
| - | Where did the appeal take place? | | | | | | |
| Defendant | Where did [Defendant] make the appeal? | | | | | | |
| Adjudicator | Where did [Adjudicator] adjudicate the appeal? | | | | | | |
| Defendant, | Where | did | [Defendant] | make | the | appeal | to |
| Adjudicator | [Adjudicator]? | | | | | | |
## ACL 2023 Responsible NLP Checklist

A **For every submission:**
✓ A1. Did you describe the limitations of your work?
Section 4.5 and Limitation Section
✗ A2. Did you discuss any potential risks of your work?
The work is foundational research, and experiments are conducted on a public dataset.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
B ✓ **Did you use or create scientific artifacts?**
section 3.1, section 3.2, section 4.1, section 4.2
✓ B1. Did you cite the creators of artifacts you used?
section 3.1, section 3.2, section 4.1, section 4.2
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
The ACE 2005 Corpus was released by the Linguistic Data Consortium under the LDC User Agreement for Non-Members. The code of our work will be released on GitHub with a license TBD.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
The ACE 2005 Corpus is for research purposes only. Our work is for research purposes only.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
The task itself is designed for identifying person/organization involved in events. The dataset contains violent events such as attack, die, and injure.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? section 3.1 and appendix
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
appendix
## C ✓ **Did You Run Computational Experiments?**
Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
section 3.3 and appendix
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? section 3 and appendix
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
section 4 and appendix
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
appendix

D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
liu-etal-2023-sample | Are Sample-Efficient {NLP} Models More Robust? | https://aclanthology.org/2023.acl-short.144 | Recent results in image classification and extractive question answering have observed that pre-trained models trained on less in-distribution data have better out-ofdistribution performance. However, it is unclear how broadly these trends hold. We conduct a large empirical study across three tasks, three broadly-applicable modeling interventions (increasing model size, using a different adaptation method, and pre-training on more data), and 14 diverse datasets to investigate the relationship between sample efficiency (amount of data needed to reach a given ID accuracy) and robustness (how models fare on OOD evaluation). We find that higher sample efficiency is only correlated with better average OOD robustness on some modeling interventions and tasks, but not others. On individual datasets, models with lower sample efficiency can even be more robust. These results suggest that general-purpose methods for improving sample efficiency are unlikely to yield universal OOD robustness improvements, since such improvements are highly dataset- and task-dependent. Even in an era of large, multi-purpose pre-trained models, task-specific decisions may often be necessary for OOD generalization. | # Are Sample-Efficient Nlp Models More Robust?
Nelson F. Liu♠ Ananya Kumar♠ Percy Liang♠ **Robin Jia**♥
♠Computer Science Department, Stanford University, Stanford, CA
♥Department of Computer Science, University of Southern California, Los Angeles, CA
{nfliu, ananya, pliang}@cs.stanford.edu [email protected]
## Abstract
Recent results in image classification and extractive question answering have observed that pre-trained models trained on less in-distribution data have better out-ofdistribution performance. However, it is unclear how broadly these trends hold. We conduct a large empirical study across three tasks, three broadly-applicable modeling interventions (increasing model size, using a different adaptation method, and pre-training on more data), and 14 diverse datasets to investigate the relationship between sample efficiency (amount of data needed to reach a given ID accuracy) and robustness (how models fare on OOD evaluation). We find that higher sample efficiency is only correlated with better average OOD robustness on some modeling interventions and tasks, but not others. On individual datasets, models with lower sample efficiency can even be *more* robust.
These results suggest that general-purpose methods for improving sample efficiency are unlikely to yield universal OOD robustness improvements, since such improvements are highly dataset- and task-dependent. Even in an era of large, multi-purpose pre-trained models, task-specific decisions may often be necessary for OOD generalization.
## 1 Introduction
NLP models perform well when evaluated on data drawn from their training distribution (in-distribution / ID), but they typically suffer large drops in performance when evaluated on data distributions unseen during training (out-of-distribution
/ OOD; Blitzer, 2008).
How does exposure to ID training examples affect the ID-OOD gap? If two models have the same ID performance, will models trained on fewer ID examples (higher *sample efficiency*) also have higher OOD performance (higher *robustness*)? At one extreme, zero-shot models will not learn IDspecific patterns because they are not exposed to any labeled ID examples. Similarly, few-shot models trained on very few ID examples may also rely less on ID-specific patterns; if a model never sees the token *"cat"* while training on SNLI, then it will not learn that its presence is spuriously predictive of the contradiction label (Gururangan et al., 2018; Utama et al., 2021). Supporting this intuition, recent work in image classification (Radford et al.,
2021) and extractive question answering (Awadalla et al., 2022) show that zero-shot inference and few-shot fine-tuning improve *average* robustness across a range of OOD test sets. However, it is unclear how universal these trends are across various tasks and methods for reducing exposure to ID examples, or how predictive they are for any individual test set of interest. Figure 1 illustrates this central question.
We conduct a broad empirical study over 14 datasets across three tasks to investigate the relationship between exposure to ID training examples (sample efficiency) and robustness. We experiment with three modeling interventions that improve sample efficiency: (1) using natural language prompts for zero-shot prediction and during finetuning (Brown et al., 2020; Schick and Schütze, 2021; Gao et al., 2021); (2) fine-tuning models of increasing size; (3) fine-tuning models pre-trained on increasing amounts of data.
We find that higher sample efficiency is only sometimes correlated with better robustness, and the effect of specific modeling interventions varies by task. For example, increasing pre-trained model size substantially improves sample efficiency and results in higher average robustness in sentiment experiments, but these sample efficiency gains do not translate to higher average robustness in NLI and extractive QA experiments. On individual datasets, models with better sample efficiency can even be less robust (e.g., increasing model size when training on SST-2 and evaluating OOD on IMDb).
Overall, these results indicate that general-
![1_image_0.png](1_image_0.png)
purpose methods for improving sample efficiency are far from guaranteed to yield significant OOD
robustness improvements—their success is highly dataset- and task-dependent. Furthermore, even in this era of large, multi-purpose pre-trained language models, task-specific decisions are often necessary to achieve OOD generalization.
## 2 Measuring Sample Efficiency And Robustness
Consider two data distributions Diid and Dood. Let M be a model trained on examples drawn from Diid (i.e., the ID training data). We study the relationship between three properties of M: (1) the number of ID examples it was trained on; (2) M's performance on held-out examples from Diid (i.e.,
the ID performance); (3) M's performance on examples from Dood (i.e., the OOD performance).
Let M1 and M2 be two models with equivalent performance on held-out ID data. If M1 was trained on fewer ID examples than M2, then it has higher *sample efficiency*. If M1 has higher OOD
performance than M2, it has higher *effective robustness* (henceforth "robustness"; Taori et al., 2020).
Comparing models with equivalent ID performance controls for its effect on OOD performance, since improving ID performance usually yields commensurate improvements on OOD performance—in this study, we focus on OOD performance improvements *beyond what is expected* from ID gains.
Satisfying this equivalent-ID constraint is often difficult in practice; given an arbitrary model M1 and its corresponding ID performance, it is difficult to produce a different model M2 with identical ID
performance. Rather than explicitly training models to identical ID performance, we train models on varying-size subsamples of a given ID dataset and interpolate between the results to estimate (1) the number of labeled ID training examples necessary to achieve a particular ID performance (sample efficiency) and (2) OOD performance, given ID performance (robustness). These interpolated curves approximate the ideal setting of training a model for every possible ID value. Figure 1 provides a schematized example, with model B having better sample efficiency and robustness than model A.
## 3 Experimental Setup
We study three modeling interventions—using natural language prompts, increasing pre-trained model size, and pre-training on more data—on 14 total datasets spanning natural language inference
(NLI), sentiment analysis, and extractive question answering (QA). See Appendix A for further details about experimental settings.
Tasks and Datasets. In our natural language inference (NLI) experiments, we use MultiNLI
(Williams et al., 2018), SNLI (Bowman et al.,
2015), and MedNLI (Romanov and Shivade, 2018).
For sentiment analysis, we use IMDb reviews (Maas et al., 2011), SST-2 (Socher et al., 2013), and reviews from the "Movies and TV" subsection of the Amazon Reviews corpus (Ni et al., 2019).
Lastly, for extractive question answering, we use SQuAD (Rajpurkar et al., 2016), NaturalQuestions
(Kwiatkowski et al., 2019), TriviaQA, BioASQ
(Tsatsaronis et al., 2015), and the four SQuADShifts test sets (Miller et al., 2020).
Modeling Interventions. To understand the effect of a particular modeling intervention on sample efficiency and robustness, we evaluate pre-trained models that differ *only* along the axis of interest
(e.g., model size or fine-tuning method). Since the optimal fine-tuning hyperparameters depend on the ID training dataset size, we separately tune hyperparameters for each model on each training dataset subsample size, taking the models that achieve the best held-out ID performance for each setting. See Appendix B for hyperparameter optimization details.
![2_image_0.png](2_image_0.png)
## 4 Results And Discussion
Our results show that models with higher sample efficiency may not necessarily have higher average OOD robustness—different tasks and modeling interventions affect robustness in different ways (Figures 2-4). For example, prompt-based fine-tuning consistently improves both sample efficiency and average robustness, but only in low-data settings
(Figure 2). In contrast, increasing model size improves sample efficiency across the range of training dataset sizes and tasks, but only improves average robustness on sentiment analysis (Figure 3). On individual datasets, we even observe cases where models with *lower* sample efficiency have higher robustness (Figure 3d). See Appendix C for full results on every ID-OOD setting.
Natural Language Prompting. We compare BERTBASE models using (1) standard fine-tuning,
(2) prompt-based fine-tuning, and (3) zero-shot prompting. We also compare these results with zero-shot prompting of text-davinci-001, a much larger model trained on substantially more data. We run experiments on NLI and sentiment analysis, since extractive QA is not amenable to prompt-based fine-tuning with masked language models.
Figures 2a and 2b plot the average performance on all OOD datasets as a function of ID performance and the ID performance as a function of the number of labeled training examples. Sample efficiency improvements from prompt-based finetuning also translate to higher average robustness.
However these improvements only apply in the few-shot setting. As the size of the training dataset increases, the improvements in sample efficiency and average robustness steadily diminish. When using sufficiently large training datasets, models trained with prompt-based fine-tuning yield essentially the same sample efficiency and robustness results as standard fine-tuning (∼1K examples for NLI, ∼130 examples for sentiment).
However, results on individual OOD test sets can significantly differ from averaged-OOD trends.
For example, Figure 2c shows that prompt-based fine-tuning on MNLI and evaluating on SNLI improves sample efficiency in the few-shot setting but without any robustness improvements.
Surprisingly, we also find that zero-shot inference does not necessarily improve average robustness over prompt-based fine-tuning—zero-shot performance lies on or below the trend line formed by prompt-based fine-tuning, despite not using any ID-specific data at all. See Appendix C.1 for full results of natural language prompting for every ID-OOD setting.
Increasing Pre-Trained Model Size. We run experiments with the checkpoints of Turc et al.
(2019), who pre-train BERT models with various numbers of transformer layers (L) and hidden embedding sizes (H). We run experiments on NLI,
sentiment analysis, and extractive QA to compare pre-trained models of five sizes: (1) Large (L=24, H=1024), (2) Base (L=12, H=768), (3) Medium
![3_image_0.png](3_image_0.png)
(L=8, H=512), (4) Small (L=4, H=512), and (5) Tiny (L=2, H=128). Although increasing the pre-trained model size improves sample efficiency on every task, it does not always improve average robustness (Figure 3). In particular, increasing model size minimally affects average robustness in NLI and extractive QA (Figure 3a,3c), but substantially improves average robustness on sentiment analysis (Figure 3b).1 However, results on individual ID-OOD pairs can again significantly differ from average OOD performance trends. For example, when training on SST-2 and evaluating on IMDb, larger models actually have *lower* OOD
performance. This occurs because SST-2 examples (single sentences) are significantly shorter than IMDb examples (paragraphs). As a result, models trained on the shorter SST-2 examples struggle when evaluated on IMDb because this particular ID-OOD pair requires length extrapolation, and increasing pre-trained model size does not help models generalize to longer input sequences. As a result, effective robustness decreases because larger models have higher ID (SST-2) performance but unchanged OOD (IMDb) performance. See Appendix C.2 for full results of increasing pre-trained model size for every ID-OOD setting.
Pre-Training on More Data. We conduct NLI,
sentiment, and QA experiments with RoBERTa models pre-trained on 10M, 100M, and 1B tokens of web text (Zhang et al., 2021).
Pre-training on more data consistently improves sample efficiency, but only yields average robustness improvements in NLI and sentiment analysis
(Figure 4a,b). In extractive QA experiments, varying the amount of pre-training data does not significantly change average robustness (Figure 4c).
Again, we find that results on average OOD performance are not predictive of results on individual test sets—despite unchanged average OOD robustness when pre-training on more data, OOD performance can be higher on individual extractive QA test sets (e.g., SQuAD → BioASQ; Figure 4d).
See Appendix C.3 for full results of pre-training on more data for every ID-OOD setting.
![4_image_0.png](4_image_0.png)
## 5 Conclusion
We study the relationship between sample efficiency and robustness across three tasks and three modeling interventions, finding that sample efficiency improvements often fail to translate to improved robustness. As larger models quickly become more sample efficient, our results caution that sample efficiency and robustness are different axes of improvement and that optimizing for sample efficiency will not necessarily always yield robustness gains.
## Acknowledgments
We thank the anonymous reviewers for their feedback and comments that helped improve this work.
We also thank Kevin Lin and Eric Wallace for their feedback and useful discussions. NL was supported by an NSF Graduate Research Fellowship under grant number DGE-1656518. Other funding was provided by a PECASE Award and the Open Philanthropy Project.
## Limitations
Our study focuses on natural language understanding tasks, though it may also be interesting to study whether these trends apply in natural language generation tasks (e.g., summarization). In particular, it's possible that zero- or few-shot pre-trained models may do better on generation tasks because these tasks are more similar to the models' original pretraining objective (e.g., language modeling).
Furthermore, we compared few-shot prompt-based fine-tuning, zero-shot inference, and standard fine-tuning. However, other methods of adapting models to labeled ID data can have very different sample efficiency properties (e.g., in-context learning). Future work could explore whether these results hold with few-shot in-context learning or parameter-efficient fine-tuning (e.g., adapters; Houlsby et al., 2019).
## References
Anas Awadalla, Mitchell Wortsman, Gabriel Ilharco, Sewon Min, Hannaneh Hajishirzi, and Ludwig Schmidt. 2022. Exploring the landscape of distri-
butional robustness for question answering models.
In *Findings of EMNLP*.
John Blitzer. 2008. *Domain adaptation of natural language processing systems*. Ph.D. thesis, University of Pennsylvania.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference.
In *Proc. of EMNLP*.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel HerbertVoss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.
2020. Language models are few-shot learners. In Proc. of NeurIPS.
Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, and Danqi Chen. 2019. MRQA 2019 shared task: Evaluating generalization in reading comprehension. In *Proc. of MRQA*.
Tianyu Gao, Adam Fisch, and Danqi Chen. 2021.
Making pre-trained language models better few-shot learners. In *Proc. of ACL*.
Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A.
Smith. 2018. Annotation artifacts in natural language inference data. In *Proc. of NAACL*.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly.
2019. Parameter-efficient transfer learning for nlp. In *Proc. of ICML*.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019.
Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452–466.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts.
2011. Learning word vectors for sentiment analysis.
In *Proc. of ACL*.
R. Thomas McCoy, Ellie Pavlick, and Tal Linzen. 2019.
Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In *Proc. of* ACL.
John Miller, Karl Krauth, Benjamin Recht, and Ludwig Schmidt. 2020. The effect of natural distribution shift on question answering models. In *Proc. of* ICML.
Jianmo Ni, Jiacheng Li, and Julian McAuley. 2019.
Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In *Proc. of* EMNLP.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. ArXiv:2103.00020.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In *Proc. of* EMNLP.
Alexey Romanov and Chaitanya Shivade. 2018.
Lessons from natural language inference in the clinical domain. In *Proc. of EMNLP*.
Timo Schick and Hinrich Schütze. 2021. Exploiting cloze-questions for few-shot text classification and natural language inference. In *Proc. of EACL*.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In *Proc. of EMNLP*.
Rohan Taori, Achal Dave, Vaishaal Shankar, Nicholas Carlini, Benjamin Recht, and Ludwig Schmidt.
2020. Measuring robustness to natural distribution shifts in image classification. In *Proc. of NeurIPS*.
George Tsatsaronis, Georgios Balikas, Prodromos Malakasiotis, Ioannis Partalas, Matthias Zschunke, Michael R. Alvers, Dirk Weissenborn, Anastasia Krithara, Sergios Petridis, Dimitris Polychronopoulos, Yannis Almirantis, John Pavlopoulos, Nicolas Baskiotis, Patrick Gallinari, Thierry Artiéres, Axel-Cyrille Ngonga Ngomo, Norman Heino, Eric Gaussier, Liliana Barrio-Alvers, Michael Schroeder, Ion Androutsopoulos, and Georgios Paliouras. 2015.
An overview of the bioasq large-scale biomedical semantic indexing and question answering competition. *BMC bioinformatics*, 16(1):1–28.
Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Well-read students learn better:
On the importance of pre-training compact models.
ArXiv:1908.08962.
Prasetya Ajie Utama, Nafise Sadat Moosavi, Victor Sanh, and Iryna Gurevych. 2021. Avoiding inference heuristics in few-shot prompt-based finetuning.
In *Proc. of EMNLP*.
Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proc.
of NAACL.
Yian Zhang, Alex Warstadt, Xiaocheng Li, and Samuel R. Bowman. 2021. When do you need billions of words of pretraining data? In *Proc. of ACL*.
## A Experimental Setup Details
Natural Language Inference. We use MultiNLI
(Williams et al., 2018) and SNLI (Bowman et al.,
2015) as ID datasets. We use MultiNLI, SNLI and MedNLI (Romanov and Shivade, 2018) as OOD
test sets. All of our ID datasets have three labels
(entailment, contradiction, *neutral*).
We also evaluate OOD on HANS (McCoy et al.,
2019), a diagnostic dataset targeting lexical overlap, an ID-specific pattern in SNLI and MultiNLI. In MultiNLI and SNLI, the majority of examples with high lexical overlap between the NLI premise and hypothesis have the "entailment" label. In HANS,
50% of examples support this heuristic, and 50% contradict it, so a model that exclusively relies on the word-overlap heuristic would have an accuracy of 50%. However, HANS has only two labels (entailment, *non-entailment*). To evaluate our 3-class models on 2-class HANS, we follow McCoy et al. (2019) and translate *contradiction* or *neutral* model predictions to *non-entailment*.
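A minimal sketch of this label translation is shown below; the label strings are assumed for illustration and may not match the exact label encoding used in the experiments.

```python
def to_hans_label(pred: str) -> str:
    """Collapse a 3-class NLI prediction onto HANS's 2-class label space,
    following McCoy et al. (2019)."""
    return "entailment" if pred == "entailment" else "non-entailment"

assert to_hans_label("neutral") == "non-entailment"
assert to_hans_label("contradiction") == "non-entailment"
```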
We train on the MultiNLI and SNLI training sets. We evaluate on the MultiNLI matched development set, the SNLI test set, and the HANS evaluation split. When evaluating OOD on MedNLI, we evaluate on the *training set* (∼11K examples) because the development and test sets are quite small
(∼1.5K examples each).
Sentiment Analysis. We use the IMDb reviews dataset of (Maas et al., 2011), SST-2 (Socher et al.,
2013) as ID datasets. We use IMDb, SST-2, and reviews from the "Movies and TV" subsection of the Amazon Reviews corpus (Ni et al., 2019) as OOD datasets.
These datasets are all binary classification, where reviews are labeled as positive or *negative* sentiment. To construct the "Movies and TV" Amazon review sentiment dataset, we randomly select one- or two-star (negative) reviews and four- or five-star (positive) reviews from the full Amazon Reviews corpus, using 25,000 examples for training, 10,000 examples for development, and 10,000 examples for testing. Each of these splits is balanced.
We train on the IMDb, SST, and Amazon Reviews training splits, and use the corresponding evaluation splits to measure ID performance. When evaluating OOD on SST, we use the concatenation of the train and test sets (8471 examples in total),
since the original test set is quite small (1821 examples). Beyond this exception, we use each dataset's evaluation split for OOD evaluation.
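A rough sketch of how such a balanced review-sentiment split could be built is given below. The field names (`overall`, `reviewText`) follow the public Amazon Reviews JSON-lines release, but the exact filtering and sampling code is not specified in the paper, so this is only an illustrative assumption.

```python
import json
import random

def balanced_review_split(path, n_per_label, seed=0):
    """Label 1-2 star reviews as negative and 4-5 star reviews as positive
    (3-star reviews are dropped), then sample a balanced subset."""
    pos, neg = [], []
    with open(path) as f:
        for line in f:
            review = json.loads(line)
            stars, text = review.get("overall"), review.get("reviewText", "")
            if stars in (1, 2):
                neg.append((text, "negative"))
            elif stars in (4, 5):
                pos.append((text, "positive"))
    rng = random.Random(seed)
    rng.shuffle(pos)
    rng.shuffle(neg)
    return pos[:n_per_label] + neg[:n_per_label]

# e.g., a balanced 25k training split would use n_per_label=12_500
# train = balanced_review_split("Movies_and_TV.json", n_per_label=12_500)
```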
Extractive Question Answering. We use SQuAD (Rajpurkar et al., 2016) and NaturalQuestions (Kwiatkowski et al., 2019) as ID datasets.
We use SQuAD, NaturalQuestions, TriviaQA,
BioASQ (Tsatsaronis et al., 2015), and the SQuADShifts test sets of Miller et al. (2020) as OOD datasets.
The SQuADShifts test sets were constructed following the original SQuAD crowdsourcing procedure, but with passages drawn from both the original Wikipedia domain, as well as the New York Times (NYT), Amazon reviews, and Reddit. For NaturalQuestions, we only consider questions over paragraphs (as opposed to those over tables and lists). We use the MRQA 2019 Shared Task versions of TriviaQA and BioASQ (Fisch et al., 2019).
We also use the MRQA 2019 Shared Task version of NaturalQuestions, but only include examples with questions over paragraphs (removing those with questions over tables or lists). In all of these extractive QA datasets, models are given a passage and a question and tasked with identifying a substring of the passage that answers the question.
We train on the SQuAD and NaturalQuestions training splits, and use the corresponding evaluation splits to measure ID performance. When evaluating OOD on BioASQ, we use the concatenation of the train, development, and test sets (3977 examples in total), since the original test set is quite small (1518 examples). Beyond this exception, we use each dataset's evaluation split for OOD evaluation.
## B Hyperparameter Optimization Details
We conduct extensive hyperparameter optimization when training models on a particular ID dataset (or a subsample thereof). We re-tune hyperparameters for each subsample size, since the optimal value of certain hyperparameters may depend on number of available training examples (e.g., batch size and learning rate). For each experimental setting, we use a combination of (1) previously-reported hyperparameters (taken from prior work) and (2) random search (10 samples) over a pre-defined grid of reasonable hyperparameter values. For each experiment, we take the checkpoint with the best ID
performance.
Natural Language Inference. For every NLI
ID-OOD setting, we run experiments with the cross-product of learning rates in {1e-5, 2e-5, 3e5} with batch sizes of {16, 32}. We also sample additional runs from the following grid:
- Random seed: [0, 100000]
- Learning rate: {1e-5, 2e-5, 3e-5}
- Batch size: {16, 32}
- Number of training epochs: {10}
Sentiment Analysis. For every sentiment analysis ID-OOD setting, we run experiments with the cross-product of learning rates in {1e-5, 2e-5, 3e-5, 5e-5} with batch sizes of {16, 32} and training for
{20, 50} epochs. We also sample additional runs from the following grid:
- Random seed: [0, 100000]
- Learning rate: {1e-5, 2e-5, 3e-5, 5e-5}
- Batch size: {16, 32}
- Number of training epochs: {20, 50}
Extractive Question Answering. For every extractive question answering ID-OOD setting, we run experiments with the cross-product of learning rates in {2e-5, 3e-5, 5e-5} with batch sizes of
{16, 32}. We also sample additional runs from the following grid (a minimal sketch of this per-setting search is given after the grid below):
- Random seed: [0, 100000]
- Learning rate: {2e-5, 3e-5, 5e-5}
- Batch size: {16, 32}
- Number of training epochs: {4}
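The per-setting search could be implemented roughly as follows; `train_and_eval` is a hypothetical stand-in for fine-tuning a model with one configuration and returning its held-out ID score.

```python
import itertools
import random

def tune(grid, train_and_eval, n_random=10, seed=0):
    """Cross-product over learning rate and batch size, plus n_random extra
    configurations sampled from the grid; keep the config with the best ID score."""
    rng = random.Random(seed)

    def sample(key):
        value = grid[key]
        return rng.randint(*value) if isinstance(value, tuple) else rng.choice(value)

    configs = [{"seed": sample("seed"), "learning_rate": lr, "batch_size": bs,
                "num_train_epochs": sample("num_train_epochs")}
               for lr, bs in itertools.product(grid["learning_rate"], grid["batch_size"])]
    configs += [{key: sample(key) for key in grid} for _ in range(n_random)]
    return max(configs, key=train_and_eval)

qa_grid = {
    "seed": (0, 100_000),                 # sampled uniformly at random
    "learning_rate": [2e-5, 3e-5, 5e-5],
    "batch_size": [16, 32],
    "num_train_epochs": [4],
}
# best_config = tune(qa_grid, train_and_eval)   # train_and_eval is hypothetical
```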
## C Results Of All Methods On All Id-Ood Settings
![9_image_0.png](9_image_0.png)
C.1 Natural Language Prompting
![10_image_0.png](10_image_0.png)
![11_image_0.png](11_image_0.png)
C.2 Increasing Pre-Trained Model Size
![12_image_1.png](12_image_1.png)
![12_image_0.png](12_image_0.png)
![13_image_0.png](13_image_0.png)
![14_image_0.png](14_image_0.png)
C.3 Pre-Training on More Data
![15_image_0.png](15_image_0.png)
![16_image_0.png](16_image_0.png)
![17_image_0.png](17_image_0.png)
![18_image_0.png](18_image_0.png)
## ACL 2023 Responsible NLP Checklist

A **For every submission:**
✓ A1. Did you describe the limitations of your work?
Last section

A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
Sec 4
✓ B1. Did you cite the creators of artifacts you used?
Sec 4
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
All artifacts were open-access
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Not applicable. Left blank.
## C ✓ **Did You Run Computational Experiments?**
Left blank.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Sec 4
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Sec 4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Sec 4

D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
li-etal-2023-diversity | Diversity-Aware Coherence Loss for Improving Neural Topic Models | https://aclanthology.org/2023.acl-short.145 | The standard approach for neural topic modeling uses a variational autoencoder (VAE) framework that jointly minimizes the KL divergence between the estimated posterior and prior, in addition to the reconstruction loss. Since neural topic models are trained by recreating individual input documents, they do not explicitly capture the coherence between words on the corpus level. In this work, we propose a novel diversity-aware coherence loss that encourages the model to learn corpus-level coherence scores while maintaining high diversity between topics. Experimental results on multiple datasets show that our method significantly improves the performance of neural topic models without requiring any pretraining or additional parameters. | # Diversity-Aware Coherence Loss For Improving Neural Topic Models
Raymond Li†, Felipe González-Pizarro†, Linzi Xing†, Gabriel Murray‡**, Giuseppe Carenini**†
†University of British Columbia, Vancouver, BC, Canada
‡University of Fraser Valley, Abbotsford, BC, Canada
{raymondl, felipegp, lzxing, carenini}@cs.ubc.ca [email protected]
## Abstract
The standard approach for neural topic modeling uses a variational autoencoder (VAE) framework that jointly minimizes the KL divergence between the estimated posterior and prior, in addition to the reconstruction loss. Since neural topic models are trained by recreating individual input documents, they do not explicitly capture the coherence between topic words on the corpus level. In this work, we propose a novel diversity-aware coherence loss that encourages the model to learn corpus-level coherence scores while maintaining a high diversity between topics. Experimental results on multiple datasets show that our method significantly improves the performance of neural topic models without requiring any pretraining or additional parameters.
## 1 Introduction
The main goal of topic modeling is to discover latent topics that best explain the observed documents in the corpus. The topics, conceptualized as a multidimensional distribution over the vocabulary, are useful for many downstream applications, including summarization (Wang et al., 2020; Xiao et al., 2022), text generation (Wang et al., 2019; Nevezhin et al., 2020), dialogue modeling (Xu et al., 2021; Zhu et al., 2021), as well as analyzing the data used for pretraining large language models
(Chowdhery et al., 2022). When presented to humans, they are often represented as lists of the most probable words to assist the users in exploring and understanding the underlying themes in a large collection of documents. While the extrinsic quality of topics can be quantified by the performance of their downstream tasks, the intrinsic interpretability of topics appears to be strongly correlated with two important factors, namely *coherence* and *diversity*
(Dieng et al., 2020).
The topic *coherence* measures to what extent the words within a topic are related to each other in a meaningful way. Although human studies provide a direct method for evaluation, they can be costly, especially when a large number of models are waiting to be assessed. Therefore, various automatic metrics have been developed to measure topic coherence (Newman et al., 2010; Mimno et al., 2011; Xing et al., 2019; Terragni et al., 2021). For instance, the well-established Normalized Pointwise Mutual Information (NPMI) metric (Lau et al.,
2014), based on word co-occurrence within a fixed window, has been found to have a strong correlation with human judgment (Röder et al., 2015). On the other hand, topic *diversity* measures to what extent the topics are able to capture different aspects of the corpus based on the uniqueness of the topic words (Nan et al., 2019). Importantly, studies have shown that optimizing for coherence can come at the expense of diversity (Burkhardt and Kramer, 2019). Even without accounting for topic diversity, directly optimizing for topic coherence by itself is a non-trivial task, due to the computational overhead and non-differentiability of the score matrix (Ding et al., 2018).
While traditional topic modeling algorithms are in the form of statistical models such as the Latent Dirichlet Allocation (LDA) (Blei et al., 2003),
advancements in variational inference methods
(Kingma and Welling, 2014; Rezende et al., 2014)
have led to the rapid development of neural topic model (NTM) architectures (Miao et al., 2016, 2017; Srivastava and Sutton, 2017). More recently, follow-up works have focused on the integration of additional knowledge to improve the coherence of NTMs. Their attempts include the incorporation of external embeddings (Ding et al., 2018; Card et al.,
2018; Dieng et al., 2020; Bianchi et al., 2021a,b), knowledge distillation (Hoyle et al., 2020), and model pretraining (Zhang et al., 2022). However, as the model is designed to operate on a documentlevel input, one significant limitation of NTMs is their inability to explicitly capture the corpuslevel coherence score, which assesses the extent to which words within specific topics tend to occur together in a comparable context within a given corpus. For example, semantically irrelevant words such as "*politics*" and "*sports*" might be contextually relevant in a given corpus (e.g., government funding for the national sports body). Recently, one closely related work addresses this gap by reinterpreting topic modeling as a coherence optimization task with diversity as a constraint (Lim and Lauw, 2022).
While traditional topic models tend to directly use corpus-level coherence signals, such as factorizing the document-term matrix (Steyvers and Griffiths, 2007), and topic segment labeling with random walks on co-occurrence graphs (Mihalcea and Radev, 2011; Joty et al., 2013), to the best of our knowledge, no existing work has explicitly integrated corpus-level coherence scores into the training of NTMs without sacrificing topic diversity. To address this gap, we propose a novel diversity-aware coherence loss, which effectively improves both the coherence and diversity of NTMs when added as an auxiliary loss during training. Experimental results show that this method can significantly improve baseline models without any pretraining or additional parameters1.
## 2 Background
Latent Dirichlet Allocation (LDA) (Blei et al.,
2003) is a simple yet effective probabilistic generative model trained on a collection of documents.
It is based on the assumption that each document w in the corpus is described by a random mixture of latent topics z sampled from a distribution parameterized by θ, where the topics β are represented as a multidimensional distribution over the vocabulary V . The formal algorithm describing the generative process is presented in Appendix A. Under this assumption, the marginal likelihood of the document p(w|*α, β*) is described as:
$$\int_{\theta}\left(\prod_{i}^{|V|}\sum_{z_{i}}^{K}p(w_{i}|z_{i},\beta)p(z_{i}|\theta)\right)p(\theta|\alpha)d\theta\tag{1}$$
However, since the posterior distribution p(zi|θ)
is intractable for exact inference, a wide variety of approximate inference algorithms have been used for LDA (e.g., Hoffman et al. (2010)).
1The implementation of our work is available at:
https://github.com/raymondzmc/Topic-Model-DiversityAware-Coherence-Loss

A common strategy to approximate such a posterior is employing the variational auto-encoder
(VAE) (Kingma and Welling, 2014). In particular, NTMs use an encoder network to compress the document representation into a continuous latent distribution and pass it to a generative decoder to reconstruct the bag-of-words (BoW) representation of the documents. The model is trained to minimize the evidence lower bound (ELBO) of the marginal log-likelihood described by the LDA generative process:
$$\begin{array}{c}{{L_{\mathrm{ELBO}}=-\,D_{\mathrm{KL}}[q(\theta,z|w)||p(\theta,z|\alpha)]}}\\ {{\qquad\qquad+\,\mathbb{E}_{q(\theta,z|w)}[\log p(w|z,\theta,\alpha,\beta)]}}\end{array}\quad(2)$$
In Equation 2, the first term attempts to match the variational posterior over latent variables to the prior, and the second term ensures that the variational posterior favors values of the latent variables that are good at explaining the data (i.e., reconstruction loss). While standard Gaussian prior has typically been used in VAEs, **ProdLDA** (Srivastava and Sutton, 2017) showed that using a Laplace approximation of the Dirichlet prior achieved superior performance. To further improve topic coherence, CombinedTM (Bianchi et al., 2021a) concatenated the BoW input with contextualized SBERT embeddings (Reimers and Gurevych, 2019), while **ZeroshotTM** (Bianchi et al., 2021b) used only contextualized embeddings as input. These are the three baselines included in our experiments.
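To make the objective concrete, the snippet below sketches how the two terms of Equation 2 are typically computed in a ProdLDA-style NTM with a Gaussian prior obtained from the Laplace approximation; the variable names and shapes are illustrative simplifications, and the actual ProdLDA, CombinedTM, and ZeroshotTM implementations differ in their details.

```python
import torch
import torch.nn.functional as F

def ntm_elbo(bow, mu, logvar, beta, prior_mu, prior_var):
    """One-batch ELBO: KL(q(z|w) || p(z)) plus BoW reconstruction loss.
    bow: (B, V) bag-of-words counts; mu, logvar: (B, K) posterior parameters;
    beta: (K, V) topic-word logits; prior_mu, prior_var: (K,) prior parameters."""
    # reparameterised sample of the latent document representation
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    theta = F.softmax(z, dim=-1)                      # document-topic distribution
    word_dist = F.softmax(theta @ beta, dim=-1)       # (B, V) word distribution
    recon = -(bow * (word_dist + 1e-10).log()).sum(-1)

    # KL divergence between two diagonal Gaussians
    var = logvar.exp()
    kl = 0.5 * ((var / prior_var).sum(-1)
                + ((prior_mu - mu) ** 2 / prior_var).sum(-1)
                - mu.size(-1)
                + prior_var.log().sum() - logvar.sum(-1))
    return (recon + kl).mean()
```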
## 3 Proposed Methodology
Despite the recent advancements, one significant limitation of the NTM is that since the model is trained on document-level input, it does not have direct access to corpus-level coherence information
(i.e., word co-occurrence). Specifically, the topic-word distribution β is optimized on the document-level reconstruction loss, which may not be an accurate estimate of the true corpus distribution due to the inherent stochasticity of gradient-descent algorithms. We address this problem by explicitly integrating a corpus-level coherence metric into the training process of NTMs using an auxiliary loss.
## 3.1 Optimizing Corpus Coherence
To improve the topic-word distribution β, we maximize the corpus-level coherence through the wellestablished NPMI metric2(Bouma, 2009; Lau et al.,
2Detailed definition of NPMI is presented in Appendix B.
2014). After computing the pairwise NPMI matrix N ∈ R^(|V|×|V|) on the corpus, we use the negative β-weighted NPMI scores of the top-n words within each topic as the weight for the coherence penalty of β, where n is a hyperparameter that equals the number of topic words to use. Specifically, we apply a mask Mc to keep the top-n words of each topic and apply the row-wise softmax operation σ to ensure the value of the penalty is always positive.
We define the coherence weight WC in Equation 3.
$$W_{C}=1-\mathrm{normalize}(\sigma(\beta\odot M_{c})N)\quad\quad(3)$$
Intuitively, each value in σ(β ⊙ Mc)N represents the β-weighted average NPMI score with the other words in the topic. Then we use row-wise normalization to scale the values, so WC ∈ [0, 1].
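A minimal PyTorch sketch of Equation 3 is given below; the masking and the row-wise min-max normalization are illustrative choices consistent with the description above, not necessarily identical to the released implementation.

```python
import torch
import torch.nn.functional as F

def coherence_weight(beta, npmi, n=20):
    """Coherence weight W_C of Equation 3.
    beta: (K, V) topic-word logits; npmi: (V, V) corpus-level NPMI matrix."""
    topk = beta.topk(n, dim=-1).indices                    # top-n words per topic
    mask = torch.zeros_like(beta).scatter_(1, topk, 1.0)   # M_c
    # restrict the row-wise softmax to the top-n words (one way to realise beta ⊙ M_c)
    probs = F.softmax(beta.masked_fill(mask == 0, float("-inf")), dim=-1)
    scores = probs @ npmi                                  # beta-weighted NPMI per word, (K, V)
    # row-wise min-max normalisation so that W_C lies in [0, 1]
    lo = scores.min(dim=-1, keepdim=True).values
    hi = scores.max(dim=-1, keepdim=True).values
    w_c = 1.0 - (scores - lo) / (hi - lo + 1e-10)
    return w_c.detach()                                    # W is treated as a constant (Sec. 3.4)
```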
## 3.2 Improving Topic Diversity
One problem with the coherence weight WC is that it does not consider the diversity across topics. To account for this, we propose an additional method to simultaneously improve topic diversity by encouraging words unused by other topics to have higher probabilities. To achieve this, we bin the words within each topic into two groups, where the words in the first group consist of those that already have a high probability in other topics (i.e.,
appear within top-n words), while the second group does not. The intuition is that we want to penalize the words in the first group more than the words in the second group. In practice, we use a mask Md ∈ R^(K×V) for selecting β logits in the first group, where the hyperparameter λd ∈ [0.5, 1] is a balancing constant between the two groups and n is the number of topic words to use. We then compute the diversity-aware coherence weight WD as the λd-weighted sum of WC:
$$W_{D}=\lambda_{d}M_{d}\odot W_{C}+(1-\lambda_{d})(\neg M_{d})\odot W_{C}\,\,\,(4)$$
From Equation 4, we see that when λd = 0.5, there are no constraints on diversity since the two groups are penalized equally (2WD = WC).
## 3.3 Auxiliary Loss
From the two definitions of coherence weight
(WC, WD), we propose an auxiliary loss that can be directly combined with the ELBO loss (Equation 2) when training the NTM. Since β are unnormalized logits containing negative values, we apply the softmax operation σ(β) to avoid unbounded optimization.
$${\cal L}_{\mathrm{AUX}}=\frac{1}{2}[\sigma(\beta)]^{2}\odot W_{D}\tag{5}$$
In Equation 5, the topic probabilities are penalized by their negative weighted coherence score with the top-n words. The square operation ensures that words with very high probability are penalized to avoid the global minima; we justify this decision based on the partial derivatives in the next subsection.
The final objective function is the multitask loss consisting of the ELBO and our defined auxiliary loss:
$${\cal L}={\cal L}_{\mathrm{ELBO}}+\lambda_{a}{\cal L}_{\mathrm{AUX}}\tag{6}$$
During training, we employ a linear warm-up schedule to increase λa gradually, so the model can learn to reconstruct the BoW representation based on the topic distribution α before optimizing for coherence and diversity.
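Continuing the sketch above, Equations 4-6 and the linear warm-up of λa could be implemented roughly as follows; the grouping mask and the reduction of LAUX to a scalar are illustrative assumptions reconstructed from the equations rather than the released code.

```python
import torch
import torch.nn.functional as F

def diversity_aware_weight(beta, w_c, lambda_d=0.7, n=20):
    """Diversity-aware coherence weight W_D of Equation 4. Here, a word belongs
    to the first group of topic k if it appears in the top-n words of another topic."""
    topk = beta.topk(n, dim=-1).indices
    in_top = torch.zeros_like(beta).scatter_(1, topk, 1.0)            # (K, V)
    m_d = ((in_top.sum(0, keepdim=True) - in_top) > 0).float()        # M_d
    return lambda_d * m_d * w_c + (1.0 - lambda_d) * (1.0 - m_d) * w_c

def auxiliary_loss(beta, w_d):
    """L_AUX of Equation 5, reduced to a scalar by summation."""
    probs = F.softmax(beta, dim=-1)
    return 0.5 * (probs ** 2 * w_d).sum()

def total_loss(elbo, beta, w_d, epoch, warmup_epochs=50, lambda_a=100.0):
    """Equation 6 with the linear warm-up of lambda_a described above."""
    scale = lambda_a * min(1.0, epoch / warmup_epochs)
    return elbo + scale * auxiliary_loss(beta, w_d)
```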
## 3.4 Derivatives
We justify our auxiliary loss defined in Equation 5 using the derivatives w.r.t. the β parameters. For simplicity, we define pk,i = σ(βk)i as the softmax probability for word i in topic k. Since we detach the gradients when computing W, it can be treated as a constant w in the derivatives.
$$\begin{array}{c}{{\frac{\partial L_{\mathrm{AUX}}}{\partial\beta_{k,i}}=\ w\cdot p_{k,i}\cdot p_{k,i}(1-p_{k,i})+}}\\ {{\qquad\qquad w\cdot\sum_{j\neq i}p_{k,j}(-p_{k,j}p_{k,i})}}\end{array}\tag{7}$$
In Equation 7, the partial derivatives w.r.t. βk,i can be broken down into two terms. In the first term, the softmax derivative pk,i(1 − pk,i) is zero when pk,i is either 0 or 1 (really small or really large). The additional pk,i (from the square operation) penalizes over-confident logits and leads to better topics. Similarly for the second term, since Σi pk,i = 1, the sum Σj≠i pk,j pk,i is zero (a global minimum) when one logit dominates the others. Therefore, the additional pk,j has the same penalizing effect on the over-confident logits.
## 4 Experiments
In this section, we describe the experimental settings and present the quantitative results to assess the benefits of our proposed loss.
| Dataset | 20NewsGroup | | | | Wiki20K | | | | GoogleNews | | | |
|-----------------|---------------|-----------|--------------|-------|--------|-------|-------|-------|--------|-------|-------|-------|
| Metrics | NPMI | WE | I-RBO | TU | NPMI | WE | I-RBO | TU | NPMI | WE | I-RBO | TU |
| LDA | .0426 | .1624 | .9880 | .8077 | -.0470 | .1329 | .9934 | .8664 | -.2030 | .0989 | .9973 | .9065 |
| ProdLDA | .0730 | .1626 | .9923 | .7739 | .1712 | .1883 | .9948 | .7674 | .0919 | .1240 | .9974 | .8460 |
| CombinedTM | .0855 | .1643 | .9922 | .7705 | .1764 | .1893 | .9941 | .7509 | .1062 | .1316 | .9943 | .7498 |
| ZeroshotTM | .1008 | .1749 | .9910 | .7214 | .1783 | .1896 | .9916 | .6999 | .1218 | .1321 | .9967 | .8200 |
| ProdLDA + WC | .1233 | .1775 | .9916 | .7526 | .2386 | .2094 | .9905 | .6933 | .1236 | .1262 | .9973 | .8400 |
| CombinedTM + WC | .1301 | .1781 | .9910 | .7477 | .2392 | .2113 | .9890 | .6748 | .1378 | .1339 | .9938 | .7421 |
| ZeroshotTM + WC | .1456 | .1882 | .9895 | .6975 | .2455 | .2147 | .9862 | .6350 | .1562 | .1349 | .9964 | .8131 |
| ProdLDA + WD | .1235 | .1786 | .9940 | .7901 | .2367 | .2101 | .9929 | .7556 | .1275 | .1274 | .9975 | .8504 |
| CombinedTM + WD | .1309 | .1790 | .9935 | .7833 | .2404 | .2137 | .9918 | .7366 | .1429 | .1354 | .9942 | .7541 |
| ZeroshotTM + WD | .1482 | .1899 | .9919 | .7343 | .2460 | .2156 | .9890 | .6904 | .1569 | .1350 | .9967 | .8228 |
## 4.1 Datasets And Evaluation Metrics
To test the generality of our approach, we train and evaluate our models on three publicly available datasets: 20NewsGroups, Wiki20K (Bianchi et al.,
2021b), and GoogleNews (Qiang et al., 2022). We provide the statistics of the three datasets in Table 2.
Table 2: Statistics of the three datasets used in our experiments.
| Dataset | Domain | Docs | Vocabulary |
|--------------|----------|--------|--------------|
| 20Newsgroups | Email | 18,173 | 2,000 |
| Wiki20K | Article | 20,000 | 2,000 |
| Google News | News | 11,108 | 8,110 |
We use automatic evaluation metrics to measure the topic coherence and diversity of the models.
For coherence, we use the NPMI and Word Embedding (WE) (Fang et al., 2016) metrics, which measure the pairwise NPMI score and word embedding similarity, respectively, between the top-10 words of each topic. For diversity, we use Topic Uniqueness (TU) (Dieng et al., 2020), which measures the proportion of unique topic words, and Inversed Rank-Biased Overlap (I-RBO) (Terragni et al., 2021; Bianchi et al., 2021a), measuring the rank-aware difference between all combinations of topic pairs.
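The proportion-of-unique-words reading of TU described above can be computed in a few lines; this simple version follows the textual description here and may differ from the exact formulations in the cited works.

```python
def topic_uniqueness(topics, top_n=10):
    """Proportion of unique words among the top-n words of all topics."""
    top_words = [w for topic in topics for w in topic[:top_n]]
    return len(set(top_words)) / len(top_words)

# e.g., two topics that share one of their top-3 words
print(topic_uniqueness([["game", "team", "play"], ["game", "movie", "film"]], top_n=3))
# 5 unique words out of 6 -> 0.833...
```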
## 4.2 Baselines
We plug our proposed auxiliary loss to three baseline NTMs' training process to demonstrate the benefits of our approach across different settings.
Specifically, the three models are (1) ProdLDA
3Detailed description of the three datasets is provided in Appendix C.
(Srivastava and Sutton, 2017), (2) CombinedTM
(Bianchi et al., 2021a), and (3) ZeroshotTM
(Bianchi et al., 2021b). For comparison, we also include the results of the standard LDA algorithm
(Blei et al., 2003).
## 4.3 Hyperparemeter Settings
We follow the training settings reported by Bianchi et al. (2021a), with 100 epochs and a batch size of 100. The models are optimized using the ADAM
optimizer (Kingma and Ba, 2015) with the momentum set to 0.99 and a fixed learning rate of 0.002.
We do not modify the architecture of the models, where the inference network is composed of a single hidden layer and 100 dimensions of softplus activation units (Zheng et al., 2015). The priors over the topic and document distributions are learnable parameters. A 20% Dropout (Srivastava et al.,
2014) is applied to the document representations.
During our evaluation, we follow the same setup and used the top-10 words of each topic for the coherence and diversity metrics.
For the hyperparameters introduced in the diversity-aware coherence loss, both Mc and Md are computed using the top-20 words of each topic.
The scaling factor λa is linearly increased for the first 50 epochs and kept constant for the last 50 epochs; we set λa to 100 in order to balance the loss magnitude of LELBO and LAUX. The λd in the diversity loss is set by taking a mid-range value of 0.7 in the [0.5, 1] range. We do not perform any searches over our defined hyperparameters; we believe that additional experiments will yield better results (i.e., by using a validation set).
## 4.4 Results
Table 1 shows improvements across all settings.
However, with the basic coherence loss (WC), the significant coherence increase comes at the expense of topic diversity, where a slight decrease can be observed in the I-RBO and TU scores. In contrast, with the diversity-aware coherence loss (WD),
we observe that the model improves in coherence while having a significantly higher diversity over the basic loss (WC). The further coherence improvements can be attributed to the regularization effects, where words with a high probability of belonging to another topic are less likely to be related to words in the current topic. Lastly, it is worth noting that due to the gradual increase in λa, our proposed loss has a negligible effect on the original document-topic distribution θ, and only modifies the word distribution within the established topics. We provide some sample model outputs in Appendix D.
## 4.5 Coherence And Diversity Trade-Off
To study the effects of λd on the trade-off between coherence and diversity, we perform experiments with different values of λd with the ZeroshotTM
baseline, which has the best overall performance.
Note that when λd = 0.5, the objective is equivalent to the basic coherence loss. From the results on the 20NewsGroups dataset (Table 3), we see that coherence peaks at λd = 0.7 before the diversity penalty begins to dominate the loss. Further, while a higher value of λd leads to a lower coherence score, both coherence and diversity are still improved over the baselines for all values of λd, demonstrating the effectiveness of our method without the need for extensive hyperparameter tuning. We observe the same trend on the other datasets.
Table 3: Results on the 20NewsGroups dataset for different values of λd with ZeroshotTM.

|            | NPMI  | WE    | I-RBO | TU    |
|------------|-------|-------|-------|-------|
| ZeroshotTM | .1008 | .1749 | .9910 | .7214 |
| λd = 0.5   | .1456 | .1882 | .9895 | .6975 |
| λd = 0.6   | .1428 | .1875 | .9908 | .7198 |
| λd = 0.7   | .1482 | .1899 | .9919 | .7343 |
| λd = 0.8   | .1443 | .1890 | .9925 | .7499 |
| λd = 0.9   | .1369 | .1867 | .9933 | .7724 |
| λd = 1.0   | .1193 | .1816 | .9951 | .8086 |
## 4.6 Comparison With Composite Activation
The recent work by Lim and Lauw (2022) proposed a model-free technique to refine topics based on the parameters of the trained model. Specifically, they solve an optimization problem (with the NPMI
score as the objective) using a pool of candidates while setting the diversity score as a constraint.
Since their goal is similar to ours, we run further evaluations to compare the respective approaches.
In particular, we experiment with ZeroshotTM
on the 20NewsGroups dataset for K = 25, 50.
For comparison, we use their Multi-Dimensional Knapsack Problem (MDKP) formulation, since it achieved the best overall performance. Regrettably, considering larger topic numbers was not possible due to the NP-hard runtime complexity of MDKP. From the results in Table 4, we see that while our methods have similar coherence scores, MDKP achieves higher topic diversity due to its selectivity of less-redundant topics. However, when combining MDKP with our proposed loss (+ WD
+ MDKP), we achieve the highest overall performance across all metrics. This is expected since the pool of potential topic candidates is generated based on the trained model, and better-performing models lead to superior candidates.
Table 4: Results of combining our proposed loss (WD) with MDKP, using ZeroshotTM on the 20NewsGroups dataset for K = 25 and K = 50.

| K = 25      | NPMI      | WE        | I-RBO     | TU        |
|-------------|-----------|-----------|-----------|-----------|
| ZeroshotTM  | .1059     | .1791     | .9927     | .9152     |
| + MDKP      | .1481     | .1895     | **.9991** | .9804     |
| + WD        | .1433     | .1921     | .9981     | .9688     |
| + WD + MDKP | **.1657** | **.2043** | .9989     | **.9808** |

| K = 50      | NPMI      | WE        | I-RBO     | TU        |
|-------------|-----------|-----------|-----------|-----------|
| ZeroshotTM  | .1109     | .1746     | .9937     | .8498     |
| + MDKP      | .1578     | .1903     | .9983     | .9452     |
| + WD        | .1581     | .1921     | .9963     | .8840     |
| + WD + MDKP | **.1783** | **.1932** | **.9985** | **.9500** |
## 5 Conclusion And Future Work
In this work, we present a novel diversity-aware coherence loss to simultaneously improve the coherence and diversity of neural topic models. In contrast to previous methods, our approach directly integrates corpus-level coherence scores into the training of Neural Topic Models. The extensive experiments show that our proposal significantly improves the performance across all settings without requiring any pretraining or additional parameters.
For future work, we plan to perform extensive user studies to examine the extent to which improvements in quantitative metrics affect human preference. Further, we would like to extend our approach to other quantitative metrics (e.g., semantic similarity), and perform extrinsic evaluation to study the effects of our approach when the topics are used for downstream tasks (e.g., summarization, dialogue modeling, text generation).
## Limitations
We acknowledge several limitations of our work. First, the publicly available datasets used in our experiments are limited to English. Documents in other languages (e.g., Chinese) might require different segmentation techniques and may have unique characteristics in terms of vocabulary size, data sparsity, and ambiguity. Secondly, we only evaluate the quality of the topic models in terms of coherence and diversity. Future work should explore how our method impacts other characteristics, such as document coverage (i.e., how well documents match their assigned topics) and topic model comprehensiveness (i.e., how thoroughly the model covers the topics appearing in the corpus).
## Ethics Statement
The datasets used in this work are publicly available and selected from recent literature. Their content may contain biased views and should be read with discretion.
Our proposed method can be applied to extract topics from a large collection of documents. Researchers wishing to apply our method should ensure that the input corpora are adequately collected and do not infringe any copyrights.
## References
Federico Bianchi, Silvia Terragni, and Dirk Hovy.
2021a. Pre-training is a hot topic: Contextualized document embeddings improve topic coherence. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 759–766, Online. Association for Computational Linguistics.
Federico Bianchi, Silvia Terragni, Dirk Hovy, Debora Nozza, and Elisabetta Fersini. 2021b. Cross-lingual contextualized topic models with zero-shot learning.
In *Proceedings of the 16th Conference of the European Chapter of the Association for Computational* Linguistics: Main Volume, pages 1676–1683, Online.
Association for Computational Linguistics.
David M Blei, Andrew Y Ng, and Michael I Jordan.
2003. Latent dirichlet allocation. *Journal of Machine* Learning Research, 3(Jan):993–1022.
Gerlof Bouma. 2009. Normalized (pointwise) mutual information in collocation extraction. *Proceedings* of GSCL, 30:31–40.
Sophie Burkhardt and Stefan Kramer. 2019. Decoupling sparsity and smoothness in the dirichlet variational
autoencoder topic model. *Journal of Machine Learning Research*, 20(131):1–27.
Dallas Card, Chenhao Tan, and Noah A. Smith. 2018.
Neural models for documents with metadata. In *Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long* Papers), pages 2031–2040, Melbourne, Australia. Association for Computational Linguistics.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
Adji B. Dieng, Francisco J. R. Ruiz, and David M. Blei.
2020. Topic modeling in embedding spaces. *Transactions of the Association for Computational Linguistics*, 8:439–453.
Ran Ding, Ramesh Nallapati, and Bing Xiang. 2018.
Coherence-aware neural topic modeling. In *Proceedings of the 2018 Conference on Empirical Methods* in Natural Language Processing, pages 830–836, Brussels, Belgium. Association for Computational Linguistics.
Anjie Fang, Craig Macdonald, Iadh Ounis, and Philip Habel. 2016. Using word embedding to evaluate the coherence of topics from twitter data. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '16, page 1057–1060, New York, NY, USA.
Association for Computing Machinery.
Matthew Hoffman, Francis Bach, and David Blei. 2010.
Online learning for latent dirichlet allocation. In Advances in Neural Information Processing Systems, volume 23. Curran Associates, Inc.
Alexander Miserlis Hoyle, Pranav Goel, and Philip Resnik. 2020. Improving Neural Topic Models using Knowledge Distillation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1752–1771, Online. Association for Computational Linguistics.
Shafiq Joty, Giuseppe Carenini, and Raymond T Ng.
2013. Topic segmentation and labeling in asynchronous conversations. *Journal of Artificial Intelligence Research*, 47:521–573.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Diederik P. Kingma and Max Welling. 2014. Autoencoding variational bayes. In *2nd International* Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings.
Jey Han Lau, David Newman, and Timothy Baldwin.
2014. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality.
In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 530–539, Gothenburg, Sweden.
Association for Computational Linguistics.
Jia Peng Lim and Hady Lauw. 2022. Towards reinterpreting neural topic models via composite activations.
In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*, pages 3688–3703, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Yishu Miao, Edward Grefenstette, and Phil Blunsom.
2017. Discovering discrete latent topics with neural variational inference. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of *Proceedings of Machine Learning Research*, pages 2410–2419. PMLR.
Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing. In Proceedings of The 33rd International Conference on Machine Learning, volume 48 of *Proceedings of Machine Learning Research*, pages 1727–1736, New York, New York, USA. PMLR.
Rada Mihalcea and Dragomir Radev. 2011. Graphbased Natural Language Processing and Information Retrieval. Cambridge University Press.
David Mimno, Hanna Wallach, Edmund Talley, Miriam Leenders, and Andrew McCallum. 2011. Optimizing semantic coherence in topic models. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 262–272, Edinburgh, Scotland, UK. Association for Computational Linguistics.
Feng Nan, Ran Ding, Ramesh Nallapati, and Bing Xiang. 2019. Topic modeling with Wasserstein autoencoders. In *Proceedings of the 57th Annual Meeting of* the Association for Computational Linguistics, pages 6345–6381, Florence, Italy. Association for Computational Linguistics.
Egor Nevezhin, Nikolay Butakov, Maria Khodorchenko, Maxim Petrov, and Denis Nasonov. 2020. Topicdriven ensemble for online advertising generation. In Proceedings of the 28th International Conference on Computational Linguistics, pages 2273–2283, Barcelona, Spain (Online). International Committee on Computational Linguistics.
David Newman, Jey Han Lau, Karl Grieser, and Timothy Baldwin. 2010. Automatic evaluation of topic coherence. In *Human Language Technologies: The* 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 100–108, Los Angeles, California.
Association for Computational Linguistics.
Jipeng Qiang, Zhenyu Qian, Yun Li, Yunhao Yuan, and Xindong Wu. 2022. Short text topic modeling techniques, applications, and performance: A survey.
IEEE Transactions on Knowledge and Data Engineering, 34(3):1427–1445.
Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.
Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, volume 32 of *Proceedings of Machine Learning Research*, pages 1278–1286, Bejing, China. PMLR.
Michael Röder, Andreas Both, and Alexander Hinneburg. 2015. Exploring the space of topic coherence measures. In *Proceedings of the Eighth ACM International Conference on Web Search and Data Mining*,
WSDM '15, page 399–408, New York, NY, USA.
Association for Computing Machinery.
Akash Srivastava and Charles Sutton. 2017. Autoencoding variational inference for topic models. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014.
Dropout: A simple way to prevent neural networks from overfitting. *Journal of Machine Learning Research*, 15(56):1929–1958.
Mark Steyvers and Tom Griffiths. 2007. Probabilistic topic models. In Handbook of Latent Semantic Analysis, pages 439–460. Psychology Press.
Silvia Terragni, Elisabetta Fersini, and Enza Messina.
2021. Word embedding-based topic similarity measures. In *Natural Language Processing and Information Systems: 26th International Conference on* Applications of Natural Language to Information Systems, NLDB 2021, Saarbrücken, Germany, June 23–
25, 2021, Proceedings, pages 33–45. Springer.
Wenlin Wang, Zhe Gan, Hongteng Xu, Ruiyi Zhang, Guoyin Wang, Dinghan Shen, Changyou Chen, and Lawrence Carin. 2019. Topic-guided variational auto-encoder for text generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long and Short Papers), pages 166–177, Minneapolis, Minnesota. Association for Computational Linguistics.
Zhengjue Wang, Zhibin Duan, Hao Zhang, Chaojie Wang, Long Tian, Bo Chen, and Mingyuan Zhou.
2020. Friendly topic assistant for transformer based abstractive summarization. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 485–497, Online. Association for Computational Linguistics.
Wen Xiao, Lesly Miculicich, Yang Liu, Pengcheng He, and Giuseppe Carenini. 2022. Attend to the right context: A plug-and-play module for content-controllable summarization. arXiv preprint arXiv:2212.10819.
Linzi Xing, Michael J. Paul, and Giuseppe Carenini.
2019. Evaluating topic quality with posterior variability. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3471–3477, Hong Kong, China. Association for Computational Linguistics.
Yi Xu, Hai Zhao, and Zhuosheng Zhang. 2021. Topicaware multi-turn dialogue modeling. Proceedings of the AAAI Conference on Artificial Intelligence, 35(16):14176–14184.
Linhai Zhang, Xuemeng Hu, Boyu Wang, Deyu Zhou, Qian-Wen Zhang, and Yunbo Cao. 2022. Pre-training and fine-tuning neural topic model: A simple yet effective approach to incorporating external knowledge.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 5980–5989, Dublin, Ireland. Association for Computational Linguistics.
Hao Zheng, Zhanlei Yang, Wenju Liu, Jizhong Liang, and Yanpeng Li. 2015. Improving deep neural networks using softplus units. In 2015 International Joint Conference on Neural Networks (IJCNN), pages 1–4. IEEE.
Lixing Zhu, Gabriele Pergola, Lin Gui, Deyu Zhou, and Yulan He. 2021. Topic-driven and knowledgeaware transformer for dialogue emotion detection.
In *Proceedings of the 59th Annual Meeting of the* Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1571–1582, Online. Association for Computational Linguistics.
## A Lda Generative Process
The formal generative process of a corpus under the LDA assumption can be described by the following algorithm.
## Algorithm 1 Generative Process Of Lda
for each document w do
    Sample topic distribution θ ∼ Dirichlet(α)
    for each word wi do
        Sample topic zi ∼ Multinomial(θ)
        Sample word wi ∼ Multinomial(βzi)
    end for
end for
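A toy NumPy sketch of this generative process (function and variable names are ours) is shown below; `alpha` is the Dirichlet prior over topics and `beta` holds one word distribution per topic.

```python
import numpy as np

def generate_corpus(n_docs, doc_len, alpha, beta):
    """Sample a toy corpus following Algorithm 1.
    alpha: (K,) Dirichlet prior; beta: (K, V) per-topic word distributions."""
    rng = np.random.default_rng(0)
    K, V = beta.shape
    corpus = []
    for _ in range(n_docs):
        theta = rng.dirichlet(alpha)                     # document-topic distribution
        z = rng.choice(K, size=doc_len, p=theta)         # topic for each token
        words = [rng.choice(V, p=beta[k]) for k in z]    # word for each token
        corpus.append(words)
    return corpus
```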
## B Normalized Pointwise Mutual Information
Normalized Pointwise Mutual Information
(NPMI) (Lau et al., 2014) measures how much more likely the most representative terms of a topic co-occur than if they were independent. The method for computing the NPMI score between word wi and wj is described in Equation 8, where P(wi, wj ) is computed using a window size of 10.
This metric ranges from −1 to 1.
$$\text{NPMI}(w_{i},w_{j})=\frac{\log\frac{P(w_{i},w_{j})}{P(w_{i})P(w_{j})}}{-\log P(w_{i},w_{j})}\tag{8}$$
In practice, the pairwise NPMI matrix is computed by first counting the co-occurrences of all word pairs in the corpus and then calculating the pairwise scores following Equation 8. In summary, the NPMI matrix can be computed in O(|W| + |V|²) for a corpus of |W| words and vocabulary size |V|. Since the matrix is computed only once for each corpus prior to training, it does not add to the runtime cost of training.
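A hedged sketch of this precomputation is shown below. The probabilities are estimated from sliding windows of 10 tokens as stated above; the helper name and the exact counting conventions (boolean counts per window) are our assumptions and may differ from the authors' implementation.

```python
import math
from collections import Counter
from itertools import combinations

def build_npmi_scorer(docs, window=10):
    """Estimate P(w) and P(wi, wj) from 10-token windows and return a function
    that scores a word pair with Equation 8. `docs` is a list of token lists."""
    word_counts, pair_counts, n_windows = Counter(), Counter(), 0
    for doc in docs:
        for start in range(max(1, len(doc) - window + 1)):
            win = set(doc[start:start + window])
            n_windows += 1
            word_counts.update(win)                                   # boolean counts per window
            pair_counts.update(frozenset(p) for p in combinations(win, 2))

    def npmi(wi, wj):
        p_ij = pair_counts[frozenset((wi, wj))] / n_windows
        if p_ij == 0.0:
            return -1.0
        p_i, p_j = word_counts[wi] / n_windows, word_counts[wj] / n_windows
        return math.log(p_ij / (p_i * p_j)) / (-math.log(p_ij) + 1e-12)

    return npmi
```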
## C Datasets
This section provides details regarding the datasets we used. The 20NewsGroups4 dataset is a collection of email documents partitioned evenly across 20 categories (e.g., electronics, space); we use the same filtered subset provided by Bianchi et al. (2021a). The Wiki20K dataset5 contains a randomly sampled subset of English Wikipedia abstracts from DBpedia6. GoogleNews7 (Qiang et al., 2022) was downloaded from the Google News site by crawling titles and snippets. We do not perform any additional pre-processing and directly use the data provided by the sources to create the contextualized and BoW representations.
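As an illustration of this preprocessing step, the sketch below builds the two representations with scikit-learn and SentenceTransformers. The 2,000-word vocabulary cap matches the table above for 20NewsGroups/Wiki20K, but the SBERT checkpoint name and function name are illustrative assumptions rather than the authors' exact settings.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sentence_transformers import SentenceTransformer

def build_representations(raw_docs, vocab_size=2000,
                          sbert_name="paraphrase-distilroberta-base-v2"):
    """Build the bag-of-words matrix and SBERT embeddings that a
    CombinedTM/ZeroshotTM-style model expects as input."""
    vectorizer = CountVectorizer(max_features=vocab_size)
    bow = vectorizer.fit_transform(raw_docs)                # (n_docs, vocab_size) sparse
    sbert = SentenceTransformer(sbert_name)
    contextualized = sbert.encode(raw_docs, show_progress_bar=False)
    return bow, contextualized, vectorizer.get_feature_names_out()
```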
## D Sample Output
Table 5 provides a qualitative comparison of the topics generated by our proposed method using ZeroshotTM on the 20NewsGroups dataset.
## E Implementation Details
We base our implementation on the code provided by the authors of ZeroshotTM and CombinedTM (Bianchi et al., 2021a,b). Their repository8 also provides the evaluation metrics used in our experiments. Our Python code base uses external open-source libraries including NumPy9, SciPy10, PyTorch11, SentenceTransformers12, Pandas13, Gensim14, and scikit-learn15.
## F Computing Details
All our experiments are run on Linux machines with a single 1080Ti GPU (CUDA version 11.4). Each epoch with a batch size of 100 on the most computationally intensive setting (GoogleNews with K = 150) takes on average 3 seconds for the baseline models, and 8 and 15 seconds for WC and WD, respectively. Under this setting, a maximum VRAM usage of 800 MB was recorded.
4http://qwone.com/~jason/20Newsgroups
5https://github.com/vinid/data
6https://wiki.dbpedia.org/downloads-2016-10
7https://github.com/qiang2100/STTM/tree/master/dataset
8https://github.com/MilaNLProc/contextualized-topic-models
9https://numpy.org/
10https://scipy.org/
11https://pytorch.org/
12https://www.sbert.net/
13https://pandas.pydata.org/
14https://radimrehurek.com/gensim/
15https://scikit-learn.org/stable/
Table 5: Sample model output K = 25 by running ZeroshotTM (Z) with our proposed method (+WC and +WD)
on the 20NewsGroups dataset. We visualize the top-10 keywords of each topic with unique keywords in **bold**.
| Model | Top-10 Topic Keywords |
|---------|--------------------------------------------------------------------------------------------------------|
| Z | newsletter, aids, hiv, medical, cancer, disease, page, health, volume, patients |
| Z + WC | newsletter, aids, hiv, medical, cancer, disease, page, health, volume, patients |
| Z + WD | newsletter, hiv, aids, medical, cancer, disease, health, page, volume, patients |
| Z | mary, sin, god, heaven, lord, christ, jesus, grace, spirit, matthew |
| Z + WC | mary, sin, heaven, god, christ, lord, jesus, spirit, grace, matthew |
| Z + WD | mary, heaven, sin, christ, god, spirit, lord, jesus, holy, grace |
| Z | engine, car, bike, cars, oil, ride, road, dealer, miles, riding |
| Z + WC | engine, bike, car, cars, oil, ride, dealer, road, riding, driving |
| Z + WD | engine, bike, car, cars, oil, ride, dealer, riding, road, driving |
| Z | game, baseball, ball, season, fans, team, year, playing, players, winning |
| Z + WC | game, baseball, fans, ball, season, team, playing, teams, players, year |
| Z + WD | baseball, game, fans, season, teams, ball, team, playing, players, year |
| Z | fbi, koresh, batf, trial, compound, gas, investigation, media, branch, agents |
| Z + WC | fbi, batf, koresh, compound, gas, agents, trial, branch, investigation, waco |
| Z + WD | fbi, koresh, batf, compound, gas, agents, trial, branch, waco, investigation |
| Z | entry, rules, entries, email, build, info, file, char, program, section |
| Z + WC | entry, rules, entries, email, info, build, file, char, section, program |
| Z + WD | entry, rules, entries, email, build, info, file, char, program, section |
| Z | army, turkey, muslim, jews, greek, jewish, genocide, professor, ottoman, greece |
| Z + WC | army, muslim, turkey, ottoman, jews, greek, genocide, jewish, greece, muslims |
| Z + WD | muslim, turkey, ottoman, genocide, army, jews, greek, jewish, greece, muslims |
| Z | board, driver, video, cards, card, monitor, windows, drivers, screen, resolution |
| Z + WC | board, video, driver, cards, monitor, card, windows, drivers, screen, printer |
| Z + WD | video, board, driver, cards, monitor, card, drivers, printer, screen, windows |
| Z | frequently, previously, suggested, announced, foundation, spent, contain, grant, consistent, authors |
| Z + WC | basically, previously, frequently, generally, suggested, primary, authors, appropriate, kinds, greater |
| Z + WD | essentially, basically, kinds, consistent, frequently, authors, previously, primary, equivalent, suggested |
| Z | sale, condition, offer, asking, offers, shipping, items, price, email, sell |
| Z + WC | sale, condition, offer, shipping, asking, items, offers, sell, email, price |
| Z + WD | sale, condition, shipping, offer, asking, items, offers, sell, price, excellent |
| Z | application, window, xterm, motif, font, manager, widget, root, event, server |
| Z + WC | xterm, application, window, motif, font, widget, manager, x11r5, server, event |
| Z + WD | xterm, motif, font, application, window, widget, manager, x11r5, event, server |
| Z | gun, amendment, constitution, firearms, right, militia, guns, weapon, bear, weapons |
| Z + WC | amendment, constitution, firearms, gun, militia, right, guns, weapon, bear, weapons |
| Z + WD | amendment, firearms, constitution, gun, militia, guns, right, weapon, bear, weapons |
| Z | suggested, frequently, previously, authors, foundation, consistent, spent, join, et, announced |
| Z + WC | suggested, previously, frequently, greater, requirements, consistent, opportunity, authors, particularly, appropriate |
| Z + WD | spent, greater, association, appropriate, opportunity, requirements, posts, previously, success, training |
| Z | objective, atheist, atheism, morality, exists, belief, does, exist, atheists, existence |
| Z + WC | objective, atheist, atheism, morality, exists, belief, atheists, does, exist, existence |
| Z + WD | atheist, objective, atheism, belief, morality, exists, atheists, existence, exist, does |
| Z | think, president, people, Stephanopoulos, dont, jobs, just, know, mr, myers |
| Z + WC | think, president, Stephanopoulos, people, dont, jobs, just, know, mr, myers |
| Z + WD | think, president, Stephanopoulos, people, dont, jobs, just, know, mr, myers |
| Z | board, drive, ide, scsi, bus, isa, mhz, motherboard, internal, pin |
| Z + WC | board, drive, ide, scsi, motherboard, bus, isa, mhz, hd, controller |
| Z + WD | board, drive, ide, motherboard, scsi, mhz, bus, hd, isa, controller |
| Z | jpeg, images, image, formats, gif, format, software, conversion, quality, color |
| Z + WC | jpeg, images, formats, image, gif, format, conversion, software, quality, color |
| Z + WD | jpeg, images, formats, gif, image, format, conversion, software, quality, color |
| Z | msg, food, doctor, vitamin, doctors, medicine, diet, insurance, treatment, studies |
| Z + WC | msg, food, doctor, medicine, doctors, vitamin, diet, studies, treatment, insurance |
| Z + WD | msg, food, doctor, medicine, doctors, vitamin, diet, studies, patients, treatment |
| Z | agencies, encryption, keys, secure, algorithm, chip, enforcement, nsa, clipper, secret |
| Z + WC | agencies, encryption, secure, keys, algorithm, nsa, enforcement, encrypted, escrow, chip |
| Z + WD | secure, encryption, keys, agencies, algorithm, escrow, encrypted, enforcement, nsa, clipper |
| Z | windows, dos, nt, network, card, disk, pc, software, modem, operating |
| Z + WC | windows, dos, nt, card, network, disk, pc, modem, software, operating |
| Z + WD | windows, dos, nt, card, network, disk, pc, modem, software, operating |
| Z | address, site, thanks, looking, newsgroup, appreciate, advance, mailing, obtain, domain |
| Z + WC | address, thanks, newsgroup, site, appreciate, advance, looking, mailing, thank, reply |
| Z + WD | address, appreciate, site, thanks, advance, newsgroup, looking, mailing, thank, obtain |
| Z | launch, nasa, shuttle, mission, satellite, energy, mass, moon, orbit, lunar |
| Z + WC | launch, shuttle, nasa, mission, moon, satellite, orbit, energy, mass, lunar |
| Z + WD | shuttle, launch, nasa, mission, orbit, moon, satellite, lunar, mass, energy |
| Z | floor, door, said, people, azerbaijani, neighbors, apartment, like, saw, dont |
| Z + WC | floor, azerbaijani, door, said, people, apartment, neighbors, like, saw, dont |
| Z + WD | azerbaijani, floor, apartment, door, said, people, neighbors, saw, like, building |
| Z | join, grant, foundation, suggested, previously, discussions, frequently, authors, positions, announced |
| Z + WC | discussions, topic, suggested, join, mailing, responses, robert, lists, summary, received |
| Z + WD | join, discussions, foundation, robert, mailing, lists, topic, grant, received, responses |
| Z | pts, boston, van, pittsburgh, pp, san, vancouver, chicago, la, st |
| Z + WC | pts, boston, van, pittsburgh, pp, san, vancouver, chicago, buf, tor |
| Z + WD | pts, pittsburgh, van, boston, pp, chicago, buf, tor, san, det |
## ACL 2023 Responsible NLP Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations
✓ A2. Did you discuss any potential risks of your work?
Ethics
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 4, Appendix C, E
✓ B1. Did you cite the creators of artifacts you used?
Section 4
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
In our paper, datasets and software tools used to reproduce our results are open-sourced and available to all developers. All our datasets are available to the general public; the license and terms for use are available via the links provided in Appendix C. For all open-sourced software packages, we include a detailed list of the websites for Python libraries and the baseline code base in Appendix E; the license and terms for use can easily be found on those websites. Our models are stored in our private repository, and the license will be disclosed when the code base becomes public.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Appendix C, Limitations B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
## C ✓ **Did You Run Computational Experiments?** 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix E
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
4, and Appendix D
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix E
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
li-etal-2023-narrowbert | {N}arrow{BERT}: Accelerating Masked Language Model Pretraining and Inference | https://aclanthology.org/2023.acl-short.146 | Large-scale language model pretraining is a very successful form of self-supervised learning in natural language processing, but it is increasingly expensive to perform as the models and pretraining corpora have become larger over time. We propose NarrowBERT, a modified transformer encoder that increases the throughput for masked language model pretraining by more than 2x. NarrowBERT sparsifies the transformer model such that the self-attention queries and feedforward layers only operate on the masked tokens of each sentence during pretraining, rather than all of the tokens as with the usual transformer encoder. We also show that NarrowBERT increases the throughput at inference time by as much as 3.5x with minimal (or no) performance degradation on sentence encoding tasks like MNLI. Finally, we examine the performance of NarrowBERT on the IMDB and Amazon reviews classification and CoNLL NER tasks and show that it is also comparable to standard BERT performance. | # Narrowbert: Accelerating Masked Language Model Pretraining And Inference
Haoxin Li1 Phillip Keung3 Daniel Cheng1 Jungo Kasai1 **Noah A. Smith**1,2 1Paul G. Allen School of Computer Science & Engineering, University of Washington, USA
2Allen Institute for Artificial Intelligence, USA
3Department of Statistics, University of Washington, USA
{lihaoxin,d0,jkasai,nasmith}@cs.washington.edu, [email protected]
## Abstract
Large-scale language model pretraining is a very successful form of self-supervised learning in natural language processing, but it is increasingly expensive to perform as the models and pretraining corpora have become larger over time. We propose NarrowBERT, a modified transformer encoder that increases the throughput for masked language model pretraining by more than 2×. NarrowBERT sparsifies the transformer model such that the selfattention queries and feedforward layers only operate on the masked tokens of each sentence during pretraining, rather than all of the tokens as with the usual transformer encoder. We also show that NarrowBERT increases the throughput at inference time by as much as 3.5× with minimal (or no) performance degradation on sentence encoding tasks like MNLI. Finally, we examine the performance of NarrowBERT on the IMDB and Amazon reviews classification and CoNLL NER tasks and show that it is also comparable to standard BERT performance.
## 1 Introduction
Pretrained masked language models, such as BERT
(Devlin et al., 2019), RoBERTa (Liu et al., 2019),
and DeBERTa (He et al., 2021), have pushed the state-of-the-art on a wide range of downstream tasks in natural language processing. At their core is the transformer architecture (Vaswani et al., 2017) that consists of interleaved self-attention and feedforward sublayers. Since the former sublayer implies quadratic time complexity in the input sequence length (Vaswani et al., 2017), many have proposed methods to make the self-attention computation more efficient (Katharopoulos et al., 2020; Choromanski et al., 2021; Wang et al., 2020; Peng et al., 2021, 2022, *inter alia*).
In this work, we explore an orthogonal approach to efficiency: can we make masked language models efficient by *reducing* the length of the input sequence that each layer needs to process? In particular, pretraining by masked language modeling only involves prediction of masked tokens (typically, only 15% of the input tokens; Devlin et al., 2019; Liu et al., 2019). Despite this sparse pretraining objective, each transformer layer computes a representation for every token. In addition to pretraining, many downstream applications only use a single vector representation (i.e., only the [CLS] token)
for prediction purposes, which is much smaller than the number of input tokens (e.g., sequence classification tasks as in GLUE/SuperGLUE; Wang et al.,
2018, 2019). By narrowing the input sequence for transformer layers, we can accelerate both pretraining and inference.
We present NarrowBERT, a new architecture that takes advantage of the sparsity in the training objective. We present two NarrowBERT methods in the sections that follow (Figure 1). We provide the code to reproduce our experiments at https://github.com/lihaoxin2020/narrowbert. The first method reduces the input sequence for the feedforward sublayers by reordering the interleaved self-attention and feedforward sublayers in the standard transformer architecture (Press et al., 2020):
after two standard, interleaved transformer layers, self-attention sublayers are first applied, followed only by feedforward sublayers. This way, the feedforward sublayer computations are only performed for *masked tokens*, resulting in a 1.3× speedup in pretraining (§3). The second approach reduces the input length to the attention sublayers: *queries* are only computed for masked tokens in the attention mechanism (Bahdanau et al., 2015), while the *keys* and *values* are not recomputed for non-masked tokens, which leads to a greater than 2× speedup in pretraining.
We extensively evaluate our efficient pretrained models on well-established downstream tasks (e.g.,
Wang et al., 2018; Tjong Kim Sang and De Meulder, 2003). We find that our modifications result in almost no drop in downstream performance, while providing substantial pretraining and inference speedups (§3). While efficient attention variants are a promising research direction, this work presents a different and simple approach to making transformers efficient, with minimal changes in architecture.

![1_image_0.png](1_image_0.png)

*Figure 1: The NarrowBERT variants; ContextFirst uses attention to contextualize all-at-once near the beginning of the model.*
## 2 Narrowbert
In Figures 1b and 1c, we illustrate two variations of NarrowBERT. We define some notation to describe the configuration of our models. s refers to a **single self-attention layer** and f refers to a **single feedforward layer**. The colon : refers to the
'narrowing' operation, which gathers the masked positions from the output of the previous layer.
The first variation ('ContextFirst' in Fig. 1b)
uses attention to contextualize all-at-once at the beginning of the model. In short, the transformer layers have been rearranged to frontload the attention components. The example given in the figure specifies the model as sf{5,s}:{5,f}, which means that the input sentence is encoded by a self-attention layer, a feedforward layer, and 5 consecutive self-attention layers. At that point, the masked positions from the encoded sentence are gathered into a tensor and passed through 5 feedforward layers, thereby avoiding further computations for all unmasked tokens. Finally, the masked positions are unmasked and the MLM loss is computed.
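A minimal PyTorch sketch of this gather (narrowing) step is shown below; the function name and tensor layout are ours, not the released NarrowBERT code (which is in the linked repository).

```python
import torch

def gather_masked(hidden, masked_positions):
    """Gather the vectors at the masked positions so later layers only see them.
    hidden: (batch, seq_len, dim); masked_positions: (batch, n_masked) indices."""
    index = masked_positions.unsqueeze(-1).expand(-1, -1, hidden.size(-1))
    return torch.gather(hidden, dim=1, index=index)   # (batch, n_masked, dim)
```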
The second variation ('SparseQueries' in Fig. 1c)
does not reorder the layers at all. Instead, the sf:{5,sf} model contextualizes the input sentence in a more limited way. As shown in Figure 2, the input sentence is first contextualized by an s and an f layer, but the non-masked tokens are never contextualized again afterwards. Only the masked tokens are contextualized by the remaining {5,sf}
layers.
Since the masked tokens are only about 15%
of the total sentence length, the potential speedup is ~6.6× for every feedforward or attention layer downstream of a narrowing : operation. The memory usage can also decrease by ~6.6× for those layers since the sequence length has decreased, which allows us to use larger batch sizes during training.
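Building on the gather helper sketched above, the following single-head sketch illustrates the SparseQueries idea: queries are computed only for the masked positions, while keys and values cover the full, already contextualized sequence. This is an illustration with our own names and a simplified single-head layout, not the released implementation.

```python
import torch
import torch.nn.functional as F

def sparse_query_attention(x, masked_positions, w_q, w_k, w_v):
    """x: (batch, seq_len, dim); w_q/w_k/w_v: nn.Linear projections.
    Returns updated representations only for the masked positions."""
    q = w_q(gather_masked(x, masked_positions))        # (batch, n_masked, dim)
    k, v = w_k(x), w_v(x)                              # (batch, seq_len, dim)
    scores = q @ k.transpose(-2, -1) / k.size(-1) ** 0.5
    return F.softmax(scores, dim=-1) @ v               # (batch, n_masked, dim)
```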
For GLUE, Amazon, and IMDB text classification tasks, only the [CLS] token is used for prediction. When we finetune or predict with ContextFirst on a GLUE/Amazon/IMDB task, the feedforward layers only need to operate on the [CLS] token.
When we finetune or predict with SparseQueries, only the [CLS] token is used in the queries of the
![2_image_0.png](2_image_0.png)
Table 1: Speedups and GLUE results for NarrowBERT variants compared to baseline BERT and Funnel Transformer.

|                             | Pretrain Speedup | Finetune Speedup | Inference Speedup | MNLI | QNLI | SST2 | STS-B | QQP  | WNLI |
|-----------------------------|------------------|------------------|-------------------|------|------|------|-------|------|------|
| Baseline BERT ({12,sf})     | 1×               | 1×               | 1×                | 0.83 | 0.91 | 0.93 | 0.89  | 0.87 | 0.56 |
| Funnel Transformer (B4-4-4) | 0.88×            | 0.86×            | 0.78×             | 0.78 | 0.87 | 0.88 | 0.86  | 0.86 | 0.56 |
| ContextFirst                | 1.33×            | 1.24×            | 1.64×             | 0.82 | 0.90 | 0.91 | 0.89  | 0.87 | 0.56 |
| SparseQueries:              |                  |                  |                   |      |      |      |       |      |      |
| {1,sf}:{11,sf}              | 2.47×            | 4.73×            | 4.64×             | 0.77 | 0.87 | 0.89 | 0.84  | 0.80 | 0.56 |
| {2,sf}:{10,sf}              | 2.34×            | 2.82×            | 3.49×             | 0.81 | 0.88 | 0.91 | 0.88  | 0.87 | 0.59 |
| {3,sf}:{9,sf}               | 2.15×            | 2.43×            | 2.79×             | 0.81 | 0.89 | 0.91 | 0.86  | 0.87 | 0.56 |
| {4,sf}:{8,sf}               | 1.63×            | 2.13×            | 2.33×             | 0.82 | 0.88 | 0.91 | 0.89  | 0.87 | 0.57 |
attention layers. Everything after the narrowing :
operation only operates on the [CLS] token, which dramatically speeds up the NarrowBERT variants.
## 3 Experiments
We focus on 2 models in our experiments:
ContextFirst (sfsf{10,s}:{10,f}) and SparseQueries ({1,sf}:{11,sf}, *· · ·* , {4,sf}:{8,sf}).
Our NarrowBERT models all contain 12 selfattention and 12 feedforward layers in total, with the narrowing operation used at different points in the model. We compare NarrowBERT with the baseline BERT model and the Funnel Transformer model (Dai et al., 2020), which is a pretrained encoder-decoder transformer model where the encoder goes through a sequence of length bottlenecks.
In our experiments, we use 15% masking in masked language model (MLM) training. Following Liu et al. (2019), we do not use next sentence prediction as a pretraining task. We use large batch sizes and high learning rates to fully utilize GPU memory, as suggested in Izsak et al. (2021). Batches are sized to be the largest that fit in GPU
memory. We use a learning rate of 0.0005. Models are trained for 70k steps, where each step contains 1728 sequences of 512 tokens, and gradient accumulation is used to accumulate the minibatches needed per step. Models were trained on hosts with 8 Nvidia A100 GPUs. We used the Hugging Face implementations of the baseline BERT and Funnel Transformer models. We pretrained the baseline BERT, Funnel Transformer, and NarrowBERT models using the same Wikipedia and Books corpora and total number of steps.
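For concreteness, a Hugging Face-style configuration matching these totals might look as follows. Only the 1728-sequence step size, 70k steps, and the 0.0005 learning rate come from the text; the per-device micro-batch and accumulation split over 8 GPUs are our assumption.

```python
from transformers import TrainingArguments

# 1728 sequences per optimizer step over 8 GPUs: e.g. a per-device micro-batch
# of 27 with 8 accumulation steps (27 * 8 GPUs * 8 steps = 1728).
args = TrainingArguments(
    output_dir="narrowbert-pretrain",   # illustrative path
    max_steps=70_000,
    learning_rate=5e-4,
    per_device_train_batch_size=27,
    gradient_accumulation_steps=8,
)
```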
In Figure 3, we see the evolution of the development MLM loss over the course of model training.
The BERT and NarrowBERT models all converge to similar values, with the NarrowBERT models reaching a slightly higher MLM loss near the end of training.
We report the accuracy for MNLI (Williams et al., 2018), QNLI (Rajpurkar et al., 2016), SST2
(Socher et al., 2013), WNLI (Levesque et al.,
2012), IMDB (Maas et al., 2011), and English Amazon reviews (Keung et al., 2020), F1 for QQP (Sharma et al., 2019) and CoNLL-2003 NER
(Tjong Kim Sang and De Meulder, 2003), and
![3_image_1.png](3_image_1.png)
Table 2: Results on the CoNLL NER, IMDB, and Amazon reviews tasks.

|                                  | CoNLL NER | IMDB | Amazon2 | Amazon5 |
|----------------------------------|-----------|------|---------|---------|
| Baseline BERT ({12,sf})          | 0.90      | 0.93 | 0.96    | 0.66    |
| Funnel Transformer               | 0.87      | 0.92 | 0.95    | 0.65    |
| ContextFirst (sfsf{10,s}:{10,f}) | 0.89      | 0.93 | 0.95    | 0.65    |
| SparseQueries:                   |           |      |         |         |
| {1,sf}:{11,sf}                   | 0.87      | 0.91 | 0.94    | 0.65    |
| {2,sf}:{10,sf}                   | 0.89      | 0.91 | 0.95    | 0.65    |
| {3,sf}:{9,sf}                    | 0.89      | 0.92 | 0.95    | 0.65    |
| {4,sf}:{8,sf}                    | 0.89      | 0.93 | 0.95    | 0.65    |
![3_image_0.png](3_image_0.png)
Spearman correlation for STS-B (Cer et al., 2017).
For the Amazon reviews corpus, we consider both the usual 5-star prediction task and the binarized
(i.e., 1–2 stars versus 4–5 stars) task.
In Table 1, we present the results for our extrinsic evaluation on various GLUE tasks. The reduction in performance is small or non-existent, and on WNLI, the NarrowBERT variations perform better than the baseline. For SparseQueries, it is clear that using more layers prior to the narrowing operation improves performance, though the training and inference speedups become smaller. We note that the Funnel Transformer implementation in Pytorch is slower than the baseline BERT model; this may be due to the fact that the original implementation was written in Tensorflow and optimized for Google TPUs.1 It is well known that the variability in the performance of BERT on certain GLUE tasks is extreme (Mosbach et al., 2020; Dodge et al., 2020; Lee et al., 2019), where the differences in performance between finetuning runs can exceed 20%
(absolute). We have also observed this extreme variability in the course of our own GLUE finetuning experiments. While many techniques have been proposed to address this issue, it is not the goal of this work to apply finetuning stabilization methods to maximize BERT's performance. For this reason, we have excluded the RTE, MRPC, and COLA tasks (which are high-variance tasks studied in the aforementioned papers) from our evaluation.
In Table 2, we provide results on the IMDB
and Amazon reviews classification tasks and the CoNLL NER task. Generally, NarrowBERT is close to the baseline in performance, and the SparseQueries performance improves as more layers are used before the narrowing operation.
## 4 Discussion And Conclusion
We have explored two straightforward ways of exploiting the sparsity in the masked language model loss computations: rearranging the layers of the transformer encoder to allow the feedforward components to avoid computations on the non-masked positions, and sparsifying the queries in the attention mechanism to only contextualize the masked positions. The NarrowBERT variants can speed up training by a factor of ~2× and inference by a factor of ~3×, while maintaining very similar performance on GLUE, IMDB, Amazon, and CoNLL
NER tasks. Based on the favorable trade-off between speed and performance seen in Section 3, we recommend that practitioners consider using the SparseQueries NarrowBERT model with 2 or 3 layers before narrowing.
## Limitations
Due to our budget constraint, we only performed pretraining and downstream experiments with basesized transformer models. We also only applied the masked language modeling objective, but there are other effective pretraining objectives (e.g., Clark et al., 2020). Nonetheless, since we introduced minimal changes in architecture, we hope that subsequent work will benefit from our narrowing operations and conduct a wider range of pretraining and downstream experiments. While pretrained models can be applied to even more downstream tasks, we designed a reasonable task suite in this work, consisting of both GLUE sentence classification and the CoNLL NER sequential classification tasks.
## Acknowledgments
The authors thank the anonymous reviewers and Ofir Press at the University of Washington for helpful feedback. This research was supported in part by NSF grant 2113530.
## References
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In *Proc. of ICLR*.
Daniel Cer, Mona Diab, Eneko Agirre, Inigo LopezGazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. arXiv preprint arXiv:1708.00055.
Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamás Sarlós, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, David Belanger, Lucy Colwell, and Adrian Weller. 2021. Rethinking attention with Performers. In *Proc. of ICLR*.
Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pretraining text encoders as discriminators rather than generators. In *Proc. of ICLR*.
Zihang Dai, Guokun Lai, Yiming Yang, and Quoc Le.
2020. Funnel-transformer: Filtering out sequential redundancy for efficient language processing. In Proc. of NeurIPS.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proc. of NAACL*.
Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah Smith. 2020.
Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping. *arXiv* preprint arXiv:2002.06305.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. DeBERTa: decoding-enhanced bert with disentangled attention. In *Proc. of ICLR*.
Peter Izsak, Moshe Berchansky, and Omer Levy. 2021.
How to train BERT with an academic budget. In Proc. o EMNLP.
Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. 2020. Transformers are RNNs: Fast autoregressive transformers with linear attention. In *Proc. of ICML*.
Phillip Keung, Yichao Lu, György Szarvas, and Noah A
Smith. 2020. The multilingual amazon reviews corpus. *arXiv preprint arXiv:2010.02573*.
Cheolhyoung Lee, Kyunghyun Cho, and Wanmo Kang.
2019. Mixout: Effective regularization to finetune large-scale pretrained language models. arXiv preprint arXiv:1909.11299.
Hector J. Levesque, Ernest Davis, and Leora Morgenstern. 2012. The winograd schema challenge. In Proc. of KR.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke S. Zettlemoyer, and Veselin Stoyanov. 2019.
RoBERTa: A robustly optimized bert pretraining approach.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts.
2011. Learning word vectors for sentiment analysis.
In *Proceedings of the 49th Annual Meeting of the* Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics.
Marius Mosbach, Maksym Andriushchenko, and Dietrich Klakow. 2020. On the stability of fine-tuning bert: Misconceptions, explanations, and strong baselines. *arXiv preprint arXiv:2006.04884*.
Hao Peng, Jungo Kasai, Nikolaos Pappas, Dani Yogatama, Zhaofeng Wu, Lingpeng Kong, Roy Schwartz, and Noah A. Smith. 2022. ABC: Attention with bounded-memory control. In *Proc. of ACL*.
Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah A. Smith, and Lingpeng Kong. 2021.
Random feature attention. In *Proc. of ICLR*.
Ofir Press, Noah A. Smith, and Omer Levy. 2020. Improving transformer models by reordering their sublayers. In *Proc. of ACL*.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In *Proc. of EMNLP*.
Lakshay Sharma, Laura Graesser, Nikita Nangia, and Utku Evci. 2019. Natural language understanding with the quora question pairs dataset.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In *Proc. of EMNLP*.
Erik F. Tjong Kim Sang and Fien De Meulder.
2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proc. of CoNLL.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Proc. of NeurIPS*.
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. In *Proc. of NeurIPS*.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE:
A multi-task benchmark and analysis platform for natural language understanding. In *Proc. of BlackboxNLP*.
Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, and Hao Ma. 2020. Linformer: Self-attention with linear complexity.
Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proc.
of NAACL.
## ACL 2023 Responsible NLP Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Unnumbered 'Limitations' section.
✗ A2. Did you discuss any potential risks of your work?
Our paper is concerned with computational efficiency for pretraining and inference with BERT-style models and is not tied to a specific application.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
See the abstract and section 1.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
Section 1 links to the artifacts we created for others to use.
✓ B1. Did you cite the creators of artifacts you used?
Sections 1, 2, and 3.
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
The relevant licenses are not restrictive with respect to non-commercial use.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 1
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
The datasets we used either do not contain PII data, or the creators of the corpus have described their attempts to remove such data from the resource in their own corpus papers.
✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
The artifact we provide is code for training a model, not a dataset.
✗ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
The statistics for the datasets we used are unchanged from the statistics that can be found in the original corpus papers. We did not modify the datasets in our evaluations.
## C ✓ **Did You Run Computational Experiments?** Section 3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 3
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 3
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Sections 2 and 3
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
lei-etal-2023-s3hqa | {S}3{HQA}: A Three-Stage Approach for Multi-hop Text-Table Hybrid Question Answering | https://aclanthology.org/2023.acl-short.147 | Answering multi-hop questions over hybrid factual knowledge from the given text and table (TextTableQA) is a challenging task. Existing models mainly adopt a retriever-reader framework, which have several deficiencies, such as noisy labeling in training retriever, insufficient utilization of heterogeneous information over text and table, and deficient ability for different reasoning operations. In this paper, we propose a three-stage TextTableQA framework S3HQA, which comprises of retriever, selector, and reasoner. We use a retriever with refinement training to solve the noisy labeling problem. Then, a hybrid selector considers the linked relationships between heterogeneous data to select the most relevant factual knowledge. For the final stage, instead of adapting a reading comprehension module like in previous methods, we employ a generation-based reasoner to obtain answers. This includes two approaches: a row-wise generator and an LLM prompting generator (first time used in this task). The experimental results demonstrate that our method achieves competitive results in the few-shot setting. When trained on the full dataset, our approach outperforms all baseline methods, ranking first on the HybridQA leaderboard. |
## S3HQA: A Three-Stage Approach for Multi-Hop Text-Table Hybrid Question Answering
Fangyu Lei¹,², Xiang Li¹,², Yifan Wei¹,², Shizhu He¹,², Yiming Huang¹,², Jun Zhao¹,², Kang Liu¹,²
¹The Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences
²School of Artificial Intelligence, University of Chinese Academy of Sciences
{leifangyu2022, lixiang2022, weiyifan2021}@ia.ac.cn
{shizhu.he, jzhao, kliu}@nlpr.ia.ac.cn
## Abstract
Answering multi-hop questions over hybrid factual knowledge from the given text and table
(TextTableQA) is a challenging task. Existing models mainly adopt a retriever-reader framework, which has several deficiencies, such as noisy labeling in training the retriever, insufficient utilization of heterogeneous information over text and table, and deficient ability for different reasoning operations. In this paper, we propose a three-stage TextTableQA framework, S3HQA, which comprises a *retriever*, a *selector*, and a *reasoner*. We use a retriever with refinement training to solve the noisy labeling problem. Then, a *hybrid selector* considers the linked relationships between heterogeneous data to select the most relevant factual knowledge. For the final stage, instead of adopting a reading comprehension module as in previous methods, we employ a generation-based reasoner to obtain answers. This includes two approaches: a row-wise generator and an LLM prompting generator (used for the first time in this task). The experimental results demonstrate that our method achieves competitive results in the few-shot setting. When trained on the full dataset, our approach outperforms all baseline methods, ranking first on the HybridQA leaderboard.¹
## 1 Introduction
Question answering systems devote to answering various questions with the evidence located in the structured knowledge base (e.g., table) (Pasupat and Liang, 2015; Yu et al., 2018) or unstructured texts (Rajpurkar et al., 2016). Considering that many questions need to utilize multiple sources of knowledge jointly in real-world applications, the hybrid form of question answering over texts and tables (TextTableQA) has been proposed and attracted more and more attention (Chen et al.,
¹https://codalab.lisn.upsaclay.fr/competitions/7979.
| Row | Year | Score | Athlete | Place |
|-----|------|-------|------------------|---------|
| R1 | 1960 | 8,683 | Rafer Johnson | Eugene |
| R2 | 1960 | 8,709 | Philip Mulkey | Memphis |
| R3 | 1963 | 8,089 | Chuan-Kwang Yang | Walnut |

P1: ... Memphis is a city located along the Mississippi River in southwestern Shelby County, Tennessee, United States ...
P2: ... Chuan-Kwang Yang competed in the decathlon at the 1960 Olympic Games in Rome ...

Q1: Who is the athlete in a city located on the Mississippi River? A1: Philip Mulkey
Q2: In which year did Walnut-born athletes participate in the Rome Olympics? A2: 1960
Q3: Who is the higher scoring athlete from the cities of Eugene and Walnut? (Comparison) A3: Rafer Johnson

Figure 1: The examples of HybridQA.
2020b,a; Zhu et al., 2021; Chen et al., 2021; Zhao et al., 2022a; Wang et al., 2022a). Fact reasoning (Chen et al., 2020a,b) is a critical question type in TextTableQA. It requires jointly using multiple pieces of evidence from tables and texts to reason out the answers with different operations, such as correlation (e.g., multi-hop) and aggregation (e.g., comparison). Hyperlinks between table cells and linked passages are essential resources to establish their relationship and to support retrieval and reasoning for multi-hop questions. As shown in Figure 1, answering a complex question Q1 requires jointly reasoning from textual evidence (P1)
to table evidence ([R2, Place]) and then to other table evidence ([R2, Athlete]).
Existing methods consist of two main stages: a *retriever* and a *reader* (Chen et al., 2020b; Feng et al., 2022). The *retriever* first selects the cells and passages that are highly relevant to the question, and the *reader* then extracts a span from the retrieved results as the final answer. However, current two-stage methods still have three limitations, as follows.
1) **Noisy labeling for training retriever.** Existing retrieval methods usually ignore the weakly supervised answer annotation (Chen et al., 2020b; Wang et al., 2022b; Feng et al., 2022). For the Q2 of Figure 1, we cannot know the specific location
of the hybrid evidence, only given the final answer
"1960". Therefore, there is a lot of pseudo-true evidence labeled (Marked in green) automatically by string matching, which introduces a lot of evidence noise.
2) **Insufficient utilization of heterogeneous information.** After retrieval, existing methods selected a particular cell or passage for reading to extract the final answer (Chen et al., 2020b; Wang et al., 2022b). As for Q1 in Figure 1, previous models were more likely to choose P1 or the coordinates
[R2,Place] to extract the answer. However, these methods seldomly used the hybrid information of table schema and cell-passage hyperlinks, which is the key factor in answering multi-hop questions.
3) **Deficient ability for different reasoning operations.** Previous methods (Eisenschlos et al.,
2021; Kumar et al., 2021; Wang et al., 2022b)
mainly used an extraction module to obtain answers, which cannot support knowledge reasoning that requires comparison, calculation, and other operations.
In this paper, we propose a three-stage approach, S3HQA, to solve the above problems. (1) **Retriever with refinement training**: we propose a two-step training method that splits the training data into two parts, so that the noise in the retrieval phase can be alleviated. (2) **Hybrid selector**: we select supporting facts of different granularity and from different resources depending on the question type. By considering the hybrid data of tables and text, we propose a hybrid selection algorithm that can effectively utilize the heterogeneous information of tables and passages. (3) **Generation-based reasoner**: we utilize a generation-based model to address different question types. The model allows better aggregation of information on the input side, which not only yields better multi-hop reasoning capabilities but also handles comparison and counting questions. Furthermore, we are the first to use the LLM in-context learning approach for table-text hybrid question answering.
We evaluate our proposed model on the challenging TextTableQA benchmark HybridQA. The empirical results show that our approach outperforms all existing models.²
## 2 Our Approach

## 2.1 Problem Definition
Given a natural language question Q = {q_i}_{i=1}^{|Q|} and a table T = ⟨H, R⟩, H denotes the table headers and R = {r_i}_{i=1}^{|R|} denotes the |R| rows. Each row r_i consists of N cells, r_i = {c_ij}_{j=1}^{N}, and the number of headers is also N. Some cells c_ij have a linked passage P_ij. Our goal is to generate the answer A with model Θ, which is either a span from the table cells or the linked passages, or the derived result of a counting question.
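To make this notation concrete, the following minimal sketch shows one way such an instance could be represented in code; the class and field names are illustrative assumptions, not taken from the authors' released implementation.

```python
# Hypothetical container types for one HybridQA-style instance, mirroring the
# notation above: question Q, table T = <H, R> with N cells per row, optional
# linked passages P_ij, and answer A.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Cell:
    text: str                      # surface form of a cell c_ij
    passage: Optional[str] = None  # linked passage P_ij, if the cell carries a hyperlink

@dataclass
class Table:
    headers: List[str]             # H, of length N
    rows: List[List[Cell]]         # R = {r_i}; each row r_i holds N cells

@dataclass
class QAInstance:
    question: str                  # Q
    table: Table                   # T = <H, R>
    answer: str                    # A: a span from a cell or passage, or a derived count

# Toy example of the intended shape (values taken from Figure 1):
toy = QAInstance(
    question="Who is the athlete in a city located on the Mississippi River?",
    table=Table(
        headers=["Year", "Score", "Athlete", "Place"],
        rows=[[Cell("1960"), Cell("8,709"), Cell("Philip Mulkey"),
               Cell("Memphis", passage="Memphis is a city located along the Mississippi River ...")]],
    ),
    answer="Philip Mulkey",
)
```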
## 2.2 Retriever With Refinement Training
The retriever aims to perform initial filtering of heterogeneous resources. However, accurately labeling the location of answers consumes high labeling costs. For TextTableQA data, the answer A
usually appears in multiple locations, which makes it difficult for us to generate precise retrieval labels.² We use a two-step training method, with a row-based retriever and a passage-based retriever for each step.

²We release the source code at https://github.com/lfy79001/S3HQA
Inspired by Kumar et al. (2021), the retrieval has two steps. First, we divide the data D into two folds according to the string-matching labels G_i. Specifically, instances whose answer A appears exactly once form D1, and instances whose answer A appears multiple times form D2. Taking the example in Figure 1, Q1 and Q3 belong to D1, while Q2 belongs to D2. The data is organized in the form [CLS] q_1 q_2 ... q_|Q| [SEP] c_i1 c_i2 ... c_iN [SEP] or [CLS] q_1 q_2 ... q_|Q| [SEP] p_ij [SEP].
In the first step, we only use D1, whose labels are noiseless, to train a model Θ1. Then, in the second step, we use the trained weights Θ1 to train the model Θ2. For an input x, the loss function is:
$$L(\Theta_{2},x,{\mathcal{R}})=\sum_{z\in{\mathcal{R}}}-q(z)\log p_{\Theta_{2}}(z|x)$$
where q(z) = pΘ1(z|x, z ∈ R) is the probability distribution given by the first-step model restricted to the candidate rows R containing the answer span, taken here as a constant with zero gradients (Eisenschlos et al., 2021).
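As an illustration, a minimal PyTorch sketch of this refinement objective is given below; the function signature and the way candidate rows are masked are our own assumptions rather than the released implementation.

```python
# Sketch of the second-step (refinement) objective: the frozen first-step
# retriever Theta_1 gives a soft target distribution q(z) over the candidate
# rows R that contain the answer string, and Theta_2 is trained to match it.
import torch
import torch.nn.functional as F

def refinement_loss(logits_theta2: torch.Tensor,
                    logits_theta1: torch.Tensor,
                    candidate_mask: torch.Tensor) -> torch.Tensor:
    """
    logits_theta2 / logits_theta1: [num_rows] relevance scores for one question.
    candidate_mask: [num_rows] bool, True for rows whose text contains the answer (R).
    """
    neg_inf = torch.finfo(logits_theta1.dtype).min
    with torch.no_grad():  # q(z) is a constant with zero gradient
        masked_t1 = logits_theta1.masked_fill(~candidate_mask, neg_inf)
        q = F.softmax(masked_t1, dim=-1)            # p_{Theta_1}(z | x, z in R)
    log_p = F.log_softmax(logits_theta2, dim=-1)    # log p_{Theta_2}(z | x)
    return -(q * log_p).sum()                       # sum_z -q(z) log p(z|x)
```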
Meanwhile, we use a passage-based retriever to enhance the performance of a row-based retriever (PassageFilter). Specifically, we use the passage-based retriever to obtain a prediction score of passage relevance. Based on this score, we reorder the input of the row-based retriever. It avoids the limitation on input sequence length imposed by the pre-trained model.
## 2.3 Hybrid Selector
This module needs to combine the results of the two granularity retrievers. As for this task, we consider the question type and the relationships between the table and linked passages essential.
As shown in Figure 2, the hybrid selector chooses the appropriate data source from the two retrieval results depending on question types.
Specifically, for general *bridge* multi-hop questions, we use a single row and its linked passage.
For *comparison/count* questions, we instead consider multiple rows and further filter the related sentences, deleting the linked passages with low scores. This not only enables the generation module to obtain accurate information, but also prevents the introduction of a large amount of unrelated information. The selector outputs a mixed sequence with high relevance based on the relationships between the question, the table, and the passages. The algorithm is shown in Algorithm 1.
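Before the formal listing in Algorithm 1 below, the following minimal Python sketch illustrates the same selection logic; the string-level treatment of rows and passages and the question-type argument are simplifying assumptions.

```python
# Minimal sketch of the hybrid selection step formalized in Algorithm 1 below.
# Rows and passages are plain strings sorted by retriever score (most relevant first).
from typing import List

def hybrid_select(question: str,
                  ranked_rows: List[str],      # O_R from the row-based retriever
                  ranked_passages: List[str],  # O_P from the passage-based retriever
                  question_type: str,          # "bridge" or "comparison/count"
                  n_s: int = 3) -> str:
    if question_type == "bridge":
        top_row, top_passage = ranked_rows[0], ranked_passages[0]
        # If the best passage is already contained in the best row, the row suffices.
        if top_passage in top_row:
            evidence = top_row
        else:
            evidence = top_row + " " + top_passage
    else:
        # Comparison/count: keep the top n_s rows and drop the lower-scoring
        # half of the linked passages from the evidence.
        evidence = " ".join(ranked_rows[:n_s])
        for passage in ranked_passages[len(ranked_passages) // 2:]:
            evidence = evidence.replace(passage, "")
    return question + " " + evidence
```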
Algorithm 1 Hybrid Selector Algorithm
Input: question Q, table rows R, linked passages P, row-based retriever Θ_R, passage-based retriever Θ_P, selector target row count N_S
Output: generator input S
// Get the row/passage ordered lists by relevance scores
1: O_R ← sort(Θ_R(Q, R))
2: O_P ← sort(Θ_P(Q, P))
3: p_type ← Classification(Q)
4: if p_type = bridge then
5: if O_P[0] in O_R[0] then
6: S ← Q + O_R[0]
7: else
8: S ← Q + O_R[0] + O_P[0]
9: end if
10: else
11: O_P ← O_P[len(O_P)/2 :]
12: S ← Q + O_R[0 : N_S] − O_P
13: end if
14: return S

## 2.4 Generation-Based Reasoner
The results of the selector take both granularities into account. Unlike previous approaches, which were based on a span extraction module, we use a generation-based model for answer prediction.
## 2.4.1 Row-Wise Generator
To generate an accurate answer string A = (a1, a2, ..., an) given the question Q and the selection evidence S, we perform lexical analysis to identify the question type, such as counting or comparison, by looking for certain keywords or comparative adjectives. We utilize two special tags, ⟨Count⟩ and ⟨Compare⟩, to indicate the question type.
We then use the results of the passage retriever to rank the passages in order of their relevance, eliminating the impact of model input length limitations. Finally, we train a Seq2Seq language model with parameters Θ, using the input sequence Q, S and the previous outputs a<i to optimize the product of the probabilities of the output sequence a1, a2, ..., an:
$${\mathcal{A}}=\arg\max\prod_{i=1}^{n}P(a_{i}\mid a_{<i},{\mathcal{Q}},{\mathcal{S}};\Theta)$$
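A minimal sketch of this generator with Hugging Face Transformers is shown below; the input linearization and tag strings are illustrative assumptions (the paper fine-tunes BART-large with beam size 3 and maximum generation length 20, see Appendix B).

```python
# Sketch of the row-wise generator: a seq2seq model consumes the question-type
# tag, the question Q and the selected evidence S, and decodes the answer.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
tokenizer.add_tokens(["<Count>", "<Compare>"])   # special question-type tags
model.resize_token_embeddings(len(tokenizer))

def generate_answer(question: str, evidence: str, q_type_tag: str = "") -> str:
    # The exact concatenation format below is a guess, not the authors' format.
    source = f"{q_type_tag} {question} </s> {evidence}".strip()
    inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=1024)
    output_ids = model.generate(**inputs, num_beams=3, max_length=20)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```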
## 2.4.2 LLM Prompting Generator
With the emergence of large language models, in-context learning (Dong et al., 2022) and Chain-of-Thought prompting (Wei et al., 2022) have become
| Model | In-Table Dev | In-Table Test | In-Passage Dev | In-Passage Test | Total Dev | Total Test |
|---|---|---|---|---|---|---|
| Unsupervised-QG (Pan et al., 2021) | - | - | - | - | 25.7 / 30.5 | - |
| HYBRIDER (Chen et al., 2020b) | 54.3 / 61.4 | 56.2 / 63.3 | 39.1 / 45.7 | 37.5 / 44.4 | 44.0 / 50.7 | 43.8 / 50.6 |
| DocHopper (Sun et al., 2021) | - | - | - | - | 47.7 / 55.0 | 46.3 / 53.3 |
| MuGER2 (Wang et al., 2022b) | 60.9 / 69.2 | 58.7 / 66.6 | 56.9 / 68.9 | 57.1 / 68.6 | 57.1 / 67.3 | 56.3 / 66.2 |
| POINTR (Eisenschlos et al., 2021) | 68.6 / 74.2 | 66.9 / 72.3 | 62.8 / 71.9 | 62.8 / 71.9 | 63.4 / 71.0 | 62.8 / 70.2 |
| DEHG (Feng et al., 2022) | - | - | - | - | 65.2 / **76.3** | 63.9 / 75.5 |
| MITQA (Kumar et al., 2021) | 68.1 / 73.3 | 68.5 / 74.4 | 66.7 / 75.6 | 64.3 / 73.3 | 65.5 / 72.7 | 64.3 / 71.9 |
| MAFiD (Lee et al., 2023) | 69.4 / 75.2 | 68.5 / 74.9 | 66.5 / 75.5 | 65.7 / 75.3 | 66.2 / 74.1 | 65.4 / 73.6 |
| S3HQA | 70.3 / 75.3 | 70.6 / 76.3 | 69.9 / 78.2 | 68.7 / 77.8 | **68.4** / 75.3 | **67.9** / **75.5** |
| Human | - | - | - | - | - | 88.2 / 93.5 |

Table 1: Fully-supervised results on HybridQA (EM / F1) for the In-Table, In-Passage, and Total answer subsets of the development and test sets.
two particularly popular research topics in this field. In this paper, we introduce a prompting strategy for multi-hop TextTableQA.
We utilize selection evidence S and apply LLMbased prompting. We conducted experiments on both vanilla prompting and chain-of-thought prompting in zero-shot and few-shot scenarios.
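The sketch below shows one way such a prompt could be assembled from the selected evidence; apart from the instruction sentence reported in Appendix B.2, the template and exemplar format are our own assumptions.

```python
# Sketch of prompt assembly for the LLM generator. The Chain-of-Thought
# instruction is the one stated in Appendix B.2; everything else is illustrative.
INSTRUCTION_COT = ("Read the following table and text information, answer a question. "
                   "Let's think step by step.")
INSTRUCTION_DIRECT = "Read the following table and text information, answer a question."

def build_prompt(question: str, evidence: str, exemplars=None, chain_of_thought=True) -> str:
    parts = [INSTRUCTION_COT if chain_of_thought else INSTRUCTION_DIRECT]
    for ex in (exemplars or []):          # few-shot: 2 exemplars in the paper's setting
        parts.append(f"Context: {ex['evidence']}\nQuestion: {ex['question']}")
        if chain_of_thought and "rationale" in ex:
            parts.append(f"Reasoning: {ex['rationale']}")
        parts.append(f"Answer: {ex['answer']}")
    parts.append(f"Context: {evidence}\nQuestion: {question}\nAnswer:")
    return "\n\n".join(parts)
```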
## 3 Experiment

## 3.1 Experiment Setup
Datasets We conduct experiments on HybridQA (Chen et al., 2020b). The detailed statistics are shown in Appendix A. For evaluation, we followed the official evaluation to report exact match accuracy and F1 score.
Implementation details The implementation details are given in Appendix B. The reported experimental results are the average of five runs.
## 3.2 Fully-Supervised Results
Table 1 shows the comparison results between our models with previous typical approaches on both development and test sets. It shows that our proposed S3HQA works significantly better than the baselines in terms of EM and F1 on HybridQA.
The results indicate that S3HQA is an effective model for multi-hop question answering over tabular and textual data. Specifically, it can effectively handle multi-hop reasoning and make full use of heterogeneous information.
However, we found that our approach was outperformed by the DEHG model (Feng et al., 2022) in terms of F1 score on the Dev set. We speculate that this might be because the DEHG approach uses their own Open Information Extraction (OIE)
tool.
Table 2: Zero-shot and few-shot LLM-prompting results on HybridQA (EM and F1).
## 3.3 LLM-Prompting Results
We present our zero-shot and few-shot results in Table 2. "**Direct**" refers to a simple prompting method where only the question, context, and answer are provided to the model without any additional reasoning process. In contrast, "CoT" involves a human-authored Chain-of-Thought reasoning process that provides a more structured and logical way of prompting the model. The experiments demonstrate that in-context learning used to prompt large language models can achieve promising results. Specifically, utilizing the Chain-ofThought prompt method can significantly enhance the model's performance.
However, it's worth noting that there is still a performance gap compared to fine-tuning the model on the full dataset (Table 1). Fine-tuning allows the model to learn more specific information about the TextTableQA task, resulting in better performance. Nevertheless, our results show that the LLM-prompting method can be a useful alternative to fine-tuning, especially when there is a limited amount of labeled data available.
## 3.4 Ablation Studies
We conduct ablation studies on the test set. We validate the effects of three modules: *retriever* with refinement training, *hybrid selector*, and generation-based reasoner. The retriever performs initial filtering of heterogeneous resources; Selectors combined with hyperlinks further identify the exact evidence needed to answer multi-hop questions; and the reasoner uses the selection evidence to obtain the final answer.
| Model | Top1 |
|---|---|
| S3HQA-Retriever (DB) | 88.0 |
| S3HQA-Retriever (BE) | 87.3 |
| w/o Refinement training | 84.1 |
| w/o PassageFilter | 85.3 |
| Vanilla-Retriever (BE) | 82.0 |

Table 3: Ablation study of retrieval results. DB and BE denote models based on Deberta-base (He et al., 2020) and BERT-base-uncased (Devlin et al., 2018), respectively.
| Model | EM | F1 |
|---|---|---|
| S3HQA | 67.9 | 76.5 |
| w/o hybrid selector | 65.0 | 74.9 |
| w/o special tags | 67.2 | 76.0 |
| BERT-large reader | 66.8 | 75.8 |

Table 4: Ablation study of S3HQA.
Effect of proposed retriever. As shown in Table 3, under the BERT-base-uncased setting, the retriever with *refinement training* achieves a top1 recall of 87.3; with Deberta-base, the top1 retrieval performance further improves to 88.0. For *w/o refinement training*, where the entire data is used directly for training, the top1 recall drops by about 3.2 points. For *w/o PassageFilter*, where this mechanism is removed, the top1 recall drops to 85.3. For *Vanilla-Retriever*, where we use the row-based retriever of Kumar et al. (2021) and remove all our mechanisms, the top1 score drops by about 5.3 points. This shows that our method handles the weakly supervised labeling noise well.
Effect of hybrid selector. As shown in the Table 4, we removed the selector of S3HQA and replaced it with the previous cell-based selector (Wang et al., 2022b). This method directly uses the top1 result of the row retriever as input to the generator. *w/o hybrid selector* shows that the EM drops 2.9% and F1 drops 1.6%, which proves the effectiveness of our selector approach.
Effect of reasoner. As shown in the Table 4, we design two baselines. *BERT-large reader* (Chen et al., 2020b; Wang et al., 2022b) uses BERT (Devlin et al., 2018) as encoder and solves this task by predicting the start/end tokens. *w/o special tags* deletes the special tags. Both the two experiments demonstrate our S3HQA reasoner performs the best for HybridQA task.
## 4 Related Work
The TextTableQA task (Wang et al., 2022a) has attracted more and more attention. For the multi-hop setting, previous work has used a pipeline approach (Chen et al., 2020b), an unsupervised approach (Pan et al., 2021), multi-granularity evidence (Wang et al., 2022b), table pre-trained language models (Eisenschlos et al., 2021), multi-instance learning (Kumar et al., 2021), and graph neural networks (Feng et al., 2022) to solve this task.
As for numerical reasoning, which differs substantially from the multi-hop setting, there is also a large body of work (Zhu et al., 2021; Zhao et al., 2022; Zhou et al., 2022; Lei et al., 2022; Li et al., 2022; Wei et al., 2023) addressing these types of questions.
Unlike these methods, our proposed three-stage model S3HQA can alleviate the noise from weak supervision and solve different types of multi-hop TextTableQA questions by handling the relationship between tables and text.
## 5 Conclusion
This paper proposes a three-stage model consisting of retriever, selector, and reasoner, which can effectively address multi-hop TextTableQA. The proposed method solves three drawbacks of the previous methods: noisy labeling for training retriever, insufficient utilization of heterogeneous information, and deficient ability for reasoning. It achieves new state-of-the-art performance on the widely used benchmark HybridQA. In future work, we will design more interpretable TextTableQA
models to predict the explicit reasoning path.
## Limitations
Since HybridQA is the only dataset for the multi-hop TextTableQA problem, our model is evaluated on a single dataset, which may limit the generalizability of our findings. Transparency and interpretability are also important in multi-hop question answering. While our model achieves the best results, it does not fully predict the reasoning path explicitly and can only predict the row-level and passage-level paths. In future work, we will design more interpretable TextTableQA models.
## Acknowledgements
This work was supported by the National Key R&D Program of China (2022ZD0160503) and the National Natural Science Foundation of China (No.U1936207, No.61976211). This work was supported by the Strategic Priority Research Program of Chinese Academy of Sciences
(No.XDA27020100), the Youth Innovation Promotion Association CAS, Yunnan Provincial Major Science and Technology Special Plan Projects
(No.202202AD080004) and CCF-DiDi GAIA Collaborative Research Funds for Young Scholars.
## References
Steven Bird. 2006. Nltk: the natural language toolkit.
In *Proceedings of the COLING/ACL 2006 Interactive* Presentation Sessions, pages 69–72.
Wenhu Chen, Ming-Wei Chang, Eva Schlinger, William Yang Wang, and William W Cohen. 2020a.
Open question answering over tables and text. In *International Conference on Learning Representations*.
Wenhu Chen, Hanwen Zha, Zhiyu Chen, Wenhan Xiong, Hong Wang, and William Yang Wang. 2020b. Hybridqa: A dataset of multi-hop question answering over tabular and textual data. In *Findings of the Association for Computational Linguistics: EMNLP 2020*,
pages 1026–1036.
Zhiyu Chen, Wenhu Chen, Charese Smiley, Sameena Shah, Iana Borova, Dylan Langdon, Reema Moussa, Matt Beane, Ting-Hao Huang, Bryan R Routledge, et al. 2021. Finqa: A dataset of numerical reasoning over financial data. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3697–3711.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.
Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, and Zhifang Sui. 2022. A survey for in-context learning.
arXiv preprint arXiv:2301.00234.
Julian Eisenschlos, Maharshi Gor, Thomas Mueller, and William Cohen. 2021. Mate: Multi-view attention for table transformer efficiency. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 7606–7619.
Yue Feng, Zhen Han, Mingming Sun, and Ping Li.
2022. Multi-hop open-domain question answering over structured and unstructured knowledge. In *Findings of the Association for Computational Linguistics:*
NAACL 2022, pages 151–156.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. *arXiv preprint* arXiv:2006.03654.
Vishwajeet Kumar, Saneem Chemmengath, Yash Gupta, Jaydeep Sen, Samarth Bharadwaj, and Soumen Chakrabarti. 2021. Multi-instance training for question answering across table and linked text. arXiv preprint arXiv:2112.07337.
Sung-Min Lee, Eunhwan Park, Daeryong Seo, Donghyeon Jeon, Inho Kang, and Seung-Hoon Na.
2023. Mafid: Moving average equipped fusion-indecoder for question answering over tabular and textual data. In *Findings of the Association for Computational Linguistics: EACL 2023*, pages 2292–2299.
Fangyu Lei, Shizhu He, Xiang Li, Jun Zhao, and Kang Liu. 2022. Answering numerical reasoning questions in table-text hybrid contents with graph-based encoder and tree-based decoder. In Proceedings of the 29th International Conference on Computational Linguistics, pages 1379–1390.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart:
Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 7871–7880.
Xiao Li, Yin Zhu, Sichen Liu, Jiangzhou Ju, Yuzhong Qu, and Gong Cheng. 2022. Dyrren: A dynamic retriever-reranker-generator model for numerical reasoning over tabular and textual data. *arXiv preprint* arXiv:2211.12668.
Liangming Pan, Wenhu Chen, Wenhan Xiong, Min-Yen Kan, and William Yang Wang. 2021. Unsupervised multi-hop question answering by question generation.
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5866–5880.
Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language
Processing (Volume 1: Long Papers), pages 1470–
1480.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. *Advances in* neural information processing systems, 32.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In *Proceedings of* the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392.
Haitian Sun, William W Cohen, and Ruslan Salakhutdinov. 2021. End-to-end multihop retrieval for compositional question answering over long documents.
Dingzirui Wang, Longxu Dou, and Wanxiang Che.
2022a. A survey on table-and-text hybridqa: Concepts, methods, challenges and future directions.
arXiv preprint arXiv:2212.13465.
Yingyao Wang, Junwei Bao, Chaoqun Duan, Youzheng Wu, Xiaodong He, and Tiejun Zhao. 2022b.
MuGER2: Multi-granularity evidence retrieval and reasoning for hybrid question answering. In *Findings of the Association for Computational Linguistics:*
EMNLP 2022, pages 6687–6697, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022.
Chain of thought prompting elicits reasoning in large language models. *arXiv preprint arXiv:2201.11903*.
Yifan Wei, Fangyu Lei, Yuanzhe Zhang, Jun Zhao, and Kang Liu. 2023. Multi-view graph representation learning for answering hybrid numerical reasoning question. *arXiv preprint arXiv:2305.03458*.
Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, et al. 2018. Spider: A
large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task.
In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 3911–3921.
Yilun Zhao, Yunxiang Li, Chenying Li, and Rui Zhang.
2022. Multihiertt: Numerical reasoning over multi hierarchical tabular and textual data. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 6588–6600.
Yongwei Zhou, Junwei Bao, Chaoqun Duan, Youzheng Wu, Xiaodong He, and Tiejun Zhao. 2022. Unirpg:
Unified discrete reasoning over table and text as program generation. *arXiv preprint arXiv:2210.08249*.
Fengbin Zhu, Wenqiang Lei, Youcheng Huang, Chao Wang, Shuo Zhang, Jiancheng Lv, Fuli Feng, and Tat-Seng Chua. 2021. Tat-qa: A question answering benchmark on a hybrid of tabular and textual content in finance. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 3277–3287.
## A HybridQA Dataset
HybridQA is a large-scale, complex, and multihop TextTableQA benchmark. Tables and texts are crawled from Wikipedia. Each row in the table describes several attributes of an instance. Each table has its hyperlinked Wikipedia passages that describe the detail of attributes. It contains 62,682 instances in the train set, 3466 instances in the dev set and 3463 instances in the test set.
| Split | Train | Dev | Test | Total |
|------------|---------|-------|--------|----------------|
| In-Passage | 35,215 | 2,025 | 2,045 | 39,285 (56.4%) |
| In-Table | 26,803 | 1,349 | 1,346 | 29,498 (42.3%) |
| Computed | 664 | 92 | 72 | 828 (1.1%) |
| Total | 62,682 | 3,466 | 3,463 | 69,611 |
Table 5: Data Split: In-Table means the answer comes from plain text in the table, and In-Passage means the answer comes from certain passage.
## B Implementation Details

## B.1 Fully-Supervised Setting
We utilize PyTorch (Paszke et al., 2019) to implement our proposed model. During pre-processing, the input of questions, tables and passages are tokenized and lemmatized with the NLTK (Bird, 2006)
toolkit. We conducted the experiments on a single NVIDIA GeForce RTX 3090.
In the retriever stage, we use BERT-baseuncased (Devlin et al., 2018) and Deberta-base (He et al., 2020) to obtain the initial representations.
For the first step, batch size is 1, epoch number is 5, learning rate is 7e-6 (selected from 1e-5, 7e-6, 5e-6). The training process may take around 10 hours.
For the second step, we use a smaller learning rate 2e-6 (selected from 5e-6, 3e-6, 2e-6), epoch number is 5. The training process may take around 8 hours. In the selector stage, target row count NS
is 3. In the generator stage, we use BART-large language model (Lewis et al., 2020), the learning rate is 1e-5 (selected from 5e-5, 1e-5, 5e-6), batch size is 8, epoch number is 10, beam size is 3 and max generate length is 20.
## B.2 LLM-Prompting Setting
We use the OpenAI GPT-3.5 (text-davinci-003)
API model with the setting *temperature* = 0 in our experiments. For the few-shot setting, we use 2 shots. To elicit the LLM's capability to perform multi-hop reasoning, we use the text "Read the following table and text information, answer a question. Let's think step by step." as our prompt.
## ACL 2023 Responsible NLP Checklist

A. For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section Limitation
✓ A2. Did you discuss any potential risks of your work?
Section Limitation
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section2, Section3
✓ B1. Did you cite the creators of artifacts you used?
Section1, 2, 4
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section3.2, Section3.3
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Section3
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C ✓ **Did You Run Computational Experiments?** Section3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix B
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix B
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section3.1 and Appendix B
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix B
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
sun-etal-2023-towards | Towards Fewer Hallucinations in Knowledge-Grounded Dialogue Generation via Augmentative and Contrastive Knowledge-Dialogue | https://aclanthology.org/2023.acl-short.148 | Existing knowledge-grounded open-domain dialogue generation models often face the hallucination problem, i.e. the dialogue generative model will persist in an inappropriate knowledge and generate responses that inconsistent with the facts. We argue that this problem mainly stems from the polarized optimization objectives and weak knowledge generation ability. To mitigate the hallucination, we take inspiration from human communicating that people will replay euphemistic responses for the unclear or unrecognizable knowledge, and propose an Augmentative and Contrastive Knowledge Dialogue Expansion Framework (ACK-DEF). ACK-DEF constructs the augmentative and contrastive knowledge dialogue samples, which consist of the knowledge of different degrees of errors and the response of manual design, to expand the original training set and smooth the polarized optimization objective that enables models to generate ground-truth with or without gold knowledge. Not only the knowledge, ACK-DEF also provides the tactful responses of manual design corresponding to the incomplete correct knowledge. Experimental results on the Wikipedia of Wizard dataset show that employing the ACK-DEF is effective to alleviate the hallucination problem. | # Towards Fewer Hallucinations In Knowledge-Grounded Dialogue Generation Via Augmentative And Contrastive Knowledge-Dialogue
Bin Sun1, Yitong Li2,3, Fei Mi2, FanHu Bie3, Yiwei Li1**, Kan Li**1∗
1School of Computer Science & Technology, Beijing Institute of Technology 2 Huawei Noah's Ark Lab 3Huawei Technologies Ltd.
{binsun,liyiwei,likan}@bit.edu.cn
{liyitong3,mifei2,biefanhu}@huawei.com
## Abstract
Existing knowledge-grounded open-domain dialogue generation models often face the hallucination problem, i.e. the dialogue generative model will persist in an inappropriate knowledge and generate responses that inconsistent with the facts. We argue that this problem mainly stems from the polarized optimization objectives and weak knowledge generation ability. To mitigate the hallucination, we take inspiration from human communicating that people will replay euphemistic responses for the unclear or unrecognizable knowledge, and propose an Augmentative and Contrastive Knowledge Dialogue Expansion Framework (ACKDEF). ACK-DEF constructs the augmentative and contrastive knowledge dialogue samples, which consist of the knowledge of different degrees of errors and the response of manual design, to expand the original training set and smooth the polarized optimization objective that enables models to generate ground-truth with or without gold knowledge. Not only the knowledge, ACK-DEF also provides the tactful responses of manual design corresponding to the incomplete correct knowledge. Experimental results on the Wikipedia of Wizard dataset show that employing the ACK-DEF is effective to alleviate the hallucination problem.
## 1 Introduction
Recently, knowledge-grounded dialogue generation has drawn dramatic attention in the artificial intelligence community. Many efforts incorporate knowledge information to improve the performance of dialogue generation models (Zhou et al., 2018; Dinan et al., 2019; Gopalakrishnan et al., 2019; Kim et al., 2020; Zhao et al., 2020a; Zheng et al., 2021; Zhao et al., 2022a; Bao et al., 2022). However, these methods always face the hallucination problem, that is, the dialogue generation model may insist on an inappropriate piece of knowledge and generate responses that are inconsistent with the facts (Rashkin et al., 2021; Zhao et al., 2022a; Dziri et al., 2022).
We argue that the hallucination problem primarily caused by two aspects: (1) The optimization objective is usually polarized by the gold knowledgedialogue samples and general dialogue samples without knowledge in current knowledge-grounded dialogue datasets (Zhou et al., 2018; Gopalakrishnan et al., 2019; Dinan et al., 2019; Wu et al.,
2019; Komeili et al., 2022). Few datasets consider teaching models how to respond when dealing with incomplete correct knowledge, which makes the models tend to believe in the given knowledge, regardless of whether the knowledge is appropriate or not, resulting in hallucination problems. In addition, the knowledge retrieval system tends to extract irrelevant knowledge rather than relevant knowledge when the database is large, aggravating the hallucinations (Reimers and Gurevych, 2021; Liu et al., 2022). (2) The generation of knowledge may also face the hallucination problem and obtain the inappropriate knowledge, leading the generation of hallucination responses (Kim et al., 2020; Zhao et al., 2020a; Liu et al., 2022; Adolphs et al., 2021; Bao et al., 2022).
To mitigate the hallucination problem, we propose an Augmentative and Contrastive Knowledge Dialogue Expansion Framework (ACK-DEF), which is inspired by human communication: people reply with euphemistic responses when the relevant knowledge is unclear or unrecognizable. ACK-DEF smooths the polarized optimization objective by augmenting the training set with augmentative and contrastive knowledge-dialogue samples. Beyond the knowledge itself, we also design reply patterns for knowledge with different levels of errors. To this end, we propose augmentative knowledge dialogue expansion (AK) and contrastive knowledge dialogue expansion (CK). AK boosts the generalization ability of models on knowledge with minor noise. On the contrary, inspired by the *contrastive learning* paradigm (Cai et al., 2020; Chen et al., 2020a,b; Sun et al., 2021, 2022), CK reconstructs incorrect knowledge and designs euphemistic responses, which pushes the model to learn the reply pattern for incorrect knowledge and a better boundary between correct and incorrect knowledge.
Contributions: We propose ACK-DEF to construct new knowledge-dialogue samples that consist of knowledge with different levels of errors and manually designed responses, softening the training objective of models and thereby mitigating hallucination. Finally, we conduct extensive experiments to show the superior performance of ACK-DEF in alleviating hallucination.
## 2 Methodology
To mitigate the hallucination problem caused by the polarized optimization objectives in knowledge-grounded dialogue generation, we take inspiration from human communication and propose the Augmentative and Contrastive Knowledge Dialogue Expansion Framework (ACK-DEF).
ACK-DEF aims to soften the polarized training objectives of current knowledge-grounded dialogue generation methods and to teach the dialogue system reply patterns for knowledge with different levels of errors. To achieve this, we design two effective expansion methods, detailed below.
## 2.1 Augmentative Knowledge Dialogue
We propose the *Augmentative Knowledge (AK)* dialogue expansion to boost the generalization ability of the dialogue model on knowledge with similar semantics but different expressions, which can prevent the model from being misled by partially relevant knowledge returned by retrieval systems (Lian et al., 2019; Zhao et al., 2020b; Hedayatnia et al., 2020; Zheng et al., 2021; Shuster et al., 2021; Komeili et al., 2022). As shown in Figure 1, we employ a synonym data augmentation tool, which replaces words in the original knowledge with synonyms, to reconstruct the knowledge information (Miller, 1995). Considering that synonyms may disrupt the original semantics of the newly constructed knowledge, we constrain the replacement probability to [0.1, 0.2] and thus obtain approximate knowledge. Combining this knowledge with the original dialogue yields the "ak-less sample". In addition, we replace 30% to 50% of the words with synonyms to construct less similar knowledge. Inspired by the prompt learning paradigm (Yao et al., 2022; Valvoda et al., 2022; Zhao et al., 2022b), we manually produce Prefix-prompts and Post-prompts (see Appendix) to (1) make the new response more tactful for the less similar knowledge and (2) regulate and guide the dialogue generation process of the model. We call the sample consisting of less similar knowledge and the designed response the "ak-more sample".
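As a concrete illustration, the following sketch shows how both this synonym-based construction and the antonym-based contrastive construction of Section 2.2 below could be implemented with WordNet (Miller, 1995) via NLTK; the whitespace tokenization, sampling details, and the euphemistic responses are simplifying placeholders, not the authors' actual tool and templates.

```python
import random
from nltk.corpus import wordnet   # requires: nltk.download('wordnet')

EUPHEMISTIC_RESPONSES = [          # placeholders; the paper lists manually written ones
    "I'm not sure about that, it doesn't sound quite right to me.",
    "Sorry, I can't confirm this information.",
]

def synonym_augment(knowledge: str, replace_prob: float) -> str:
    """AK: replace_prob in [0.1, 0.2] gives ak-less knowledge; [0.3, 0.5] gives ak-more."""
    out = []
    for tok in knowledge.split():
        lemmas = {l.name().replace("_", " ")
                  for s in wordnet.synsets(tok) for l in s.lemmas()} - {tok}
        if lemmas and random.random() < replace_prob:
            out.append(random.choice(sorted(lemmas)))
        else:
            out.append(tok)
    return " ".join(out)

def antonym_corrupt(knowledge: str) -> str:
    """CK (Section 2.2 below): flip one word to a WordNet antonym, e.g. 'founded' -> 'abolish'."""
    tokens = knowledge.split()
    for i, tok in enumerate(tokens):
        antonyms = [a.name().replace("_", " ")
                    for s in wordnet.synsets(tok) for l in s.lemmas() for a in l.antonyms()]
        if antonyms:
            tokens[i] = antonyms[0]
            break
    return " ".join(tokens)

def make_ck_sample(knowledge: str, query: str) -> dict:
    # Corrupted knowledge paired with a knowledge-independent euphemistic response.
    return {"knowledge": antonym_corrupt(knowledge),
            "query": query,
            "response": random.choice(EUPHEMISTIC_RESPONSES)}
```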
## 2.2 Contrastive Knowledge Dialogue
We propose the *Contrastive Knowledge (CK)* dialogue expansion, inspired by the contrastive learning paradigm (Chen et al., 2020b; Cai et al., 2020), which not only constructs incorrect knowledge as negative samples for the original knowledge, but also builds euphemistic responses as positive samples for the original response paired with incorrect knowledge.¹ To help the model learn a boundary between correct and incorrect knowledge, we employ antonyms to make up new incorrect knowledge. For example, given the knowledge "*nintendo was founded on 23 september 1889 ...*", "founded" will be replaced with "abolish", which greatly changes the semantics but barely changes the expression. After that, we randomly choose a euphemistic response to replace the original response of the dialogue. Finally, the incorrect knowledge and the replacing euphemistic response are combined as the "ck-sample".
## 3 Experiment and Results

## 3.1 Experiment Settings

## 3.1.1 Dataset
We use the Wikipedia of Wizard (WoW) data, a well-established knowledge-grounded open-domain dialogue dataset, for our experiments. We pre-process the WoW dataset and extract single-turn knowledge dialogue samples. To evaluate the performance of our method in detail, we construct four test sets: normal, ak-less, ak-more and ck.
The normal set is the original test set, and the ak-less, ak-more and ck sets consist of ak-less, ak-more and ck samples, respectively. We also follow the WoW settings and divide the test set into two groups (seen test and unseen test): the topics of the knowledge in the unseen test set are missing from the training set.
## 3.1.2 Baseline
We employ the released PLATO-v1 (Bao et al.,
2020) model, a pre-trained dialogue generation model based on UniLM, for our experiment.
Fine-tuning We directly finetune a model on the original WoW training set. By this, the model can only see gold knowledge dialogue samples and general dialogue samples without knowledge. Hence, we call the fine-tuned model PLATO+GOLD.
Fine-tuning with ACK-DEF We finetune the model with the original set and the expansion samples that obtained through ACK-DEF. Thence, we call it PLATO+ACK-DEF.
1We manually construct some responses, please see Appendix for the detail.
## 3.1.3 Autoevaluation Metrics
Dialogue Metrics Our primary metrics of interest are Distinct-n (Li et al., 2016), Response Length (Len.) (Csaky et al., 2019), BLEU (Papineni et al., 2002), Embedding-based metrics (Greedy (GRE), Average (AVG), Extrema (EXT)) (Liu et al., 2016), and Coherence (COH) (Xu et al., 2018). Distinct-n evaluates the diversity of generated responses and is calculated as the ratio of distinct n-grams to all generated n-grams. Len. is the average number of words of all generated responses. BLEU measures the degree of word overlap between the generated response and the ground truth, indicating consistency at the surface level. The embedding-based metrics (GRE, AVG and EXT) evaluate the semantic relationship between generated responses and ground-truth responses, illustrating consistency at the semantic level. COH assesses the relevance between contexts and generated responses.
Knowledge Metrics We follow PLATO (Bao et al., 2020) and use knowledge precision, recall and F1 scores, which measure the token overlap between the ground-truth knowledge and the generated responses. "Recall" is the average ratio of the number of overlapping tokens to the number of tokens in the knowledge, i.e., how much knowledge information is contained in the response. "Precision" is the average ratio of the number of overlapping tokens to the number of tokens in the response, i.e., the proportion of knowledge information in the response. Even though we involve negative and incorrect knowledge in response generation, we still use the ground-truth knowledge to calculate the metrics in Tables 3 and 4.
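A minimal sketch of these token-overlap metrics is given below; whitespace tokenization and the exact handling of repeated tokens are simplifying assumptions, and the reported scores are averages over the test set.

```python
def knowledge_prf(response: str, knowledge: str):
    # Token overlap between one generated response and its ground-truth knowledge.
    resp_tokens = response.lower().split()
    know_tokens = set(knowledge.lower().split())
    overlap = sum(1 for t in resp_tokens if t in know_tokens)
    precision = overlap / len(resp_tokens) if resp_tokens else 0.0
    recall = overlap / len(know_tokens) if know_tokens else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def corpus_knowledge_prf(responses, knowledges):
    # Averages over the whole test set, matching how the tables report scores.
    scores = [knowledge_prf(r, k) for r, k in zip(responses, knowledges)]
    n = len(scores)
    return tuple(sum(s[i] for s in scores) / n for i in range(3))
```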
## 3.2 Dialogue Performance Analysis
Table 1 and Table 2 report the automatic results on four test sets and four unseen test sets, respectively. In these Tables, it can be observed that
(1) the PLATO+ACK-DEF has a competitive performance with PLATO+GOLD on the normal set, which means that the PLATO+ACK-DEF can recognize the golden knowledge and produce appropriate responses. (2) the PLATO+GOLD perform worse than PLATO+ACK-DEF on ak-less, which means that the robustness of the dialogue model only trained with golden knowledge is very weak.
| test set | Model | Distinct-1 | Distinct-2 | Len. | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | GRE | AVG | EXT | COH |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| normal | PLATO+GOLD | 0.1068 | 0.4533 | 13.69 | 0.4280 | 0.2965 | 0.2110 | 0.1529 | 0.7392 | 0.8689 | 0.6361 | 0.7808 |
| normal | PLATO+ACK-DEF | 0.0902 | 0.3984 | 16.20 | 0.4428 | 0.3017 | 0.2109 | 0.1499 | 0.7366 | 0.8683 | 0.6330 | 0.7878 |
| ak-less | PLATO+GOLD | 0.1194 | 0.5024 | 13.50 | 0.3861 | 0.2574 | 0.1745 | 0.1192 | 0.7160 | 0.8607 | 0.6148 | 0.7755 |
| ak-less | PLATO+ACK-DEF | 0.0823 | 0.3532 | 18.78 | 0.4502 | 0.2982 | 0.2015 | 0.1380 | 0.7307 | 0.8696 | 0.6293 | 0.7948 |
| ak-more | PLATO+GOLD | 0.1234 | 0.5174 | 12.81 | 0.1675 | 0.1062 | 0.0680 | 0.0435 | 0.6908 | 0.8551 | 0.5994 | 0.7706 |
| ak-more | PLATO+ACK-DEF | 0.0675 | 0.2946 | 21.83 | 0.4358 | 0.3001 | 0.2123 | 0.1542 | 0.7745 | 0.9151 | 0.7093 | 0.8098 |
| ck | PLATO+GOLD | 0.1109 | 0.4779 | 13.23 | 0.2965 | 0.1779 | 0.1080 | 0.0657 | 0.5838 | 0.7622 | 0.5373 | 0.7712 |
| ck | PLATO+ACK-DEF | 0.0652 | 0.2029 | 13.36 | 0.4230 | 0.2705 | 0.1809 | 0.1266 | 0.6572 | 0.8306 | 0.6162 | 0.8049 |

Table 1: The automatic results of PLATO+GOLD and PLATO+ACK-DEF on the four seen test sets.
| test set | Model | Distinct-1 | Distinct-2 | Len. | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | GRE | AVG | EXT | COH |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| normal | PLATO+GOLD | **0.0503** | **0.2422** | 12.43 | 0.3516 | 0.2331 | 0.1582 | 0.1090 | **0.6988** | **0.8568** | 0.6306 | 0.8094 |
| normal | PLATO+ACK-DEF | 0.0467 | 0.2311 | **13.14** | 0.3463 | 0.2281 | 0.1536 | 0.1049 | 0.6968 | 0.8541 | 0.6338 | **0.8105** |
| ak-less | PLATO+GOLD | **0.0966** | **0.3917** | 13.39 | 0.3871 | 0.2565 | 0.1724 | 0.1164 | 0.7143 | 0.8600 | 0.6122 | 0.7836 |
| ak-less | PLATO+ACK-DEF | 0.0623 | 0.2664 | 19.18 | 0.4443 | 0.2907 | 0.1936 | 0.1301 | 0.7232 | 0.8663 | 0.6194 | **0.8026** |
| ak-more | PLATO+GOLD | **0.1064** | **0.4440** | 12.71 | 0.1652 | 0.1046 | 0.0668 | 0.0426 | 0.6888 | 0.8538 | 0.5980 | 0.7797 |
| ak-more | PLATO+ACK-DEF | 0.0561 | 0.2400 | 21.82 | 0.4331 | 0.2968 | 0.2091 | 0.1511 | 0.7697 | 0.9114 | 0.7037 | **0.8197** |
| ck | PLATO+GOLD | **0.0813** | **0.3324** | 13.24 | 0.3011 | 0.1809 | 0.1100 | 0.0669 | 0.5854 | 0.7676 | 0.5479 | 0.7794 |
| ck | PLATO+ACK-DEF | 0.0465 | 0.1490 | 13.52 | 0.4329 | 0.2775 | 0.1861 | 0.1307 | 0.6612 | 0.8334 | 0.6215 | **0.8145** |

Table 2: The automatic results of PLATO+GOLD and PLATO+ACK-DEF on the four test sets with unseen knowledge.
| test set | Model | Recall | Precision | F1 | avg. Dec. |
|---|---|---|---|---|---|
| normal | PLATO+GOLD | 0.3607 | 0.7009 | 0.4546 | – |
| ak-less | PLATO+GOLD | 0.2883 | 0.5585 | 0.3618 | ∇ 0.1026 |
| ak-more | PLATO+GOLD | 0.1752 | 0.3632 | 0.2228 | ∇ **0.2517** |
| ck | PLATO+GOLD | 0.3193 | 0.6133 | 0.4003 | ∇ 0.0611 |
| normal | PLATO+ACK-DEF | 0.3695 | 0.6538 | 0.4520 | – |
| ak-less | PLATO+ACK-DEF | 0.3251 | 0.5636 | 0.3927 | ∇ 0.0647 |
| ak-more | PLATO+ACK-DEF | 0.2335 | 0.3983 | 0.2775 | ∇ 0.1887 |
| ck | PLATO+ACK-DEF | 0.1065 | 0.2041 | 0.1337 | ∇ **0.3437** |

Table 3: The knowledge correlation results of PLATO+GOLD and PLATO+ACK-DEF on the four test sets with seen knowledge.

Even if the knowledge information only changes by 10% to 20%, the performance of the model declines significantly, especially on the consistency metrics (i.e., BLEU, GRE, AVG and EXT). (3) PLATO+GOLD achieves better Distinct scores but weaker BLEU and embedding-based scores, which means that PLATO+GOLD readily generates responses that are very different from the ground-truth responses, that is, hallucinations.
Table 3 and Table 4 report the knowledge correlation result of PLATO+GOLD and PLATO+ACKDEF on four test sets and four test unseen sets, respectively. From these table, we can observe that the performance of PLATO+GOLD is reduced when the given knowledge changed, which illustrates the danger that the model generate responses based on incorrect knowledge. In addition to the above findings, we also observed that the recall, precision and f1 scores of PLATO+ACK-DEF are better than PLATO+GOLD on ak-less and ak-more sets, which demonstrates that using ACK-DEF effectively enhance the model's capability for the similar knowledge information. Moreover, the result of PLATO+ACK-DEF on the ck set is significantly reduced, which shows that the model distinguishes the wrong knowledge constructed with antonyms and gives an appropriate response with-
| test set | Recall | Precision | F1 | avg. Dec. |
|------------|----------|-------------|--------|-------------|
| normal | 0.3732 | 0.7442 | 0.4736 | - |
| ak-less | 0.2728 | 0.5475 | 0.3452 | ∇ 0.1418 |
| ak-more | 0.1665 | 0.3627 | 0.2152 | ∇ 0.2822 |
| ck | 0.3028 | 0.6068 | 0.3830 | ∇ 0.0995 |
| normal | 0.3655 | 0.6882 | 0.4535 | - |
| ak-less | 0.2938 | 0.5348 | 0.3579 | ∇ 0.1069 |
| ak-more | 0.2046 | 0.3714 | 0.2481 | ∇ 0.2277 |
| ck | 0.0870 | 0.1847 | 0.1116 | ∇ 0.3747 |
| test set | w. GOLD (%) | w. ACK-DEF (%) | kappa |
|------------|---------------|------------------|---------|
| normal | 13.00 | 14.00 | 0.481 |
| ak-less | 23.67 | 17.33 | 0.513 |
| ak-more | 33.67 | 24.33 | 0.479 |
| ck | 21.67 | 5.67 | 0.597 |
| total | 23.00 | 15.33 | 0.552 |
out knowledge (see Table 1 and Table 2 for the effect). These results are inline with our exception that incorporating noised knowledge dialogue samples in training stages can smooth the polarized optimization objective, and mitigate the hallucination problem.
According to the results of test seen sets and unseen sets), we can notice that the PLATO+ACKDEF achieves a good performance on groundtruth seen knowledge and a weak performance on ground-truth unseen knowledge. This illustrates that the PLATO+ACK-DEF may doubt the authenticity of unseen given knowledge (even if the knowledge is the ground-truth), and will not fully use it to generate responses. This may alleviate the hallucination, and we believe it is caused by (1)
the Augmentative knowledge dialogue introduce similar knowledge to improve the generalization of the model; (2) the Contrastive knowledge dialogue introduce knowledge independent responses, which tell the model to generate responses without knowledge; (3) the ACK-DEF smooths the polarized optimization, which ensure the model not to directly use the given knowledge.
## 3.4 Human Evaluation
To further evaluate the ability of our ACK-DEF to reduce the hallucination problem, we randomly select 400 samples from the four test sets and hire three annotators to conduct human evaluations by assessing whether the responses generated by PLATO+GOLD and PLATO+ACK-DEF contain hallucinations. Table 5 reports the results of the human evaluation, from which we can notice that PLATO+ACK-DEF generates fewer hallucinations than PLATO+GOLD. This shows the effectiveness of our ACK-DEF.
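The kappa column of Table 5 reports inter-annotator agreement over the three annotators; this excerpt does not state which kappa variant is used. Purely as an illustration, the snippet below computes Fleiss' kappa (a common choice for three raters) with statsmodels, together with a majority-vote hallucination rate; the 0/1 rating layout is our own assumption.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# rows = generated responses, columns = annotators,
# entry = 1 if the annotator judged the response to be hallucinated, else 0
ratings = np.array([
    [0, 0, 0],
    [1, 1, 0],
    [1, 1, 1],
    [0, 1, 0],
])

table, _ = aggregate_raters(ratings)            # per-response category counts
kappa = fleiss_kappa(table, method="fleiss")
print(f"Fleiss' kappa = {kappa:.3f}")

# Majority vote over the three annotators gives a per-set hallucination rate
# of the kind reported in the w. GOLD / w. ACK-DEF columns.
hallucination_rate = (ratings.sum(axis=1) >= 2).mean()
print(f"hallucination rate = {hallucination_rate:.2%}")
```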
## 3.5 Case Study
Table 6 shows a generated case of PLATO+GOLD (GOLD) and PLATO+ACK-DEF (ACK-DEF) based on different knowledge information. We can observe that GOLD is convinced of the given knowledge, regardless of whether the knowledge
| Query | They really are super cool looking dogs they are so amazing |
|---|---|
| Normal Knowledge | golden retriever are well suited to residency in suburban or country environments. |
| GOLD | I love golden retrievers! They are well suited to residency in suburban or country. |
| ACK-DEF | They are! They are well suited to suburban or country environments. |
| AK-Less Knowledge | golden retriever are good suited to residency in suburban operating elbow room land environments. |
| GOLD | Golden retriever are great suited to residency in suburban operating elbow |
| ACK-DEF | Golden retriever are great suited to suburban environments. |
| AK-More Knowledge | gild retriever make up considerably suit to residency in suburban surgery commonwealth environments. |
| GOLD | gild retrievers make up considerably suit to residency in suburban areas. |
| ACK-DEF | Not very clear, maybe they are! They can be found in suburban areas. If it is wrong, please tell me. |
| CK Knowledge | golden retriever unequal disadvantageously suited to residency in suburban or country environments. |
| GOLD | They sure are! I have a golden retriever who is unequally suited to residency in suburban environments. |
| ACK-DEF | That sounds pretty good. |
is appropriate or not, and it more easily copies the knowledge information into its responses. Even though GOLD has seen the knowledge topic, it cannot remember the knowledge in its parameters. On the contrary, ACK-DEF shows good resistance to incompletely correct knowledge.
## 4 Conclusion
This paper focuses on the hallucinations caused by the polarized optimization objective in knowledge-grounded dialogue generation (KGDG) and proposes an augmentative and contrastive knowledge dialogue expansion framework (ACK-DEF) to mitigate them. The optimization objective of KGDG is to train the model to generate a proper response with or without knowledge, which inevitably weakens the model's ability on unrecognized knowledge and leads to hallucinations. Therefore, ACK-DEF constructs multiple levels of knowledge-dialogue samples to soften the optimization objective of KGDG. Extensive experimental results show the superior performance of our method on dialogue metrics and knowledge correlations.
## Limitations
Our limitations are as follows:

- **Data Scale**: This paper only employs the Wizard of Wikipedia dataset, a small-scale and well-established knowledge conversation dataset, and lacks validation on large-scale datasets.

- **Backbones**: This paper lacks an evaluation of other knowledge dialogue models on the proposed method. We have two reasons to employ PLATO. First, PLATO can better handle the one-to-many phenomenon, which is suitable for learning our expansion samples. Second, PLATO is a pre-trained dialogue model, and its performance on the knowledge dialogue generation task has been proven. We will evaluate the performance of other knowledge dialogue models on our method in future work.

- **Knowledge Expansion Methods**: This paper only uses synonyms and antonyms to construct the noised knowledge and lacks a comparison with other data augmentation methods. Indeed, we use two token-level data augmentation methods (synonym and antonym augmentation) to prove our statements on the hallucination problem in the knowledge-grounded dialogue generation task. Based on this study, we believe that incorporating other data augmentation methods will also mitigate hallucinations.
- **Manual Prompts and Responses**: This paper designed five prefix prompts, four post prompts and nineteen euphemistic responses. For the *AK-More* method, we simply randomly choose one prefix prompt and one post prompt and concatenate them with the ground-truth response, which leads to some irregular responses. As for the CK method, we randomly select one euphemistic response for the incorrect knowledge. However, we found that the response may not be coherent with the query. We will design smoother expansion methods to construct more human-like training samples in future work.
## Ethics Statement
We acknowledge and ensure that our study is compatible with the provided Code of Ethics.
Knowledge-grounded open-domain dialogue generation is crucial for building a knowledgeable dialogue system, which has long been a goal of the natural language processing field. All our experiments are conducted on publicly available datasets to avoid ethical concerns. All terms for using these datasets are strictly followed in our study. There are no direct ethical concerns in our research.
## Acknowledgments
We would like to thank the anonymous reviewers for their constructive comments. This research is supported by Beijing Natural Science Foundation (No.4222037 and L181010) and BIT Research and Innovation Promoting Project (Grant No.2022YCXY021). Kan Li is the corresponding author.
| Prefix Prompts | Post Prompts |
|------------------|----------------|
| I was thinking that perhaps | Maybe i am wrong. |
| I am not sure, maybe that | If I am wrong, please correct me. |
| Not very clear, maybe | If I am wrong, please forgive me. |
| Not very clear, perhaps | If it is wrong, please tell me. |
| I was thinking that maybe | |

Table 7: The designed prefix and post prompts.
| Euphemistic Responses |
|------------------------|
| Interesting, do you know that? That sounds pretty good. Are there any way to visit? Oh, I had not heard. Hmm, I have never heard of that. What is that one about? I have never heard. Can you tell me more about it? Oh, wow, that is remarkable. I have never played those, are they fun? Can I ask you about it? Please tell me more about that. Can you tell me more about that? I have never had that. Anything else you can tell me? That's really interesting! But I have never heard of that. I literally know nothing about that! I have no idea about that. I have not heard that one. I will have to check it out. Huh, maybe I will need to check that out then. Oh, I misunderstood then. Oh, i do not know about that. Wow, that's a lot! I haven't heard of those. |
Table 8: The designed euphemistic responses.
## A Prefix And Post Prompts
We manually design five prefix prompts and four post prompts, which are shown in Table 7. We discuss the prefixes and posts below.

We designed the prefixes and posts based on the WoW dataset and our daily conversation habits. In the WoW dataset, one role is "0_Wizard" and the other is "1_Apprentice". We noticed that the 1_Apprentice often produces sentences such as "*correct me if I am wrong . . .*", which also appear frequently in daily conversation. Taking inspiration from this, we manually designed the prefixes and posts.

Moreover, since PLATO is pre-trained on conversation datasets, these prefixes may introduce the prior knowledge that the model learned during the pre-training process.

In fact, we acknowledge the weakness of our manual prefixes and posts, i.e., directly concatenating prefixes, responses, and posts does not fit all contexts. Therefore, we are exploring a new way of constructing replies, such as passing the designed prefix, response, post, and context into a large language model to rewrite an appropriate response.

We believe that better prefixes and posts will lead to more benefits in solving the hallucination problem.
## B Euphemistic Responses
We manually design nineteen euphemistic responses, which are shown in Table 8.
## C Discussion About The Boundary Between AK-Less And AK-More
Below we provide an example in our dataset:
- Ground-truth Knowledge: laziness | tesis ("thesis") is a 1996 spanish thriller film.

- AK-Less Knowledge: acedia | tesis ("thesis") is a 1996 spanish thriller film.

- AK-More Knowledge: laziness | tesis ("thesis") personate a 1996 spanish thriller picture show.
It can be noted that the more synonyms are introduced into a sentence, the more its semantics diverge from the original. Therefore, we suppose that replacing at least 30% of the words at once makes a large difference in sentence semantics, and we use this threshold as the boundary between ak-less and ak-more.
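To make the AK-Less / AK-More boundary concrete, the sketch below builds noised knowledge by WordNet-based synonym replacement and labels the result by the fraction of replaced words, using the 30% threshold discussed above. The helper names and the use of NLTK WordNet are our own assumptions for illustration, not the authors' released implementation.

```python
import random
from nltk.corpus import wordnet  # requires: nltk.download("wordnet")

def synonym_of(word: str):
    """Return one WordNet synonym different from the word, or None."""
    for syn in wordnet.synsets(word):
        for lemma in syn.lemmas():
            cand = lemma.name().replace("_", " ")
            if cand.lower() != word.lower():
                return cand
    return None

def make_noised_knowledge(knowledge: str, replace_ratio: float, seed: int = 0):
    """Replace roughly `replace_ratio` of the words with WordNet synonyms and
    label the result as AK-Less (<30% replaced) or AK-More (>=30% replaced)."""
    random.seed(seed)
    tokens = knowledge.split()
    n_target = max(1, int(round(replace_ratio * len(tokens))))
    positions = random.sample(range(len(tokens)), k=min(n_target, len(tokens)))

    replaced = 0
    for pos in positions:
        cand = synonym_of(tokens[pos])
        if cand is not None:
            tokens[pos] = cand
            replaced += 1

    label = "AK-More" if replaced / len(tokens) >= 0.3 else "AK-Less"
    return " ".join(tokens), label

noised, label = make_noised_knowledge(
    'tesis ("thesis") is a 1996 spanish thriller film.', replace_ratio=0.2)
print(label, "->", noised)
```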
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
We provide a section of Limitations after the Conclusion and before the Ethics Statement
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Not applicable. Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We use a publicly well-established dataset.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✗ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C ✓ **Did You Run Computational Experiments?** 3
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
We use the released code and checkpoints. We cite the source of our model.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Not applicable. Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Not applicable. Left blank.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
3
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** 3
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
2 D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
li-etal-2023-autoconv | {A}uto{C}onv: Automatically Generating Information-seeking Conversations with Large Language Models | https://aclanthology.org/2023.acl-short.149 | Information-seeking conversation, which aims to help users gather information through conversation, has achieved great progress in recent years. However, the research is still stymied by the scarcity of training data. To alleviate this problem, we propose AutoConv for synthetic conversation generation, which takes advantage of the few-shot learning ability and generation capacity of large language models (LLM). Specifically, we formulate the conversation generation problem as a language modeling task, then finetune an LLM with a few human conversations to capture the characteristics of the information-seeking process and use it for generating synthetic conversations with high quality. Experimental results on two frequently-used datasets verify that AutoConv has substantial improvements over strong baselines and alleviates the dependence on human annotation. In addition, we also provide several analysis studies to promote future research. |
## Autoconv: Automatically Generating Information-Seeking Conversations With Large Language Models
Siheng Li1†∗, Cheng Yang1†, Yichun Yin2, Xinyu Zhu1, Zesen Cheng3, Lifeng Shang2, Xin Jiang2, Qun Liu2, Yujiu Yang1‡

1Shenzhen International Graduate School, Tsinghua University
2Huawei Noah's Ark Lab, 3Peking University

{lisiheng21, yangc21}@mails.tsinghua.edu.cn
{yinyichun, shang.lifeng, jiang.xin, qun.liu}@huawei.com
[email protected]
## Abstract
Information-seeking conversation, which aims to help users gather information through conversation, has achieved great progress in recent years. However, the research is still stymied by the scarcity of training data. To alleviate this problem, we propose AutoConv for synthetic conversation generation, which takes advantage of the few-shot learning ability and generation capacity of large language models
(LLM). Specifically, we formulate the conversation generation problem as a language modeling task, then finetune an LLM with a few human conversations to capture the characteristics of the information-seeking process and use it for generating synthetic conversations with high quality. Experimental results on two frequently-used datasets verify that AutoConv has substantial improvements over strong baselines and alleviates the dependence on human annotation. In addition, we also provide several analysis studies to promote future research.
## 1 Introduction
In information-seeking conversations, users repeatedly ask questions based on their interests, and the dialogue system provides answers to fulfill their information needs (Stede and Schlangen, 2004; Choi et al., 2018; Reddy et al., 2019). This scenario is important for addressing real-world open-ended questions, which require in-depth discussion to explore (Dai et al., 2022), e.g., *How to learn more efficiently*? Though great progress has been achieved in recent years, most existing research depends on abundant human annotation, which can be highly costly and limited in knowledge coverage.
A promising way to alleviate this problem is data augmentation (Chen et al., 2021). Traditional methods, including token-level manipulation (Kobayashi, 2018; Wei and Zou, 2019)
| Method | DG | Data Needs |
|------------------------------------------|-----------------|-------|
| EDA (Wei and Zou, 2019) | ✗ | - |
| Back-Translation (Sennrich et al., 2016) | ✗ | - |
| SeemSeek (Kim et al., 2022) | ✔ | Large |
| Dialog Inpainting (Dai et al., 2022) | ✔ | Large |
| AutoConv (Ours) | ✔ | Few |
Table 1: The differences between AutoConv and others.
DG represents whether the augmentation is document grounded, and Data Needs denotes the scale of human conversations used for augmentation.
and sentence-level paraphrasing (Sennrich et al., 2016), improve the linguistic diversity of training data. However, they cannot create conversations grounded on new documents, which are indispensable for dealing with out-of-domain scenarios. Another line of research focuses on simulation-based methods (Wu et al., 2021; Kim et al., 2022). Specifically, these methods can iteratively generate conversations grounded on new documents based on a span extractor and an utterance generator. Nevertheless, training both the extractor and the generator still requires abundant human dialogues. Besides the above approaches, Dai et al. (2022) propose Dialog Inpainting, which creates information-seeking dialogues by inserting utterances between neighboring sentences in documents. One potential risk is the gap between the structure of documents and that of conversations: documents are more tightly structured, while real-world conversations are more open-ended.
To alleviate the above issues, we propose a simple yet effective method, **AutoConv**, for Automatically generating information-seeking Conversations, which takes advantage of the few-shot learning ability and generation capacity of large language models (LLM) (Brown et al., 2020). Specifically, we formulate conversation generation as a language modeling task and utilize an LLM
for generating synthetic conversations grounded on external documents. Surprisingly, finetuning with a few human dialogues can help LLM capture the characteristics of the information-seeking process
[Figure 1: Doc → Large Language Model → Usr-1, Sys-1, Usr-2, Sys-2, …]
Figure 1: The generation process of AutoConv. We use nucleus sampling for generating user questions and greedy search for generating system answers.
(e.g., grounding, question answering) and generate high-quality synthetic conversations. Then, we can train a small task model with these dialogues.
The differences between AutoConv and others are shown in Table 1.
We conduct comprehensive experiments on two frequently-used datasets, QuAC (Choi et al., 2018) and CoQA (Reddy et al., 2019), in the low-resource setting, where only dozens of human dialogues are available. The results show that AutoConv yields substantial improvements over several strong baselines. When scaling up the synthetic dialogues, AutoConv achieves an improvement of up to 5.06 F1 over direct finetuning, and thus largely reduces the annotation effort. In addition, we find that the small task model trained with synthetic dialogues can even surpass the finetuned LLM with only 1.7% of its parameters. Moreover, we also investigate the impact of the decoding strategy and scaling laws for AutoConv.
## 2 Method

## 2.1 Task Formulation
Our goal is automatically generating informationseeking conversations. Specifically, each conversation is grounded on a document d and consists of a series of user questions and system answers.
## 2.2 Conversation Generation
Training. We formulate conversation generation as a language modeling task and finetune an LLM with a few human dialogues (e.g., 50 from QuAC (Choi et al., 2018)) to capture the characteristics of information-seeking conversations (e.g., grounding, question answering). The objective is the negative log-likelihood of each utterance:
$${\mathcal{L}}=-\sum_{t=1}^{T}\sum_{l=1}^{L}\log P(u_{l}^{t}|u_{<l}^{t},h_{<t},d),$$
where u represents a user question or a system answer, h is the dialogue history, L and T are the number of tokens and turns respectively.
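As a rough illustration of this objective, the snippet below computes the per-utterance negative log-likelihood with a Hugging Face causal LM by masking the context tokens out of the labels. The serialization of the document, history and utterance (separators, speaker tags) is not specified here, so this is only one plausible realization, shown with a small OPT checkpoint instead of the OPT-13B used in the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# A small OPT checkpoint is used here purely for illustration.
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

def utterance_nll(document: str, history: list, utterance: str) -> torch.Tensor:
    """Negative log-likelihood of one utterance conditioned on the document
    and the dialogue history; context tokens are masked with -100 so only
    the utterance tokens contribute to the loss."""
    context = document + "\n" + "\n".join(history) + "\n"
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    utt_ids = tokenizer(utterance, add_special_tokens=False, return_tensors="pt").input_ids

    input_ids = torch.cat([ctx_ids, utt_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : ctx_ids.size(1)] = -100  # do not score the context tokens

    return model(input_ids=input_ids, labels=labels).loss

# Summing this loss over every utterance of a few human dialogues and
# back-propagating gives the finetuning objective above.
loss = utterance_nll(
    "Ciara released her second studio album in 2006.",
    ["Usr: What was the Evolution?"],
    "Sys: Her second studio album, released on December 5, 2006.",
)
```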
Generating. Based on the finetuned LLM, we can generate synthetic dialogues with unlabeled documents, as in Figure 1. In information-seeking scenarios, user questions are typically open-ended.
Thus we choose nucleus sampling (Holtzman et al.,
2020) for generating user questions, which has shown great performance in various open-ended generation tasks (Su et al., 2022). However, when applying a sampling decoding strategy for system answer generation, we find it results in the "hallucination" problem (Shuster et al., 2021), where the generation is plausible but factually incorrect based on the document. To this end, we utilize greedy search for answer generation. Neural language models often generate the same sentences repetitively (Xu et al., 2022). To alleviate this problem, we first compute the diversity score of each synthetic dialogue as in Su et al. (2022), which considers the repetition at different n-gram levels.
Then, we filter out dialogues based on this score.
After that, a two-stage training strategy is adopted (Xie et al., 2020b) for training a small task model. Specifically, we first pre-train it on the synthetic dialogues, then finetune it on the human dialogues used for finetuning the LLM. More training details are given in Appendix B.
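A minimal sketch of this generation-and-filtering loop is given below: user turns are decoded with nucleus sampling, system turns with greedy search, and each finished dialogue is scored by the product of distinct n-gram ratios. The prompt format, speaker tags, the small OPT checkpoint and the filtering threshold are our assumptions; the paper's actual prompts and exact filtering criterion may differ.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# A small OPT checkpoint stands in for the finetuned OPT-13B used in the paper.
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

def next_utterance(prompt: str, is_user_turn: bool, max_new_tokens: int = 64) -> str:
    """Nucleus sampling for open-ended user questions,
    greedy search for document-grounded system answers."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    gen_kwargs = dict(max_new_tokens=max_new_tokens, pad_token_id=tokenizer.eos_token_id)
    if is_user_turn:
        gen_kwargs.update(do_sample=True, top_p=0.9)
    out = model.generate(ids, **gen_kwargs)
    return tokenizer.decode(out[0, ids.size(1):], skip_special_tokens=True).strip()

def generate_dialogue(document: str, num_turns: int = 14) -> list:
    """Alternate user / system turns grounded on one unlabeled document."""
    utterances = []
    for t in range(num_turns):
        speaker = "Usr:" if t % 2 == 0 else "Sys:"
        prompt = document + "\n" + "\n".join(utterances) + "\n" + speaker
        utterances.append(speaker + " " + next_utterance(prompt, is_user_turn=(t % 2 == 0)))
    return utterances

def diversity_score(text: str, max_n: int = 4) -> float:
    """Product of distinct n-gram ratios (n = 2..4), following Su et al. (2022);
    dialogues with low scores are filtered out."""
    tokens = text.split()
    score = 1.0
    for n in range(2, max_n + 1):
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        if ngrams:
            score *= len(set(ngrams)) / len(ngrams)
    return score

dialogue = generate_dialogue("Ciara released her second studio album in 2006.", num_turns=4)
if diversity_score(" ".join(dialogue)) > 0.5:  # threshold is an assumption
    print("\n".join(dialogue))
```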
## 3 Experiments
We conduct experiments on QuAC (Choi et al.,
2018) and CoQA (Reddy et al., 2019), more details about them are shown in Appendix A.
## 3.1 Implementation
We focus on the low-resource setting, where human dialogues are scarce. To simulate this setting, we randomly sample a few human dialogues from the training set of QuAC or CoQA, and use them for finetuning the LLM. We use OPT-13B (Zhang et al., 2022) as the LLM and UnifiedQA-V2-base
(222M) (Khashabi et al., 2022) as the small task model. All data augmentation methods use the same training strategy and small task model. More implementation details are shown in Appendix B.
## 3.2 Comparison With Baselines
We compare AutoConv with a series of baselines, and the details of them are given in Appendix C. As
| Method | QuAC | CoQA | | |
|----------------------------------------------|------------|------------|------------|------------|
| F1 | EM | F1 | EM | |
| Prompting | | | | |
| GPT-3 Zero-shot (Brown et al., 2020) | 41.5 | - | 81.5 | - |
| GPT-3 Few-shot (Brown et al., 2020) | 44.3 | - | 85.0 | - |
| Data Augmentation (50 Human Dialogues) | | | | |
| Finetuning | 46.57±1.29 | 30.68±1.25 | 70.41±0.46 | 60.43±0.56 |
| Back-Translation (Sennrich et al., 2016) | 47.92±0.49 | 28.26±1.39 | 67.59±2.73 | 56.34±3.41 |
| EDA (Wei and Zou, 2019) | 46.04±1.28 | 28.88±2.20 | 58.89±2.08 | 47.64±2.14 |
| Utterance Manipulation (Chen and Yang, 2021) | 48.83±0.63 | 33.91±0.73 | 68.69±0.85 | 58.30±1.21 |
| Dialog Inpainting (Dai et al., 2022) | 48.33±1.24 | 32.23±1.55 | 70.25±0.93 | 59.83±0.98 |
| AutoConv | 50.48±0.94 | 34.12±0.93 | 73.87±0.85 | 63.78±1.01 |
| Human Annotation | 53.24±0.28 | 36.85±0.35 | 76.02±0.71 | 65.92±1.01 |
| Data Augmentation (100 Human Dialogues) | | | | |
| Finetuning | 48.98±1.16 | 31.98±1.09 | 72.78±0.69 | 62.41±0.85 |
| Back-Translation (Sennrich et al., 2016) | 48.41±0.96 | 28.10±2.51 | 69.18±2.82 | 57.72±3.28 |
| EDA (Wei and Zou, 2019) | 46.86±0.61 | 29.14±1.71 | 60.61±4.23 | 49.24±4.74 |
| Utterance Manipulation (Chen and Yang, 2021) | 49.07±1.06 | 31.77±1.86 | 69.23±0.21 | 59.15±0.74 |
| Dialog Inpainting (Dai et al., 2022) | 49.48±0.34 | 33.29±0.98 | 72.15±0.74 | 61.80±0.99 |
| AutoConv | 51.21±1.02 | 34.65±1.00 | 74.84±0.24 | 64.36±0.46 |
| Human Annotation | 54.22±0.90 | 37.42±2.06 | 76.35±0.51 | 65.71±0.55 |
shown in Table 2, AutoConv achieves better performance than GPT-3 prompting on QuAC with only 0.13% parameters and 50 human dialogues, but is less competitive on CoQA. We conjecture the reason stems from the intrinsic difference between the two datasets. CoQA contains more factoid questions, and the answers are named entities or short noun phrases like those in SQuAD (Rajpurkar et al., 2016). By training on large-scale text corpus from a web forum, GPT-3 might implicitly learn the format and structure of question answering (Sanh et al., 2022), and thus gets excellent performance on CoQA. On the other side, QuAC has more openended and exploratory questions as in natural conversations, and 86% questions are contextual (Choi et al., 2018). Therefore, it brings more difficulties for GPT-3 inference with few demonstrations, while our method learns better from both human dialogues and synthetic dialogues.
Compared with data augmentation methods, AutoConv achieves the best performance on both datasets and mitigates the gap between synthetic dialogues and human upper bounds. We find that the token-level augmentation method EDA
and the sentence-level augmentation method BackTranslation even hurt the performance, which is
![2_image_0.png](2_image_0.png)
similar to the observation in Chen et al. (2021).
One possible reason is that they bring too much noise. Dialog Inpainting (Dai et al., 2022) gets ordinary performance, and the reason possibly derives from the gap between the structure of natural conversations and that of the documents used for constructing synthetic dialogues.
## 3.3 Scaling Up Human Dialogues And Synthetic Dialogues
In this part, we further analyze the performance of AutoConv when scaling up the human dialogues and synthetic dialogues. As shown in Figure 2, the
| Model | #Params | #FLOPs | F1 (50) | F1 (200) |
|------------------|-----------|----------|-----------|------------|
| Finetuning (LLM) | 12.9B | 7049.3B | 53.53 | 54.85 |
| Finetuning (STM) | 222M | 60.2B | 47.97 | 50.38 |
| AutoConv (STM) | 222M | 60.2B | 52.40 | 55.44 |
performance improves when more human dialogues or synthetic dialogues are used. With 50 human dialogues, AutoConv outperforms the results of finetuning with 200 human dialogues. With 500 human dialogues, AutoConv achieves competitive performance compared with finetuning with 2000 human dialogues. These results verify the high quality of the synthetic dialogues and show that AutoConv can largely reduce the annotation effort.
## 3.4 Comparison With Finetuned Large Language Model
AutoConv is a kind of symbolic knowledge distillation (West et al., 2022), where the finetuned large language model (LLM) transfers its knowledge to the small task model (STM) by generating synthetic dialogues for the training of STM. Here, we further investigate the effectiveness of AutoConv from the aspect of knowledge distillation. As shown in Table 3, finetuned LLM has substantial improvements over finetuned STM. However, it brings large memory and computation cost. On the other side, our AutoConv not only keeps the efficiency of STM,
but also boosts the performance. Surprisingly, AutoConv even outperforms its teacher model in the 200 human dialogues setting. Similar observations are found in West et al. (2022); Ye et al. (2022),
while they focus on different tasks. We leave the analysis of this novel observation for future work.
## 3.5 Impact Of Decoding Strategy
During our preliminary experiments, we find that the decoding strategy is important for system answer generation. More precisely, we evaluate the answer generation performance of LLM with different decoding strategies on QuAC, and the results are shown in Table 4. Though nucleus sampling
(Holtzman et al., 2020) has shown great performance in various generation tasks (Su et al., 2022),
it performs less competitively than maximization-
| Decoding Strategy | F1 | Exact Match |
|----------------------------|-------|---------------|
| Nucleus Sampling (p = 0.8) | 50.77 | 32.63 |
| Nucleus Sampling (p = 0.9) | 49.88 | 31.57 |
| Greedy Search | 53.53 | 36.38 |
| Beam Search (b = 4) | 54.43 | 38.64 |
| Beam Search (b = 8) | 54.43 | 38.70 |
Table 4: The results of LLM with different decoding strategies for answer generation on QuAC, 50 human dialogues are used for finetuning the LLM.
![3_image_0.png](3_image_0.png)
based decoding strategies for answer generation.
Compared with beam search, greedy search shows competitive performance and is more efficient.
Thus we use greedy search by default in this paper.
## 3.6 Scaling Laws
We further analyze how the benefit of AutoConv is affected by the scale of LLM. As shown in Figure 3, the performance gets better with a larger model across a various number of synthetic dialogues. In addition, when the LM is small (350M) and with limited generation ability, the synthetic dialogues can even hurt the performance when the available human dialogues are scarce. Due to the limitation of computational resources, we limit our investigation to 13B parameters and leave larger models for future work.
## 3.7 Case Study
In Table 5, we present an example of our synthetic conversation for the case study. The original document describes the singer Ciara's second studio album and her acting debut. The conversation consists of seven user questions and seven system answers, covering the title and sales of the album, the duration of the tour, etc. As we can see from this

Title

2006-2007: Ciara: The Evolution and acting debut

Document

On December 5, 2006, Ciara released her second studio album, Ciara: The Evolution. According to the singer, the title of the album is "about so much more than just my personal growth - it's about the evolution of music, the evolution of dance, the evolution of fashion." The source of the album's creativity such as the sound and edge comes from Ciara in general. Ciara: The Evolution became Ciara's first and only number one album on the U.S. Billboard 200, and her second number one on the Top R&B/Hip-Hop Albums charts with sales of 338,000 in the first week . . .
Dialogue
Usr: What was the Evolution?
Sys: On December 5, 2006, Ciara released her second studio album, Ciara: The Evolution.
Usr: What was the name of the album?
Sys: The title of the album is \"about so much more than just my personal growth - it's about the evolution of music, the evolution of dance, the evolution of fashion.\"
Usr: Did the album do well?
Sys: The album became Ciara's first number one album on the U.S. Billboard 200 and second on the Top R&B/Hip-Hop Albums charts with sales of more than 338,000.
Usr: What awards did she win if any?
Sys: The album went on to be certified platinum by the RIAA in the United States, within only five weeks of its release, and has sold 1.3 million copies according to Nielsen SoundScan.
Usr: What else did she accomplish during that time?
Sys: In support of the album, Ciara went on her first headlining tour in October 2006.
Usr: How long was the tour?
Sys: The tour went to seventeen different clubs in cities throughout the United States.
Usr: Did she have success with any songs?
Sys: The lead single, \"Get Up\", which features Chamillionaire, reached number seven in the United States and gained a platinum accreditation.
Table 5: An example of the synthetic conversation generated by AutoConv, the LLM is finetuned with 50 human dialogues from QuAC (Choi et al., 2018).
example, the user questions are diverse (e.g. what, how, did, etc.) and the conversation is informative and conversational. For example, when the system mentions "tour" (the fifth system utterance), the user follows by asking "How long was the tour?".
## 3.8 Error Analysis
To further analyze the limitations of our method, we conduct an error analysis by manually investigating 50 synthetic conversations generated by AutoConv, which is finetuned with 50 human conversations from QuAC (Choi et al., 2018). In particular, we find that only 5% of the generated questions are not suitable (e.g., misspelled names). The reason stems from the open-ended nature of natural conversation: many kinds of user questions are possible in the same context. However, nearly 40% of the system answers are not perfect, and we summarize the wrong answers into four major classes:
(1) Irrelevant: 75% of them are totally irrelevant to user questions. **(2) Related but not Accurate**:
14% of them contain related knowledge from the grounded documents, but the answers are not accurate. Take an example in Table 5: the second user question asks for the name of the album, which is Ciara: The Evolution according to the document, but the LLM generates the interpretation of the album name by mistake. **(3) Missing**: 4% of them belong to the missing error, where the system answers "No Answer" although the question can actually be answered based on the document. **(4) Hallucination**: 3% of them mention hallucinated knowledge, which cannot be found in the documents. In addition, we also notice that AutoConv is more likely to generate wrong answers when grounding on longer and more complex documents.
## 4 Conclusion
In this paper, we propose a simple yet effective method, AutoConv, which formulates the conversation generation problem as a language modeling task. Then, based on a large language model and a few human dialogues, AutoConv can generate synthetic dialogues with high quality. Experimental results on both QuAC and CoQA verify the effectiveness of AutoConv, which largely reduces the human annotation effort. Furthermore, we also provide a case study and an error analysis to promote future research.
## Limitations
In this paper, we propose a method named AutoConv, which means automatically generating information-seeking conversations with large language models (LLM). Though it has achieved great performance on both QuAC (Choi et al., 2018) and CoQA (Reddy et al., 2019), there are still some limitations that should be noticed.
Limitation of LLM. In our experiments, we use OPT-13B (Zhang et al., 2022) as the LLM for generating synthetic conversations due to the limited computational resources. Larger models should be considered to further understand the potential ability of AutoConv, e.g., GPT-3 (Brown et al., 2020),
OPT-175B (Zhang et al., 2022), BLOOM-176B
(Scao et al., 2022), and GLM-130B (Zeng et al.,
2022) etc.
Limitation of Implementation. As mentioned in Section 2.2 and Appendix B, our method needs to finetune an LLM and then generate massive numbers of synthetic conversations with the finetuned LLM, which makes the implementation costly.
Limitation of Synthetic Dialogues. As shown in Table 2 and Section 3.8, there is still a gap between our synthetic dialogues and human dialogues. It is important to improve the quality of synthetic dialogues so that we can further alleviate the dependence on human annotation.
## Ethics Statement
AutoConv is based on large language models
(LLM), while LLM has some potential risks, e.g.,
social bias (Liang et al., 2021), offensive content
(Ganguli et al., 2022) etc. Fortunately, we finetune the LLM to capture the characteristics of the information-seeking process, and the generated conversations are mostly grounded on the provided documents (take an example in Table 5). Therefore, our method alleviates the potential risks of directly using LLM. According to our manual check in error analysis (Section 3.8), we do not find any harmful content in the synthetic conversations. In addition, we also encourage considering more safety methods (Xu et al., 2020; Sun et al., 2022) to guarantee the quality of synthetic conversations.
## Acknowledgements
This work was partly supported by the National Key Research and Development Program of China
(No. 2020YFB1708200) , the "Graph Neural Network Project" of Ping An Technology (Shenzhen)
Co., Ltd. and AMiner.Shenzhen SciBrain fund.
## References
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.
2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems 33:*
Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Jiaao Chen, Derek Tam, Colin Raffel, Mohit Bansal, and Diyi Yang. 2021. An empirical survey of data augmentation for limited data learning in NLP. *CoRR*,
abs/2106.07499.
Jiaao Chen and Diyi Yang. 2021. Simple conversational data augmentation for semi-supervised abstractive dialogue summarization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event
/ Punta Cana, Dominican Republic, 7-11 November, 2021, pages 6605–6616. Association for Computational Linguistics.
Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wentau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. Quac: Question answering in context.
In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018*, pages 2174–2184. Association for Computational Linguistics.
Zhuyun Dai, Arun Tejasvi Chaganty, Vincent Y. Zhao, Aida Amini, Qazi Mamunur Rashid, Mike Green, and Kelvin Guu. 2022. Dialog inpainting: Turning documents into dialogs. In *International Conference on* Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of *Proceedings* of Machine Learning Research, pages 4558–4586.
PMLR.
Deep Ganguli, Liane Lovitt, Jackson Kernion, Amanda Askell, Yuntao Bai, Saurav Kadavath, Ben Mann, Ethan Perez, Nicholas Schiefer, Kamal Ndousse, Andy Jones, Sam Bowman, Anna Chen, Tom Conerly, Nova DasSarma, Dawn Drain, Nelson Elhage, Sheer El Showk, Stanislav Fort, Zac Hatfield-Dodds, Tom Henighan, Danny Hernandez, Tristan Hume, Josh Jacobson, Scott Johnston, Shauna Kravec, Catherine Olsson, Sam Ringer, Eli Tran-Johnson,
Dario Amodei, Tom Brown, Nicholas Joseph, Sam McCandlish, Chris Olah, Jared Kaplan, and Jack Clark. 2022. Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. *CoRR*, abs/2209.07858.
Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Daniel Khashabi, Yeganeh Kordi, and Hannaneh Hajishirzi. 2022. Unifiedqa-v2: Stronger generalization via broader cross-format training. *CoRR*,
abs/2202.12359.
Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. Unifiedqa: Crossing format boundaries with a single QA system. In Findings of the Association for Computational Linguistics: EMNLP
2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of *Findings of ACL*, pages 1896–1907.
Association for Computational Linguistics.
Gangwoo Kim, Sungdong Kim, Kang Min Yoo, and Jaewoo Kang. 2022. Towards more realistic generation of information-seeking conversations. *CoRR*,
abs/2205.12609.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In *3rd International Conference on Learning Representations,*
ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Sosuke Kobayashi. 2018. Contextual augmentation:
Data augmentation by words with paradigmatic relations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, New Orleans, Louisiana, USA,
June 1-6, 2018, Volume 2 (Short Papers), pages 452–
457. Association for Computational Linguistics.
Paul Pu Liang, Chiyu Wu, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2021. Towards understanding and mitigating social biases in language models. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of *Proceedings of Machine* Learning Research, pages 6565–6576. PMLR.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21:140:1–140:67.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100, 000+ questions for machine comprehension of text. In *Proceedings* of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin,
Texas, USA, November 1-4, 2016, pages 2383–2392.
The Association for Computational Linguistics.
Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. 2020. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In *KDD '20: The 26th* ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, CA, USA, August 23-27, 2020, pages 3505–3506. ACM.
Siva Reddy, Danqi Chen, and Christopher D. Manning.
2019. Coqa: A conversational question answering challenge. *Trans. Assoc. Comput. Linguistics*, 7:249–
266.
Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal V. Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Févry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M. Rush. 2022. Multitask prompted training enables zero-shot task generalization. In *The Tenth International Conference on* Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net.
Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, Jonathan Tow, Alexander M. Rush, Stella Biderman, Albert Webson, Pawan Sasanka Ammanamanchi, Thomas Wang, Benoît Sagot, Niklas Muennighoff, Albert Villanova del Moral, Olatunji Ruwase, Rachel Bawden, Stas Bekman, Angelina McMillan-Major, Iz Beltagy, Huu Nguyen, Lucile Saulnier, Samson Tan, Pedro Ortiz Suarez, Victor Sanh, Hugo Laurençon, Yacine Jernite, Julien Launay, Margaret Mitchell, Colin Raffel, Aaron Gokaslan, Adi Simhi, Aitor Soroa, Alham Fikri Aji, Amit Alfassy, Anna Rogers, Ariel Kreisberg Nitzav, Canwen Xu, Chenghao Mou, Chris Emezue, Christopher Klamm, Colin Leong, Daniel van Strien, David Ifeoluwa Adelani, and et al. 2022. BLOOM:
A 176b-parameter open-access multilingual language model. *CoRR*, abs/2211.05100.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Linguistics.
Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela, and Jason Weston. 2021. Retrieval augmentation reduces hallucination in conversation. In *Findings* of the Association for Computational Linguistics:
EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 16-20 November, 2021, pages 3784–
3803. Association for Computational Linguistics.
Manfred Stede and David Schlangen. 2004.
Information-seeking chat: Dialogues driven by topic-structure. In Proceedings of Catalog (the 8th workshop on the semantics and pragmatics of dialogue; SemDial04). Citeseer.
Yixuan Su, Tian Lan, Yan Wang, Dani Yogatama, Lingpeng Kong, and Nigel Collier. 2022. A contrastive framework for neural text generation. *CoRR*,
abs/2202.06417.
Hao Sun, Guangxuan Xu, Jiawen Deng, Jiale Cheng, Chujie Zheng, Hao Zhou, Nanyun Peng, Xiaoyan Zhu, and Minlie Huang. 2022. On the safety of conversational models: Taxonomy, dataset, and benchmark. In *Findings of the Association for Computational Linguistics: ACL 2022, Dublin, Ireland, May* 22-27, 2022, pages 3906–3923. Association for Computational Linguistics.
Jason W. Wei and Kai Zou. 2019. EDA: easy data augmentation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 6381–6387. Association for Computational Linguistics.
Peter West, Chandra Bhagavatula, Jack Hessel, Jena D.
Hwang, Liwei Jiang, Ronan Le Bras, Ximing Lu, Sean Welleck, and Yejin Choi. 2022. Symbolic knowledge distillation: from general language models to commonsense models. In *Proceedings of the* 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 4602–
4625. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers:
State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2020 - Demos, Online, November* 16-20, 2020, pages 38–45. Association for Computational Linguistics.
Qingyang Wu, Song Feng, Derek Chen, Sachindra Joshi, Luis A. Lastras, and Zhou Yu. 2021. DG2: data augmentation through document grounded dialogue generation. *CoRR*, abs/2112.08342.
Qizhe Xie, Zihang Dai, Eduard H. Hovy, Thang Luong, and Quoc Le. 2020a. Unsupervised data augmentation for consistency training. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Qizhe Xie, Minh-Thang Luong, Eduard H. Hovy, and Quoc V. Le. 2020b. Self-training with noisy student improves imagenet classification. In *2020 IEEE/CVF*
Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 10684–10695. Computer Vision Foundation / IEEE.
Jin Xu, Xiaojiang Liu, Jianhao Yan, Deng Cai, Huayang Li, and Jian Li. 2022. Learning to break the loop:
Analyzing and mitigating repetitions for neural text generation. *CoRR*, abs/2206.02369.
Jing Xu, Da Ju, Margaret Li, Y-Lan Boureau, Jason Weston, and Emily Dinan. 2020. Recipes for safety in open-domain chatbots. *CoRR*, abs/2010.07079.
Jiacheng Ye, Jiahui Gao, Qintong Li, Hang Xu, Jiangtao Feng, Zhiyong Wu, Tao Yu, and Lingpeng Kong. 2022. Zerogen: Efficient zero-shot learning via dataset generation. *CoRR*, abs/2202.07922.
Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, Weng Lam Tam, Zixuan Ma, Yufei Xue, Jidong Zhai, Wenguang Chen, Peng Zhang, Yuxiao Dong, and Jie Tang. 2022. GLM130B: an open bilingual pre-trained model. *CoRR*,
abs/2210.02414.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona T. Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022.
OPT: open pre-trained transformer language models. *CoRR*, abs/2205.01068.
Chujie Zheng, Sahand Sabour, Jiaxin Wen, and Minlie Huang. 2022. Augesc: Large-scale data augmentation for emotional support conversation with pretrained language models. *CoRR*, abs/2202.13047.
## A Datasets
QuAC. QuAC (Choi et al., 2018) is a leading conversational question answering dataset, consists of 14K information-seeking dialogues. Different from the factoid questions in most existing QA datasets, the questions in QuAC are more open-ended and exploratory. In addition, 86% of questions are contextual, and the model needs to understand the dialogue context to resolve coreference. As the test set is only available in the QuAC challenge2, we evaluate the performance on the development set.
CoQA. CoQA (Reddy et al., 2019) consists of 127K conversational QA pairs across seven domains. Different from QuAC, CoQA focuses more on factoid questions, and the answers are mostly named entities or short phrases as in SQuAD (Rajpurkar et al., 2016). The test set of CoQA is only available in the CoQA challenge (https://stanfordnlp.github.io/coqa/), therefore we evaluate the performance on the development set.
## B Implementation Details
General Setting. All experiments are based on Transformers (Wolf et al., 2020; https://huggingface.co/docs/transformers/index), DeepSpeed (Rasley et al., 2020; https://github.com/microsoft/DeepSpeed) and PyTorch Lightning (https://github.com/Lightning-AI/lightning). We use UnifiedQA-V2-base (Khashabi et al., 2020, 2022; https://huggingface.co/allenai/unifiedqa-v2-t5-base-1363200) as the small task model, which is based on the T5 architecture with 222M parameters and pre-trained on many QA tasks (the tasks in our experiments are not included). The training of the small task model follows the original paper (Khashabi et al., 2020) in a Text-to-Text framework (Raffel et al., 2020). The input is Dialogue History \n Document and the output is System Answer.
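To make the input format concrete, the following is a minimal sketch of how the UnifiedQA-V2-base checkpoint referenced above could be queried with Hugging Face Transformers. The dialogue and document strings are invented for illustration, and whether the separator is the literal characters "\n" or a newline is an assumption on our side.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# UnifiedQA-V2 base checkpoint (T5 architecture, 222M parameters).
MODEL_NAME = "allenai/unifiedqa-v2-t5-base-1363200"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)

# Illustrative input in the "Dialogue History \n Document" format described above.
dialogue_history = "Who founded the company? Ada Lovelace founded it. When was it founded?"
document = "The company was founded by Ada Lovelace in 1842 in London."
source = f"{dialogue_history} \\n {document}"

inputs = tokenizer(source, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))  # short answer span, e.g. "in 1842"
```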
For the training hyperparameters, we set the learning rate to 3e-4 and the batch size to 32, and use the Adam optimizer (Kingma and Ba, 2015) with a warmup learning rate schedule and a warmup ratio of 0.1. When comparing with baseline methods as in Section 3.2, all methods use the same small task model, the same two-stage training strategy (Xie et al., 2020b; Chen and Yang, 2021), the same human dialogues and the same number of synthetic dialogues for fairness (5 times the number of human dialogues). For the 50 human dialogues setting, we train each model for 1K gradient steps in the pre-training stage and 200 gradient steps in the finetuning stage. For the 100 human dialogues setting, the steps are 2K and 400 respectively. When scaling up the number of synthetic dialogues as in Section 3.3 and Section 3.6, the numbers of pre-training steps scale up, which are 2K, 4K, 8K, 20K and 40K for 1K, 2K, 4K, 10K and 20K synthetic dialogues respectively, and the finetuning steps are 200, 400, 800 and 2K for 50, 100, 200 and 500 human dialogues respectively. For all experiments, we randomly sample 20% of the dialogues as the validation set and use the rest as the training set. The model is validated every epoch, and we choose the checkpoint with the best F1 score on the validation set for evaluation.
Ours. We use OPT-13B (Zhang et al., 2022) as the LLM for generating synthetic dialogues, which is a decoder-only pre-trained language model with 13B parameters. The learning rate and batch size are set to 1e-5 and 32. The Adam optimizer (Kingma and Ba, 2015) with a warmup learning rate schedule is used for optimization, with a warmup ratio of 0.1. The maximum number of LLM training steps is 200, 400, 800 and 2K for 50, 100, 200 and 500 human dialogues respectively. According to the performance of AutoConv on the validation set of human dialogues, we find that training the LLM for 4 epochs is most suitable. We randomly sample 5K documents from the training sets of QuAC and CoQA, and generate 8 synthetic dialogues for each document. The number of turns is set to 14 for QuAC and 30 for CoQA. Then, we filter a quarter of the synthetic dialogues based on the diversity score of each dialogue as in Su et al. (2022), which takes into account the repetition at different n-gram levels. It takes around 5 hours to train the LLM and 18 hours to generate the synthetic dialogues with 8 Tesla V100 32GB GPUs.
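The repetition-based filtering can be sketched as follows. This is only one plausible reading of a diversity score in the spirit of Su et al. (2022); the exact formulation and the n-gram orders used in the paper may differ.

```python
def ngram_diversity(tokens, n):
    """Fraction of unique n-grams among all n-grams (1.0 means no repetition)."""
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / len(ngrams) if ngrams else 1.0

def dialogue_diversity(dialogue_text, max_n=4):
    """Average distinct-n ratio over 2- to 4-grams of the whole dialogue."""
    tokens = dialogue_text.lower().split()
    return sum(ngram_diversity(tokens, n) for n in range(2, max_n + 1)) / (max_n - 1)

def filter_low_diversity(dialogues, drop_fraction=0.25):
    """Keep the most diverse synthetic dialogues, dropping the bottom quarter."""
    ranked = sorted(dialogues, key=dialogue_diversity, reverse=True)
    return ranked[: int(len(ranked) * (1 - drop_fraction))]
```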
Evaluation. To assess the quality of the synthetic conversations, we evaluate the conversational question answering performance of the small task model, which is trained on both synthetic conversations and a few human conversations. The metrics are Exact Match and word-level F1 as in Choi et al. (2018).
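For reference, the two metrics can be computed roughly as below. The official QuAC/CoQA scripts additionally handle answer normalisation details and multiple references, so this is only a simplified sketch of the core computation.

```python
import re
from collections import Counter

def normalize(text):
    """Lower-case, strip punctuation and collapse whitespace before comparison."""
    return " ".join(re.sub(r"[^\w\s]", " ", text.lower()).split())

def exact_match(prediction, reference):
    return float(normalize(prediction) == normalize(reference))

def word_f1(prediction, reference):
    pred, ref = normalize(prediction).split(), normalize(reference).split()
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```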
## C Baselines
Prompting. Prompting is a promising method for many NLP tasks. It aims to elicit the abilities large language models learned during pre-training via text demonstrations (e.g., a task instruction and few-shot examples). In Table 2, we report the results from Brown et al. (2020).
Finetuning. The small task model is trained with only the human annotations.
EDA. Easy Data Augmentation (EDA) is a simple but effective method for text classification (Wei and Zou, 2019). Given an input text, including both the knowledge paragraph and the dialogue history in our experiments, four operations are applied to create new examples: synonym replacement, random insertion, random swap and random deletion. We use their open-source code for our implementation.
Back-Translation. Back-translation is one of the most popular augmentation methods for NLP tasks (Sennrich et al., 2016; Xie et al., 2020a). Specifically, we first translate the input text to a target language and then translate it back to the source language, yielding a paraphrased example. To get various augmentations for each sample, we use five target languages: Chinese, French, German, Arabic, and Korean. Huawei Translate is used for the translation process.
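A minimal sketch of the round-trip procedure is given below. The `translate` callable stands in for whichever translation service is used; the paper relies on a commercial API whose interface we do not reproduce here, so the signature is purely illustrative.

```python
def back_translate(text, translate, pivot_langs=("zh", "fr", "de", "ar", "ko"), source_lang="en"):
    """Round-trip `text` through each pivot language to obtain paraphrased augmentations.

    `translate` is any callable translate(text, source_lang, target_lang) -> str."""
    paraphrases = []
    for lang in pivot_langs:
        forward = translate(text, source_lang, lang)          # source -> pivot
        paraphrases.append(translate(forward, lang, source_lang))  # pivot -> source
    return paraphrases
```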
Utterance Manipulation. Chen and Yang (2021) propose utterance-level manipulation to perturb the discourse relations in the conversation. Two simple operations are used: (1) random swapping, which randomly swaps two utterances to disrupt the logical chain of the conversation, and (2) random deletion, which randomly deletes an utterance to improve discourse diversity. We randomly select one operation for each augmentation.
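The two operations are simple enough to sketch directly; the snippet below is an illustrative re-implementation, not the authors' code.

```python
import random

def random_swap(utterances):
    """Swap two randomly chosen utterances to perturb the discourse order."""
    out = list(utterances)
    if len(out) >= 2:
        i, j = random.sample(range(len(out)), 2)
        out[i], out[j] = out[j], out[i]
    return out

def random_delete(utterances):
    """Delete one randomly chosen utterance."""
    out = list(utterances)
    if len(out) >= 2:
        del out[random.randrange(len(out))]
    return out

def augment(utterances):
    """Randomly pick one of the two operations for each augmentation."""
    return random.choice([random_swap, random_delete])(utterances)
```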
Dialog Inpainting. Dialog Inpainting (Dai et al., 2022) is the state-of-the-art data augmentation method for conversational question answering. Given a document, it iteratively inserts generated utterances between the consecutive sentences of the document, so that the utterances and sentences form an informative conversation. We randomly sample generated
## ACL 2023 Responsible NLP Checklist

A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations
✓ A2. Did you discuss any potential risks of your work?
Ethics Statement
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Appendix A And Appendix B
✓ B1. Did you cite the creators of artifacts you used?
Appendix A
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Ethics Statement and Appendix A
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Appendix A
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix A and Appendix B
## C ✓ **Did You Run Computational Experiments?** Section 3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix B
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix B
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 3
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix B
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
pluss-etal-2023-stt4sg | {STT}4{SG}-350: A Speech Corpus for All {S}wiss {G}erman Dialect Regions | https://aclanthology.org/2023.acl-short.150 | We present STT4SG-350, a corpus of Swiss German speech, annotated with Standard German text at the sentence level. The data is collected using a web app in which the speakers are shown Standard German sentences, which they translate to Swiss German and record. We make the corpus publicly available. It contains 343 hours of speech from all dialect regions and is the largest public speech corpus for Swiss German to date. Application areas include automatic speech recognition (ASR), text-to-speech, dialect identification, and speaker recognition. Dialect information, age group, and gender of the 316 speakers are provided. Genders are equally represented and the corpus includes speakers of all ages. Roughly the same amount of speech is provided per dialect region, which makes the corpus ideally suited for experiments with speech technology for different dialects. We provide training, validation, and test splits of the data. The test set consists of the same spoken sentences for each dialect region and allows a fair evaluation of the quality of speech technologies in different dialects. We train an ASR model on the training set and achieve an average BLEU score of 74.7 on the test set. The model beats the best published BLEU scores on 2 other Swiss German ASR test sets, demonstrating the quality of the corpus. | STT4SG-350: A Speech Corpus for All Swiss German Dialect Regions Michel Plüss1, Jan Deriu2, Yanick Schraner1**, Claudio Paonessa**1, Julia Hartmann1, Larissa Schmidt3, Christian Scheller1**, Manuela Hürlimann**2, Tanja Samardžic´
3, Manfred Vogel1, Mark Cieliebak2 1University of Applied Sciences and Arts Northwestern Switzerland, Windisch 2Zurich University of Applied Sciences, Winterthur 3University of Zurich, Zurich [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected]
## Abstract
We present STT4SG-350 (Speech-to-Text for Swiss German), a corpus of Swiss German speech, annotated with Standard German text at the sentence level. The data is collected using a web app in which the speakers are shown Standard German sentences, which they translate to Swiss German and record. We make the corpus publicly available. It contains 343 hours of speech from all dialect regions and is the largest public speech corpus for Swiss German to date. Application areas include automatic speech recognition (ASR), text-to-speech, dialect identification, and speaker recognition.
Dialect information, age group, and gender of the 316 speakers are provided. Genders are equally represented and the corpus includes speakers of all ages. Roughly the same amount of speech is provided per dialect region, which makes the corpus ideally suited for experiments with speech technology for different dialects.
We provide training, validation, and test splits of the data. The test set consists of the same spoken sentences for each dialect region and allows a fair evaluation of the quality of speech technologies in different dialects. We train an ASR model on the training set and achieve an average BLEU score of 74.7 on the test set. The model beats the best published BLEU scores on 2 other Swiss German ASR test sets, demonstrating the quality of the corpus.
## 1 Introduction
We present STT4SG-350, a corpus of Swiss German speech, annotated with Standard German text at the sentence level. The corpus represents all Swiss German dialect regions and contains 343 hours of speech.
Swiss German is a family of German dialects spoken by around 5 million people in Switzerland.
It differs from Standard German regarding phonology, vocabulary, morphology, and syntax. There are significant differences among the Swiss German dialects as well, particularly regarding phonology and vocabulary. Swiss German is primarily a spoken language. It is also used in writing, but mainly in informal text messages. In most other contexts, including formal letters, laws, and newspapers, Standard German is used instead. One important reason for this is Swiss German's lack of a standardized orthography.
The diversity among dialects, exacerbated by the lack of a standardized orthography, leads to a large number of written variants for each word. This, together with the small amount of text resources compared to Standard German, makes automated processing of Swiss German text challenging.
STT4SG-350 is, to the best of our knowledge, the largest public speech corpus for Swiss German.
While the primary use case is automatic speech recognition (ASR), it is also a useful resource for text-to-speech (TTS), dialect identification, and speaker recognition. By providing roughly the same amount of data per dialect region, irrespective of its population size, the corpus contributes to improving speech technology for underrepresented dialects. In addition, the test set, which contains the same spoken sentences in each dialect, allows a fair evaluation of the quality of speech technologies in different dialects. Furthermore, it contributes to more inclusive speech technology by keeping a balanced gender ratio and featuring speakers of all ages.
## 2 Related Work
The SDS-200 corpus (Plüss et al., 2022) contains 200 hours of speech by around 4,000 speakers with Standard German transcripts. The recordings cover a large part of the Swiss German dialect landscape.
The number of recordings per speaker follows a long-tail distribution. For example, the top 3 speakers account for 23% of recordings. The Swiss Parliaments Corpus or SPC (Plüss et al., 2021a)
contains 299 hours of speech in the Bernese dialect. The text is Standard German, taken from parliament minutes, and is not a fully accurate transcription. Text and audio are automatically aligned.
The SwissDial corpus (Dogan-Schönberger et al.,
2021) contains 26 hours of studio-quality recordings by 8 speakers, each speaking a different dialect, with both Standard German and Swiss German transcripts. The Radio Rottu Oberwallis corpus (Garner et al., 2014) contains 8 hours of speech transcribed in Swiss German, of which 2 are also transcribed in Standard German. The ArchiMob corpus (Samardžić et al., 2016) contains 69 hours of speech with Swiss German transcripts.
For Swiss German ASR, the desired output text language is Standard German for the vast majority of use cases. Tackling speech-to-text translation with an end-to-end approach is feasible as shown by Weiss et al. (2017). Applying a similar approach to Swiss German ASR and therefore avoiding Swiss German text and its challenges altogether has led to promising results in recent years, see (Plüss et al.,
2023; Khosravani et al., 2021; Plüss et al., 2022, 2021a).
Dogan-Schönberger et al. (2021) experiment with TTS for Swiss German. Their models achieve a 5-scale mean opinion score of 2.9 to 4.1. Importantly, their approach requires Swiss German input text.
## 3 Data Collection
Data for STT4SG-350 was collected in two phases:
1) the test set with 76 participants from December 2021 until March 2022, and 2) the train and validation sets with 240 participants from May until November 2022.
## 3.1 Recording
Speech was recorded using a web app based on the code (MPL-2.0 license) by Plüss et al. (2022). Recordings are made sentence by sentence. The app displays a Standard German sentence, which the participant is asked to translate to Swiss German and speak aloud. A screenshot of the recording functionality can be found in Appendix A. The goal of the translation step is to get a correct, natural-sounding Swiss German sentence in the participant's dialect.
We display a popup with examples before the first recording to explain this to participants. We also display a short explanation below the sentence to be recorded. We manually validated the correctness of at least 10 randomly sampled recordings per participant at collection time. In contrast to Plüss et al. (2022), for phase 2, we recorded 44.1 kHz lossless FLAC audio rather than 32 kHz lossy MP3 audio. The recording quality depends on the microphones used by participants, which range from studio microphones to headsets and laptop microphones. Depending on the microphone, mouse clicks can be audible in recordings.
## 3.2 Dialect Regions
For this work, we divided the Swiss German dialect continuum into 7 dialect regions, listed in Table 1, based on the clustering method by Scherrer and Stoeckle (2016). The cluster analysis was carried out on 350 phonological, lexical, morphological, and syntactic phenomena. We slightly adjusted the resulting clusters to match the dialect regions commonly used in public discourse more closely.
The goal of these adjustments was to make it more intuitive for participants to choose their dialect region. The borders are intentionally fuzzy to give participants the freedom to choose the region that fits their dialect best.
## 3.3 Sentence Selection
Sentences were randomly selected from Swiss newspapers and from parliament minutes of 2 Swiss parliaments. Sentence filtering for newspapers follows Plüss et al. (2022). The goal of the filtering is to limit sentence complexity to reduce errors in the translation task. For example, only sentences of 5 to 12 words are kept. The newspaper sentences cover a broad range of topics, including culture, finance, science, sports, and technology. They also cover content and named entities particularly relevant for Switzerland. Parliament sentences are not filtered. They bring additional diversity to the corpus with longer sentences on average and a distinct vocabulary. For the test set, 3,515 sentences were selected (67% newspapers, and 33% parliaments). To allow a fair comparison among the dialects, each sentence was recorded in each of the 7 dialects. For the training and validation data, 94% news and 6% parliament sentences were selected, and we dropped the requirement to record each sentence in all dialect regions to increase vocabulary and phrase diversity.
## 3.4 Metadata
Participants self-reported the following metadata:
- The dialect region that best fits the participant's dialect.
- The zip code of the place where the participant grew up or went to school.
- Age group (< 19, 19-29, 30-39, 40-49, 50-59, 60-69, 70-79, 80-89, > 89)
- Gender (female, male, non-binary)
We manually checked the correspondence of reported metadata and recordings for each participant. Collecting the dialect provenance as a zip code allows us to investigate dialects and the performance of speech technologies for them at different granularity levels. Collecting age group and gender helps to make sure that speech technology is inclusive and works across different demographic groups.
## 3.5 Recruitment
For the test set, all participants were recruited via the crowdsourcing platform TestingTime. For the train set, half the participants were recruited via TestingTime, whereas the other half were recruited via universities, high schools, newspaper ads, personal contacts, and the crowdsourcing platform seniors@work (for details refer to Appendix F and 6). Only native Swiss German speakers able to correctly translate Standard German to Swiss German were recruited. The goal was to collect the same amount of recordings in each dialect region, and we recruited accordingly. The number of recordings per participant was limited to 368 for the test set and 1,112 for the train data. Recruiting the 316 participants required a considerable effort, especially in the low-population regions GR and VS.
## 4 Corpus
| Region | Pop. | Hours | Rec. | Speakers |
|--------------|--------|---------|--------|------------|
| Basel (BS) | 0.4M | 47.5 | 34,169 | 44 |
| Bern (BE) | 1.2M | 48.7 | 35,683 | 46 |
| Grisons (GR) | 0.2M | 44.3 | 30,931 | 46 |
| Central (CS) | 0.8M | 49.1 | 36,402 | 43 |
| Eastern (ES) | 0.9M | 52.6 | 38,182 | 47 |
| Valais (VS) | 0.1M | 51.8 | 36,457 | 44 |
| Zurich (ZH) | 1.6M | 49.3 | 35,703 | 46 |
Table 1: Corpus statistics per dialect region. Population is an approximation and only includes German-speaking people (population statistics from https://www.bfs.admin.ch).
Potential risks are described in Appendix D. The handling of offensive content and personal data is discussed in Appendix E.
## 4.1 Data Cleaning
Filtering. Recordings with a duration of less than 2 seconds were removed. Silent recordings were also removed. For the test set, we applied heuristics to flag incomplete sentences, which were removed after double-checking them. We only kept sentences with a recording in all dialect regions in the test set. In total, we filtered out 1.5% of recordings.
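As an illustration, the filtering step can be expressed as a simple predicate over clip duration and audio energy. The silence criterion shown here is an assumption, since the exact detection method and threshold are not specified in the text.

```python
def keep_recording(duration_seconds, samples, silence_threshold=1e-4):
    """Drop clips shorter than 2 seconds or with (near-)silent audio.

    `samples` is the decoded waveform in [-1, 1]; the threshold is illustrative only."""
    if duration_seconds < 2.0:
        return False
    mean_abs_amplitude = sum(abs(s) for s in samples) / max(len(samples), 1)
    return mean_abs_amplitude > silence_threshold
```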
Validation. We validated each speaker manually.
For this, we randomly sampled 10 recordings from each speaker, and checked whether the dialect is correct, the recording is in Swiss German, the translation is correct, and whether the sound quality is high enough. All of the participants passed the manual check.
## 4.2 Statistics
The corpus contains 343 hours of Swiss German speech in 247,527 separate recordings, each annotated with the Standard German text translation.
The mean recording length is 5.0 ± 1.5 seconds.
217,687 unique sentences were recorded and the vocabulary size is 42,980. Speech recordings were
provided by 316 different speakers, of which 51%
identified as female and 49% as male. No speaker identified as non-binary. Figure 1 shows the distribution of the recordings over the age groups, as well as the gender distributions per age group. The age groups from the thirties to the sixties are well represented, while the twenties are overrepresented and the teens as well as seventies are underrepresented. The age groups eighties and above are not represented at all.
Table 1 shows the corpus statistics per dialect region. While the German-speaking population differs by a factor of up to 16 between regions, the number of recordings per region is a lot more balanced, differing by a factor of not more than 1.2.
## 4.3 Splits
Table 2 shows the different corpus splits. We provide training, validation, and test splits. There is no speaker overlap between training, validation, and test. There are no common sentences between test and either training or validation. There is, however, an intersection of 835 sentences between training and validation. There are 2 different training splits. train_all contains all training data, 276 hours of speech. train_balanced is a subset of train_all with 239 hours of speech that is balanced in the number of recordings per dialect region. For GR,
the region with the fewest recordings, the recordings of all speakers are included in train_balanced.
For the other regions, we randomly chose speakers and added their recordings until the number of GR
recordings was reached. train_balanced includes 33-35 hours of speech, 24,088-25,183 recordings, and 25-32 speakers per region.
Like train_balanced, the validation split, with 34 hours of speech, is balanced in the number of recordings per dialect region. We randomly chose 3 speakers per region with at least 1,000 recordings.
The test set comprises 34 hours of speech. Importantly, the same 3,515 sentences were recorded in all 7 dialect regions to allow a fair comparison between different dialects. The test split contains at least 8 different speakers per region to provide adequate speaker diversity in each region. For this reason, the mean number of recordings per speaker is markedly lower than in the other splits.
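The construction of train_balanced described above can be sketched as follows. The speaker selection order and stopping rule are our interpretation of the description, not the released split script.

```python
import random

def build_balanced_split(recordings, reference_region="GR", seed=0):
    """`recordings` maps region -> {speaker_id: [recording, ...]}.

    All recordings of the reference region (GR, the smallest) are kept; for every
    other region, randomly chosen speakers are added until the reference count is
    roughly reached."""
    rng = random.Random(seed)
    target = sum(len(clips) for clips in recordings[reference_region].values())
    balanced = {}
    for region, speakers in recordings.items():
        if region == reference_region:
            balanced[region] = [clip for clips in speakers.values() for clip in clips]
            continue
        selected = []
        speaker_ids = list(speakers)
        rng.shuffle(speaker_ids)
        for speaker_id in speaker_ids:
            if len(selected) >= target:
                break
            selected.extend(speakers[speaker_id])
        balanced[region] = selected
    return balanced
```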
## 5 **Automatic Speech Recognition Baseline**
We train a baseline model to demonstrate the use of the STT4SG-350 corpus for Swiss German ASR.
We fine-tune XLS-R (1B) (Babu et al., 2021), released under the Apache-2.0 license, on the train_balanced split. XLS-R is a model based on wav2vec 2.0 (Baevski et al., 2020) with 965 million parameters pretrained on 436K hours of unlabeled speech data covering more than 128 languages. Swiss German was not part of the training data. We provide the fine-tuning details and experimental setup in Appendix C.
We report the results of our fine-tuned model on three publicly available Swiss German datasets and the STT4SG-350 validation and test sets in Table 3.
The model achieves state-of-the-art results on the All Swiss German Dialects Test Set (ASGDTS)
(Plüss et al., 2021b) and SDS-200 (Plüss et al.,
2022), and improves the best reported BLEU scores on the test sets by 43% and 9%, respectively. Our model is 6% behind the best reported BLEU score on the SPC test set (Plüss et al., 2021a). These results highlight the benefit of the STT4SG-350 dataset on test data from different domains.
## 6 Conclusion
We have described STT4SG-350, which is, to the best of our knowledge, the largest public speech corpus for Swiss German with 343 hours of speech.
Our ASR baseline model trained on the corpus achieves a BLEU score of 74.7 on the test set. In addition, it beats the best published BLEU scores 8Apache-2.0 license on 2 other test sets, demonstrating the quality of the corpus.
STT4SG-350 is balanced across the 7 dialect regions, and the test set allows a fair comparison of ASR performance on different dialects. We intend to take advantage of these properties in future work and conduct in-depth experiments to explore differences in ASR quality between dialects. Subsequently, we want to find ways to improve performance for underrepresented dialects.
## Acknowledgements
This work was supported by Swiss National Science Foundation within the project "End-to-End Low-Resource Speech Translation for Swiss German Dialects (E2E_SG)" [205121_200729/1].
## Limitations
The corpus and therefore also the ASR baseline model only cover read speech. We have not tested the model on spontaneous speech, but we expect it to perform significantly worse on this type of data.
Our data collection process for Swiss German speech with Standard German transcripts is designed to collect large amounts of data in a costefficient manner. We estimate costs to be 4 to 6 times lower compared to the transcription of existing recordings. However, there is a downside to our approach. Because it is based on a given Standard German sentence, it can lead to Swiss German speech that's closer to Standard German than the Swiss German encountered in everyday conversations. The severity of the shift towards Standard German depends on the individual speakers and their ability and effort to produce Swiss German representations that are close to how they would speak in everyday conversations.
While we made every effort to include as many different dialects as possible in the corpus, there are still strong dialects with a comparatively low German-speaking population that are insufficiently or not at all represented, e.g. some dialects from the canton of Fribourg. This is due to the huge dialect diversity in Switzerland.
The gender ratio is not balanced for some dialect regions in the test set, especially not for VS, where the test set is female-only because we did not succeed to recruit any male speakers from this region during phase 1 of the data collection. However, preliminary experiments do not show a significant difference between genders in Swiss German ASR
performance, so we do not expect this to lead to skewed results.
Our ASR baseline model and other models trained on the corpus may perform below average for children and people above seventy due to the lack of training data for these age groups.
## Ethical Considerations
Participants were specifically recruited to record Swiss German speech for this corpus. The purpose of the recordings was made clear at recruiting time:
a training corpus for Swiss German ASR models.
Participants were also informed at recruiting time that information about their dialect, age, and gender will be collected. Furthermore, to be able to participate, they had to read and accept our data privacy policy which further detailed the future use of collected data.
## References
Yuriy Arabskyy, Aashish Agarwal, Subhadeep Dey, and Oscar Koller. 2021. Dialectal Speech Recognition and Translation of Swiss German Speech to Standard German Text: Microsoft's Submission to SwissText 2021. In *Swiss Text Analytics Conference 2021*, Proceedings of the Swiss Text Analytics Conference 2021.
Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, and Michael Auli. 2021.
XLS-R: self-supervised cross-lingual speech representation learning at scale. *CoRR*, abs/2111.09296.
Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations.
CoRR, abs/2006.11477.
Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python: Analyzing Text with the Natural Language Toolkit. O'Reilly, Beijing.
Pelin Dogan-Schönberger, Julian Mäder, and Thomas Hofmann. 2021. SwissDial: Parallel Multidialectal Corpus of Spoken Swiss German.
Philip N. Garner, David Imseng, and Thomas Meyer.
2014. Automatic Speech Recognition and Translation of a Swiss German Dialect: Walliserdeutsch. In Proceedings of Interspeech, Singapore.
Abbas Khosravani, Philip N. Garner, and Alexandros Lazaridis. 2021. Learning to translate low-resourced swiss german dialectal speech into standard german text. In *2021 IEEE Automatic Speech Recognition*
and Understanding Workshop (ASRU), pages 817–
823.
Michel Plüss, Manuela Hürlimann, Marc Cuny, Alla Stöckli, Nikolaos Kapotis, Julia Hartmann, Malgorzata Anna Ulasik, Christian Scheller, Yanick Schraner, Amit Jain, Jan Deriu, Mark Cieliebak, and Manfred Vogel. 2022. SDS-200: A Swiss German speech to standard German text corpus. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 3250–3256, Marseille, France. European Language Resources Association.
Michel Plüss, Lukas Neukom, Christian Scheller, and Manfred Vogel. 2021a. Swiss Parliaments Corpus, an Automatically Aligned Swiss German Speech to Standard German Text Corpus. In *Swiss Text Analytics Conference 2021*, Proceedings of the Swiss Text Analytics Conference 2021.
Michel Plüss, Lukas Neukom, and Manfred Vogel.
2021b. SwissText 2021 Task 3: Swiss German Speech to Standard German Text. In *Swiss Text Analytics Conference 2021*, Proceedings of the Swiss Text Analytics Conference 2021.
Michel Plüss, Yanick Schraner, Christian Scheller, and Manfred Vogel. 2023. 2nd swiss german speech to standard german text shared task at swisstext 2022.
Tanja Samardžić, Yves Scherrer, and Elvira Glaser.
2016. ArchiMob - a corpus of spoken Swiss German. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16),
pages 4061–4066, Portorož, Slovenia. European Language Resources Association (ELRA).
Yves Scherrer and Philipp Stoeckle. 2016. A quantitative approach to swiss german - dialectometric analyses and comparisons of linguistic levels. *Dialectologia et Geolinguistica*, 24(1):92–125.
Yanick Schraner, Christian Scheller, Michel Plüss, and Manfred Vogel. 2022. Swiss german speech to text system evaluation.
Ron J. Weiss, Jan Chorowski, Navdeep Jaitly, Yonghui Wu, and Zhifeng Chen. 2017. Sequence-to-sequence models can directly translate foreign speech. In *Interspeech 2017, 18th Annual Conference of the International Speech Communication Association, Stockholm, Sweden, August 20-24, 2017*, pages 2625–2629.
ISCA.
## A Web App Screenshot
Figure 2 shows a screenshot of the recording screen in the web app.
## B Corpus Distribution Format
The recordings are distributed in 2 TAR archives.
Recordings in the training and validation splits in FLAC format can be found in clips__train_valid.tar.
Recordings in the test split in MP3 format can be found in clips__test.tar. The mapping of recordings to sentences and all other metadata can be found in the TSV files, one file per split, e.g. train_all.tsv. A
description of the columns in the TSV files can be found in Table 4.
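Assuming the `path` column matches the member names inside the archives, the corpus can be read roughly as follows (any TSV reader works; pandas is used here for brevity).

```python
import tarfile
import pandas as pd

# File names as given above.
metadata = pd.read_csv("train_all.tsv", sep="\t")

with tarfile.open("clips__train_valid.tar") as archive:
    member = metadata.iloc[0]["path"]                 # FLAC clip referenced by the first row
    audio_bytes = archive.extractfile(member).read()  # raw FLAC bytes, decode with any audio library

print(metadata.iloc[0][["sentence", "dialect_region", "duration"]])
```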
## C Fine-Tuning Details
The vocabulary used to preprocess the sentences is limited to lower-case characters and the German umlauts ä, ö, and ü. All characters with other accents are transformed into their corresponding character without accents and hyphens are replaced with a space.
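A possible implementation of this transcript normalisation is sketched below; the handling of characters not mentioned in the text (e.g. digits or ß) is an assumption.

```python
import re
import unicodedata

def normalize_transcript(sentence: str) -> str:
    """Lower-case, keep the umlauts ä/ö/ü, strip other accents, map hyphens to spaces."""
    text = unicodedata.normalize("NFC", sentence).lower().replace("-", " ")
    chars = []
    for ch in text:
        if ch in "äöü":
            chars.append(ch)  # umlauts stay in the output vocabulary
        else:
            # drop combining marks so that e.g. "é" becomes "e"
            decomposed = unicodedata.normalize("NFD", ch)
            chars.append("".join(c for c in decomposed if not unicodedata.combining(c)))
    text = re.sub(r"[^a-zäöü ]", "", "".join(chars))  # keep only the target character set
    return " ".join(text.split())

print(normalize_transcript("Zürich-Sévérine"))  # -> "zürich severine"
```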
We mainly replicate the fine-tuning procedure of Babu et al. (2021) (https://github.com/facebookresearch/fairseq/tree/main/examples/wav2vec/xlsr) with the model settings of Baevski et al. (2020). Instead of searching the learning rate in a range we settle for 3e−5. The training is conducted on 4 NVIDIA A100 40 GB GPUs. To achieve an effective batch size of 1,600 seconds (0.44 hours), we use gradient accumulation over 10 steps and 640,000 samples per GPU. One training run on the train_balanced split takes 50 hours to complete. The metrics Word Error Rate
(WER) and BLEU score are reported as the mean over five runs with different seeds. For the BLEU
score, we use the NLTK implementation (Bird et al., 2009; Apache-2.0 license) at version 3.7.
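The BLEU evaluation can then be run with NLTK as sketched below; whitespace tokenisation and the use of corpus_bleu (rather than averaged sentence-level BLEU) are assumptions on our side, and the sentences are illustrative.

```python
from nltk.translate.bleu_score import corpus_bleu

# One list of reference token lists per hypothesis.
references = [["der zug kommt um acht uhr in zürich an".split()]]
hypotheses = ["der zug kommt um acht in zürich an".split()]

print(f"BLEU: {100 * corpus_bleu(references, hypotheses):.1f}")
```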
## D Potential Risks
The corpus was designed specifically with diversity in mind. The goal was to cover all dialect regions, all age groups and achieve a balanced gender ratio.
This goal was reached for the most part. However, no children and people above eighty are part of the corpus. It is possible that models trained on this corpus perform below average for these demographic groups as well as people with strong, not widely used dialects. There is a risk for this group of people to be at a disadvantage when using speech technology solely based on the published corpus.
The described ASR baseline model is intended to be used on Swiss German speech data similar in length to the training data. When transcribing speech that is more than 2 times the mean length of 5 seconds, there is an increasing risk of incomplete transcripts that do not reflect the spoken content well.
[Figure 2: Screenshot of the recording screen in the web app]
| Column | Description |
|-----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------|
| path | Path to corresponding Swiss German recording in TAR archive |
| duration | Clip duration in seconds |
| sentence | Standard German sentence |
| sentence_source | Source of the sentence news* = Swiss newspapers (for test split: news_[topic], for other splits: news), parliament = parliament minutes of 2 Swiss parliaments |
| client_id | Unique speaker identifier |
| dialect_region | Speaker's dialect region (Basel, Bern, Graubünden, Innerschweiz, Ostschweiz, Wallis, Zürich) |
| canton | Canton of the municipality in zipcode column (AG, AI, BE, BL, BS, FR, GL, GR, LU, NW, OW, SG, SH, SO, SZ, TG, TI, UR, VS, ZG, ZH, can be empty) |
| zipcode | Zip code of the origin municipality of a speaker's dialect (can be empty) |
| age | Speaker's age bracket (teens, twenties, thirties, fourties, fifties, sixties, seventies) |
| gender | Speaker's gender (female, male) |

Table 4: Description of columns in TSV files
## E Offensive Content And Personal Data
We did not explicitly check for offensive content in the text data because both data sources, newspapers and parliament minutes, are publicly accessible and it seems reasonable to assume that the text does not contain offensive content. This assumption was confirmed by the at least 3,160 recording-sentence pairs (10 per participant) we manually validated.
We cannot rule out the existence of offensive content in the recordings. However, after the manual validation of at least 3,160 recordings (10 per participant), it is unlikely that there are many such cases.
We did not anonymize data because the metadata doesn't contain information that names or uniquely identifies individuals.
## F Compensation For Participants
Participants in the first phase were paid 70 Swiss francs, whereas participants in the second phase were paid 110 Swiss francs. For the last 3 weeks of phase 2, we increased the salary to 200 Swiss francs to attract as many participants as possible before finishing the collection.
Each phase 1 participant should provide 0.5 hours of recordings. Each phase 2 participant should provide 1.5 hours of recordings. We calculated with an hourly salary of 27.50 Swiss francs.
25-30 Swiss francs per hour are the usual payment for a side job in Switzerland. We estimated the required work for each minute of recording to be 2.5 minutes.
For phase 1, the work per participant is therefore 1.25 hours. We added 0.25 hours to read instructions and register on the website. 1.5 times the hourly salary is equal to 41.25 Swiss francs. We increased this to 70 Swiss francs to improve our chances of finding enough participants.
For phase 2, the work per participant is 3.75 hours, plus 0.25 hours setup. 4 times the hourly salary is equal to 110 Swiss francs.
If a participant did not finish the designated amount of recordings, we paid them pro rata.
## ACL 2023 Responsible NLP Checklist

A For Every Submission:
✓ A1. Did you describe the limitations of your work?
7
✓ A2. Did you discuss any potential risks of your work?
Appendix D
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract, 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 3.1, 3.2, 4, 5
✓ B1. Did you cite the creators of artifacts you used?
3.1, 3.2, 5
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
3.1, 4, 5
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
3.1, 3.2, 3.3, 4, 7, Appendix D
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Appendix E
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
3.3, 4.2
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
4.2, 4.3
## C ✓ **Did You Run Computational Experiments?** 5, Appendix C
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
5, Appendix C
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
We discuss the experimental setup in Appendix C, but we did not tune hyperparameters and instead referred to the XLSR paper which describes the hyperparameter search done by the authors of the wav2vec XLSR models.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix C
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** 3
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
3.1, Supplementary Material
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
3.5, Appendix F
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Appendix G
✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
A review by an ethics review board is not necessary in our country for the type of research performed in this work.
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
3.2, 3.4, 4.2 |
magister-etal-2023-teaching | Teaching Small Language Models to Reason | https://aclanthology.org/2023.acl-short.151 | Chain of thought prompting successfully improves the reasoning capabilities of large language models, achieving state of the art results on a range of datasets. However, these reasoning capabilities only appear to emerge in models with at least tens of billions of parameters. In this paper, we explore the transfer of such reasoning capabilities to smaller models via knowledge distillation, also investigating model and dataset size trade-off. Specifically, we finetune a student model on the chain of thought outputs generated by a larger teacher model. Our experiments show that the proposed method improves task performance across arithmetic, commonsense and symbolic reasoning datasets. For example, the accuracy of T5 XXL on GSM8K improves from 8.11{\%} to 21.99{\%} and 18.42{\%} when finetuned on PaLM 540B and GPT-3 175B generated chains of thought, respectively. | # Teaching Small Language Models To Reason
Lucie Charlotte Magister∗, University of Cambridge, [email protected]
Jonathan Mallinson, Google Research, [email protected]
Jakub Adamek, Google Research, [email protected]
Aliaksei Severyn, Google Research, [email protected]
Eric Malmi, Google Research, [email protected]
## Abstract
Chain of thought prompting successfully improves the reasoning capabilities of large language models, achieving state of the art results on a range of datasets. However, these reasoning capabilities only appear to emerge in models with at least tens of billions of parameters. In this paper, we explore the transfer of such reasoning capabilities to smaller models via knowledge distillation, also investigating model and dataset size trade-off. Specifically, we finetune a student model on the chain of thought outputs generated by a larger teacher model. Our experiments show that the proposed method improves task performance across arithmetic, commonsense and symbolic reasoning datasets. For example, the accuracy of T5 XXL on GSM8K improves from 8.11% to 21.99% and 18.42% when finetuned on PaLM 540B and GPT-3 175B generated chains of thought, respectively.
## 1 Introduction
Chain of thought (CoT) prompting encourages language models (LMs) to break down a reasoning task into a series of intermediate steps (Wei et al.,
2022). They demonstrate that this prompting significantly increases the task accuracy of large language models (LLMs) across commonsense, symbolic and mathematical reasoning datasets. Here, LLMs are models with at least tens of billions of parameters, such as PaLM 540B (Chowdhery et al.,
2022), GPT-3 175B (Brown et al., 2020), or UL2 20B (Tay et al., 2022). However, the reasoning capabilities of smaller LMs do not improve with CoT prompting, mostly producing illogical CoT.
Notably, CoT prompting even reduces the accuracy of models with less than 10 billion parameters. Wei et al. (2022) attribute this to abilities, such as semantic understanding and symbolic mapping, only emerging at larger scales. This leads us to our research question: can the reasoning capabilities of LLMs be transferred to smaller LMs via finetuning?

∗Research conducted during an internship at Google.
This work explores CoT knowledge distillation
(Hinton et al., 2015) from PaLM 540B (Chowdhery et al., 2022) and GPT-3 175B (Brown et al., 2020)
to different sizes of the smaller language model T5 (Raffel et al., 2020), such as T5 XXL, XL and base, which have 11 billion, 3 billion and 220 million parameters, respectively. As a result of our work, we make two recommendations: (1) perform knowledge distillation by finetuning the student model on the CoT generated by a large teacher model; and (2) generate the CoT from an LLM, as proposed by Wei et al. (2022), but crucially provide the solution to the task in the few-shot prompt. We demonstrate that the proposed method improves task performance across arithmetic, commonsense and symbolic reasoning datasets irrespective of the teacher model used. For example, we show an accuracy increase from 8.11% to 21.99% and 18.42%
on the GSM8K (Cobbe et al., 2021) dataset when finetuning T5 XXL on PaLM 540B and GPT-3 175B generated CoT data, respectively.
## 2 Related Work
This work is inspired by the seminal work of Wei et al. (2022) on CoT prompting. They demonstrate that prefixing an input with 2-8 exemplars of CoT
reasoning encourages LMs to do the same, reaching state-of-the-art performance on datasets such as GSM8K (Cobbe et al., 2021). Wang et al. (2022)
show that task accuracy can be further improved by using self-consistency in CoT prompting. Selfconsistency samples CoT reasoning paths from a model's decoder and returns the most consistent path by taking the majority vote. Subsequently, Chung et al. (2022) explore finetuning a FLANbased (Wei et al., 2021) version of PaLM on manually generated CoT data.
Concurrent to our work, a small number of other works propose methods focused on CoT student–teacher knowledge distillation. Ho et al. (2022) and Li et al. (2022) also explore knowledge distillation, with the difference of proposing diverse sampling and rationalization prompting, respectively. In contrast to their work, our work explores more teacher models and demonstrates both the effects of dataset and model size on accuracy. We also achieve a higher accuracy on common datasets, such as GSM8K, than Ho et al. (2022). In contrast to our work, Shridhar et al. (2022) focus on training two models, one for problem decomposition and one for solving. Yet differently, the focus of Eisenstein et al. (2022) is on producing markup-and-mask explanations for open-book question answering. Lastly, Huang et al. (2022) present one related experiment; however, we present a more in-depth exploration on more datasets. To the best of our knowledge, our work is the first to extensively explore the improvement of the reasoning ability of small LMs via knowledge distillation across multiple model architectures, and observing the effects of student model size and dataset size on accuracy.
## 3 Method
We propose a two-step pipeline for CoT knowledge distillation. The first step comprises annotating an existing supervised dataset with CoT reasoning generated by a teacher model. To generate high quality data, we propose using LLMs, such as PaLM 540B or GPT-3 175B, as teachers, based on the finding that CoT reasoning improves with model scale (Wei et al., 2022). Specifically, we perform few-shot prompting with 8 exemplars on these models to generate CoTs. However, we make a key modification to the prompts proposed by Wei et al.
(2022). We adapt the few-shot prompts to provide the model with the target after posing the question and before providing example CoT. This is based on the observation that providing this guidance allows LLMs to correct small mistakes in the CoT.
Lastly, we remove all incorrect CoT based on the target answer to prevent the student from learning from bad examples. The second step comprises finetuning a student model via teacher forcing (Williams and Zipser, 1989). The student is provided with the question as input, and the CoT and answer as the target. As the model is trained on producing a CoT during finetuning, prompting is not required. Figure 1 provides an overview of the proposed method.
[Figure 1: Overview of the proposed two-step method]
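To make the first step of the pipeline concrete, a heavily simplified sketch of the data generation is given below. The exact prompt wording, the answer-checking heuristic, and the `teacher_generate` call are placeholders for the PaLM/GPT-3 few-shot prompting described above, not the authors' implementation.

```python
def build_prompt(exemplars, question, target):
    """Few-shot CoT prompt in which the gold answer is shown right after the question,
    so the teacher can condition its chain of thought on it (Section 3)."""
    prompt = ""
    for ex in exemplars:  # 8 exemplars in the paper
        prompt += f"Q: {ex['question']}\nA: The answer is {ex['answer']}.\n{ex['cot']}\n\n"
    return prompt + f"Q: {question}\nA: The answer is {target}.\n"

def annotate_dataset(dataset, exemplars, teacher_generate):
    """teacher_generate(prompt) -> str stands in for querying the teacher LLM."""
    distilled = []
    for example in dataset:
        cot = teacher_generate(build_prompt(exemplars, example["question"], example["answer"]))
        # Keep only CoTs whose final line contains the gold answer, so the student
        # is never finetuned on an incorrect rationale.
        if str(example["answer"]) in cot.strip().split("\n")[-1]:
            distilled.append({
                "input": example["question"],
                "target": f"{cot.strip()} The answer is {example['answer']}.",
            })
    return distilled
```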
## 4 Experimental Setup
We follow a similar experimental setup to Wei et al.
(2022), focusing on tasks covering arithmetic, commonsense and symbolic reasoning.
## 4.1 Benchmarks And Metrics

## 4.1.1 Arithmetic Reasoning
We benchmark the proposed method on the following math word problem datasets: (1) GSM8K
(Cobbe et al., 2021), (2) MAWPS (KoncelKedziorski et al., 2016) and (3) ASDiv (Miao et al.,
2021). We use the official training and testing split for GSM8K, taking the last 10% of the training split for validation, and the 5-fold cross validation splits available for MAWPS and ASDiv. We evaluate task accuracy by checking for the target answer as the final answer in the CoT. In addition, we compute the task accuracy given an external calculator, to account for arithmetic mistakes made by the model despite an otherwise correct CoT. The external calculator moves through the generated output, recalculating the left-hand side of equations. It then replaces the right-hand side with the calculated output, to avoid arithmetic mistakes being carried forward. For example, if a model outputted '5 +
5 = 11. 11 * 2 = 22', then the external calculator would first calculate '5+5' and replace the '11' with a '10'. In the subsequent equation, it would also replace the '11' with a '10' and arrive at the final result of '20'.
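A compact sketch of such a calculator is given below. It reproduces the worked example from the text, though the actual implementation may differ in how equations are detected and corrections are tracked.

```python
import re

def apply_external_calculator(cot: str) -> str:
    """Re-evaluate every 'lhs = rhs' equation left to right, replace a wrong right-hand
    side with the computed value, and carry the correction forward into later equations."""
    out = cot
    corrections = {}  # wrong value as written -> corrected value
    for match in re.finditer(r"(\d[\d\.\s\+\-\*/]*)=\s*(\d+(?:\.\d+)?)", cot):
        lhs, stated = match.group(1), match.group(2)
        # Propagate earlier corrections into this equation's left-hand side.
        for wrong, right in corrections.items():
            lhs = re.sub(rf"\b{re.escape(wrong)}\b", right, lhs)
        try:
            value = eval(lhs)  # lhs contains only digits, '.', spaces and + - * / by construction
        except (SyntaxError, ZeroDivisionError):
            continue
        value_str = str(int(value)) if float(value).is_integer() else str(value)
        if value_str != stated:
            corrections[stated] = value_str
            out = re.sub(rf"\b{re.escape(stated)}\b", value_str, out)
    return out

# Reproduces the worked example from the text.
print(apply_external_calculator("5 + 5 = 11. 11 * 2 = 22"))  # -> "5 + 5 = 10. 10 * 2 = 20"
```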
## 4.1.2 Commonsense Reasoning
We benchmark the model's ability to perform commonsense reasoning on the StrategyQA dataset
(Geva et al., 2021a). As a testing split is not available, we do not shuffle the dataset to allow reproducing our split of taking the first 80% as training data, the following 10% as validation data, and the final 10% as testing data. We compute task accuracy in the same manner as previously mentioned.
## 4.1.3 Symbolic Reasoning
Lastly, we benchmark the model on two synthetic tasks for symbolic reasoning: (1) last letter concatenation and (2) coinflip (Wei et al., 2022). Last letter concatenation prompts the model to concatenate the last letter of each word in a string. Coinflip prompts the model to perform state tracking of the coin being flipped. We evaluate task accuracy in the same manner as before. Due to the rigid structure of the datasets, we focus on evaluating the model's generalizability to out-of-distribution
(OOD) examples. We finetune the models on examples of length two and evaluate on sequences of length three and four. We initially infer the CoT using PaLM 540B; however, we find that the LLM is able to perfectly replicate the desired CoT, bar one example, due to the rigidity of the template. We therefore decide to use the template-generated CoT in our experiments.
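For illustration, a template CoT for last letter concatenation can be generated as follows; the exact wording of the template used in the experiments (following Wei et al., 2022) may differ, but its rigid structure is what made teacher inference unnecessary here.

```python
def last_letter_cot(words):
    """Rigid template chain of thought for the last letter concatenation task."""
    steps, acc = [], ""
    for word in words:
        acc += word[-1]
        steps.append(f'The last letter of "{word}" is "{word[-1]}".')
    steps.append(f'Concatenating them gives "{acc}". The answer is {acc}.')
    return " ".join(steps)

print(last_letter_cot(["Elon", "Musk"]))  # training length two; OOD evaluation uses 3-4 words
```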
## 4.2 Baselines And Setup
We select PaLM 540B (Chowdhery et al., 2022)
and GPT-3 175B (Brown et al., 2020) as teacher models. We select PaLM 540B based on the stateof-the-art results on the benchmarking datasets reported by Wei et al. (2022), and confirm the observed trends with GPT-3 175B. The publicly accessible teacher models are prompted as described in Section 3.
We select different sizes of T5 (Raffel et al.,
2020) as student models, as T5 is publicly available in many sizes. The student models are trained on the PaLM 540B or GPT-3 175B generated CoT data as described in Section 3. We establish a T5 XXL model finetuned on the original targets as the baseline. We refrain from shuffling the datasets to allow for reproducibility. For the MAWPS and ASDiv datasets, we perform 5-fold cross validation.
For all remaining datasets, we take 10% of the
training set as a validation set to select the best model checkpoint. Figure 2 showcases an input example for T5. We refer the reader to Wei et al.
(2022) for more training examples, as well as the prompts used for generating the CoT using PaLM
540B and GPT-3 175B.
We refer the reader to Appendix A for an overview of the dataset licenses. We also refer the reader to Appendix B for an overview of the computational resources.
## 5 Results

## 5.1 Arithmetic Reasoning
Table 1 details the task accuracy with and without an external calculator for the arithmetic reasoning benchmarks. Our results show that the proposed method improves task accuracy across all datasets.
Most notably, the task accuracy of MAWPS is significantly improved. The accuracy achieved given a calculator comes close to the accuracy of 8-shot PaLM 540B, demonstrating that knowledge distillation is effective, but potentially limited by the mathematical abilities of small models.
| Task | T5 XXL Baseline (Acc.) | CoT Finetuned T5 XXL (Acc.) | CoT Finetuned T5 XXL (Acc. with Calc.) | CoT 8-shot PaLM 540B (Acc.) | CoT 8-shot PaLM 540B (Acc. with Calc.) |
|------|------|------|------|------|------|
| GSM8K | 8.11 | 21.99 | 38.21 | 56.90 | 58.60 |
| Dataset Size | 6725 | 5337 | 5337 | - | - |
| MAWPS | 54.15 | 70.41 | 88.22 | 93.00 | 93.66 |
| Dataset Size | 1590 | 1590 | 1590 | - | - |
| ASDiv | 39.64 | 42.12 | 60.73 | 73.9 | 72.6 |
| Dataset Size | 1844 | 1544 | 1544 | - | - |
## 5.1.1 Ablation Study On Generating Chain-Of-Thought Data
We perform an ablation study to confirm that providing a LLM with the target during CoT generation is beneficial. We found that for the GSM8K
dataset, PaLM 540B only achieves a 59.98% accuracy if prompted without the target. In comparison, when including the target in the prompt the accuracy is 79.37%. A superficial explanation would be that when the model is conditioned on the expected answer, it produces the same CoT but copies the answer. However, an analysis of a subset of the differences between CoT produced with and without this conditioning shows that most of the benefits actually come from the model correcting CoT that had a single step missing or was wrong.
## 5.2 Commonsense Reasoning
For the StrategyQA dataset (Table 3), we found that using CoT finetuning improves accuracy from 68.12% to 71.98%, using only 1319 of the original 1648 examples. Compared to the arithmetic reasoning datasets, the improvement is not as significant.
This can be explained by the model lacking factual knowledge that the dataset requires. The task is heavily focused on the model reasoning on such knowledge, however, a smaller LM is most likely not in possession of this knowledge compared to a larger model with higher memorisation capacity.
## 5.3 Symbolic Reasoning
Table 2 shows the results obtained for the synthetic symbolic reasoning datasets, focusing on OOD generalization. For Last Letter Concatenation, both traditional finetuning and the suggested method fail to generalize to longer sequence lengths. In comparison, the proposed method significantly increases accuracy on the Coinflip dataset when generalizing to three coinflips. In contrast, generalisation to four coinflips is slightly weaker than the baseline, which performs very strongly. This may be related to the task length being twice that of the training task.
## 5.4 Replicating Results Using Different Teacher Models
We demonstrate the robustness of our method using a different teacher model, namely GPT-3 175B. Table 3 shows the results for GSM8K and StrategyQA when T5 XXL is finetuned on CoT data generated by GPT-3. The results show that the proposed method also elicits improvements with other LLMs as teachers.
| Task | Setting | Baseline T5 XXL | CoT-finetuned T5 XXL | CoT 8-shot PaLM 540B |
|---|---|---|---|---|
| Last Letter Concat. | OOD: 3 | 0.00 | 0.00 | 94.8 |
| Last Letter Concat. | OOD: 4 | 0.00 | 0.00 | 63.0 |
| Coinflip | OOD: 3 | 13.10 | 86.70 | 98.6 |
| Coinflip | OOD: 4 | 73.80 | 70.50 | 90.2 |

Table 2: OOD generalization accuracy on the symbolic reasoning datasets.
We also report the accuracy of T5 XXL finetuned on the golden CoT provided with the datasets. For the StrategyQA dataset, the model finetuned on the golden CoT performs best, which may be attributed to the golden CoT dataset being the largest, as both PaLM and GPT-3 get some examples wrong.
In contrast, the model finetuned on PaLM-generated CoT performs best for GSM8K.
| Task | Baseline T5 XXL | CoT-finetuned T5 XXL (Original CoT) | CoT-finetuned T5 XXL (PaLM 540B) | CoT-finetuned T5 XXL (GPT-3 175B) | CoT 8-shot PaLM 540B | CoT 8-shot GPT-3 175B |
|---|---|---|---|---|---|---|
| GSM8K Acc. | 8.11 | 19.94 | 21.99 | 18.42 | 56.9 | 46.9 |
| GSM8K Acc. with Calc. | - | 26.99 | 38.21 | 33.06 | 58.6 | 49.6 |
| GSM8K Dataset Size | 6725 | 6725 | 5337 | 5298 | - | - |
| StrategyQA Acc. | 68.12 | 71.98 | 67.15 | 63.77 | 77.8 | 65.4 |
| StrategyQA Dataset Size | 1648 | 1648 | 1319 | 1319 | - | - |

Table 3: Results for GSM8K and StrategyQA when T5 XXL is finetuned on CoT data from different sources.
## 5.5 Ablation Study On Model Size
We investigate the performance gain achieved when finetuning student models of different sizes. Figure 3 shows the results for T5 models of different sizes finetuned on CoT data for the GSM8K dataset.
Our results show that T5 base, with 44 times fewer parameters than T5 XXL, matches the performance of the baseline T5 XXL when trained on CoT data.
Moreover, given an external calculator, even T5 small outperforms the baseline T5 XXL.
## 5.6 Ablation Study On Dataset Size
We also investigate the trade-off between the performance gain from CoT finetuning and dataset size. Table 4 details the test accuracy achieved when finetuning T5 XXL on only 4% and 20% of the data, randomly selected. In comparison to the
baseline accuracy of 8.11% (Table 3), we see that our method is 6x more data efficient, achieving an accuracy of 11.22% with only 20% of the examples. However, training on just 20% of the data still leaves a quality gap, and it is possible that with, e.g., a 200% larger dataset we could outperform the results in Table 3.
| Percentage of GSM8K data used to train | Acc. | Acc. with Calc. |
|---|---|---|
| 4% (213 examples) | 6.29 | 12.28 |
| 20% (1067 examples) | 11.22 | 20.47 |
| 100% (5337 examples) | 21.99 | 38.21 |

Table 4: Test accuracy when finetuning T5 XXL on randomly selected subsets of the GSM8K CoT data.
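As a rough reading of where the quoted 6x figure comes from (our own back-of-the-envelope check against Tables 3 and 4, not a calculation given in the paper): the baseline needs all 6725 GSM8K examples to reach 8.11%, while the CoT-finetuned student already exceeds this with the 20% subset of 1067 examples, i.e.

$$\frac{6725}{1067} \approx 6.3\,.$$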
## 6 Discussion
We demonstrate that finetuning smaller LMs on the CoT data generated by LLMs of over 100 billion parameters can significantly improve task accuracy.
Even a small number of CoT examples appears to suffice for this. However, such improvements appear to be task dependent. For example, the effects are limited for the StrategyQA dataset, which can be attributed to the task requiring specific factual knowledge that smaller LMs may not have memorised due to their limited capacity. Nevertheless, there is some performance improvement, which may be attributed to the model learning how to approach such tasks. Moreover, the presented CoT knowledge distillation pipeline allows trading off model and dataset size against accuracy. Future work could explore improving the reasoning of small models in multi-task settings, as well as the generation of new training data using LLMs, rather than annotating existing datasets.
## 7 Conclusion
This work explores CoT knowledge distillation from LLMs of over 100 billion parameters to smaller LMs. We propose a knowledge distillation pipeline consisting of two key steps: (1) generate CoT for existing datasets using LLMs and
(2) finetune smaller LMs on the CoT. Our results demonstrate that finetuning on CoT improves task accuracy across a range of benchmarking datasets.
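For concreteness, the two steps can be sketched as below. `teacher_generate`, `matches_gold`, and `finetune_student` are hypothetical helpers standing in for teacher prompting (Section 3), answer checking, and standard seq2seq finetuning; filtering to examples whose final answer matches the gold target is consistent with the reduced dataset sizes in Table 1.

```python
# High-level sketch of the two-step distillation pipeline; the helper functions are
# hypothetical placeholders, not APIs from the paper.

def distill_cot(dataset, teacher_generate, matches_gold, finetune_student):
    # Step 1: annotate the existing dataset with teacher-generated chains of thought,
    # keeping only examples whose final answer agrees with the gold target.
    cot_data = []
    for question, answer in dataset:
        cot = teacher_generate(question, answer)
        if matches_gold(cot, answer):
            cot_data.append({"input": question, "target": cot})
    # Step 2: finetune the smaller student LM on the resulting CoT data.
    return finetune_student(cot_data)
```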
## 8 Limitations
The results we present must be viewed in the context of a few limitations. One limitation is that we only perform experiments in English and on one task at a time. To be more comparable to LLM few-shot settings, other languages and a multi-task setup could be explored. Furthermore, replicating the results requires access to non-public models, and inference must be performed on large amounts of data. Another limitation of our work is that it only explores the original CoT prompting approach; we do not explore subsequent improvements, such as self-consistency (Wang et al., 2022).
## 9 Ethical Considerations
The main ethical considerations of our research arise from the text generation performed. The concerns here are that both the teacher and student model may potentially generate non-factual
(Ji et al., 2022; Pagnoni et al., 2021; Kreps et al.,
2022) or offensive output (Gehman et al., 2020).
This is largely influenced by the input data, which in our case consists of standard, peer-reviewed benchmarking tasks in the NLP domain.
## References
BIG-bench collaboration. 2021. Beyond the imitation game: Measuring and extrapolating the capabilities of language models. *In preparation*.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al.
2022. Scaling instruction-finetuned language models. *arXiv preprint arXiv:2210.11416*.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.
Jacob Eisenstein, Daniel Andor, Bernd Bohnet, Michael Collins, and David Mimno. 2022. Honest students from untrusted teachers: Learning an interpretable question-answering pipeline from a pretrained language model. *arXiv preprint* arXiv:2210.02498.
Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A Smith. 2020. Realtoxicityprompts: Evaluating neural toxic degeneration in language models. *arXiv preprint arXiv:2009.11462*.
Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021a. Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies. *Transactions of the Association for Computational Linguistics*, 9:346–361.
Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021b. Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies. *Transactions of the Association for Computational Linguistics*, 9:346–361.
Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. 2015.
Distilling the knowledge in a neural network. *arXiv* preprint arXiv:1503.02531, 2(7).
Namgyu Ho, Laura Schmid, and Se-Young Yun.
2022. Large language models are reasoning teachers. *arXiv preprint arXiv:2212.10071*.
Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. 2022.
Large language models can self-improve. *arXiv* preprint arXiv:2210.11610.
Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Yejin Bang, Andrea Madotto, and Pascale Fung. 2022. Survey of hallucination in natural language generation. *ACM Computing Surveys*.
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi. 2016. Mawps:
A math word problem repository. In *Proceedings of*
the 2016 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 1152–1157.
Sarah Kreps, R Miles McCain, and Miles Brundage.
2022. All the news that's fit to fabricate: Aigenerated text as a tool of media misinformation.
Journal of Experimental Political Science, 9(1):104–
117.
Shiyang Li, Jianshu Chen, Yelong Shen, Zhiyu Chen, Xinlu Zhang, Zekun Li, Hong Wang, Jing Qian, Baolin Peng, Yi Mao, et al. 2022. Explanations from large language models make small reasoners better.
arXiv preprint arXiv:2210.06726.
Shen-Yun Miao, Chao-Chun Liang, and Keh-Yih Su.
2021. A diverse corpus for evaluating and developing english math word problem solvers. arXiv preprint arXiv:2106.15772.
Artidoro Pagnoni, Vidhisha Balachandran, and Yulia Tsvetkov. 2021. Understanding factuality in abstractive summarization with frank: A benchmark for factuality metrics. *arXiv preprint arXiv:2104.13346*.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67.
Kumar Shridhar, Alessandro Stolfo, and Mrinmaya Sachan. 2022. Distilling multi-step reasoning capabilities of large language models into smaller models via semantic decompositions. arXiv preprint arXiv:2212.00193.
Yi Tay, Mostafa Dehghani, Vinh Q Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, and Donald Metzler. 2022. Unifying language learning paradigms. *arXiv preprint* arXiv:2205.05131.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. *arXiv preprint arXiv:2203.11171*.
Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. *arXiv preprint* arXiv:2109.01652.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022.
Chain of thought prompting elicits reasoning in large language models. *arXiv preprint arXiv:2201.11903*.
Ronald J Williams and David Zipser. 1989. A learning algorithm for continually running fully recurrent neural networks. *Neural computation*, 1(2):270– 280.
## A Dataset Usage And Licenses
In this section, we list the licenses for the datasets used and any ethical concerns regarding their usage.
We describe the dataset splits used for all datasets in Section 4 of the paper.
## A.1 Arithmetic Reasoning
The GSM8K dataset (Cobbe et al., 2021) is available under the MIT license. The MAWPS dataset
(Koncel-Kedziorski et al., 2016) is available under the CC BY 4.0 and the ASDiv dataset (Miao et al., 2021) is available under the CC BY-NC 4.0 license.
We follow the intended usage of the datasets.
## A.2 Commonsense Reasoning
The StrategyQA dataset (Geva et al., 2021b) is available under the MIT license. Similar to Wei et al. (2022), we use the open-domain setting version available as part of the Big-bench collaboration (BIG-bench collaboration, 2021), available under the Apache License 2.0. We follow the intended usage of the datasets.
## A.3 Symbolic Reasoning
We generate the symbolic reasoning datasets as described in Wei et al. (2022).
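Since the generation procedure is only referenced here, the following is a minimal sketch of how the two synthetic tasks could be produced from their descriptions in Wei et al. (2022); the name list and question templates are placeholders rather than the ones actually used.

```python
import random

NAMES = ["Elon Musk", "Ada Lovelace", "Alan Turing", "Grace Hopper"]

def last_letter_concatenation(num_names: int):
    names = random.sample(NAMES, num_names)
    full = " ".join(names)
    question = f'Take the last letters of the words in "{full}" and concatenate them.'
    answer = "".join(word[-1] for word in full.split())
    return question, answer

def coin_flip(num_flips: int):
    heads_up = True
    steps = []
    for i in range(num_flips):
        flips = random.random() < 0.5
        steps.append(f"Person {i + 1} {'flips' if flips else 'does not flip'} the coin.")
        heads_up = heads_up != flips  # a flip toggles the coin's state
    question = "A coin is heads up. " + " ".join(steps) + " Is the coin still heads up?"
    return question, "yes" if heads_up else "no"

print(last_letter_concatenation(3))
print(coin_flip(4))
```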
## B Computational Resources
We perform inference and finetuning on different sizes of T5 on TPUs. We perform inference on PaLM 540B also on TPUs. Our results can be replicated via the public API (https://developers.generativeai.google/products/palm). To make requests to GPT-3 175B, we use the public API (https://beta.openai.com/docs/introduction).
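As a concrete illustration of the GPT-3 requests (a sketch only, using the pre-1.0 `openai` Python client; the model name and decoding parameters below are assumptions, not the settings used in the paper):

```python
import openai

openai.api_key = "YOUR_API_KEY"

def generate_cot(prompt: str) -> str:
    # Completion endpoint of the public OpenAI API; "text-davinci-002" is an assumed
    # GPT-3 variant chosen for illustration.
    response = openai.Completion.create(
        model="text-davinci-002",
        prompt=prompt,
        max_tokens=256,
        temperature=0.0,
    )
    return response["choices"][0]["text"]
```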
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 8
✓ A2. Did you discuss any potential risks of your work?
Section 9
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4
✓ B1. Did you cite the creators of artifacts you used?
Section 4
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Appendix A
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 4
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We did not discuss this as the datasets are commonly used NLP benchmarks that do not contain personal data.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
We discuss this in Section 8, the limitations section. We discuss the coverage of domains in Section 4.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
We discuss this in Section 4.
## C ✓ **Did You Run Computational Experiments?** Sections 4 And 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
We report the model specifics in section 4. We describe the computing infrastructure in Appendix 2, but do not estimate the computational budget.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Sections 4 and 5

C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
bhambhoria-etal-2023-simple | A Simple and Effective Framework for Strict Zero-Shot Hierarchical Classification | https://aclanthology.org/2023.acl-short.152 | In recent years, large language models (LLMs) have achieved strong performance on benchmark tasks, especially in zero or few-shot settings. However, these benchmarks often do not adequately address the challenges posed in the real-world, such as that of hierarchical classification. In order to address this challenge, we propose refactoring conventional tasks on hierarchical datasets into a more indicative long-tail prediction task. We observe LLMs are more prone to failure in these cases. To address these limitations, we propose the use of entailment-contradiction prediction in conjunction with LLMs, which allows for strong performance in a strict zero-shot setting. Importantly, our method does not require any parameter updates, a resource-intensive process and achieves strong performance across multiple datasets. | # A Simple And Effective Framework For Strict Zero-Shot Hierarchical
Classification

Rohan Bhambhoria∗1, Lei Chen2, Xiaodan Zhu1
1Department of Electrical and Computer Engineering & Ingenuity Labs Research Institute, Queen's University, Canada
2Rakuten Institute of Technology (RIT), Boston, MA
{r.bhambhoria,xiaodan.zhu}@queensu.ca, [email protected]
## Abstract
In recent years, large language models (LLMs)
have achieved strong performance on benchmark tasks, especially in zero or few-shot settings. However, these benchmarks often do not adequately address the challenges posed in the real world, such as that of hierarchical classification. In order to address this challenge, we propose refactoring conventional tasks on hierarchical datasets into a more indicative long-tail prediction task. We observe LLMs are more prone to failure in these cases. To address these limitations, we propose the use of entailment-contradiction prediction in conjunction with LLMs, which allows for strong performance in a strict zero-shot setting. Importantly, our method does not require any parameter updates, a resource-intensive process, and achieves strong performance across multiple datasets.
## 1 Introduction
Large language models (LLMs) with parameters in the order of billions (Brown et al., 2020) have gained significant attention in recent years due to their strong performance on a wide range of natural language processing tasks. These models have achieved impressive results on benchmarks
(Chowdhery et al., 2022), particularly in zero or few-shot settings, where they are able to generalize to new tasks and languages with little to no training data. There is, however, a difficulty in tuning the parameters of these large-scale models due to resource limitations. Additionally, the focus on benchmarks has led to the neglect of real-world challenges, such as that of hierarchical classification. As a result, the long-tail problem (Samuel et al., 2021) has been overlooked. This occurs when a vast number of rare classes appear alongside frequent classes in many natural language problems.
∗ This research was performed when the first author was a research intern at Rakuten Institute of Technology (RIT),
Boston.
In many industrial real-world applications, a strong-performing method for hierarchical classification can be of direct utility. New product categories are emerging in e-commerce platforms.
Existing categories, on the other hand, may not be very intuitive for customers. For example, upon browsing categories such as *night creams*, we may be unable to find a product in a sibling-node category of *creams*. This is further highlighted by platforms in which a systematic structure is not created for users; parent nodes may be in place of child nodes, and vice versa (Asghar, 2016). Manually categorizing product categories can be a costly redesigning endeavour. To tackle this problem, we suggest refactoring traditional hierarchical flat-labeled prediction tasks (Liu et al., 2021) into a more indicative long-tail prediction task. This involves structuring the classification task to closely reflect the real-world long-tail distribution of classes. In doing so, we are able to leverage LLMs for long-tail prediction tasks in a strict zero-shot classification setting. Through a series of experiments, we show that our proposed method is able to significantly improve the performance over the baseline on several datasets, and holds promise for addressing the long-tail problem in real-world applications. The contributions of this work can be summarized as follows:
- We refactor real-world hierarchical taxonomy datasets into long-tailed problems. In doing so, we create a strong testbed to evaluate "strict zero-shot classification" with LLMs.
- We explore utilizing LLMs to enhance the capabilities of entailment-contradiction predictors for long-tail classification. This results in strong capabilities of performing model inference without resource-intensive parameter updates.
- We show, through quantitative empirical evidence, that our proposed method is able to overcome the limitations of stand-alone large language models.
Our method obtains strong performance on long-tail classification tasks.
## 2 Background And Related Work

## Strict Zero-Shot Classification
Previous works (Liu et al., 2021; Yin et al., 2019)
have explored zero-shot classification extensively under two settings—(i) zero-shot, where testing labels are unseen, i.e. there is no overlap with the training labels, and (ii) generalized zero-shot, where testing labels are partially unseen. In both cases, the model is trained on data from the same distribution as the test data. In our proposed *strict* zero-shot setting, the model is only trained to learn the entailment relationship from natural language inference (NLI)
corpora (Williams et al., 2018). The training data for this model has no overlap with the distribution or semantics of the inference set. Additionally, previous works utilizing NLI have either not examined the utility of LLMs (Ye et al., 2020; Gera et al., 2022), or transfer the capabilities of LLMs to smaller models but have failed to use them in a strict zero-shot setting for long-tail problems, only demonstrating their utility for benchmark tasks (Tam et al., 2021; Schick and Schütze, 2021).
Works exploring LLMs have also limited their study to only using them independently without exploring entailment-contradiction prediction (Wei
et al., 2022; Brown et al., 2020).
## Long Tail Problem
Samuel et al. (2021); Zhang et al. (2022) highlight the significance of addressing the long-tail task.
Existing literature in natural language processing has focused on scenarios involving limited data availability, such as few-shot or low-resource settings. It has failed to adequately address the unique challenges presented by long-tail problems.
These problems arise when a small number of classes possess a large number of samples, while a large number of classes contain very few samples.
Previous works have not delved into the specific use of LLMs or entailment predictors.
## Hierarchical Classification
Many real-world problems contain taxonomy data structured in a hierarchical setting. Shown in Figure 2, most previous works make use of this data as a flat-label task (Kowsari et al., 2017; Zhou et al., 2020). It is however, non-trivial to create clean training data for taxonomies, which these methods rely on. This setting also combines parent and child nodes into a multi-label task, thereby increasing the complexity of the problem as siblings amongst leaf nodes are more diverse than parent nodes. Additionally, previous works do not make use of the natural entailment relations in hierarchies. Other works extenuate this problem by opting to utilize flat labels to produce graph representations (Wang et al., 2022a,b; Jiang et al., 2022; Chen et al., 2021).
For this reason, the graph representations may have limited value independently, although representations may be used to assist text classification by providing an organized label space. These works only introduce hierarchies to bring order to the label space, but overlook the original task of hierarchical taxonomy classification (Kowsari et al.,
2017). For all previous works, difficult-to-obtain fine-tuning data is required to produce strong signals. In our work, we refactor this data into a leaf-node label prediction task with the help of entailment-contradiction relations and LLMs. In doing so, we enable hierarchical taxonomy prediction independent of training data for the downstream task.
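The refactoring itself amounts to discarding internal nodes and keeping only leaf categories as prediction targets. A toy sketch follows, with a hypothetical taxonomy loosely based on the Amazon Beauty categories in Tables 4 and 5 (not the actual WOS or Amazon Beauty hierarchy):

```python
# Collect leaf-node labels from a parent -> children mapping; only these leaves are
# used as classification targets in the refactored long-tail task.

taxonomy = {
    "Hair Care": ["Styling Products", "Styling Tools"],
    "Styling Tools": ["Curlers", "Hairpieces"],
    "Skin Care": ["Face", "Body"],
}

def leaf_labels(taxonomy: dict) -> set:
    parents = set(taxonomy)
    children = {child for kids in taxonomy.values() for child in kids}
    return children - parents  # nodes that never act as a parent

print(sorted(leaf_labels(taxonomy)))
# ['Body', 'Curlers', 'Face', 'Hairpieces', 'Styling Products']
```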
## 3 Methodology
In this paper, we investigate the limitations of LLMs in three overlooked settings: (i) the model is not provided with sufficient examples due to the input text length, (ii) the label space includes tokens largely unobserved in the model's pretrained vocabulary, and (iii) there are a large number of distractors in the label space (Kojima et al., 2022; Min et al., 2022; Razeghi et al., 2022). These scenarios are common in real-world tasks but are often overlooked in the development and evaluation of LLMs. To address these challenges, we propose the use of entailment-contradiction prediction (Yin et al., 2019), the task of determining whether a premise logically entails or contradicts a hypothesis. Through our method, we are able to leverage the strong reasoning of Yin et al. (2019) with the retrieval abilities of LLMs (Wang et al., 2020) to improve overall performance in a strict zero-shot setting, where the model must classify samples from a new task without any fine-tuning or additional examples used for training from the same distribution as the inference dataset. Importantly, our proposed combined method does not require parameter updates to the LLM, a resource-intensive process that is not always feasible with increasingly large model sizes (Chowdhery et al., 2022).
Our simple framework is shown in Figure 1.
Considering the label space C = {C1, C2, ..., Cn} as the set of classes for any given dataset, and a text input X, we can utilize the entailment predictor E to make a *contradiction* or *entailment* prediction for each label. This is done by using X as the premise and "This text is related to Ci."
Figure 3: Web of Science (WOS) and Amazon Beauty datasets, refactored to a long-tail distribution. Maximum tree depth is shown for Amazon Beauty, which varies from 3-5. Leaf nodes are used in our method regardless of depth.
∀Ci ∈ C as the hypothesis (Yin et al., 2019). In our work, the premise may be modified to include the prompt template. The prediction E(X) lacks any external knowledge and is restricted to the label space, which may result in poor performance.
E(X) can, however, provide us with an implicit classification of the contradiction relation for sibling nodes. In our work, we use E(X) as an initializer for LLMs. We also regard it as a baseline, as it shows strong performance independently. An LLM, L, on the other hand, operates in an open space, with the aforementioned shortcomings for classification tasks. For our purposes, we can regard it as a noisy knowledge graph (Wang et al., 2020), which may be utilized to predict ancestors or descendants of the target class. We consider the prediction made by the LLM as L(X). It is important to note that L(X) may or may not belong to C. We try several prompts for this purpose, shown in Appendix A.

By combining these two components, we can create a template which utilizes the *entailment* relation explicitly and the *contradiction* relation implicitly by constructing L(E(X)) to disseminate the combined information into an entailment predictor for classification. The template we use is task-dependent and is generally robust given an understanding of the domain. On Web of Science we use: "Here is some text that entails E(X): X. What area is this text related to?". For Amazon Beauty, we use "Here is a review that entails E(X): X. What product category is this review related to?". In this setting, our method still meets a barrier due to limitations of LLMs. By constructing a composite function, E(L(E(X))), we are able to leverage our LLM to produce tokens which may steer the entailment predictor to correct its prediction. The template used for this composite function is "Here is some text that entails L(E(X)): X." across all datasets.
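The composition can be made concrete with off-the-shelf components. The sketch below implements E with Hugging Face's zero-shot-classification pipeline (BART-large-MNLI) and L with a text2text-generation pipeline; the prompt strings follow the templates above, while the smaller T0_3B checkpoint, the decoding settings, and taking only the top entailed label are assumptions made to keep the example lightweight.

```python
from transformers import pipeline

entailment = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
llm = pipeline("text2text-generation", model="bigscience/T0_3B")  # stand-in for T0pp

def E(text: str, labels: list) -> str:
    result = entailment(text, candidate_labels=labels,
                        hypothesis_template="This text is related to {}.")
    return result["labels"][0]  # highest-scoring (entailed) label

def composite(text: str, labels: list) -> str:
    e_x = E(text, labels)                                          # E(X)
    prompt = (f"Here is some text that entails {e_x}: {text}. "
              f"What area is this text related to?")               # WOS-style prompt for L
    l_e_x = llm(prompt, max_new_tokens=16)[0]["generated_text"]    # L(E(X))
    return E(f"Here is some text that entails {l_e_x}: {text}.", labels)  # E(L(E(X)))
```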
General Form: Although our results are reported combining the advantages of L and E up to the composite function E(L(E(X))), this can
| Model | WOS (Tree Depth = 2) Acc. | WOS (Tree Depth = 2) Mac. F1 | Amzn Beauty (Tree Depth = 2) Acc. | Amzn Beauty (Tree Depth = 2) Mac. F1 | Amzn Beauty (Tree Depth = 3, 4, 5) Acc. | Amzn Beauty (Tree Depth = 3, 4, 5) Mac. F1 |
|---|---|---|---|---|---|---|
| T0pp | 10.47 | 11.04 | 7.35 | 6.01 | 12.04 | 4.87 |
| BART-MNLI (Baseline) | 61.09 | 68.93 | 60.80 | 51.15 | 41.68 | 49.35 |
| T0pp + BART-MNLI | 20.64 | 24.01 | 37.24 | 24.38 | 23.47 | 18.01 |
| BART-MNLI + T0pp | 60.40 | 68.92 | 58.79 | 51.94 | 43.98 | 46.06 |
| BART-MNLI + T0pp (Primed) | 60.16 | 68.81 | 59.79 | 54.04 | 39.10 | 46.50 |
| BART-MNLI + T0pp (Primed+) | 61.78 | 69.48 | 64.25 | 52.84 | 40.79 | 49.96 |

Table 1: Aggregated accuracy and macro F1 on the WOS and Amazon Beauty datasets.
Figure 4: Results for Top-5 predictions on the WOS dataset. BART-MNLI + T0pp (Primed+) (**ours**) converges with the performance of BART-MNLI (**baseline**) at Top-4.
be extended, as it holds the property of being an iterative composition function, to E(L(E(L...E(X)))).
Our observations show that this setting yields comparable or only marginal improvements on our datasets.
However, it may prove to be beneficial in other tasks. We will investigate this direction in future work and urge other researchers to explore it as well.
## 4 Experiments And Results

## 4.1 Dataset And Experimental Settings
We refactor the widely used Web of Science (WOS) (Kowsari et al., 2017) and Amazon Beauty (McAuley et al., 2015) datasets to follow a class-wise long-tail distribution, as shown in Figure 3. Additionally, we create two variations of the Amazon Beauty dataset: the first contains the same tree depth as WOS, with both containing 3000 samples, and the second includes all classes at their maximum tree depth, containing 5000 samples. We select these datasets as they challenge the shortcomings of LLMs. Providing multiple abstracts as input text in the WOS dataset surpasses the maximum input token length of most transformer-based models. This makes it difficult for models to learn the input distribution, a requirement for showing strong in-context performance (Min et al., 2022). Next, many tokens in the label space of both the WOS and Amazon Beauty datasets rarely occur in pretraining corpora; details are provided in Appendix B.
Additionally, both datasets contain a large number of distractors, or incorrect classes, in the label space. Further details are provided in Appendix C.
All experiments are performed on a single NVIDIA Titan RTX GPU. We use BART-Large-MNLI with 407M parameters as our baseline. We use this model as it outperforms other architectures trained on MNLI for zero-shot classification. For our LLM, we opt to use T0pp (Sanh et al., 2022) with 11B parameters1, as previous works show that it matches or exceeds the performance of other LLMs such as GPT-3 (Brown et al., 2020) on benchmarks.
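The following sketch shows one way the 11B-parameter T0pp could be loaded for inference on a single GPU using 8-bit weights (via bitsandbytes/accelerate), in line with the int-8 quantization mentioned in Appendix C; this is an assumed setup, not necessarily the exact configuration used.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Assumed 8-bit loading of T0pp for single-GPU inference; requires the bitsandbytes
# and accelerate libraries.
tokenizer = AutoTokenizer.from_pretrained("bigscience/T0pp")
model = AutoModelForSeq2SeqLM.from_pretrained(
    "bigscience/T0pp", device_map="auto", load_in_8bit=True
)

def generate(prompt: str) -> str:
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=16)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```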
## 4.2 Results And Discussion
Results of our method are shown in Table 1. LLMs, due to their limitations, perform poorly as standalone models for long-tail classification. These results can be improved by priming the model with an entailment predictor through the use of a prompt.
The baseline shows strong performance independent of the LLM, as it operates on a closed label space. The capabilities of the baseline can be further enhanced by explicitly priming it with an entailment relation through an LLM. Rows in which T0pp is initialized, or primed, with E are indicated with *Primed*. Priming the model shows improvements across all datasets for macro F1. For accuracy, priming the model shows a benefit on two out of three datasets. In Figure 4, we show the results of Top-5 predictions for the WOS dataset.
All results are aggregated in Table 1. It is important to highlight that prompt variation led to stable results for our LLM. The variance upon utilizing BART-MNLI is negligible across prompts. The best results are observed up to Top-4 predictions on both accuracy and macro F1 for our method, when the entailment prompt is enhanced with a greater number of tokens corresponding to the output of L(E(X)). The variation between our method and the baseline is much greater for Top-1 predictions, but the Top-5 prediction variance is negligible. Detailed results for both depth settings of Amazon Beauty are shown in Appendix C.
## 5 Conclusion
In this work, we utilize an LLM in the form of a noisy knowledge graph to enhance the capabilities of an entailment predictor. In doing so, we achieve strong performance in a strict zero-shot setting on several hierarchical prediction tasks. We also show the necessity of refactoring existing hierarchical tasks into long-tail problems that may be more representative of the underlying task itself. The utility in practical industry settings is also highlighted through this setting.
## Limitations
In this work, we implicitly utilize the *contradiction* relation. The authors recognize that explicitly including it in a prompt template leads to worse performance due to the injection of noise. Controlled template generation based on model confidence is unexplored in this work and appears to be a promising direction. Additionally, we recognize the emergence of parameter-efficient methods for training models, which are unexplored in this work and may have utility. These methods are complementary and may benefit the performance of models, as they can be used in conjunction with training paradigms such as contrastive learning to support better representations through explicit utilization of the *contradiction* relation. In this work, we limit our study to draw attention to the importance of strict zero-shot classification settings with the emergence of LLMs.
Our study can be easily extended to operate recursively on large language models and entailment predictors. As we observe limited performance benefits in doing so, we conduct our study to show improvements after one complete cycle, given by E(L(E(X))) in Section 3.
## Ethics Statement
In this work, we propose a framework which allows for the usage of entailment-contradiction predictors in conjunction with large language models. In doing so, this framework operates in a strict zero-shot setting. While it is possible to tune prompts to select optimal variants through hard/soft prompt tuning strategies, this would require additional computational resources for LLMs with billions of parameters. Our study investigates the usage of LLMs given an understanding of the domain they tend to be used for (e.g., given an understanding of Amazon Beauty containing reviews, a prompt is constructed). Further explanation of prompt templates is contained in Appendix A. Due to the lack of tuning parameters in this work, large language models are largely dependent on pre-training data.
Although this can be controlled to some degree by introducing an entailment predictor with a fixed label space, the underlying model does not explicitly contain supervision signals without further training.
The framework proposed for inference in this work must hence be used cautiously for sensitive areas and topics.
## References
Nabiha Asghar. 2016. Yelp dataset challenge: Review rating prediction. *CoRR*, abs/1605.05362.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language Models are Few-Shot Learners. In *Advances in Neural Information Processing Systems*,
volume 33, pages 1877–1901. Curran Associates, Inc.
Haibin Chen, Qianli Ma, Zhenxi Lin, and Jiangyue Yan. 2021. Hierarchy-aware Label Semantics Matching Network for Hierarchical Text Classification. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4370–
4379, Online. Association for Computational Linguistics.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts,
Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways.
Ariel Gera, Alon Halfon, Eyal Shnarch, Yotam Perlitz, Liat Ein-Dor, and Noam Slonim. 2022.
Zero-Shot Text Classification with Self-Training.
ArXiv:2210.17541 [cs].
Ari Holtzman, Peter West, Vered Shwartz, Yejin Choi, and Luke Zettlemoyer. 2021. Surface Form Competition: Why the Highest Probability Answer Isn't Always Right. Technical Report arXiv:2104.08315, arXiv. ArXiv:2104.08315 [cs] type: article.
Ting Jiang, Deqing Wang, Leilei Sun, Zhongzhi Chen, Fuzhen Zhuang, and Qinghong Yang. 2022.
Exploiting Global and Local Hierarchies for Hierarchical Text Classification. Technical Report arXiv:2205.02613, arXiv. ArXiv:2205.02613 [cs]
type: article.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large Language Models are Zero-Shot Reasoners. Technical Report arXiv:2205.11916, arXiv. ArXiv:2205.11916
[cs] type: article.
Kamran Kowsari, Donald E Brown, Mojtaba Heidarysafa, Kiana Jafari Meimandi, , Matthew S Gerber, and Laura E Barnes. 2017. Hdltex: Hierarchical deep learning for text classification. In *Machine Learning* and Applications (ICMLA), 2017 16th IEEE International Conference on. IEEE.
Hui Liu, Danqing Zhang, Bing Yin, and Xiaodan Zhu.
2021. Improving Pretrained Models for Zero-shot Multi-label Text Classification through Reinforced Label Hierarchy Reasoning. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1051–1062, Online. Association for Computational Linguistics.
Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton van den Hengel. 2015. Image-based recommendations on styles and substitutes. In Proceedings
of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '15, page 43–52, New York, NY, USA. Association for Computing Machinery.
Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the role of demonstrations:
What makes in-context learning work? In *EMNLP*.
Yasaman Razeghi, Robert L. Logan IV, Matt Gardner, and Sameer Singh. 2022. Impact of Pretraining Term Frequencies on Few-Shot Reasoning. Technical Report arXiv:2202.07206, arXiv. ArXiv:2202.07206
[cs] type: article.
Dvir Samuel, Yuval Atzmon, and Gal Chechik. 2021.
From generalized zero-shot learning to long-tail with class descriptors. In 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 286–295, Waikoloa, HI, USA. IEEE.
Victor Sanh, Albert Webson, Colin Raffel, Stephen H.
Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, Manan Dey, M. Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Tali Bers, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M. Rush. 2022. Multitask Prompted Training Enables Zero-Shot Task Generalization. Technical Report arXiv:2110.08207, arXiv. ArXiv:2110.08207 [cs] type: article.
Timo Schick and Hinrich Schütze. 2021. It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners. Technical Report arXiv:2009.07118, arXiv. ArXiv:2009.07118 [cs]
type: article.
Derek Tam, Rakesh R. Menon, Mohit Bansal, Shashank Srivastava, and Colin Raffel. 2021. Improving and Simplifying Pattern Exploiting Training. Technical Report arXiv:2103.11955, arXiv. ArXiv:2103.11955 [cs] type: article.
Chenguang Wang, Xiao Liu, and Dawn Song. 2020.
Language models are open knowledge graphs. *CoRR*,
abs/2010.11967.
Zihan Wang, Peiyi Wang, Lianzhe Huang, Xin Sun, and Houfeng Wang. 2022a. Incorporating Hierarchy into Text Encoder: a Contrastive Learning Approach for Hierarchical Text Classification. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 7109–7119, Dublin, Ireland. Association for Computational Linguistics.
Zihan Wang, Peiyi Wang, Tianyu Liu, Binghuai Lin, Yunbo Cao, Zhifang Sui, and Houfeng Wang. 2022b.
Hpt: Hierarchy-aware prompt tuning for hierarchical text classification.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed H. Chi, Quoc Le, and Denny Zhou. 2022.
Chain of thought prompting elicits reasoning in large language models. *CoRR*, abs/2201.11903.
Adina Williams, Nikita Nangia, and Samuel Bowman.
2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.
Zhiquan Ye, Yuxia Geng, Jiaoyan Chen, Jingmin Chen, Xiaoxiao Xu, SuHang Zheng, Feng Wang, Jun Zhang, and Huajun Chen. 2020. Zero-shot text classification via reinforced self-training. In *Proceedings of the* 58th Annual Meeting of the Association for Computational Linguistics, pages 3014–3024, Online. Association for Computational Linguistics.
Wenpeng Yin, Jamaal Hay, and Dan Roth. 2019. Benchmarking Zero-shot Text Classification: Datasets, Evaluation and Entailment Approach. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3914–3923, Hong Kong, China. Association for Computational Linguistics.
Chen Zhang, Lei Ren, Jingang Wang, Wei Wu, and Dawei Song. 2022. Making Pre-trained Language Models Good Long-tailed Learners. Technical Report arXiv:2205.05461, arXiv. ArXiv:2205.05461
[cs] type: article.
Jie Zhou, Chunping Ma, Dingkun Long, Guangwei Xu, Ning Ding, Haoyu Zhang, Pengjun Xie, and Gongshen Liu. 2020. Hierarchy-Aware Global Model for Hierarchical Text Classification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1106–1117, Online. Association for Computational Linguistics.
## A Prompt Templates

In our work, we try various prompts for WOS and Amazon Beauty to initialize the LLM and for the entailment predictor. These prompts are shown in Table 2. Initializing prompts for L may show some variance in performance when utilized independently. The prompts used for obtaining L(E(X)) are generally robust given an understanding of the domain and show a marginal impact on the outcome upon variation. Prompts used for E(L(E(X))) have an insignificant impact on the outcome.

| Dataset | Prompt |
|---|---|
| WOS | What field is this passage related to? + X; What area is this text related to? + X; X + What area is this text related to?; What area is this text related to? + X |
| Amazon Beauty | Here is a review: + X + What product category is this review related to?; X + What product category is this text related to? |

Table 2: Example prompts used to initialize the LLM, L.

## B Statistics

We provide details of the distribution for the Web of Science dataset, with the head and tail of the distribution of class names and their respective value counts, in Table 3. We also provide the details of the class-wise distribution for the Amazon Beauty (Depth=2) and Amazon Beauty (Depth=3,4,5) datasets in Table 4 and Table 5, respectively. Towards the tail end of the distribution, we observe several tokens which may appear infrequently in most pretraining corpora, such as "Polycythemia Vera" for the WOS dataset. Updating the parameters of a model on data which is heavily skewed towards the tail of the distribution in the presence of frequently occurring labels can be problematic for language models. Our proposed method in this work is one solution towards this challenging task.

## C Detailed Results

We provide detailed results for Top-1, Top-3, and Top-5 accuracies and macro F1 scores in this section. The Web of Science dataset results are shown in Table 6. We observe that the accuracy is significantly higher for all of our methods over the baseline, BART-MNLI. The same trends are seen for macro F1 scores. In predicting Top-3 labels, only our Primed+ method shows an improvement in accuracy over the baseline. For macro F1, our method in the Top-3 category shows a slight improvement over the baseline. For Top-5 predictions on the WOS dataset, our method shows performance marginally below the baseline. Results for Amazon Beauty (Depth=2) are shown in Table 7. There is a large improvement in accuracy using our method on this dataset for Top-1, 3, and 5. For macro F1, our method performs marginally worse than the baseline for Top-1 predictions. Our method strongly outperforms the baseline by a large margin for Top-3 and Top-5 predictions on macro F1. The results for Amazon Beauty (Depth=3,4,5) are shown in Table 8. Our method improves upon the baseline for both accuracy and macro F1 for Top-1 predictions. For Top-3, our method has a significant improvement in accuracy, with comparable performance on macro F1. Our method has a large improvement on Top-5 scores for accuracy, and also improves upon the baseline macro F1.

With our dataset settings, we observe that the performance of using int-8 quantization is robust and matches that of bf-16/fp-32 for inference. These settings also provide us with stable results across prompts.

Previous works have performed parameter updates (Gera et al., 2022; Holtzman et al., 2021) to models to tackle the challenge of many distractors in the label space. This may be practically infeasible due to the compute requirements in the case of LLMs.

Diversity between category labels is an important factor we observe which contributes to the improvement in performance. Tables 3, 4, and 5 contain statistics for the labels used. We observed a significant drop in macro F1, shown in Table 1, for the Amazon Beauty dataset (Tree Depth=2) in contrast to WOS for the same models, due to the lack of diversity in several class names (e.g., "Bath" and "Bathing Accessories"). Similar trends were observed in Amazon Beauty (Tree Depth=3,4,5) for "Eau de Toilette" and "Eau de Parfum", both of which are perfumes.
| Class Name | Value Count |
|---------------------------|---------------|
| Polymerase chain reaction | 95 |
| Northern blotting | 88 |
| Molecular biology | 66 |
| Human Metabolism | 65 |
| Genetics | 62 |
| Stealth Technology | 2 |
| Voltage law | 1 |
| Healthy Sleep | 1 |
| Kidney Health | 1 |
| Polycythemia Vera | 1 |

Table 3: Head and tail of the class distribution for the WOS dataset.
| Class Name | Value Count |
|---------------------|---------------|
| Face | 1230 |
| Body | 344 |
| Styling Products | 298 |
| Women's | 289 |
| Styling Tools | 187 |
| Bags & Cases | 5 |
| Hair Loss Products | 5 |
| Bath | 3 |
| Bathing Accessories | 2 |
| Makeup Remover | 1 |

Table 4: Head and tail of the class distribution for the Amazon Beauty (Depth=2) dataset.
| Class Name | Value Count |
|-----------------|---------------|
| Lotions | 1188 |
| Eau de Toilette | 553 |
| Nail Polish | 405 |
| Eau de Parfum | 363 |
| Soaps | 231 |
| Shower Caps | 1 |
| Paraffin Baths | 1 |
| Hairpieces | 1 |
| Tote Bags | 1 |
| Curlers | 1 |

Table 5: Head and tail of the class distribution for the Amazon Beauty (Depth=3,4,5) dataset.
| Model | Top-1 Acc. | Top-1 Mac. F1 | Top-3 Acc. | Top-3 Mac. F1 | Top-5 Acc. | Top-5 Mac. F1 |
|---|---|---|---|---|---|---|
| T0pp | 5.46 | 5.66 | 11.26 | 12.25 | 14.70 | 15.23 |
| BART-MNLI | 48.10 | 51.49 | 64.73 | 75.77 | 70.46 | 79.69 |
| T0pp + BART-MNLI | 12.10 | 13.75 | 22.3 | 26.44 | 27.53 | 31.84 |
| BART-MNLI + T0pp | 48.16 | 52.16 | 63.80 | 75.40 | 69.26 | 79.20 |
| BART-MNLI + T0pp (Primed) | 48.69 | 52.78 | 63.60 | 75.29 | 68.20 | 78.37 |
| BART-MNLI + T0pp (Primed+) | 49.73 | 53.15 | 65.23 | 75.96 | 70.39 | 79.34 |

Table 6: Accuracy and Macro F1 results for Top-1, Top-3, and Top-5 predictions for the Web of Science dataset.
| Model | Top-1 Acc. | Top-1 Mac. F1 | Top-3 Acc. | Top-3 Mac. F1 | Top-5 Acc. | Top-5 Mac. F1 |
|---|---|---|---|---|---|---|
| T0pp | 3.99 | 2.58 | 7.48 | 7.08 | 10.57 | 8.37 |
| BART-MNLI | 34.40 | 25.10 | 68.54 | 60.15 | 79.45 | 68.21 |
| T0pp + BART-MNLI | 19.87 | 8.95 | 39.94 | 26.30 | 51.93 | 37.89 |
| BART-MNLI + T0pp | 33.36 | 24.84 | 61.12 | 58.63 | 81.90 | 72.34 |
| BART-MNLI + T0pp (Primed) | 41.22 | 24.30 | 61.46 | 60.22 | 76.70 | 77.59 |
| BART-MNLI + T0pp (Primed+) | 32.32 | 19.91 | 75.19 | 63.74 | 85.26 | 74.87 |

Table 7: Accuracy and Macro F1 results for Top-1, Top-3, and Top-5 predictions for the Amazon Beauty dataset (depth = 2).
| Model | Top-1 Acc. | Top-1 Mac. F1 | Top-3 Acc. | Top-3 Mac. F1 | Top-5 Acc. | Top-5 Mac. F1 |
|---|---|---|---|---|---|---|
| T0pp | 5.22 | 2.32 | 13.80 | 5.54 | 17.12 | 6.76 |
| BART-MNLI | 32.58 | 28.05 | 43.73 | 56.18 | 48.75 | 63.83 |
| T0pp + BART-MNLI | 12.49 | 6.99 | 26.26 | 20.64 | 31.67 | 26.41 |
| BART-MNLI + T0pp | 33.89 | 23.15 | 47.06 | 53.02 | 51.01 | 62.02 |
| BART-MNLI + T0pp (Primed) | 28.18 | 20.22 | 41.89 | 55.15 | 47.24 | 64.14 |
| BART-MNLI + T0pp (Primed+) | 23.92 | 29.70 | 46.43 | 56.07 | 52.02 | 64.11 |

Table 8: Accuracy and Macro F1 results for Top-1, Top-3, and Top-5 predictions for the Amazon Beauty dataset (depth = 3,4,5).
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations Section
✓ A2. Did you discuss any potential risks of your work?
Ethics Statement
✓ A3. Do the abstract and introduction summarize the paper's main claims?
At the end of the introduction section 1, we provided the paper's main claims. The abstract and introduction summarize them.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 And 4.
✓ B1. Did you cite the creators of artifacts you used?
Section 2, 3, 4.1
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Figure 3, Section 4.1
## C ✓ **Did You Run Computational Experiments?** Section 4.1.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 4.1.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Not applicable. Left blank.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Yes; Fig 3; Fig 4; Table 1; Section 4.2; Appendix A, B, C,
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Table 1 Caption

D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
zhang-etal-2023-simple | A Simple Concatenation can Effectively Improve Speech Translation | https://aclanthology.org/2023.acl-short.153 | A triple speech translation data comprises speech, transcription, and translation. In the end-to-end paradigm, text machine translation (MT) usually plays the role of a teacher model for the speech translation (ST) via knowledge distillation. Parameter sharing with the teacher is often adopted to construct the ST model architecture, however, the two modalities are independently fed and trained via different losses. This situation does not match ST{'}s properties across two modalities and also limits the upper bound of the performance. Inspired by the works of video Transformer, we propose a simple unified cross-modal ST method, which concatenates speech and text as the input, and builds a teacher that can utilize both cross-modal information simultaneously. Experimental results show that in our unified ST framework, models can effectively utilize the auxiliary information from speech and text, and achieve compelling results on MuST-C datasets. | # A Simple Concatenation Can Effectively Improve Speech Translation
Linlin Zhang and **Kai Fan**∗and **Boxing Chen** and **Luo Si**
Alibaba Group
{zll240651, k.fan, boxing.cbx, luo.si}@alibaba-inc.com
## Abstract
A triple speech translation data comprises speech, transcription, and translation. In the end-to-end paradigm, text machine translation
(MT) usually plays the role of a teacher model for the speech translation (ST) via knowledge distillation. Parameter sharing with the teacher is often adopted to construct the ST model architecture, however, the two modalities are independently fed and trained via different losses. This situation does not match ST's properties across two modalities and also limits the upper bound of the performance. Inspired by the works of video Transformer, we propose a simple unified cross-modal ST method, which concatenates speech and text as the input, and builds a teacher that can utilize both cross-modal information simultaneously. Experimental results show that in our unified ST
framework, models can effectively utilize the auxiliary information from speech and text, and achieve compelling results on MuST-C
datasets.
## 1 Introduction
Speech translation (ST) is the task that automatically translates a source acoustic speech signal into a text sequence in a target language. With the advance of Transformer, recent works on end-to-end speech translation (E2E ST) can alleviate many problems usually occurred in the cascade system and achieve comparable performance (Bahar et al., 2021; Bentivogli et al., 2021; Fang et al., 2022).
For the E2E ST model, MT is often used as the teacher of ST, and methods such as knowledge distillation or contrastive learning are used to bridge the modality gap. The MT teacher only uses the source text (transcription) information, and the speech and text modalities are consumed individually by the ST model. There are two main drawbacks. One is that the teacher MT model cannot use speech information, which limits the overall model performance. The other is that MT uses text input while ST uses speech input, and the two modalities are only brought closer indirectly; there is no unified module that can simultaneously use cross-modal information.

∗Corresponding author.
Here, we take a further step towards more effective use of both speech and transcription text in ST.
We are inspired by related work on video Transformers
(Kim et al., 2021), where concatenating video features with text embeddings better models the cross-modal information of the video. Analogously, we concatenate the preprocessed speech and the transcription text, and encode the information of both modalities simultaneously.
Following the recent popular advance in E2E ST
with knowledge distillation (KD) (Tang et al., 2021; Zhao et al., 2021), it provides a practical paradigm for transferring knowledge from rich-resource MT
task to limited resource ST task. However, we re-define the role of teacher in our framework, because the information of the two modalities can further improve the upper bound of model performance than the single modality. Our proposed model, a unified cross-modal concatenate ST structure (**uccST**) introduces the teacher-student learning with Kullback-Leibler divergence (KL) regularization to transfer knowledge from cross-modal translation model to two subtasks - ST and MT.
Our main contributions can be summarized as follows.
(1) Compared with the previous ST frameworks which can only utilize one single modality text in MT teacher, we design a unified framework that can use both input information of the two modalities simultaneously by concatenating speech and text.
(2) Our cross-modal framework supports three different inputs at inference time, comprising three decoding paths that span end-to-end and cascade inference. Our multi-task learning framework allows sub-tasks to collaborate, showing promising performance on both end-to-end and cascade ST.
(3) We conduct various experiments on the MuST-C corpus. When using the limited ternary
![1_image_0.png](1_image_0.png)
ST data, our E2E ST model can achieve state-of-the-art performance. When adding the external data, our method significantly improves over the strong baselines.
## 2 Unified Cross-Modal Concatenate ST

## 2.1 Background
Given the source acoustic speech sequence s, the corresponding transcription x and the text sequence y in target language, speech translation usually model the conditional distribution as follows.
$$p(\mathbf{y}|\mathbf{s})=\sum_{\mathbf{x}}p(\mathbf{y}|\mathbf{x},\mathbf{s})p(\mathbf{x}|\mathbf{s})\qquad{\mathrm{(1)}}$$
In most works, the assumption p(y|x) = p(y|x, s)
is usually adopted, since the source transcription is assumed to deterministically determine the final translation. However, we prefer to leverage the original conditional probability for our modeling.
## 2.2 Cross-Modal Concatenate Framework
Inspired by video Transformer, the unified model can take as input the concatenation of the features of two modalities along the temporal dimension.
As shown in Figure 1(b), the speech preprocessing module usually includes CNN down-sampling and a speech encoder, such as the encoder of the pre-trained ASR or the pre-trained audio encoder wav2vec2.0. For the text sequence, we simply process each token with an embedding layer. After the concatenation, we add the position embedding and segment embedding in the fashion of BERT.
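The following PyTorch sketch illustrates this input construction; the module and dimension choices (including placing the text segment before the speech segment, matching the [x, s] notation in Eq. (2)) are illustrative assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn as nn

class CrossModalConcatInput(nn.Module):
    """Sketch of the unified input construction described above: speech
    features and text embeddings are concatenated along the time axis, then
    position and segment embeddings are added in the fashion of BERT."""

    def __init__(self, d_model=512, vocab_size=10000, max_len=4096):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, d_model)
        self.pos_embed = nn.Embedding(max_len, d_model)
        self.seg_embed = nn.Embedding(2, d_model)  # 0 = speech, 1 = text

    def forward(self, speech_feats, text_ids):
        # speech_feats: (B, T_s, d) from the speech preprocessing module
        # (CNN down-sampling + speech encoder); text_ids: (B, T_x)
        text_feats = self.text_embed(text_ids)                 # (B, T_x, d)
        x = torch.cat([text_feats, speech_feats], dim=1)       # (B, T_x + T_s, d)
        positions = torch.arange(x.size(1), device=x.device)
        segments = torch.cat([
            torch.ones(text_feats.size(1), dtype=torch.long, device=x.device),
            torch.zeros(speech_feats.size(1), dtype=torch.long, device=x.device),
        ])
        return x + self.pos_embed(positions) + self.seg_embed(segments)
```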
## 2.2.1 Multi-Task Training
Concretely, given a ternary ST example (s, x, y), we optimize three translation tasks in parallel, including MT, ST and our introduced unified cross-modal translation.
$${\mathcal{L}}_{M T}=\log p({\bf y}|{\bf x})+\log p({\bf y}|{\bf s})+\log p({\bf y}|[{\bf x},{\bf s}])\tag{2}$$
where [·, ·] indicates the concatenation operation.
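A minimal sketch of Eq. (2), assuming a hypothetical model wrapper that accepts any of the three encoder inputs and returns decoder logits:

```python
import torch.nn.functional as F

def translation_nll(model, s_feats, x_ids, y_ids, pad_id=1):
    """Sketch of Eq. (2): the shared encoder-decoder scores the target y for
    (i) the text input x, (ii) the speech input s, and (iii) the concatenated
    input [x, s]. `model` is a hypothetical wrapper returning decoder logits
    of shape (B, T_y, V); teacher forcing and target shifting are omitted."""
    loss = 0.0
    for enc_input in (x_ids, s_feats, (x_ids, s_feats)):   # x, s, [x, s]
        logits = model(enc_input, y_ids)
        loss = loss + F.cross_entropy(logits.transpose(1, 2), y_ids,
                                      ignore_index=pad_id)
    return loss  # negative of the summed log-likelihoods in Eq. (2)
```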
## 2.2.2 Regularization
Unlike other ST frameworks, the unified cross-modal decoder output provides the teacher signal, and the ST and MT models are two students. We employ the Kullback–Leibler (KL) divergence to minimize the gap between the decoding distributions of the students and the teacher model.
$$\mathcal{L}_{KL}=\mathrm{KL}\left(p_{st}\,\|\,p_{\mathrm{unified}}\right)+\mathrm{KL}\left(p_{mt}\,\|\,p_{\mathrm{unified}}\right)\tag{3}$$
Further, we impose a representation regularization on the encoder output. Particularly, we apply the MSE loss.
$${\mathcal{L}}_{M S E}=\mathrm{MSE}\left(\left[Z_{S T},Z_{M T}\right],Z_{\mathrm{Unified}}\right)\tag{4}$$
| Model | En-De S | En-De S\|X | En-De X | En-Fr S | En-Fr S\|X | En-Fr X | En-Es S | En-Es S\|X | En-Es X | Paras |
|---|---|---|---|---|---|---|---|---|---|---|
| E2E baseline | 24.5 | - | - | 34.9 | - | - | 28.2 | - | - | 76M |
| Cascade | - | - | 25.4 | - | - | 35.7 | - | - | 28.9 | - |
| Dual-Decoder (Le et al., 2020) | 23.6 | - | - | 33.5 | - | - | 28.1 | - | - | - |
| Adapter Tuning (Le et al., 2021) | 24.6 | - | - | 34.7 | - | - | 28.7 | - | - | 78M |
| Multi-Decoder (Dalmia et al., 2021) | - | 26.3 | - | - | 37.0 | - | - | - | - | - |
| Bi KD (Inaguma et al., 2021) | 25.3 | - | - | - | - | - | - | - | - | - |
| mutual KL (Zhao et al., 2021) | - | - | - | 36.3 | - | - | 28.7 | - | - | 76M |
| No Uni baseline | 24.8 | - | 25.4 | 36.4 | - | 36.8 | 28.5 | - | 28.9 | 76M |
| Our uccST | 25.5 † | 26.3 | 25.7 | 36.6 † | 37.6 | 36.9 | 28.9 † | 29.7 | 29.2 | 76M |
Table 1: BLEU scores of the speech translation results on the tst_COMMON sets. The models are trained with the ternary ST data on constrained settings. †: the SOTA performance of all E2E methods. S indicates the ST decoding path. S|X indicates the unified decoding path with both speech and ASR transcribed text. X indicates the MT
decoding path with ASR transcribed text. No Uni baseline refers to Section 4.3.
| Model | En-De | En-Fr | Paras |
|-------------------------------|---------|---------|---------|
| JT-ST∗ (Tang et al., 2021) | 26.8 | 37.4 | 74M |
| E2E-ST-TDA∗ (Du et al., 2022) | 27.1 | 37.4 | 76M |
| Chimera (Han et al., 2021) | 26.3 | 35.6 | 165M |
| XSTNet (Ye et al., 2021) | 27.8 | 38.0 | 155M |
| SATE (Xu et al., 2021) | 28.1 | - | - |
| STEMM (Fang et al., 2022) | 28.7 | 37.4 | 155M |
| ConST (Ye et al., 2022) | 28.3 | 38.3 | 155M |
| W2V2 baseline | 27.3 | 36.8 | 155M |
| Our W2V2-uccST | 28.8 † | 39.1 † | 158M |
where we concatenate the encoder outputs of ST
and MT such that it results in the same length as the unified model.
## 2.2.3 Training And Inference
In summary, the final loss of the proposed uccST
can be written as follows.
$${\mathcal{L}}={\mathcal{L}}_{MT}+\lambda{\mathcal{L}}_{KL}+\eta{\mathcal{L}}_{MSE}\tag{5}$$

where λ and η are hyper-parameters. During inference, we have 3 optional decoding paths. If only audio is available, we can actually choose any decoding path. For the cross-modal unified or MT
decoding path, it requires the transcription from an additional ASR, which is commonly a pre-training step for ST.
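As a summary of Eqs. (2)–(5), a minimal PyTorch sketch of the overall objective could look as follows; it uses the λ = 1.0 and η = 0.3 values reported in the appendix as defaults, and detaching the unified teacher distribution is our assumption rather than a stated detail.

```python
import torch
import torch.nn.functional as F

def kl_to_teacher(p_student, p_teacher):
    # F.kl_div(input, target) computes KL(target || exp(input)), so passing
    # the teacher's log-probabilities as `input` and the student distribution
    # as `target` yields KL(p_student || p_teacher), as in Eq. (3).
    return F.kl_div(p_teacher.log(), p_student, reduction="batchmean")

def uccst_loss(nll, p_st, p_mt, p_uni, z_st, z_mt, z_uni, lam=1.0, eta=0.3):
    """`nll` is the summed translation loss of Eq. (2); p_* are decoder output
    distributions; z_* are encoder outputs (the [Z_ST, Z_MT] concatenation
    must match the length and ordering of the unified encoder output)."""
    teacher = p_uni.detach()                                             # our assumption
    kl = kl_to_teacher(p_st, teacher) + kl_to_teacher(p_mt, teacher)     # Eq. (3)
    mse = F.mse_loss(torch.cat([z_st, z_mt], dim=1), z_uni)              # Eq. (4)
    return nll + lam * kl + eta * mse                                    # Eq. (5)
```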
## 3 Experiments Settings

## 3.1 Datasets And Settings
Data For a fair comparison with previous works, we conduct our experiments on the widely used MuST-C V1: English-German (En–De), English-French (En–Fr) and English-Spanish (En–Es) corpus (Gangi et al., 2019).
On En-De and En-Fr, we also verify to what extent the auxiliary MT data can improve our multitask training. Specifically, we extract about 20M
sentence pairs for the WMT14 En-Fr, 4.5M for WMT14 En-De, and 18M for OpenSubtitles2018 En-De.
Settings We implement all our experiments on Fairseq1. We experiment with two architectures2.
One is a Transformer model with 512 hidden units and a 2048 feed-forward size, which is the same as Tang et al. (2021) and is intended for the constrained ST
data. The other one leverages pre-trained wav2vec2.0 (Baevski et al., 2020) as the speech preprocessing module. Since wav2vec2.0 has already been pre-trained on the audio data of Librispeech (Panayotov et al., 2015), we only compare this setup to other works with the same architecture.
During training, the text input is the ground-truth transcript of MuST-C. Note that the transcription data in Librispeech is not used in our case. We alternate batches between ST and MT with sampling ratios of 1.0 and 0.25, respectively.
| Model | En-De S | En-De S\|X | En-De X | En-Fr S | En-Fr S\|X | En-Fr X |
|---|---|---|---|---|---|---|
| E2E | 24.53 | - | - | 34.88 | - | - |
| No Uni | 24.83 | - | 25.36 | 36.36 | - | 36.77 |
| Uni sim | 25.17 | 25.74 | 25.53 | 36.39 | 37.12 | 36.86 |
| Ours | 25.54 | 26.32 | 25.65 | 36.61 | 37.64 | 36.94 |

Table 3: Ablation analysis of concatenation in the constrained setting. Uni sim: Unified simple.
1 https://github.com/pytorch/fairseq
2 https://github.com/pytorch/fairseq/tree/main/examples/speech_to_text
## 4 Experiments Results

## 4.1 Results On The Constrained ST Data
As shown in Table 1, our method achieves an appealing performance on the three language pairs in the restricted ternary MuST-C data.
Compared with the direct E2E ST baseline, our method improves by 0.7 to 1.7 BLEU on the three language directions, with an average gain of 1.13 BLEU. In short, our approach achieves SOTA translation performance among all end-to-end ST methods.
Compared with the cascade method that we have reproduced, our E2E ST decoding path surpasses the cascade on the language pair En-Fr, and reaches a comparable level on En-De and En-Es. The results of the MT decoding path with the transcription exceed the cascade method on all language pairs. Our cross-modal unified decoding method improves by 0.8 to 1.9 BLEU over the cascade method, with an average gain of 1.17 BLEU.
In summary, our E2E ST method has matched or surpassed the cascade method on the constrained triple ST data, and our cross-modal unified decoding method has exceeded the traditional cascade baseline.
## 4.2 Results On The External Data
Since our model is a multitask learning method that includes the MT subtask, we add additional MT data for comparison experiments. As shown in Table 2, we compare different baselines with similar data usage. Our E2E method (*i.e.*, ST decoding path) and the corresponding baselines are presented in the bottom two rows. The first two rows in the table are the baselines without wav2vec2.0, and the middle part of the table represents the methods with the wav2vec2.0 architecture. It is concluded that the pre-trained audio encoder model is indeed helpful for the downstream ST task. By introducing more auxiliary MT data, our model with pre-trained wav2vec2.0 improves by 1.5 and 2.3 BLEU on the two language pairs En-De and En-Fr, respectively. In short, our approach outperforms existing state-of-the-art models, especially on En-Fr.
## 4.3 Ablation Analysis Of Concatenation
In order to analyze whether our concatenation is effective, we have done comparative experiments on different input models. As shown in Table 3, the E2E
baseline corresponds to Figure 1(a). The No Unified baseline means removing part (b) of Figure 1, and the KL
| tst_COMMON | ST(BLEU) |
|----------------|------------|
| Our uccST | 25.54 |
| w/o KL | 25.12 |
| w/o MSE | 24.93 |
| w/o multi-task | 24.53 |
loss is calculated between ST and MT. The Unified simple model only concatenates the speech and text sequences from the corresponding encoder outputs. According to the results, both no concatenation and the concatenation method of the Unified simple model are inferior to our proposal.
## 4.4 Ablation Study On Loss
To analyze the importance of each component of the overall uccST loss, we conduct an ablation study by removing each loss term step by step. Table 4 summarizes the results. We first remove the KL loss but keep the unified structure; this shows that the KL terms contribute an improvement of 0.42 BLEU. After further removing the MSE loss, the model becomes a standard multi-task ST Transformer. When multi-task learning is also removed, it reduces to a standard E2E ST
model.
## 4.5 Comparison With The Cascaded Model
As shown in Table 5, our proposed E2E ST has reached a comparable level to cascaded methods, both in data-constrained and non-constrained cases.
As to the two decoding methods that require transcription text, our method can outperform the cascade baseline. Meanwhile, we can observe that with the additional external data, the gap between two inference setups S|X and S is narrowed.
## 5 Related Works
Cascade ST. Cascade ST system concatenates the individual ASR and MT components (Stentiford and Steer, 1988; Waibel et al., 1991), and represents an intuitive solution to achieve reasonable performance and high intelligibility. At the same time, this cascade method also faces some thorny problems: the traditional cascade method suffers from error propagation and the loss of acoustic information that might be useful to improve final translations. To alleviate the aforementioned problems, some tight integration methods have been proposed (Sperber et al., 2019; Bahar et al., 2020).
| Model | En-De ASR | En-De MT | En-De S | En-De S\|X | En-De X | En-Fr ASR | En-Fr MT | En-Fr S | En-Fr S\|X | En-Fr X |
|---|---|---|---|---|---|---|---|---|---|---|
| Cascade | 12.11 | 29.87 | - | - | 25.44 | 11.09 | 43.21 | - | - | 35.72 |
| Ours | - | - | 25.54 | 26.32 | 25.65 | - | - | 36.61 | 37.64 | 36.94 |
| Cascade(ext) | 9.85 | 33.66 | - | - | 28.97 | 9.76 | 46.13 | - | - | 39.16 |
| Ours(ext) | - | - | 28.82 | 29.03 | 28.95 | - | - | 39.11 | 39.32 | 39.26 |
End-to-end ST. To overcome the weaknesses of cascade models, Berard et al. (2016) proposed the first direct neural model, an encoder-decoder architecture without the intermediate transcription.
Currently, more effective solutions are used in end-to-end ST models (Park et al., 2019; Dong et al.,
2021). To alleviate the cross-modal difficulty in end-to-end models, two-pass methods (Kano et al., 2017; Anastasopoulos and Chiang, 2018) have been proposed. Curriculum learning (Kano et al., 2017; Wang et al., 2020) has also been proposed to improve the performance of ST models.
## 6 Conclusion
In this paper, we designed a unified ST framework.
Compared with previous ST frameworks, which can only utilize the single text modality in the MT teacher, our method uses the information of both modalities simultaneously by concatenating speech and text, and thus better utilizes cross-modal information. Experiments show that our method significantly improves ST performance, whether using only the limited ternary data or adding auxiliary external data.
## Limitations
A lot of recent work, especially in computer vision, has leveraged unsupervised methods or unpaired multi-modality data to pre-train cross-modal language models. Applying the same idea to speech language models has also been discussed in recent research. To compare fairly with previous works in the ST area, we do not build our model on top of such frameworks or discuss how to utilize raw audio. In terms of model training, the multiple tasks may affect each other due to uneven data distribution, and we have only scratched the surface of this part of the analysis.
## Ethics Statement
This work designs a unified cross-modal concatenate ST structure to take better advantage of the two modalities of speech and text. The datasets and pre-trained models we use are publicly available and are widely used in the research community, whether in a constrained or unconstrained situation.
## References
Antonios Anastasopoulos and David Chiang. 2018.
Tied multitask learning for neural speech translation.
In *Proceedings of the 2018 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA,
June 1-6, 2018, Volume 1 (Long Papers), pages 82–
91. Association for Computational Linguistics.
Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations.
In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Parnia Bahar, Tobias Bieschke, Ralf Schlüter, and Hermann Ney. 2021. Tight integrated end-to-end training for cascaded speech translation. In IEEE Spoken Language Technology Workshop, SLT 2021, Shenzhen, China, January 19-22, 2021, pages 950–957. IEEE.
Parnia Bahar, Patrick Wilken, Tamer Alkhouli, Andreas Guta, Pavel Golik, Evgeny Matusov, and Christian Herold. 2020. Start-before-end and end-to-end: Neural speech translation by apptek and RWTH aachen university. In *Proceedings of the 17th International* Conference on Spoken Language Translation, IWSLT
2020, Online, July 9 - 10, 2020, pages 44–54. Association for Computational Linguistics.
Luisa Bentivogli, Mauro Cettolo, Marco Gaido, Alina Karakanta, Alberto Martinelli, Matteo Negri, and Marco Turchi. 2021. Cascade versus direct speech translation: Do the differences still make a difference? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and
the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1:
Long Papers), Virtual Event, August 1-6, 2021, pages 2873–2887. Association for Computational Linguistics.
Alexandre Berard, Olivier Pietquin, Christophe Servan, and Laurent Besacier. 2016. Listen and translate: A
proof of concept for end-to-end speech-to-text translation. *CoRR*, abs/1612.01744.
Siddharth Dalmia, Brian Yan, Vikas Raunak, Florian Metze, and Shinji Watanabe. 2021. Searchable hidden intermediates for end-to-end models of decomposable sequence tasks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 1882–1896. Association for Computational Linguistics.
Qianqian Dong, Rong Ye, Mingxuan Wang, Hao Zhou, Shuang Xu, Bo Xu, and Lei Li. 2021. Listen, understand and translate: Triple supervision decouples end-to-end speech-to-text translation. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI
2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 12749–12759. AAAI Press.
Yichao Du, Zhirui Zhang, Weizhi Wang, Boxing Chen, Jun Xie, and Tong Xu. 2022. Regularizing end-toend speech translation with triangular decomposition agreement. In *Thirty-Sixth AAAI Conference* on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelveth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022 Virtual Event, February 22 - March 1, 2022, pages 10590–10598. AAAI Press.
Qingkai Fang, Rong Ye, Lei Li, Yang Feng, and Mingxuan Wang. 2022. STEMM: self-learning with speechtext manifold mixup for speech translation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 7050–7062. Association for Computational Linguistics.
Mattia Antonino Di Gangi, Roldano Cattoni, Luisa Bentivogli, Matteo Negri, and Marco Turchi. 2019. Mustc: a multilingual speech translation corpus. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, NAACLHLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 2012–2017.
Association for Computational Linguistics.
Chi Han, Mingxuan Wang, Heng Ji, and Lei Li. 2021.
Learning shared semantic space for speech-to-text
translation. In Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, Online Event, August 1-6, 2021, volume ACL/IJCNLP 2021 of *Findings of ACL*, pages 2214–2225. Association for Computational Linguistics.
Hirofumi Inaguma, Tatsuya Kawahara, and Shinji Watanabe. 2021. Source and target bidirectional knowledge distillation for end-to-end speech translation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 1872–1881. Association for Computational Linguistics.
Takatomo Kano, Sakriani Sakti, and Satoshi Nakamura.
2017. Structured-based curriculum learning for endto-end english-japanese speech translation. In *Interspeech 2017, 18th Annual Conference of the International Speech Communication Association, Stockholm, Sweden, August 20-24, 2017*, pages 2630–2634.
ISCA.
Wonjae Kim, Bokyung Son, and Ildoo Kim. 2021. Vilt:
Vision-and-language transformer without convolution or region supervision. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of *Proceedings of Machine Learning Research*,
pages 5583–5594. PMLR.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Hang Le, Juan Miguel Pino, Changhan Wang, Jiatao Gu, Didier Schwab, and Laurent Besacier.
2020. Dual-decoder transformer for joint automatic speech recognition and multilingual speech translation. In Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 3520–3533. International Committee on Computational Linguistics.
Hang Le, Juan Miguel Pino, Changhan Wang, Jiatao Gu, Didier Schwab, and Laurent Besacier. 2021.
Lightweight adapter tuning for multilingual speech translation. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on* Natural Language Processing, ACL/IJCNLP 2021,
(Volume 2: Short Papers), Virtual Event, August 1-6, 2021, pages 817–824. Association for Computational Linguistics.
Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: An asr corpus based on public domain audio books. In *2015 IEEE*
International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5206–5210.
Daniel S. Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin D. Cubuk, and Quoc V. Le.
2019. Specaugment: A simple data augmentation method for automatic speech recognition. In Interspeech 2019, 20th Annual Conference of the International Speech Communication Association, Graz, Austria, 15-19 September 2019, pages 2613–2617.
ISCA.
Matthias Sperber, Graham Neubig, Ngoc-Quan Pham, and Alex Waibel. 2019. Self-attentional models for lattice inputs. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 1185–1197.
Association for Computational Linguistics.
Fred WM Stentiford and Martin G Steer. 1988. Machine translation of speech. British Telecom technology journal, 6(2):116–122.
Yun Tang, Juan Miguel Pino, Xian Li, Changhan Wang, and Dmitriy Genzel. 2021. Improving speech translation by understanding and learning from the auxiliary text translation task. In *Proceedings of the 59th Annual Meeting of the Association for Computational* Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP
2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 4252–4261. Association for Computational Linguistics.
Alex Waibel, Ajay N. Jain, Arthur E. McNair, Hiroaki Saito, Alexander G. Hauptmann, and Joe Tebelskis.
1991. JANUS: a speech-to-speech translation system using connectionist and symbolic processing strategies. In 1991 International Conference on Acoustics, Speech, and Signal Processing, ICASSP '91, Toronto, Ontario, Canada, May 14-17, 1991, pages 793–796.
IEEE Computer Society.
Chengyi Wang, Yu Wu, Shujie Liu, Ming Zhou, and Zhenglu Yang. 2020. Curriculum pre-training for end-to-end speech translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 3728–3738. Association for Computational Linguistics.
Chen Xu, Bojie Hu, Yanyang Li, Yuhao Zhang, Shen Huang, Qi Ju, Tong Xiao, and Jingbo Zhu. 2021.
Stacked acoustic-and-textual encoding: Integrating the pre-trained models into speech translation encoders. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 2619–2630. Association for Computational Linguistics.
Rong Ye, Mingxuan Wang, and Lei Li. 2021. End-toend speech translation via cross-modal progressive training. In Interspeech 2021, 22nd Annual Conference of the International Speech Communication
Association, Brno, Czechia, 30 August - 3 September 2021, pages 2267–2271. ISCA.
Rong Ye, Mingxuan Wang, and Lei Li. 2022. Crossmodal contrastive learning for speech translation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 5099–5113. Association for Computational Linguistics.
Biao Zhang, Ivan Titov, Barry Haddow, and Rico Sennrich. 2020. Adaptive feature selection for end-toend speech translation. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of *Findings of ACL*, pages 2533–2544. Association for Computational Linguistics.
Jiawei Zhao, Wei Luo, Boxing Chen, and Andrew Gilman. 2021. Mutual-learning improves end-to-end speech translation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 3989–3994. Association for Computational Linguistics.
## A Appendix
Experience Settings The data statistics are shown in Table 6.
| corpus | ST(H/Sents) | MT(Sents) |
|----------|---------------|---------------|
| En-De | 408/234K | 22.5M(WMT+OS) |
| En-Fr | 492/280K | 20M(WMT) |
| En-Es | 504/270K | - |
Table 6: The statistics for the three language pairs.
H: Hours. Sents: Sentences. OS: OpenSubtitles2018.
WMT: WMT14.
We implement all our experiments on Fairseq3.
We experiment with two architectures4: a Transformer model with 512 hidden units and a 2048 feed-forward size. All ST and ASR models use the same encoder with 12 layers and 6 decoder layers. The corresponding MT model also has 6 encoder and decoder layers. We share parameters of all 6 text encoder Transformer layers with the top 6 Transformer layers in the speech encoder. Hence the preprocessed speech is composed of CNN layers and 6 Transformer layers. The model architecture is the same as Tang et al. (2021) when using constrained ST data.
When using the pre-trained wav2vec2.0
(Baevski et al., 2020) as the speech preprocessing module, we add two additional 1-dimensional convolutional layers to further shrink the audio, with kernel size 5, stride 2, padding 2, and hidden dimension 1024. We then stack our unified concatenate model on top.
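A minimal PyTorch sketch of this down-sampling block, where the input channel size of 1024 and the GELU activations are assumptions:

```python
import torch
import torch.nn as nn

# Two extra 1-D convolutional layers as described above (kernel size 5,
# stride 2, padding 2, hidden dimension 1024).
audio_shrink = nn.Sequential(
    nn.Conv1d(1024, 1024, kernel_size=5, stride=2, padding=2), nn.GELU(),
    nn.Conv1d(1024, 1024, kernel_size=5, stride=2, padding=2), nn.GELU(),
)

feats = torch.randn(8, 1024, 400)   # (batch, channels, frames) from wav2vec 2.0
print(audio_shrink(feats).shape)    # torch.Size([8, 1024, 100]) -- 4x shorter
```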
For all experiments on limited triple data, we used the Adam optimizer (Kingma and Ba, 2015)
with the learning rate 2e − 3. The dropout rate and the label smoothing are both set as 0.1. We choose λ1 = 1.0, λ2 = 1.0 and η = 0.3 in the training loss equation through grid search ([0.2, 1.5] for λ and [0.1, 0.5] for η).
For adding external corpus experiments, we finetune on the triple data with multi-task learning loss.
We select the alternative batches between ST and MT with sample ratios 1.0 and 0.25, respectively.
We randomly select 1M WMT14 and 1M OpenSubtitle18 as our fine-tune MT data on En-De. We randomly select 2M WMT14 on En-Fr. For all models at inference, we average 10 checkpoints with a beam size 5.
Limited ST Baselines We compare our method with various baseline models on constrained ST
situation:
- E2E ST baseline: The direct ST model translates the speech inputs to the target language text without transcription. The encoder of the E2E ST model is initialized by first training on the ASR data from the triple ST data.
- Cascade baseline: ASR and MT models are independently trained, and then the outputs of the ASR model are taken as the inputs to the MT model. The ASR model uses the same model settings as the corresponding ST model.
- AFS model: AFS model (Zhang et al., 2020)
inserts a module between the ST encoder and a pre-trained ASR encoder to filter speech features for translation. The AFS model is an end-to-end speech translation model.
- Dual-decoder model: Dual-decoder Transformer is an end-to-end ST architecture that jointly performs ASR and ST (Le et al., 2020).
The ASR and MT decoders use attention modules to exchange information with each other.
- Bi KD: Source and target bidirectional Knowledge Distillation (Inaguma et al., 2021).
- mutual KL: Bidirectional KL for ST and MT
(Zhao et al., 2021).
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
section Limitations
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
section abstract and introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Section Appendix
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
table 1 and table 2
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
section Appendix

C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
No response.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
section Appendix

## D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)?
No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
she-etal-2023-scone | {S}co{N}e: Benchmarking Negation Reasoning in Language Models With Fine-Tuning and In-Context Learning | https://aclanthology.org/2023.acl-short.154 | A number of recent benchmarks seek to assess how well models handle natural language negation. However, these benchmarks lack the controlled example paradigms that would allow us to infer whether a model had truly learned how negation morphemes semantically scope. To fill these analytical gaps, we present the Scoped Negation NLI (ScoNe-NLI) benchmark, which contains contrast sets of six examples with up to two negations where either zero, one, or both negative morphemes affect the NLI label. We use ScoNe-NLI to assess fine-tuning and in-context learning strategies. We find that RoBERTa and DeBERTa models solve ScoNe-NLI after many shot fine-tuning. For in-context learning, we test the latest InstructGPT models and find that most prompt strategies are not successful, including those using step-by-step reasoning. To better understand this result, we extend ScoNe with ScoNe-NLG, a sentence completion test set that embeds negation reasoning in short narratives. Here, InstructGPT is successful, which reveals the model can correctly reason about negation, but struggles to do so on NLI examples outside of its core pretraining regime. | # Scone: Benchmarking Negation Reasoning In Language Models With Fine-Tuning And In-Context Learning∗
Jingyuan Selena She
Haverford College
[email protected]

Samuel R. Bowman
New York University & Anthropic, PBC
[email protected]

Christopher Potts
Stanford University
[email protected]

Atticus Geiger
Stanford University
[email protected]
## Abstract
A number of recent benchmarks seek to assess how well models handle natural language negation. However, these benchmarks lack the controlled example paradigms that would allow us to infer whether a model had learned how negation morphemes semantically scope. To fill these analytical gaps, we present the Scoped Negation NLI (ScoNe-NLI) benchmark, which contains contrast sets of six examples with up to two negations where either zero, one, or both negative morphemes affect the NLI label. We use ScoNe-NLI to assess fine-tuning and in-context learning strategies. We find that RoBERTa and DeBERTa models solve ScoNeNLI after many shot fine-tuning. For in-context learning, we test InstructGPT models and find that most prompt strategies are not successful, including those using step-by-step reasoning. To better understand this result, we extend ScoNe with ScoNe-NLG, a sentence completion test set that embeds negation reasoning in short narratives. Here, InstructGPT is successful, which reveals the model can correctly reason about negation, but struggles to do so on prompt-adapted NLI examples outside of its core pretraining regime.
## 1 **Introduction**
Negation is a ubiquitous but complex linguistic phenomenon that poses a significant challenge for NLP systems. A diverse array of benchmarks focused on negation have appeared in recent years, many of which contain families of contrasting examples that provide a local view of the model decision boundary (Gardner et al., 2020). For instance, Cooper et al. (1996), McCoy and Linzen (2018),
Wang et al. (2019), Ettinger (2020), Hartmann et al.
(2021), and Kassner and Schütze (2020) all conduct evaluations with minimal pairs of examples that are identical except for a negative morpheme. These examples reveal whether the presence of negation has a causal impact on model predictions.
∗https://github.com/selenashe/ScoNe

However, negation is not simply present or absent in a sentence. Rather, negation morphemes are semantic operators that take scope in complex ways, as we see in clear contrasts like the person who was at the talk wasn't happy and the person who wasn't at the talk was happy. The recent CondaQA benchmark of Ravichander et al. (2022) includes minimal pairs aimed at determining whether models are sensitive to these differences in scope.
With the current paper, we seek to provide an even fuller picture of the complexities of negation and semantic scope. We introduce the Englishlanguage Scoped Negation Natural Language Inference Benchmark (ScoNe-NLI). ScoNe-NLI extends the negated portion of the Monotonicity NLI
dataset (Geiger et al., 2020) such that each of the 1,202 examples is now a contrast set with six examples in which zero, one, or two negations are present and each negation may or may not have a semantic scope such that the NLI label is impacted by its presence. These six conditions offer a rich picture of how negation affects NLI reasoning, and they allow us to determine whether models are truly able to handle nested negation and scope or whether they have found simplistic solutions.
We evaluate models on ScoNe-NLI using many-shot fine-tuning as well as a wide range of in-context learning strategies. For fine-tuning approaches, we find that RoBERTa and DeBERTa models both solve ScoNe-NLI. For in-context learning, we evaluate the latest InstructGPT model with a variety of prompt strategies. We find that these models perform well on sections of ScoNe-NLI where the negation morphemes can simply be ignored, but they systematically fail in conditions where exactly one negative morpheme has semantic scope such that its presence changes the NLI
label. In other words, these models fail to learn in context how negation actually takes scope.
To better understand this result, we introduce a sentence completion test set (ScoNe-NLG) containing examples that seem better aligned with what we can infer about the training data used for InstructGPT models.
| Split | Premise | Rel. | Hypothesis | Examples |
|---|---|---|---|---|
| No negation | The cowboy fell off a horse at the competition | ⊐ | The cowboy fell off a racehorse at the competition | 1,202 |
| One Not Scoped | The cowboy did not fear anything, until he fell off a horse at the competition | ⊐ | The cowboy did not fear anything, until he fell off a racehorse at the competition | 1,202 |
| Two Not Scoped | The cowboy, who was not very old, was not proud that he fell off a horse at the competition | ⊐ | The cowboy, who was not very old, was not proud that he fell off a racehorse at the competition | 1,202 |
| Two Scoped | There is no way that the cowboy did not fall off a horse at the competition | ⊐ | There is no way that the cowboy did not fall off a racehorse at the competition | 1,202 |
| One Scoped | The cowboy did not fall off a horse at the competition | ⊏ | The cowboy did not fall off a racehorse at the competition | 1,202 |
| One Scoped, One not Scoped | The cowboy did not fall off a horse, but the competition was not too important | ⊏ | The cowboy did not fall off a racehorse, but the competition was not too important | 1,202 |

(a) A six-example contrast set from ScoNe-NLI.

| Condition | Incomplete narrative |
|---|---|
| No Negation | Glen is a fan of learning math. When he sees that his new high school requires that he take a calculus course, he |
| Negation | Glen is not a fan of learning math. When he sees that his new high school requires that he take a calculus course, he |
| Non-Scoping Negation | Glen isn't just a fan of learning math, he's obsessive. When he sees that his new high school requires that he take a calculus course, he |

(b) A three-example contrast set from ScoNe-NLG.
Table 1: Two contrast sets from the ScoNe Benchmark.

In each ScoNe-NLG example, negation reasoning is needed to provide a coherent ending to an incomplete narrative (see Figure 1b).
ScoNe-NLG contains minimal triplets of examples where negation is absent, present with relevant scope, or present without relevant scope. InstructGPT is successful on ScoNe-NLG. When considered alongside our negative result for ScoNe-NLI,
this finding seems to show that these models can learn in-context about how negation takes scope, but only when the examples are hand-tailored to be aligned with the training data and aligned with known strengths of these models. Thus, when used together, ScoNe-NLI and ScoNe-NLG serve as a clear diagnostic for exploring useful prompting strategies and assessing the capacity of language models to reason about negation and scope.
## 2 **A Brief Review Of Negation In Nli** Benchmarks
A diverse array of benchmarks and diagnostic experiments have included negation reasoning in recent years (Nairn et al., 2006; McCoy and Linzen, 2018; Wang et al., 2019; Ettinger, 2020; Hartmann et al., 2021; Kassner and Schütze, 2020; Ravichander et al., 2022).
Hossain et al. (2022) analyze a variety of natural language understanding benchmarks and find that negation is underrepresented, and that when negation is present it often has no impact on the example label. Hossain et al. (2020) address this issue by manually adding negation to the premisehypothesis pairs in MNLI (Williams et al., 2018),
SNLI (Bowman et al., 2015), and RTE (Dagan et al., 2007; Cooper et al., 1996).
Yanaka et al. (2019a) introduce the crowdsourced MED dataset, which has many NLI examples where negation generates inferences. Monotonicity NLI (MoNLI; Geiger et al. 2020) consists of modified SNLI sentences that have gold labels impacted by lexical entailments in affirmative contexts (PMoNLI) and lexical entailments reversed by a negation (NMoNLI). BERT fine-tuned on SNLI and MNLI fails to generalize to both of these datasets, but succeeds with further fine-tuning on MED/MoNLI. Some automatically generated NLI
datasets also include negation reasoning (Geiger et al., 2019; Richardson et al., 2020; Yanaka et al., 2019b, 2021).
## 3 **ScoNe-NLI**
ScoNe-NLI is an extension of MoNLI (Geiger et al., 2020). MoNLI was generated by randomly selecting a sentence from SNLI and replacing a noun with a hypernym (more general term) or
| Fine-tuning Datasets | No Negation | One Not Scoped | Two Not Scoped | Two Scoped | One Scoped | One Scoped, One not Scoped |
|---|---|---|---|---|---|---|
| MAF-NLI | 82.0 | 86.0 | 81.5 | 91.0 | 5.0 | 5.0 |
| MAF-NLI + MoNLI (Geiger et al., 2020) | 96.2 | 87.5 | 99.5 | 8.9 | 100.0 | 100.0 |
| MAF-NLI + MED (Yanaka et al., 2020) | 84.8 | 83.5 | 82.0 | 58.9 | 99.5 | 97.0 |
| MAF-NLI + Neg-NLI (Hossain et al., 2020) | 91.3 | 88.5 | 83.0 | 70.4 | 37.0 | 29.0 |
| MAF-NLI + MoNLI + ScoNe-NLI | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |

Table 2: DeBERTa fine-tuning results on ScoNe-NLI. MAF-NLI stands for fine-tuning on MNLI, ANLI, and Fever-NLI.
| Conditional Q | Is it true that if Premise, then Hypothesis? |
|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------|
| Hypothesis Q | Assume that Premise. Is it then definitely true that Hypothesis? Answer yes or no. |
| Conditional | If Premise, then Hypothesis. Is this true? |
| Truth Brown et al. | P: Premise\n Q: Hypothesis\n Yes, No, or Maybe? |
| Structured | P: Premise\n H: Hypothesis\nL: |
| Reasoning | Logical and commonsense reasoning exam.\n\n Explain your reasoning in detail, then answer with Yes or No. Your answers should follow this 4-line format:\n\n Premise: <a tricky logical statement about the world>.\n Question: <question requiring logical deduction>.\n Reasoning: <an explanation of what you understand about the possible scenarios>\n Answer: <Yes or No>.\n\n Premise: Premise\n Question: Hypothesis\n Reasoning: Let's think logically step by step. The premise basically tells us that |
Table 3: Prompts used to adapt a 2-way NLI example
(Premise, **Hypothesis**). Newlines are indicated with \n.
Full prompts with few-shot variants are in Appendix E.
hyponym (less general term). The original and edited sentences are then used to form two premise–
hypothesis pairs, one with the label *entailment* and the other with the label *neutral*. In about half of the examples, this replacement is in an affirmative context with no negation (PMoNLI). In the other half, it is under the scope of a single negation
(NMoNLI).
The authors generated ScoNe-NLI by using each example of NMoNLI to create a contrast set of six examples where gold labels are impacted by the scope of zero, one, or two negations, as in Table 1.
To succeed across all sections of ScoNe, models need to attend to the presence of negation as well as the way it scopes semantically. Table 1a shows an actual example of how ScoNe extends MoNLI. We use the train–test split of MoNLI where substituted lexical items are disjoint across training and testing data. Appendix C provides further details.
Fine-Tuning on ScoNe-NLI We used publicly available weights on HuggingFace for the DeBERTa-v3-base models already fine-tuned on MNLI, Fever-NLI, and Adversarial-NLI (Laurer et al., 2022; He et al., 2021). Appendix B contains comparable results for the RoBERTa model (Liu et al., 2019). Fine-tuning results are in Table 2.
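A minimal fine-tuning sketch with the HuggingFace Trainer is shown below; the checkpoint identifier is assumed to be the released DeBERTa-v3-base model from Laurer et al. (2022) fine-tuned on MNLI, Fever-NLI, and ANLI, the hyperparameters are illustrative, and the one-example dataset is only a placeholder for the actual MoNLI + ScoNe-NLI training split.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

ckpt = "MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli"   # assumed identifier
tok = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt)

# Placeholder for the real MoNLI + ScoNe-NLI training split; check
# model.config.id2label for the index of the entailment class.
raw = Dataset.from_dict({
    "premise": ["The cowboy did not fall off a horse at the competition"],
    "hypothesis": ["The cowboy did not fall off a racehorse at the competition"],
    "label": [0],
})
train = raw.map(lambda ex: tok(ex["premise"], ex["hypothesis"], truncation=True))

args = TrainingArguments(output_dir="scone-deberta", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=train, tokenizer=tok).train()
```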
Fine-tuning on existing NLI datasets is insufficient for good performance on ScoNe-NLI:
DeBERTa-v3-base fine-tuned on existing NLI
datasets, even those that focus on negation, systematically fails. Thus, it seems that ScoNe-NLI
captures novel aspects of negation reasoning.
In contrast, fine-tuning on MoNLI and ScoNe-NLI training data results in near-perfect performance on ScoNe-NLI test data. This shows that DeBERTa can learn negation reasoning and generalize to new lexical items.
In-context Learning on ScoNe-NLI We evaluated InstructGPT using OpenAI's API with *text-davinci-002* and *text-davinci-003* engines and a temperature of 0.0 (Brown et al., 2020). We ask InstructGPT to infer NLI labels given the premise and hypothesis using prompts. All prompts are constructed such that if the response contains "yes"
(case-insensitive), then the label *entailment* is predicted, else the label *neutral* is predicted. We use six prompts (Table 3). For each prompt, we implemented both zero-shot and few-shot inference experiments. Appendix E provides the full prompts.
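A minimal sketch of this evaluation loop, using the legacy OpenAI Python client (openai<1.0) that exposed Completion.create; the prompt-template convention with {premise}/{hypothesis} placeholders is our own, and an API key must already be configured.

```python
import openai  # legacy client (openai<1.0); assumes openai.api_key is set

def predict_nli(premise, hypothesis, template):
    """Format one prompt, query the model, and map the response to a label
    exactly as described above: any case-insensitive "yes" -> entailment."""
    prompt = template.format(premise=premise, hypothesis=hypothesis)
    resp = openai.Completion.create(model="text-davinci-003", prompt=prompt,
                                    temperature=0.0, max_tokens=64)
    text = resp["choices"][0]["text"]
    return "entailment" if "yes" in text.lower() else "neutral"

# Example with the Conditional Q template (placeholders are our convention):
label = predict_nli(
    "The cowboy did not fall off a horse at the competition",
    "The cowboy did not fall off a racehorse at the competition",
    "Is it true that if {premise}, then {hypothesis}?")
```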
**InstructGPT makes systematic errors similar to a baseline that ignores negation entirely.** The
best results are for the few-shot reasoning prompt with *davinci-003*. While its overall accuracy of 82% may initially appear to be a success, further analysis reveals otherwise. InstructGPT succeeds only on the sections of ScoNe-NLI where zero or two negations take scope, namely, no negation (99%), one not scoped (97%), two not scoped
| | Prompt | No Negation | One Not Scoped | Two Not Scoped | Two Scoped | One Scoped | One Scoped, One not Scoped | Overall |
|---|---|---|---|---|---|---|---|---|
| Zero-shot | Structured | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 |
| | Brown et al. | 0.74 | 0.70 | 0.74 | 0.55 | 0.44 | 0.45 | 0.60 |
| | Conditional Q | 0.79 | 0.84 | 0.80 | 0.50 | 0.52 | 0.44 | 0.65 |
| | Conditional Truth | 0.98 | 0.86 | 0.80 | 0.43 | 0.66 | 0.47 | 0.70 |
| | Hypothesis Q | 0.69 | 0.90 | 0.70 | 0.51 | 0.62 | 0.42 | 0.64 |
| | Reasoning | 0.90 | 0.88 | 0.94 | 0.72 | 0.52 | 0.46 | 0.73 |
| Few-shot | Structured | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 |
| | Brown et al. | 0.86 | 0.66 | 0.80 | 0.83 | 0.36 | 0.28 | 0.63 |
| | Conditional Q | 0.92 | 0.85 | 0.90 | 0.62 | 0.34 | 0.34 | 0.66 |
| | Conditional Truth | 0.94 | 0.90 | 0.94 | 0.64 | 0.36 | 0.37 | 0.69 |
| | Hypothesis Q | 0.98 | 0.96 | 0.94 | 0.83 | 0.51 | 0.40 | 0.77 |
| | Reasoning | 0.99 | 0.97 | 0.98 | 0.89 | 0.69 | 0.43 | 0.82 |
| | Ignore-Negation | 1.00 | 1.00 | 1.00 | 1.00 | 0.00 | 0.00 | 0.66 |

Table 4: InstructGPT results on ScoNe-NLI for the prompts in Table 3, in zero-shot and few-shot settings. Ignore-Negation is an idealized baseline that ignores negation entirely.
| | No Negation | One Scoped | One Not Scoped | Overall |
|---|---|---|---|---|
| Zero-shot | 0.99 | 0.90 | 0.88 | 0.92 |
| Few-shot | 0.93 | 1.00 | 0.93 | 0.95 |
Table 5: Results for ScoNe-NLG using davinci-003.
The three conditions correspond to those of ScoNe and test the essential scope-taking properties of negation.
(98%), and two scoped (89%). InstructGPT performs much worse on sections where exactly one negation takes scope, namely one scoped (69%),
one scoped/one not (48%). An idealized baseline entirely ignoring the presence of negation (last row of Table 4) succeeds and fails on the same sections, indicating a systematic flaw in InstructGPT.
## 4 **ScoNe-NLG**
InstructGPT fails to reason about negation when given NLI examples that must be adapted to natural language generation (NLG) with prompts. We hypothesized that InstructGPT may correctly reason about negation when evaluated on examples hand-tailored to its pretraining objective, because there is no need for prompt engineering (Liu et al., 2021; Wei et al., 2022; Kojima et al., 2022).
Dataset ScoNe-NLG is a natural language generation dataset that contains 74 contrasting triplets of examples of half-completed naturalistic narratives that have different coherent completions depending on the presence and scope of a negation. InstructGPT fails on the sections of ScoNe-NLI whose examples have exactly one scoped negation, so we opt for contrast sets with three examples that require knowledge of a lexical entailment in an affirmative context without negation, an affirmative context with non-scoping negation, and a negative context with scoping negation, respectively. See Table 1b.
In-context Learning on ScoNe-NLG We used InstructGPT to complete the partial sentence inputs with the *text-davinci-003* engine (temperature of 0.0). In the zero-shot setting, the prompt consists of the ScoNe-NLG example. In the few-shot setting, four demonstrations from ScoNe-NLG are given: one with no negation, two with scoping negation, and one with non-scoping negation. See Appendix E.13 for the complete prompts.
To evaluate, the authors went through the responses by hand and determined whether the generated text is coherent and compatible with the initial narrative. The authors agreed on these annotations for 216/222 of the zero-shot responses with a Fleiss kappa of 0.84 and 220/222 of the few-shot responses with a Fleiss kappa of 0.91. These agreement rates are so high that we evaluate InstructGPT
only for the cases where the annotators agree. Here, InstructGPT is successful but not perfect, achieving 95% and 92% accuracy in the few and zero-shot settings, respectively. We do not observe the systematic failures seen on ScoNe-NLI.
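For completeness, a small sketch of how the reported inter-annotator agreement could be computed with statsmodels; the 0/1 annotation matrix here is placeholder data, not the authors' actual judgments.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# `annotations` is a hypothetical (n_items, n_annotators) matrix of binary
# judgments (1 = coherent and compatible completion, 0 = not) for the 222
# generated responses; the values below are placeholder data.
annotations = np.array([[1, 1], [1, 0], [0, 0]] * 74)
table, _ = aggregate_raters(annotations)      # items x categories counts
print(fleiss_kappa(table, method="fleiss"))
```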
![4_image_0.png](4_image_0.png)
(a) An interpretable program that solves ScoNe-NLI by computing two Boolean variables that encode whether the first and second negation scope and reversing entailment if exactly one is true.
![4_image_1.png](4_image_1.png)
(b) An interpretable program that solves ScoNe-NLI by counting the scoped negations and reversing entailment if there is exactly one.
IGNORE-SCOPE(p, h)
![4_image_2.png](4_image_2.png)
(c) A flawed heuristic program: we count the negations and reverse entailment if there is a single negation, which is equivalent to ignoring the scope of negation.
IGNORE-NEGATION(p, h)
![4_image_3.png](4_image_3.png)
1 *lexrel* ← GET-LEXREL(p, h)
2 **return** *lexrel*
(d) A flawed heuristic program for ScoNe-NLI that outputs the lexical relation and ignores negation entirely.
Figure 1: Four human-interpretable algorithms for ScoNe-NLI. The first two solve the task perfectly, and the other two implement flawed heuristics that a model might learn to implement. The function GET-LEXREL retrieves the relation between the aligned words in the premise and hypothesis, COUNT-SCOPED counts scoped negations, COUNT-NEG counts negations regardless of scope, and GET-FIRST returns true if the first negation scopes, while GET-SECOND returns true if there is a second negation and it scopes.
## 5 **Future Work On Interpretability**
ScoNe is based in naturalistic examples, but it also has a controlled structure that offers valuable opportunities to move beyond simple behavioral testing and more deeply understand how models solve tasks related to lexical entailment and negation.
The theory of causal abstraction provides a framework for interpretability (Geiger et al., 2023a), where a neural model can be understood to implement the intermediate variables and internal structure of a program or algorithm (Geiger et al., 2021, 2022; Wu et al., 2022b,a; Huang et al.,
2022; Geiger et al., 2023b). In fact, the MoNLI
dataset and the technique of interchange interventions (which is the primary technique in causal abstraction analysis) were jointly introduced in Geiger et al. 2020, where interchange interventions were used to investigate whether a BERT model implements a simple, human-interpretable algorithm that can perfectly label MoNLI using a variable representing lexical entailment and a variable representing the presence of negation.
With ScoNe, we can ask even deeper interpretability questions of this form. To encourage future work in this direction, we present a range of algorithmic solutions in Figure 1. Two of these solutions solve ScoNe and could perhaps explain neural models that learn the task perfectly, and two others implement flawed heuristics that could explain neural models with poor task performance.
Figure 1a and Figure 1b present two intuitive and correct algorithms that solve ScoNe, but have distinct intermediate variables and internal structure. The first computes two Booleans representing whether each negation scopes, and the second computes a count of how many negations scope.
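To make the contrast concrete, here is a minimal Python rendering of these two correct programs. The lexical relation and scope judgements are passed in as oracle inputs (in ScoNe-NLI they can be read off the annotations), and the function names are ours rather than a released implementation.

```python
ENTAILMENT, NEUTRAL = "entailment", "neutral"

def reverse(label):
    return NEUTRAL if label == ENTAILMENT else ENTAILMENT

def solve_with_booleans(lexrel, first_scopes, second_scopes):
    """Figure 1a: reverse the lexical entailment iff exactly one of the
    (up to two) negations takes scope over the replaced word."""
    return reverse(lexrel) if first_scopes != second_scopes else lexrel

def solve_with_count(lexrel, scoped_negation_count):
    """Figure 1b: count the scoped negations and reverse iff the count is 1."""
    return reverse(lexrel) if scoped_negation_count == 1 else lexrel

# "the man does not own a dog" -> "the man does not own a poodle":
# dog -> poodle is neutral without negation; one scoped negation flips it.
print(solve_with_booleans(NEUTRAL, first_scopes=True, second_scopes=False))
print(solve_with_count(NEUTRAL, scoped_negation_count=1))  # both: entailment
```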
Figure 1d is the flawed heuristic that ignores negation, which we discussed in Section 3 as a hypothesis about how models fail at our task. Figure 1c is a second flawed heuristic that counts the number of negations present but ignores scope.
Using the toolkit of causal abstraction, we can assess models not only behaviorally, but also evaluate whether they implement an interpretable algorithm.
The results of Geiger et al. (2023b) begin to show how such analyses could be extended to in-context learning with LLMs, as in Section 4.
## 6 **Conclusion**
We introduced ScoNe, a benchmark for fine-tuning and in-context learning experiments on negation. ScoNe is challenging for NLI models fine-tuned on other datasets, even those designed for negation reasoning, but a modest amount of fine-tuning on ScoNe leads to success. For in-context learning, we find that InstructGPT models fail dramatically on ScoNe. However, we also introduce ScoNe-NLG, which uses more narrative-like examples to probe models' capacity to handle negation, and show that InstructGPT is successful with zero-shot and few-shot prompts for this task. These results show that ScoNe supports fine-grained assessments of whether models can reason accurately about natural language negation, and our discussion in Section 5 suggests that ScoNe can be a powerful tool for discovering how models reason semantically.
## Limitations
We are releasing ScoNe as a diagnostic tool for conducting controlled scientific experiments. This is our primary intended use, and we advise against uncritical use of ScoNe for real-world applications, as we have not audited the dataset for such purposes.
As a diagnostic tool, ScoNe's primary limitation is its focus on English. Cross-linguistically, we find many strategies for expressing negation. The English-language strategy of using mostly adverbial modifiers for sentential negation is not the only one by any means, and we would expect to see quite different results for languages in which negation is expressed, for example, with verbal suffixes.
This highlights the value of potential future efforts extending ScoNe to other languages.
By the same token, we acknowledge that many linguistic phenomena interact with negation even within English. ScoNe is restricted to negation in the context of lexical entailment, and mostly uses "not" as the negative morpheme. This excludes a wide range of negation morphemes and negation strategies that ultimately need to be brought into the picture.
Finally, we note that there may be undesirable biases in ScoNe that could interact with biases in the models. ScoNe is in part derived from SNLI,
which is known to contain gaps, social biases, and artifacts (Poliak et al., 2018; McCoy et al., 2019; Belinkov et al., 2019; Gururangan et al., 2018; Tsuchiya, 2018), and ScoNe may inherit some of these.
## References
Yonatan Belinkov, Adam Poliak, Stuart Shieber, Benjamin Van Durme, and Alexander Rush. 2019. Don't take the premise for granted: Mitigating artifacts in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 877–891, Florence, Italy.
Association for Computational Linguistics.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference.
In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss,
Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In *Advances in Neural Information Processing Systems*,
volume 33, pages 1877–1901. Curran Associates, Inc.
Robin Cooper, Dick Crouch, Jan Van Eijck, Chris Fox, Johan Van Genabith, Jan Jaspars, Hans Kamp, David Milward, Manfred Pinkal, Massimo Poesio, et al.
1996. Using the framework. Technical report, LRE
62-051 D-16, The FraCaS Consortium.
Ido Dagan, Oren Glickman, and Bernardo Magnini.
2007. The pascal recognising textual entailment challenge. In *Machine Learning Challenges Workshop*.
Allyson Ettinger. 2020. What BERT Is Not: Lessons from a New Suite of Psycholinguistic Diagnostics for Language Models. *Transactions of the Association* for Computational Linguistics, 8:34–48.
Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hannaneh Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, and Ben Zhou.
2020. Evaluating models' local decision boundaries via contrast sets. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of *Findings of ACL*, pages 1307–1323. Association for Computational Linguistics.
Atticus Geiger, Ignacio Cases, Lauri Karttunen, and Christopher Potts. 2019. Posing fair generalization tasks for natural language inference. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 4475–4485, Stroudsburg, PA. Association for Computational Linguistics.
Atticus Geiger, Hanson Lu, Thomas Icard, and Christopher Potts. 2021. Causal abstractions of neural networks. In *Advances in Neural Information Processing Systems*, volume 34, pages 9574–9586.
Atticus Geiger, Chris Potts, and Thomas Icard. 2023a.
Causal abstraction for faithful interpretation of AI
models. ArXiv:2106.02997.
Atticus Geiger, Kyle Richardson, and Christopher Potts.
2020. Neural natural language inference models partially embed theories of lexical entailment and negation. In Proceedings of the Third BlackboxNLP
Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 163–173, Online. Association for Computational Linguistics.
Atticus Geiger, Zhengxuan Wu, Hanson Lu, Josh Rozner, Elisa Kreiss, Thomas Icard, Noah Goodman, and Christopher Potts. 2022. Inducing causal structure for interpretable neural networks. In *Proceedings of the 39th International Conference on Machine* Learning, volume 162 of *Proceedings of Machine* Learning Research, pages 7324–7338. PMLR.
Atticus Geiger, Zhengxuan Wu, Christopher Potts, Thomas Icard, and Noah D. Goodman. 2023b. Finding alignments between interpretable causal variables and distributed neural representations. Ms., Stanford University.
Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith.
2018. Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107–112, New Orleans, Louisiana. Association for Computational Linguistics.
Mareike Hartmann, Miryam de Lhoneux, Daniel Hershcovich, Yova Kementchedjhieva, Lukas Nielsen, Chen Qiu, and Anders Søgaard. 2021. A multilingual benchmark for probing negation-awareness with minimal pairs. In *Proceedings of the 25th Conference on* Computational Natural Language Learning, pages 244–257, Online. Association for Computational Linguistics.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. Deberta: Decoding-enhanced bert with disentangled attention. In *International* Conference on Learning Representations.
Md Mosharaf Hossain, Dhivya Chinnappa, and Eduardo Blanco. 2022. An analysis of negation in natural language understanding corpora. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 716–723, Dublin, Ireland. Association for Computational Linguistics.
Md Mosharaf Hossain, Venelin Kovatchev, Pranoy Dutta, Tiffany Kao, Elizabeth Wei, and Eduardo Blanco. 2020. An analysis of natural language inference benchmarks through the lens of negation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 9106–9118, Online. Association for Computational Linguistics.
Jing Huang, Zhengxuan Wu, Kyle Mahowald, and Christopher Potts. 2022. Inducing character-level structure in subword-based language models with Type-level Interchange Intervention Training. Ms.,
Stanford University and UT Austin.
Nora Kassner and Hinrich Schütze. 2020. Negated and misprimed probes for pretrained language models:
Birds can talk, but cannot fly. In Proceedings of the
58th Annual Meeting of the Association for Computational Linguistics, pages 7811–7818, Online. Association for Computational Linguistics.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. *ArXiv*,
abs/2205.11916.
Moritz Laurer, Wouter van Atteveldt, Andreu Casas, and Kasper Welbers. 2022. Less annotating, more classifying - addressing the data scarcity issue of supervised machine learning with deep transfer learning and bert-nli.
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing.
ACM Computing Surveys (CSUR).
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
R. Thomas McCoy and Tal Linzen. 2018. Non-entailed subsequences as a challenge for natural language inference. *CoRR*, abs/1811.12112.
Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In *Proceedings of* the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428–3448, Florence, Italy. Association for Computational Linguistics.
Rowan Nairn, Cleo Condoravdi, and Lauri Karttunen.
2006. Computing relative polarity for textual inference. In *Proceedings of the Fifth International* Workshop on Inference in Computational Semantics
(ICoS-5).
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch:
An imperative style, high-performance deep learning library. In *Advances in Neural Information Processing Systems 32*, pages 8024–8035. Curran Associates, Inc.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*,
12:2825–2830.
Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018.
Hypothesis only baselines in natural language inference. In *Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics*, pages 180–191, New Orleans, Louisiana. Association for Computational Linguistics.
Abhilasha Ravichander, Matt Gardner, and Ana Marasović. 2022. CondaQA: A contrastive reading comprehension dataset for reasoning about negation.
Kyle Richardson, Hai Hu, Lawrence S. Moss, and Ashish Sabharwal. 2020. Probing natural language inference models through semantic fragments. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI
2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8713–
8721. AAAI Press.
Masatoshi Tsuchiya. 2018. Performance impact caused by hidden bias of training data for recognizing textual entailment. In *Proceedings of the Eleventh International Conference on Language Resources and* Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019.
Glue: A multi-task benchmark and analysis platform for natural language understanding. In *7th International Conference on Learning Representations,*
ICLR 2019, New Orleans, LA, USA, May 6-9, 2019.
OpenReview.net.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems.
Adina Williams, Nikita Nangia, and Samuel Bowman.
2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.
Zhengxuan Wu, Karel D'Oosterlinck, Atticus Geiger, Amir Zur, and Christopher Potts. 2022a. Causal Proxy Models for concept-based model explanations.
ArXiv:2209.14279.
Zhengxuan Wu, Atticus Geiger, Joshua Rozner, Elisa Kreiss, Hanson Lu, Thomas Icard, Christopher Potts, and Noah Goodman. 2022b. Causal distillation for language models. In *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human*
Language Technologies, pages 4288–4295, Seattle, United States. Association for Computational Linguistics.
Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, and Kentaro Inui. 2020. Do neural models learn systematicity of monotonicity inference in natural language?
In *Annual Meeting of the Association for Computational Linguistics*.
Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, and Johan Bos. 2019a. Can neural networks understand monotonicity reasoning? In *Proceedings of the 2019* ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 31–40, Florence, Italy. Association for Computational Linguistics.
Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, and Johan Bos. 2019b. HELP: A dataset for identifying shortcomings of neural models in monotonicity reasoning. In *Proceedings of the Eighth Joint* Conference on Lexical and Computational Semantics
(*SEM 2019), pages 250–255, Minneapolis, Minnesota. Association for Computational Linguistics.
Hitomi Yanaka, Koji Mineshima, and Kentaro Inui.
2021. SyGNS: A systematic generalization testbed based on natural language semantics. In *Findings of* the Association for Computational Linguistics: ACLIJCNLP 2021, pages 103–119, Online. Association for Computational Linguistics.
## Appendices

## A **Experimental Details**

## A.1 **Fine-Tuning Protocol**
For our fine-tuning experiments, we used a learning rate of 1e-5, a batch size of 4, and 6 gradient accumulation steps, for a total of 10 epochs. We used these default hyperparameters as they were successful in fine-tuning on ScoNe. We implemented these experiments with PyTorch (Paszke et al., 2019) and used the scikit-learn package (Pedregosa et al., 2011).
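The sketch below shows one way to realise this protocol in PyTorch, assuming a Hugging Face sequence-classification checkpoint (here roberta-large-mnli, one of the models listed in Appendix A.2) and a DataLoader over already-tokenised premise-hypothesis pairs; the data handling is an assumption, not the original code.

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification

MODEL_NAME = "roberta-large-mnli"
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

ACCUM_STEPS, EPOCHS = 6, 10   # gradient accumulation of 6, 10 epochs

def fine_tune(train_dataset):
    """train_dataset is assumed to yield dicts of input_ids, attention_mask, labels."""
    loader = DataLoader(train_dataset, batch_size=4, shuffle=True)
    model.train()
    for _ in range(EPOCHS):
        optimizer.zero_grad()
        for step, batch in enumerate(loader):
            loss = model(input_ids=batch["input_ids"],
                         attention_mask=batch["attention_mask"],
                         labels=batch["labels"]).loss
            # Average gradients over the accumulated mini-batches
            # before each optimiser update.
            (loss / ACCUM_STEPS).backward()
            if (step + 1) % ACCUM_STEPS == 0:
                optimizer.step()
                optimizer.zero_grad()
```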
## A.2 **Hugging Face Models**
We test RoBERTa and DeBERTa in these experiments. We used the roberta-large model fine-tuned on MNLI, with 354 million parameters, trained for 500K steps on 1,024 V100 GPUs (Liu et al., 2019). The DeBERTa-v3-base-mnli-fever-anli model was fine-tuned on MNLI, Fever-NLI, and ANLI. RoBERTa weights: https://huggingface.co/roberta-large-mnli; DeBERTa weights: https://huggingface.co/MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli
## A.3 **Fine-Tuning Datasets**
We further fine-tuned our model on the datasets MoNLI, Negation-NLI, and MED.
## B **RoBERTa Results**

| Fine-tuning Datasets | No Negation | One Not Scoped | Two Not Scoped | Two Scoped | One Scoped | One Scoped, One Not Scoped |
|---|---|---|---|---|---|---|
| MAF-NLI | 96.5 | 97.0 | 97.0 | 96.5 | 3.0 | 5.0 |
| MAF-NLI + MoNLI (Geiger et al., 2020) | 85.4 | 100.0 | 100.0 | 4.5 | 100.0 | 100.0 |
| MAF-NLI + MED (Yanaka et al., 2020) | 85.1 | 92.0 | 89.5 | 44.6 | 85.5 | 81.5 |
| MAF-NLI + Neg-NLI (Hossain et al., 2020) | 93.1 | 97.5 | 93.0 | 73.2 | 20.5 | 17.5 |
| MAF-NLI + MoNLI + ScoNe-NLI | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |

Table 6: RoBERTa fine-tuning results on ScoNe-NLI. MAF-NLI stands for MNLI, ANLI, and Fever-NLI.
## C **ScoNe Dataset Details**
For some examples, we modified the lexical items that were replaced. Consider the NMoNLI sentence pair 'a man is not tossing anything'-'a man is not tossing socks' (entailment), and the non-scoping counterpart 'a man not here is tossing something'-'a man not here is tossing socks' (neutral). Here, 'anything' must be replaced by 'something'. The positive and negative examples in MoNLI *do not* come in minimal pairs, so the examples in ScoNe-NLI with no negation are not from PMoNLI.
## D **Prompting Methods**
The experimental runs reported in the paper were conducted on January 11, 2023. We used InstructGPT models with 1.3 billion and 6 billion parameters. The exact cost of constructing the InstructGPT models is not public, but the pre-training protocol involves (1) fine-tuning a GPT-3 model on an instruction-following dataset, (2) fine-tuning a GPT-3 model to rank different answers to the instruction-following dataset, and (3) using reinforcement learning to combine these two models. We use a temperature parameter of 0.0 for all experiments. If the response contains "yes" (case-insensitive), then we infer the label entailment; otherwise we infer neutral. Across experiments, the only thing that varies is the nature of the prompt function.
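A sketch of this evaluation loop is shown below. It assumes the legacy openai completion client of that era; the max_tokens value and the example prompt are illustrative, while the temperature setting and the yes/no label rule follow the description above.

```python
import openai  # legacy (pre-1.0) completion-style client; set openai.api_key first

def query(prompt, engine="text-davinci-003", max_tokens=64):
    response = openai.Completion.create(
        engine=engine,
        prompt=prompt,
        temperature=0.0,   # deterministic decoding for all experiments
        max_tokens=max_tokens,
    )
    return response["choices"][0]["text"]

def predict_label(prompt):
    # Infer entailment if the response contains "yes" (case-insensitive),
    # otherwise neutral.
    return "entailment" if "yes" in query(prompt).lower() else "neutral"

# Example with the zero-shot hypothesis-question prompt of Appendix E.3.
print(predict_label("Assume that we didn't eat pizza. Is it then definitely "
                    "true that we didn't eat food? Answer Yes or No."))
```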
## E **In-Context Learning Prompts**
We have indicated all actual newlines with \n. The newlines in the formatting are just to make them intuitive to read.
## E.1 **Conditional Question Prompt**

Prompt example
Is it true that if we didn't eat pizza, then we didn't eat food?
## E.2 **Few-Shot Conditional Question Prompt**

Prompt example
Q1: Is it true that if a not so tall person reading a paper is not currently sitting inside a building, then a not so tall person reading a paper is not currently sitting inside a club?\n A1: Yes\n
\n Q2: Is it true that if the man does not own a dog and does not own a cat, then the man does not own a retriever and does not own a cat?\n A2: Yes\n
\n Q3: Is it true that if a not so tall person reading a paper is not currently sitting inside a cabin, then a not so tall person reading a paper is not currently sitting inside a building?\n A3: Maybe\n
\n Q4: Is it true that if a not so tall person reading a paper is not currently sitting inside a casino, then a not so tall person reading a paper is not currently sitting inside a building? A4: Maybe\n
\n Q: Is it true that if we didn't eat pizza, then we didn't eat food?\n A:
## E.3 **Hypothesis Question Prompt**
Prompt example
Assume that we didn't eat pizza. Is it then definitely true that we didn't eat food? Answer Yes or No.
## E.4 **Few-Shot Hypothesis Question Prompt**

Prompt example
Q1: Assume that a not so tall person reading a paper is not currently sitting inside a building. Is it then definitely true that a not so tall person reading a paper is not currently sitting inside a casino?
Answer Yes or No.\n A1: Yes\n
\n Q2: Assume that the girl will not get a stuffed dog as a gift, but not because she failed the exam. Is it then definitely true that the girl will not get a stuffed pinscher as a gift, but not because she failed the exam? Answer Yes or No.\n A2: Yes\n
\n Q3: Assume that the girl will not get a stuffed shetland as a gift, but not because she failed the exam. Is it then definitely true that the girl will not get a stuffed dog as a gift, but not because she failed the exam? Answer Yes or No.\n A3: No\n
\n Q4: Assume that a not so tall person reading a paper is not currently sitting inside a monastery. Is it then definitely true that a not so tall person reading a paper is not currently sitting inside a building?
Answer Yes or No.\n A4: No\n
\n Q: Assume that we didn't eat pizza. Is it then definitely true that we didn't eat food? Answer Yes or No.\n A:
## E.5 **Conditional Truth Evaluation Prompt**

Prompt example
If we didn't eat pizza, then we didn't eat food. Is this true?
## E.6 **Few-Shot Conditional Truth Evaluation Prompt**
Prompt example
C1: If the man does not own a dog and does not own a cat, then the man does not own a shetland and does not own a cat. Is this true?\n A1: Yes\n
\n C2: If a not so tall person reading a paper is not currently sitting inside a building, then a not so tall person reading a paper is not currently sitting inside a house. Is this true?\n A2: Yes\n
\n C3: If the man does not own a collie and does not own a cat, then the man does not own a dog and does not own a cat. Is this true?\n A3: Maybe\n
\n C4: If the man does not own a corgi and does not own a cat, then the man does not own a dog and does not own a cat. Is this true?\n A4: Maybe\n
\n C:If we didn't eat pizza, then we didn't eat food. Is this true?\n A:
## E.7 **Brown Et Al Style Prompt**

Prompt example
C: We didn't eat pizza\n Q: We didn't eat food. Yes, No, or Maybe?
## E.8 **Few-Shot Brown Et Al Style Prompt**
Prompt example
C1: The man, who's eyes are not open, is not steering a car.\n Q1: The man, who's eyes are not open, is not steering a sedan. Yes, No, or Maybe?\n A2: Yes\n
\n C2: A dog not on the playground did not catch any ball.\n Q2: A dog not on the playground did not catch any volleyball. Yes, No, or Maybe?\n A3: Yes\n
\n C3: the man does not own a collie and does not own a cat.\n Q3: the man does not own a dog and does not own a cat. Yes, No, or Maybe?\n A4: Maybe\n
\n C4: A not so tall person reading a paper is not currently sitting inside a inn.\n Q4: A not so tall person reading a paper is not currently sitting inside a building. Yes, No, or Maybe?\n A5: Maybe\n
\n C: We didn't eat pizza\n Q: We didn't eat food. Yes, No, or Maybe?\n A:
## E.9 **Structured Prompt**
Prompt example
P: We didn't eat pizza\n H: We didn't eat food\n L:
## E.10 **Few-Shot Structured Prompt**

Prompt example
P1: The players who did not score did not have a ball.\n H1: The players who did not score did not have a baseball.\n L1: entailment\n
\n P2: the man does not own a dog and does not own a cat.\n H2: the man does not own a poodle and does not own a cat.\n L2: entailment\n
\n P3: the man does not own a terrier and does not own a cat.\n H3: the man does not own a dog and does not own a cat.\n L3: neutral\n
\n P4: the man does not own a husky and does not own a cat.\n H4: the man does not own a dog and does not own a cat.\n L4: neutral\n
\n P: We didn't eat pizza\n H: We didn't eat food\n L:
## E.11 **Reasoning Prompt**

Prompt example
Logical and commonsense reasoning exam.\n
\n Explain your reasoning in detail, then answer with Yes or No. Your answers should follow this 4-line format:\n
\n Premise: <a tricky logical statement about the world>.\n Question: <question requiring logical deduction>.\n Reasoning: <an explanation of what you understand about the possible scenarios>.\n Answer: <Yes or No>.\n
\n Premise: we didn't eat pizza\n Question: Can we logically conclude for sure that we didn't eat food?\n Reasoning: Let's think logically step by step. The premise basically tells us that
## E.12 **Few-Shot Reasoning Prompt**
For this prompt, we insert two demonstrations right before the test example. These are of the correct type for the test example, and they exemplify each of the two labels. The demonstrations are from a fixed set of examples, which we include here:

## E.12.1 **No Negation**
Here are some examples of the kind of reasoning you should do:\n
\n Premise: The students ate pizza\n Question: Can we logically conclude for sure that the students ate food?\n Reasoning: Let's think logically step by step. The premise basically tells us that pizza is a type of food. Therefore, the premise that the students ate pizza entails that the students ate food.\n Answer: Yes\n
\n Premise: The students ate food\n Question: Can we logically conclude for sure that the students ate pizza?\n Reasoning: Let's think logically step by step. The premise basically tells us that pizza is a type of food. Therefore, the premise that the students ate food does not allow us to conclude that the students ate pizza. They might have eaten something else.\n Answer: No\n
\n

## E.12.2 **One Scoped**
Here are some examples of the kind of reasoning you should do:\n
\n Premise: The students didn't eat any pizza\n Question: Can we logically conclude for sure that the students didn't eat any food?\n Reasoning: Let's think logically step by step. The premise basically tells us that pizza is a type of food. Therefore, the premise that the students didn't eat any pizza does not allow us to conclude that the students didn't eat any food. They might have eaten something else.\n Answer: No\n
\n Premise: The students didn't eat any food\n Question: Can we logically conclude for sure that the students didn't eat any pizza?\n Reasoning: Let's think logically step by step. The premise basically tells us that pizza is a type of food. Therefore, the premise that the students didn't eat any food entails that the students didn't eat any pizza.\n Answer: Yes\n
\n
## E.12.3 **One Not Scoped**

Prompt example
Here are some examples of the kind of reasoning you should do:\n
\n Premise: The students who weren't in class ate pizza\n Question: Can we logically conclude for sure that the students who weren't in class ate food?\n Reasoning: Let's think logically step by step. The premise basically tells us that pizza is a type of food. Therefore, the premise that the students who weren't in class ate pizza entails that the students who weren't in class ate food.\n Answer: Yes\n
\n Premise: The students who weren't in class ate food\n Question: Can we logically conclude for sure that the students who weren't in class ate pizza?\n Reasoning: Let's think logically step by step. The premise basically tells us that pizza is a type of food. Therefore, the premise that the students who weren't in class ate food does not allow us to conclude that the students who weren't in class ate pizza. They might have eaten something else.\n Answer: No\n
\n

## E.12.4 **One Scoped, One Not Scoped**
Here are some examples of the kind of reasoning you should do:\n
\n Premise: The students who weren't in class didn't eat any pizza\n Question: Can we logically conclude for sure that the students who weren't in class didn't eat any food?\n Reasoning: Let's think logically step by step. The premise basically tells us that pizza is a type of food. Therefore, the premise that the students who weren't in class didn't eat any pizza does not allow us to conclude that the students who weren't in class didn't eat any food. They might have eaten something else.\n Answer: No\n
\n Premise: The students who weren't in class didn't eat any food\n Question: Can we logically conclude for sure that the students who weren't in class didn't eat any pizza?\n Reasoning: Let's think logically step by step. The premise basically tells us that pizza is a type of food. Therefore, the premise that the students who weren't in class didn't eat any food entails that the students who weren't in class didn't eat any pizza.\n Answer: Yes\n
\n

## E.12.5 **Two Not Scoped**
Here are some examples of the kind of reasoning you should do:\n
\n Premise: The students who weren't in class ate pizza that wasn't hot\n Question: Can we logically conclude for sure that the students who weren't in class ate food that wasn't hot?\n Reasoning: Let's think logically step by step. The premise basically tells us that pizza is a type of food. Therefore, the premise that the students who weren't in class ate pizza that wasn't hot entails that the students who weren't in class ate food that wasn't hot.\n Answer: Yes\n
\n Premise: The students who weren't in class ate food that wasn't hot\n Question: Can we logically conclude for sure that the students who weren't in class ate pizza that wasn't hot?\n Reasoning: Let's think logically step by step. The premise basically tells us that pizza is a type of food. Therefore, the premise that the students who weren't in class ate food that wasn't hot does not allow us to conclude that the students who weren't in class ate pizza that wasn't hot. They might have eaten something else.\n Answer: No\n
\n

## E.12.6 **Two Scoped**
Here are some examples of the kind of reasoning you should do:\n
\n Premise: It is not the case that the students didn't eat any pizza\n Question: Can we logically conclude for sure that it is not the case that the students didn't eat any food?\n Reasoning: Let's think logically step by step. The premise basically tells us that pizza is a type of food. Therefore, the premise that it is not the case that the students didn't eat any pizza entails that it is not the case that the students didn't eat any food.\n Answer: Yes\n
\n Premise: It is not the case that the students didn't eat any food\n Question: Can we logically conclude for sure that it is not the case that the students didn't eat any pizza? Reasoning: Let's think logically step by step. The premise basically tells us that pizza is a type of food. Therefore, the premise that it is not the case that the students didn't eat any food does not allow us to conclude that it is not the case that the students didn't eat any pizza. They might have eaten something else.\n Answer: No\n
\n
## E.13 **ScoNe-NLG Prompts**
In the zero-shot condition, models are simply prompted with the ScoNe-NLG examples. In the few-shot condition, the test example is preceded by a fixed set of four demonstrations, separated by double newlines. The examples are as follows:
Prompt example
Glen is not a fan of learning math. When he sees that his new high school requires that he take a
geometry course, he is not pleased.\n
\n
I saw John take his BMW to the store the other day, so when Suzy asked me if John owns a car, I
said yes.\n
\n
I've seen John with a dog that isn't very cute, so when Suzy asked me if John owns a pet, I said
yes.\n \n
I recently confirmed that John is not allergic to any shellfish. So it makes sense that when we served
shrimp
## F **In-Context Learning Results For Davinci-002**
Table 7: In-context learning results for GPT-3 (davinci-002 engine).
| Setting | Prompt | No Negation | One Not Scoped | Two Not Scoped | Two Scoped | One Scoped | One Scoped, One Not Scoped | Overall |
|---|---|---|---|---|---|---|---|---|
| Zero-shot | Structured | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 |
| Zero-shot | Brown et al. | 0.69 | 0.60 | 0.59 | 0.55 | 0.50 | 0.48 | 0.57 |
| Zero-shot | Conditional Q | 0.76 | 0.55 | 0.65 | 0.50 | 0.50 | 0.50 | 0.58 |
| Zero-shot | Conditional Truth | 0.76 | 0.64 | 0.66 | 0.60 | 0.50 | 0.57 | 0.62 |
| Zero-shot | Hypothesis Q | 0.80 | 0.83 | 0.86 | 0.62 | 0.45 | 0.40 | 0.66 |
| Zero-shot | Reasoning | 0.85 | 0.70 | 0.68 | 0.62 | 0.57 | 0.56 | 0.66 |
| Few-shot | Structured | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 |
| Few-shot | Brown et al. | 0.82 | 0.75 | 0.78 | 0.72 | 0.35 | 0.29 | 0.62 |
| Few-shot | Conditional Q | 0.92 | 0.82 | 0.78 | 0.52 | 0.36 | 0.32 | 0.62 |
| Few-shot | Conditional Truth | 0.92 | 0.89 | 0.88 | 0.59 | 0.36 | 0.37 | 0.67 |
| Few-shot | Hypothesis Q | 0.99 | 0.91 | 0.92 | 0.68 | 0.38 | 0.40 | 0.72 |
| Few-shot | Reasoning | 0.73 | 0.85 | 0.78 | 0.62 | 0.74 | 0.54 | 0.71 |
## ACL 2023 Responsible NLP Checklist

## A **For Every Submission:**
✓ A1. Did you describe the limitations of your work?
Yes, primarily in the Limitations section.
✓ A2. Did you discuss any potential risks of your work?
Yes, in the Limitations section.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Yes, in the abstract and the introduction.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Sections 3 And 4.
✓ B1. Did you cite the creators of artifacts you used?
Section 3.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Appendix A and D.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
In Limitations, and in Appendix A and D, and in supplementary materials.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
In the Introduction and in Limitations section.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Sections 3 and 4.
## C ✓ **Did You Run Computational Experiments?** Sections 3 And 4.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? No response.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix A.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Sections 3 and 4.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
No response.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
zhou-etal-2023-revisiting | Revisiting Automated Prompting: Are We Actually Doing Better? | https://aclanthology.org/2023.acl-short.155 | Current literature demonstrates that Large Language Models (LLMs) are great few-shot learners, and prompting significantly increases their performance on a range of downstream tasks in a few-shot learning setting. An attempt to automate human-led prompting followed, with some progress achieved. In particular, subsequent work demonstrates that automation can outperform fine-tuning in certain K-shot learning scenarios. In this paper, we revisit techniques for automated prompting on six different downstream tasks and a larger range of K-shot learning settings. We find that automated prompting does not consistently outperform simple manual prompting. Our work suggests that, in addition to fine-tuning, manual prompting should be used as a baseline in this line of research. | # Revisiting Automated Prompting: Are We Actually Doing Better?
Yulin Zhou1 Yiren Zhao2 Ilia Shumailov3 Robert Mullins1 **Yarin Gal**3 1University of Cambridge 2Imperial College London 3University of Oxford [email protected] [email protected] [email protected] [email protected] [email protected]
## Abstract
Current literature demonstrates that Large Language Models (LLMs) are great few-shot learners, and *prompting* significantly increases their performance on a range of downstream tasks in a few-shot learning setting. An attempt to automate human-led prompting followed, with some progress achieved. In particular, subsequent work demonstrates that automation can outperform fine-tuning in certain K-shot learning scenarios (Shin et al., 2020; Zhang et al.,
2021). In this paper, we revisit techniques for automated prompting on six different downstream tasks and a larger range of K-shot learning settings. We find that *automated prompting* does not consistently outperform simple manual prompting. Our work suggests that, in addition to fine-tuning, manual prompting should be used as a baseline in this line of research.
## 1 Introduction
Transformer-based Large Language Models
(LLMs) are now considered foundation models for downstream tasks (Bommasani et al., 2021).
The *pre-train then fine-tune* approach achieved state-of-the-art performance on a range of Natural Language Processing (NLP) tasks (Liu et al.,
2019; Raffel et al., 2020; Brown et al., 2020).
Unfortunately, in many NLP applications, the lack of high-quality labelled training data is a barrier to producing a model with good performance in the pre-train and then fine-tune approach. To address this issue, *prompt-based learning* (Petroni et al.,
2019; Schick and Schütze, 2020a,b; Liu et al.,
2021a) emerged as a new paradigm for tuning a high-quality, pre-trained LLM in a few-shot learning scenario, where only a few samples are available for downstream task learning.
![0_image_0.png](0_image_0.png)

In the prompt-based learning paradigm (Figure 1), an input X is modified using a template function p, also known as a prompting function, which has one or more placeholders called mask tokens <mask>, resulting in a prompted input X′ = p(X) (Liu et al., 2021b). Additionally, a verbaliser defines an answer domain Z, so that for an output label domain Y, there is a many-to-one mapping from an answer z ∈ Vy ⊆ Z to an output label y ∈ Y in accordance with the downstream task.
Given a language model f_o pre-trained on a large corpus of text, such as Wikipedia, the goal of prompt-based learning is to fine-tune it on a small dataset of prompted inputs X′ and corresponding outputs y, in order to produce a high-quality language model f_p capable of generating an answer z for a given input X.
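As a concrete sketch (not taken from the paper's implementation), the snippet below instantiates p and a verbaliser for binary sentiment classification with RoBERTa; the template matches the SST2 example given later in Section 3.1.2, and the extra answer words are included only to illustrate the many-to-one mapping.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForMaskedLM.from_pretrained("roberta-large")

def p(x: str) -> str:
    """Template function p: turn the raw input X into a cloze-style prompt."""
    return f"{x} . It was {tokenizer.mask_token} ."

# Verbaliser: many-to-one map from answer words z to output labels y.
verbaliser = {" bad": 0, " terrible": 0, " good": 1, " great": 1}

def classify(x: str) -> int:
    inputs = tokenizer(p(x), return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    # Score each answer word (a single leading-space token for RoBERTa)
    # and return the label of the highest-scoring answer.
    scores = {z: logits[tokenizer.encode(z, add_special_tokens=False)[0]].item()
              for z in verbaliser}
    return verbaliser[max(scores, key=scores.get)]

print(classify("A gripping, beautifully shot film"))  # expected: 1 (positive)
```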
Prompting reformulates downstream tasks such as sentiment analysis and text classification as cloze completion (also known as filling in the blanks).
Furthermore, using prompts and fine-tuning allows models to gain superior few-shot learning capabilities (Lester et al., 2021; Schick and Schütze, 2020a; Shin et al., 2020). Despite the relative success of prompt-based learning, the design of prompts can be a challenging task. As a result, many research studies have sought to *automate* the process of designing suitable prompts for downstream tasks (Liu et al., 2021c; Zhang et al., 2021; Shin et al., 2020). The motivation for automating prompt design is usually two-fold: first, manually designing prompts can be time-consuming; and second, automated ones can often provide better performance. In this work, we question *the second motivation* and demonstrate that *existing automated prompts do not consistently* outperform their manual counterparts under various K-shot learning setups. In this paper, we make the following contributions:
- We thoroughly investigate automated prompts and demonstrate that they do not consistently outperform manual prompts, even when the latter are created using basic heuristics and selected among a small number of options
(Section 3.2).
- We show empirically that fine-tuning only serves as a strong baseline when K ≥ 100 in a K-shot learning setup (Section 3.2).
- By visualising the prompts generated by autoprompting, we explain why these prompts are not necessarily better than manually designed ones (Section 3.4).
- Supported by our empirical evidence and evaluation, we strongly recommend that *future* research should consider manual prompts as a simple yet effective baseline.
## 2 Related Work
The rise of the *prompting-based learning paradigm* comes with the development of LLMs (Brown et al., 2020), which were demonstrated to be good few-shot learners (Liu et al., 2021d). To begin with, researchers focused on manually crafted prompts for downstream tasks (Petroni et al., 2019; Liu et al., 2021b; Scao and Rush, 2021; Zhao et al., 2021; Schick and Schütze, 2020a), yet soon shifted towards automated prompt designs. Schick et al. investigated how to automatically identify label words for a prompt (Schick and Schütze, 2020a,b), while Shin *et al.* proposed AutoPrompt, a framework for automatically generating prompts for various tasks, through a gradient-based search
(Shin et al., 2020). Gao *et al.* used another LLM,
T5 (Raffel et al., 2020), to generate both the prompting templates and verbaliser answer domains (Gao et al., 2020). Han *et al.* incorporated logic rules into prompt designs, combining several simple subprompts according to these rules (Han et al., 2022).
All of the above-mentioned methods are based on the assumption that prompt design has to rely on discrete tokens.
Liu *et al.* and Lester *et al.* demonstrated that prompts could be trainable continuous embeddings, or soft prompts, instead of discrete tokens. These soft prompts can be learned with a frozen language model (LLM) on a target task (Liu et al.,
2021d; Lester et al., 2021; Zhang et al., 2021). Liu *et al.* further discovered that Deep Prompts, which are soft prompts used in every layer of the model, allow for scaling to large LLMs for complex natural language processing (NLP) tasks (Liu et al., 2021c). Zhang *et al.* developed Differentiable Prompts, which put the label tokens design of the prompt into a continuous space and optimised it jointly with soft prompts (Zhang et al.,
2021). An extensive evaluation was conducted by Zhang *et al.* on various downstream tasks.
Most of the work on automating prompt design mentioned above has two major motivations:
to reduce the amount of time it takes to design prompts manually; and to potentially gain better performance, since manual prompt formats can be sub-optimal (Zhang et al., 2021). While the first motivation may be valid in some cases, it largely depends on the task complexity and the amount of data available: it is sometimes possible for non-experts to design a prompt sufficient for simple tasks with a large amount of data. The principal focus of this work, however, is on the second motivation: can automated prompts really outperform manual prompts in a consistent manner? A comparison between automated and manual prompts is lacking in current research. To our knowledge, automated prompting methods focus solely on comparing to fine-tuning in a few-shot learning setup, while comparisons to manual prompting methods remain unexplored. In this paper, we consider AutoPrompt (Auto) (Shin et al., 2020) and Differential Prompt (Diff) (Zhang et al., 2021) as representatives, where one is based on discrete tokens, while the other is based on continuous embeddings. We compare them with manually designed prompts and fine-tuning without prompting on various tasks.
## 3 Evaluation 3.1 Experiment Setup
A robust framework was developed to assess prompting model performance under K-shot learning scenarios where only K samples per class are available for the training and validation datasets.
Three prompting models were re-implemented: LM-BFF (manual) (Gao et al., 2020), AutoPrompt (Auto) (Shin et al., 2020), and DART (Diff) (Zhang et al., 2021). During prompt-based learning, each prompting model is allowed to fine-tune the parameters of the pre-trained language model using the limited training and validation datasets.
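A K-shot split of this kind can be drawn as in the sketch below; the sampling code is illustrative (the paper does not show its own) and assumes that examples carry integer class labels.

```python
import random
from collections import defaultdict

def k_shot_split(examples, k, seed=0):
    """Sample K examples per class for training and another K per class
    for validation."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for ex in examples:
        by_label[ex["label"]].append(ex)
    train, val = [], []
    for items in by_label.values():
        rng.shuffle(items)
        train.extend(items[:k])
        val.extend(items[k:2 * k])
    return train, val

# Toy usage with a binary-labelled dataset and K = 16.
data = [{"text": f"example {i}", "label": i % 2} for i in range(1000)]
train_16, val_16 = k_shot_split(data, k=16)   # 16 samples per class in each split
```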
## 3.1.1 Datasets And Model
We conducted comprehensive experiments on six datasets to compare the performance of prompting models fine-tuned on the pre-trained RoBERTa-large model (Liu et al., 2019). Table 2 in Appendix B shows that we picked three sentiment analysis and three textual entailment tasks.
## 3.1.2 Prompt Templates And Verbalisers
We design prompts to concatenate the input text and the *<mask>* token, alongside a verbaliser that maps from the answer domain to the output label domain. Manually designed prompts and verbalisers are adapted from the Public Pool of Prompts (Bach et al., 2022) and previous work on prompting (Gao et al., 2020; Xu et al., 2022). For each dataset, we selected four to six prompt-and-verbaliser pairs, compared their performance under the same K = 16 few-shot scenario, and picked the best-performing pair for further experiments with different K values. Detailed manually designed prompts and verbalisers, as well as their performance measures, are illustrated in Table 3, and the best-performing pairs are summarised in Table 4 in Appendix C.
An automated discrete prompt replaces the template with trigger tokens <T>. Following the same settings used in AutoPrompt (Shin et al., 2020), we inserted ten trigger tokens between the input text and the *<mask>* token. Under a K-shot scenario, the verbaliser mapping is automatically generated from the train and validation dataset, each with K
samples per class. Table 5 in Appendix D shows the automated discrete prompts and verbalisers for each dataset. A differential prompt starts from the manually designed prompt but treats both the template and the verbaliser as a collection of differentiable parameters.
Take the dataset SST2 as an example: a suitable manually designed prompt could be "<sentence> . It was <mask> ." with a verbaliser {bad ↦ 0, good ↦ 1}; an automated discrete prompt could be "<sentence> <T> ... <T> <mask> ." with ten trigger tokens <T>.
## 3.1.3 Hyper-Parameters
We conducted a beam search using the AdamW
optimiser (Loshchilov and Hutter, 2017) for the optimal batch size, learning rate and weight decay for each set of experiments with the same dataset and K-shot value. Each experiment is run with 100 epochs and an early stopping value of 5, *i.e.*, when the validation loss is non-decreasing for 5 epochs.
The detailed hyper-parameters used in each set of experiments are listed in Table 6, and details on the evaluation metrics are in Appendix E.
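The early-stopping rule can be summarised by the sketch below; the loss values are toy numbers, and the surrounding hyper-parameter search is omitted.

```python
class EarlyStopping:
    """Stop once validation loss has not decreased for `patience` epochs."""
    def __init__(self, patience=5):
        self.patience, self.best, self.bad_epochs = patience, float("inf"), 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best, self.bad_epochs = val_loss, 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience   # True means stop training

stopper = EarlyStopping(patience=5)
toy_val_losses = [0.9, 0.7, 0.6, 0.61, 0.62, 0.60, 0.63, 0.64]  # illustrative
for epoch, val_loss in enumerate(toy_val_losses):  # capped at 100 epochs in practice
    if stopper.step(val_loss):
        print(f"early stop after epoch {epoch}")
        break
```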
## 3.2 Main Results
Table 1 illustrates the performance of various prompting strategies. We observe that manual prompts exhibit the best performance in 13 out of the 24 setups (6 different datasets and 4 different Ks), and the second-best performance in 8 of them. Automated prompts (both Auto and Diff)
only show a clear advantage in TWEETS-HATEOFFENSIVE when K = 100. The baseline in Table 1 is direct fine-tuning on the K samples.
We also see that automated prompts can be catastrophically ineffective in certain setups. For example, as shown in Table 5, Auto performs much worse than Manual or Baseline in MNLIMATCHED when K = 100. Diff also significantly underperforms Manual in TWEETS-HATEOFFENSIVE when K = 16. In later parts of this section, we provide an analysis of the generated prompts and explore the reasons for this phenomenon. Finally, we demonstrate that Baseline sometimes performs well when K is large. This is seen in SST2 when K = 100, 1000 and also ENRON-SPAM when K = 100. In general, we make the following observations:
- Manual prompting outperforms automated prompting (Auto and Diff) with different Kshot setups on most tasks.
- Automated prompting sometimes cannot even outperform fine-tuning, *e.g.* MNLIMISMATCHED K = 100, 1000.
- When K is small, prompting can greatly improve performance, *e.g.* on SST2 and MNLI.
- Automated prompting can fail catastrophically
(*e.g.* MNLI-MISMATCHED K = 1000) and have a high variance in performance (*e.g.* 15.5 standard deviation on SST2), while manual prompting is more robust.
| | SST2 | | | | QNLI | | | |
|---|---|---|---|---|---|---|---|---|
| K | Baseline | Auto | Diff | Manual | Baseline | Auto | Diff | Manual |
| 8 | 59.8 ± 8.6 | 51.7 ± 1.9 | 88.0 ± 1.6 | 77.6 ± 4.6 | 49.9 ± 1.0 | 51.5 ± 0.7 | 50.5 ± 2.1 | 54.6 ± 2.8 |
| 16 | 72.1 ± 15.0 | 70.1 ± 3.9 | 87.8 ± 0.7 | 86.9 ± 1.6 | 49.9 ± 0.2 | 53.4 ± 1.3 | 59.5 ± 3.6 | 74.1 ± 1.2 |
| 100 | 89.6 ± 0.5 | 83.5 ± 4.3 | 88.6 ± 0.7 | 89.4 ± 1.0 | 78.9 ± 2.3 | 74.0 ± 4.3 | 80.2 ± 2.1 | 82.7 ± 0.7 |
| 1000 | 92.7 ± 0.2 | 92.5 ± 0.2 | 90.1 ± 0.7 | 92.3 ± 0.2 | 87.2 ± 1.0 | 83.2 ± 3.8 | 85.2 ± 1.1 | 88.0 ± 0.3 |

| | MNLI-Matched | | | | MNLI-Mismatched | | | |
|---|---|---|---|---|---|---|---|---|
| K | Baseline | Auto | Diff | Manual | Baseline | Auto | Diff | Manual |
| 8 | 34.6 ± 2.4 | 34.2 ± 1.1 | 51.3 ± 1.1 | 55.7 ± 3.3 | 33.8 ± 0.8 | 33.8 ± 0.5 | 47.6 ± 3.0 | 56.0 ± 1.4 |
| 16 | 33.3 ± 0.2 | 34.9 ± 0.7 | 61.4 ± 1.5 | 60.2 ± 3.7 | 32.8 ± 1.3 | 35.6 ± 0.8 | 59.4 ± 1.1 | 60.2 ± 2.7 |
| 100 | 63.1 ± 1.3 | 42.3 ± 0.5 | 72.1 ± 0.8 | 74.1 ± 1.2 | 73.6 ± 2.1 | 39.5 ± 1.0 | 73.3 ± 1.2 | 77.0 ± 1.2 |
| 1000 | 82.7 ± 0.5 | 72.9 ± 2.3 | 80.0 ± 0.8 | 83.2 ± 0.3 | 84.3 ± 0.5 | 76.6 ± 3.7 | 82.0 ± 0.4 | 85.0 ± 0.2 |

| | ENRON-SPAM | | | | TWEETS-HATE-OFFENSIVE | | | |
|---|---|---|---|---|---|---|---|---|
| K | Baseline | Auto | Diff | Manual | Baseline | Auto | Diff | Manual |
| 8 | 49.1 ± 36.6 | 73.4 ± 6.0 | 80.7 ± 5.7 | 67.9 ± 12.2 | 14.5 ± 9.5 | 12.1 ± 4.6 | 32.5 ± 7.1 | 25.8 ± 16.5 |
| 16 | 84.2 ± 4.0 | 80.5 ± 2.6 | 88.0 ± 2.3 | 89.4 ± 3.0 | 38.0 ± 4.1 | 42.5 ± 2.6 | 37.2 ± 7.7 | 46.7 ± 2.5 |
| 100 | 97.1 ± 0.4 | 90.8 ± 0.4 | 96.3 ± 0.8 | 96.3 ± 0.5 | 44.9 ± 0.9 | 51.4 ± 3.4 | 59.7 ± 2.8 | 47.0 ± 0.8 |
| 1000 | 98.0 ± 0.5 | 97.0 ± 0.7 | 99.0 ± 0.1 | 98.7 ± 0.2 | 66.5 ± 1.5 | 66.8 ± 1.8 | 67.7 ± 3.3 | 67.5 ± 2.1 |
![3_image_0.png](3_image_0.png)
## 3.3 **More K-Shot Experiments**
Figure 2 demonstrates the performance of different prompting styles with more K values on SST2, QNLI (Wang et al., 2018) and ENRON-SPAM
(Metsis et al., 2006).
We observe that the performance of all methods starts to converge with larger K values, which is consistent with existing literature (Shin et al.,
2020). It is also worth mentioning that the automated prompting methods do not consistently outperform manual prompting on this large range of K
values. More results are available in Appendix F.
## 3.4 Visualizing Auto-Prompts
As previously discussed, automated prompting can sometimes fail catastrophically. Table 5 summarises all the automated discrete prompts and verbaliser answer domains. Since the answer domain is generated from the K samples per class, it may not be general enough or optimal for the entire dataset. On the other hand, manual prompts and verbalisers are designed based on common knowledge that humans possess from countless examples encountered in daily life. One possible improvement to AutoPrompt is to start with a manually designed prompt and update both the prompt and the verbaliser through a gradient-based search in an iterative manner.
## 3.5 Limitations
All prompting methods are trying to extract knowledge from the Large Language Models (LLMs).
Our paper compares their knowledge extraction abilities. Thus, the performance of RoBERTa-large can serve as a reference point and provide insights for other LLMs. However, it is still necessary to assess each large language model independently to understand its capabilities comprehensively.
We only tested a handful of simple manual prompt-and-verbaliser pairs, which are included in Tables 3 and 4. It is entirely possible that there is a lot of room for improvement in the design of manual prompt-and-verbaliser pairs, thus providing an even stronger baseline. We have opted to use ten trigger tokens in Auto, in alignment with the experiment settings originally presented in the AutoPrompt paper (Shin et al., 2020). However, since the verbaliser domains generated under few-shot learning settings are noisy, reducing the number of trigger tokens may improve performance.
## 4 Conclusion
In this paper, we revisit the results generated from automated prompting, and show that *automated* prompting cannot consistently outperform simple manual prompting on a variety of tasks. We also demonstrate that the performance of automated prompting is heavily dependent on the amount of data available, and in some cases can even be worse than fine-tuning. On the other hand, manual prompting is more robust to the amount of data available, and can match the performance of fine-tuning, if not outperform it. We take a closer look at the prompts and verbalisers generated by automated discrete prompting (AutoPrompt) and point out that few-shot learning settings make it challenging to generate prompts and verbalisers that perform well. We hope that this work will motivate researchers to use manual prompts as a general baseline.
## Acknowledgment
The authors would like to thank the anonymous reviewers for their helpful suggestions.
## References
Stephen H. Bach, Victor Sanh, Zheng-Xin Yong, Albert Webson, Colin Raffel, Nihal V. Nayak, Abheesht Sharma, Taewoon Kim, M Saiful Bari, Thibault Fevry, Zaid Alyafeai, Manan Dey, Andrea Santilli, Zhiqing Sun, Srulik Ben-David, Canwen Xu, Gunjan Chhablani, Han Wang, Jason Alan
Fries, Maged S. Al-shaibani, Shanya Sharma, Urmish Thakker, Khalid Almubarak, Xiangru Tang, Dragomir Radev, Mike Tian-Jian Jiang, and Alexander M. Rush. 2022. Promptsource: An integrated development environment and repository for natural language prompts.
Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. 2021. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901.
Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language.
arXiv:1703.04009.
Tianyu Gao, Adam Fisch, and Danqi Chen. 2020.
Making pre-trained language models better few-shot learners. *arXiv preprint arXiv:2012.15723*.
Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, and Maosong Sun. 2022. Ptr: Prompt tuning with rules for text classification. *AI Open*.
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021.
The power of scale for parameter-efficient prompt tuning. *arXiv preprint arXiv:2104.08691*.
Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2021a. What makes good in-context examples for gpt-3? *arXiv* preprint arXiv:2101.06804.
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021b. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing.
arXiv preprint arXiv:2107.13586.
Xiao Liu, Kaixuan Ji, Yicheng Fu, Zhengxiao Du, Zhilin Yang, and Jie Tang. 2021c. P-tuning v2:
Prompt tuning can be comparable to fine-tuning universally across scales and tasks. *arXiv preprint* arXiv:2110.07602.
Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021d. Gpt understands, too. *arXiv preprint arXiv:2103.10385*.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization.
Vangelis Metsis, Ion Androutsopoulos, and Georgios Paliouras. 2006. Spam filtering with naive bayes -
which naive bayes? In International Conference on Email and Anti-Spam.
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? *arXiv preprint arXiv:1909.01066*.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67.
Teven Le Scao and Alexander M Rush. 2021. How many data points is a prompt worth? *arXiv preprint* arXiv:2103.08493.
Timo Schick and Hinrich Schütze. 2020a. Exploiting cloze questions for few shot text classification and natural language inference. *arXiv preprint* arXiv:2001.07676.
Timo Schick and Hinrich Schütze. 2020b. It's not just size that matters: Small language models are also few-shot learners. *arXiv preprint arXiv:2009.07118*.
Taylor Shin, Yasaman Razeghi, Robert L Logan IV,
Eric Wallace, and Sameer Singh. 2020. Autoprompt:
Eliciting knowledge from language models with automatically generated prompts. *arXiv preprint* arXiv:2010.15980.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman.
2018. Glue: A multi-task benchmark and analysis platform for natural language understanding.
arXiv:1804.07461.
Lei Xu, Yangyi Chen, Ganqu Cui, Hongcheng Gao, and Zhiyuan Liu. 2022. Exploring the universal vulnerability of prompt-based learning paradigm.
Ningyu Zhang, Luoqiu Li, Xiang Chen, Shumin Deng, Zhen Bi, Chuanqi Tan, Fei Huang, and Huajun Chen. 2021. Differentiable prompt makes pre-trained language models better few-shot learners. *arXiv* preprint arXiv:2108.13161.
Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. In *International Conference on Machine Learning*, pages 12697–12706. PMLR.
## Appendices: A Model And Infrastructure Details; B Dataset Details; C Manual Prompt-And-Verbaliser Designs; D Generated Auto-Prompts; E Hyper-Parameters And Evaluation Metrics For Training
All our experiments are run in parallel on 4 NVIDIA Tesla V100 GPUs; for smaller K values (e.g., K = 100), most experiments require less than 1 GPU hour, while a setting with a larger K value (e.g., K = 1000) may require 2 GPU hours.
We conducted comprehensive experiments on six datasets (SST2, QNLI, MNLI-MATCHED, MNLI-MISMATCHED, ENRON-SPAM and TWEETS-HATE-OFFENSIVE) to compare the performance of prompting models fine-tuned on the pre-trained RoBERTa-large model. As shown in Table 2, we picked three sentiment analysis and three textual entailment tasks. Among the six, three are binary classification tasks (SST2, QNLI and ENRON-SPAM), while the remaining datasets have three categories each (MNLI-MATCHED, MNLI-MISMATCHED and TWEETS-HATE-OFFENSIVE).
In the **Prompt Templates and Verbalisers** part of Section 3.1.2, we discussed how we picked the best-performing prompt-and-verbaliser pairs. We show the selected manual prompts with their verbalisers in Table 3, covering SST2, QNLI, MNLI-MATCHED, MNLI-MISMATCHED, ENRON-SPAM and TWEETS-HATE-OFFENSIVE. The underlying mechanism for finding a good manual prompt is detailed in Section 3.1.2. As can be seen in these tables, the manual prompts used are very simple and require minimal domain knowledge.
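To illustrate how such a manual prompt-and-verbaliser pair is applied, the sketch below fills the SST2 template with a sentence and compares the masked-LM logits of the two verbaliser words. It is a minimal illustration using the Hugging Face transformers API, not the exact scoring code behind the reported numbers.

```python
# Sketch: scoring the SST2 verbaliser {bad -> 0, good -> 1} with RoBERTa's MLM head.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForMaskedLM.from_pretrained("roberta-large")

sentence = "a gripping and moving film"
text = f"{sentence} . It was {tokenizer.mask_token} ."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
# The leading space matters for RoBERTa's BPE vocabulary.
verbaliser = {" bad": 0, " good": 1}
scores = {
    word: logits[0, mask_pos, tokenizer.encode(word, add_special_tokens=False)[0]].item()
    for word in verbaliser
}
pred_label = verbaliser[max(scores, key=scores.get)]
print(scores, pred_label)
```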
In the **Prompt Templates and Verbalisers** part of Section 3.1.2, we also mentioned that an automated discrete prompt replaces the template with trigger tokens <T>. Following the settings used in AutoPrompt (Shin et al., 2020), we inserted ten trigger tokens between the input text and the <mask> token. All automated discrete prompts and their automatically generated verbalisers are listed in Table 5. In contrast to the manual prompts shown in Appendix C, the generated auto-prompts are considerably more complex.
The RoBERTa-large model (Liu et al., 2019) is pre-trained on a large corpus of raw English text with a masked language modelling (MLM) objective; it contains 354 million parameters.
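As a quick sanity check of the model size, the parameter count can be computed directly (a sketch; assumes the transformers library and PyTorch are installed):

```python
# Sketch: counting parameters of RoBERTa-large.
from transformers import AutoModel

model = AutoModel.from_pretrained("roberta-large")
num_params = sum(p.numel() for p in model.parameters())
print(f"{num_params / 1e6:.0f}M parameters")  # should be in the 350M range
```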
| Dataset | # Classes | Test Samples | Description |
|---|---|---|---|
| SST2 | 2 | 33674 | A sentiment analysis task on movie reviews from the GLUE benchmark (Wang et al., 2018). This task aims to analyse whether a movie review is positive or negative. |
| QNLI | 2 | 5463 | A textual entailment task on question-answer pairs from the GLUE benchmark (Wang et al., 2018). The objective is to determine whether the context sentence contains the answer to the question. |
| MNLI-MATCHED | 3 | 4907 | A multi-class (i.e., entailment, neutral, contradiction) textual entailment task on premise-hypothesis pairs from the GLUE benchmark (Wang et al., 2018). The matched version only preserves pairs within the same genre (e.g., science fiction, speech). |
| MNLI-MISMATCHED | 3 | 4916 | Same as MNLI-MATCHED, but the mismatched version only preserves pairs from different genres. |
| ENRON-SPAM | 2 | 15858 | A safety-critical binary sentiment analysis task determining whether an email text is spam (Metsis et al., 2006). |
| TWEETS-HATE-OFFENSIVE | 3 | 12391 | A safety-critical multi-class sentiment analysis task which aims to classify whether a tweet text contains hate speech, offensive speech or neither (Davidson et al., 2017). |
Table 2: Six datasets selected in the project. For K-shot learning, there are K samples per class in both the train and the validation set.
| Task | Prompt design | Answer → Label | Accuracy / F1 |
|---|---|---|---|
| SST2 | `<sentence> . It was <mask> .` | **bad → 0, good → 1** | **86.9 ± 1.6** |
| SST2 | `<sentence> . It was <mask> .` | terrible → 0, great → 1 | 86.0 ± 2.7 |
| SST2 | `<sentence> . It was <mask> .` | dog → 0, cat → 1 | 84.7 ± 3.4 |
| SST2 | `<sentence> . It was <mask> .` | cat → 0, dog → 1 | 68.7 ± 6.9 |
| SST2 | `<sentence> . It was <mask> .` | great → 0, terrible → 1 | 67.4 ± 5.0 |
| QNLI | **`<sentence> ? <mask> , <question> .`** | Yes → 0, No → 1 | **74.1 ± 1.2** |
| QNLI | `<question> ? <mask> <sentence> .` | Yes → 0, No → 1 | 68.7 ± 3.2 |
| QNLI | `<sentence> ? <mask> , <question>` | Yes → 0, No → 1 | 66.7 ± 10.2 |
| QNLI | `<question> ? <mask> , <sentence> .` | Yes → 0, No → 1 | 64.5 ± 4.8 |
| QNLI | `<question> . <mask> , <sentence> .` | Yes → 0, No → 1 | 60.5 ± 2.3 |
| QNLI | `<question> <mask> <sentence>` | Yes → 0, No → 1 | 50.0 ± 0.2 |
| MNLI-MATCHED | **`<premise> ? <mask> , <hypothesis> .`** | Yes → 0, Maybe → 1, No → 2 | **60.2 ± 3.7** |
| MNLI-MATCHED | `<premise> . <mask> , <hypothesis> .` | Yes → 0, Maybe → 1, No → 2 | 58.6 ± 4.8 |
| MNLI-MATCHED | `<premise> ? <mask> <hypothesis> .` | Yes → 0, Maybe → 1, No → 2 | 55.6 ± 1.7 |
| MNLI-MATCHED | `<hypothesis> ? <mask> , <premise>` | Yes → 0, Maybe → 1, No → 2 | 52.4 ± 2.9 |
| MNLI-MATCHED | `<hypothesis> ? <mask> , <premise> .` | Yes → 0, Maybe → 1, No → 2 | 51.9 ± 4.2 |
| MNLI-MATCHED | `<premise> <mask> <hypothesis>` | Yes → 0, Maybe → 1, No → 2 | 51.2 ± 4.2 |
| MNLI-MISMATCHED | **`<premise> ? <mask> , <hypothesis> .`** | Yes → 0, Maybe → 1, No → 2 | **60.2 ± 2.7** |
| MNLI-MISMATCHED | `<premise> ? <mask> <hypothesis> .` | Yes → 0, Maybe → 1, No → 2 | 58.4 ± 1.1 |
| MNLI-MISMATCHED | `<hypothesis> ? <mask> , <premise> .` | Yes → 0, Maybe → 1, No → 2 | 57.9 ± 0.8 |
| MNLI-MISMATCHED | `<premise> . <mask> , <hypothesis> .` | Yes → 0, Maybe → 1, No → 2 | 56.3 ± 1.5 |
| MNLI-MISMATCHED | `<hypothesis> ? <mask> , <premise>` | Yes → 0, Maybe → 1, No → 2 | 56.0 ± 1.0 |
| MNLI-MISMATCHED | `<premise> <mask> <hypothesis>` | Yes → 0, Maybe → 1, No → 2 | 49.4 ± 2.4 |
| ENRON-SPAM | **`<mask> email : <text> .`** | **genuine → 0, spam → 1** | **89.4 ± 3.0** |
| ENRON-SPAM | `This is a <mask> : <text> .` | ham → 0, spam → 1 | 82.8 ± 2.8 |
| ENRON-SPAM | `<mask> : <text> .` | ham → 0, spam → 1 | 82.8 ± 1.9 |
| ENRON-SPAM | `<text> . This was a <mask> .` | ham → 0, spam → 1 | 76.8 ± 3.3 |
| TWEETS-HATE-OFFENSIVE | **`<tweet> . This post is <mask> .`** | **hateful → 0, offensive → 1, harmless → 2** | **46.7 ± 2.5** |
| TWEETS-HATE-OFFENSIVE | `This post is <mask> : <tweet> .` | hateful → 0, offensive → 1, harmless → 2 | 40.3 ± 3.8 |
| TWEETS-HATE-OFFENSIVE | `<tweet> . This was <mask> .` | hateful → 0, offensive → 1, harmless → 2 | 39.8 ± 4.5 |
| TWEETS-HATE-OFFENSIVE | `<mask> speech : <tweet> .` | hateful → 0, offensive → 1, harmless → 2 | 36.8 ± 11.7 |
Table 3: The prompt-and-verbaliser pairs are tested under the few-shot scenario K = 16, and the best-performing pair is highlighted in bold. The mean and standard deviation of scores are computed across five independent runs.
| Dataset | Prompt Design | Answer → Label |
|---|---|---|
| SST2 | `<sentence> . It was <mask> .` | bad → 0, good → 1 |
| QNLI | `<sentence> ? <mask> , <question> .` | Yes → 0, No → 1 |
| MNLI-MATCHED | `<premise> ? <mask> , <hypothesis> .` | Yes → 0, Maybe → 1, No → 2 |
| MNLI-MISMATCHED | `<premise> ? <mask> , <hypothesis> .` | Yes → 0, Maybe → 1, No → 2 |
| ENRON-SPAM | `<mask> email : <text> .` | genuine → 0, spam → 1 |
| TWEETS-HATE-OFFENSIVE | `<tweet> . This post is <mask> .` | hateful → 0, offensive → 1, harmless → 2 |
Table 4: Summarised for each dataset, the best-performing manual prompt and verbaliser.
| Task | Prompt design | K | Answer → Label |
|---|---|---|---|
| SST2 | `<sentence> <T> <T> <T> <T> <T> <T> <T> <T> <T> <T> <mask> .` | 8 | impunity → 0, ASHINGTON → 1 |
| | | 16 | worthless → 0, Kom → 1 |
| | | 32 | Worse → 0, 天 → 1 |
| | | 64 | horrible → 0, magic → 1 |
| | | 100 | worse → 0, 天 → 1 |
| | | 1000 | worse → 0, Excellent → 1 |
| QNLI | `<question> <mask> <T> <T> <T> <T> <T> <T> <T> <T> <T> <T> <sentence>` | 8 | implement → 0, defensively → 1 |
| | | 16 | counter → 0, Bits → 1 |
| | | 32 | Meteor → 0, univers → 1 |
| | | 64 | ormon → 0, stood → 1 |
| | | 100 | idelines → 0, opard → 1 |
| | | 1000 | G, → 0, overloaded → 1 |
| MNLI-MATCHED | `<premise> <mask> <T> <T> <T> <T> <T> <T> <T> <T> <T> <T> <hypothesis>` | 8 | efforts → 0, democratically → 1, Congratulations → 2 |
| | | 16 | OWN → 0, hypocritical → 1, examiner → 2 |
| | | 32 | Alicia → 0, historians → 1, BF → 2 |
| | | 64 | tweets → 0, onboard → 1, Anniversary → 2 |
| | | 100 | filmmakers → 0, combat → 1, absence → 2 |
| | | 1000 | thus → 0, MED → 1, independent → 2 |
| MNLI-MISMATCHED | `<premise> <mask> <T> <T> <T> <T> <T> <T> <T> <T> <T> <T> <hypothesis>` | 8 | Whilst → 0, oka → 1, smokers → 2 |
| | | 16 | Accordingly → 0, )? → 1, foreigners → 2 |
| | | 32 | ibliography → 0, qa → 1, Governments → 2 |
| | | 64 | LER → 0, jack → 1, foreigners → 2 |
| | | 100 | HEL → 0, gaming → 1, imperialism → 2 |
| | | 1000 | Vladimir → 0, acting → 1, dislike → 2 |
| ENRON-SPAM | `<question> <mask> <T> <T> <T> <T> <T> <T> <T> <T> <T> <T> <sentence>` | 8 | Reviewer → 0, Pure → 1 |
| | | 16 | debian → 0, Discount → 1 |
| | | 32 | hillary → 0, Vampire → 1 |
| | | 64 | schedules → 0, Romance → 1 |
| | | 100 | subcommittee → 0, Beauty → 1 |
| | | 1000 | committee → 0, ophobic → 1 |
| TWEETS-HATE-OFFENSIVE | `<premise> <mask> <T> <T> <T> <T> <T> <T> <T> <T> <T> <T> <hypothesis>` | 8 | Slater → 0, herself → 1, issued → 2 |
| | | 16 | kicking → 0, her → 1, selections → 2 |
| | | 32 | athi → 0, herself → 1, vernight → 2 |
| | | 64 | racist → 0, Marie → 1, skies → 2 |
| | | 100 | racist → 0, vaginal → 1, Miracle → 2 |
| | | 1000 | homophobia → 0, b***h → 1, heavens → 2 |

Table 5: Auto prompts designed alongside with the automatically generated verbalisers for each dataset.
| Dataset | Model | Batch Size | η | wd |
|---|---|---|---|---|
| SST2 | Auto | 8 | 1e-5 | 0.01 |
| SST2 | Diff | 8 | 1e-5 | 0.01 |
| SST2 | Manual | 4 | 2e-5 | 0.01 |
| QNLI | Auto | 4 | 2e-5 | 0.1 |
| QNLI | Diff | 4 | 1e-5 | 0.1 |
| QNLI | Manual | 4 | 2e-5 | 0.01 |
| MNLI-MATCHED | Auto | 4 | 2e-5 | 0.01 |
| MNLI-MATCHED | Diff | 4 | 1e-5 | 0.01 |
| MNLI-MATCHED | Manual | 4 | 2e-5 | 0.01 |
| MNLI-MISMATCHED | Auto | 4 | 2e-5 | 0.01 |
| MNLI-MISMATCHED | Diff | 8 | 1e-5 | 0.01 |
| MNLI-MISMATCHED | Manual | 4 | 2e-5 | 0.01 |
| ENRON-SPAM | Auto | 8 | 1e-5 | 0.01 |
| ENRON-SPAM | Diff | 8 | 2e-5 | 0.0 |
| ENRON-SPAM | Manual | 8 | 2e-5 | 0.05 |
| TWEETS-HATE-OFFENSIVE | Auto | 8 | 2e-5 | 0.1 |
| TWEETS-HATE-OFFENSIVE | Diff | 8 | 2e-5 | 0.0 |
| TWEETS-HATE-OFFENSIVE | Manual | 8 | 2e-5 | 0.1 |

Table 6: Details of the selected hyper-parameters, including batch size, learning rate η and weight decay wd for each set of experiments with the same dataset and prompting model.
In terms of the evaluation metrics which measure the performance of the prompting models, we utilised two different metrics according to the nature of the datasets: (1) multi-class classification accuracy for the balanced datasets SST2, QNLI, MNLI-MATCHED and MNLI-MISMATCHED; (2) F1 score, which captures both precision and recall, for the safety-critical or unbalanced datasets ENRON-SPAM and TWEETS-HATE-OFFENSIVE.
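A minimal sketch of computing the two metrics with scikit-learn is shown below; the label arrays are toy values, and the macro averaging of F1 is shown as one plausible choice rather than a confirmed detail of the reported scores.

```python
# Sketch: the two evaluation metrics, computed with scikit-learn.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 1, 2, 2, 2]
y_pred = [0, 1, 2, 2, 2, 1]

print(accuracy_score(y_true, y_pred))             # balanced datasets (SST2, QNLI, MNLI)
print(f1_score(y_true, y_pred, average="macro"))  # unbalanced / safety-critical datasets
```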
Table 6 provides details for the training setups.
We show the batch sizes, learning rates and weight decay values used in the experiments, i.e., the optimal hyper-parameters for each set of experiments with the same dataset and prompting model. For example, the optimal hyper-parameters for SST2 with the Auto prompting model are batch size 8, learning rate 1e-5 and weight decay 0.01.
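As an illustration, these selected hyper-parameters can be plugged into a decoupled-weight-decay optimiser as in the sketch below (assuming PyTorch's AdamW, in line with Loshchilov and Hutter (2017)); the full training loop, scheduler and batching are omitted.

```python
# Sketch: optimiser setup for SST2 with the Auto prompting model
# (batch size 8, learning rate 1e-5, weight decay 0.01).
import torch
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("roberta-large")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5, weight_decay=0.01)
```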
## F Additional Results For More K-Shot Experiments
In Figure 2 (Section 3.3), we show the performance with more K values for SST2, QNLI and ENRONSPAM. Additional results in the same setup are shown in Figure 3.
## ACL 2023 Responsible NLP Checklist
## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 3.5 A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3
✓ B1. Did you cite the creators of artifacts you used?
Section 3.1
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Appendix A and Appendix B
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 3, Appendix A and Appendix B
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 3, Appendix A and Appendix B
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix B
## C ✓ **Did You Run Computational Experiments?** Section 3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3.1 and Appendix E
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 3.2 3.3 and Appendix F
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 3.1 and Appendix A
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
ganesh-etal-2023-mind | Mind the Gap between the Application Track and the Real World | https://aclanthology.org/2023.acl-short.156 | Recent advances in NLP have led to a rise in inter-disciplinary and application-oriented research. While this demonstrates the growing real-world impact of the field, research papers frequently feature experiments that do not account for the complexities of realistic data and environments. To explore the extent of this gap, we investigate the relationship between the real-world motivations described in NLP papers and the models and evaluation which comprise the proposed solution. We first survey papers from the NLP Applications track from ACL 2020 and EMNLP 2020, asking which papers have differences between their stated motivation and their experimental setting, and if so, mention them. We find that many papers fall short of considering real-world input and output conditions due to adopting simplified modeling or evaluation settings. As a case study, we then empirically show that the performance of an educational dialog understanding system deteriorates when used in a realistic classroom environment. |

# Mind The Gap Between The Application Track And The Real World
Ananya Ganesh, Jie Cao, E. Margaret Perkoff, Rosy Southwell, Martha Palmer, Katharina Kann
University of Colorado Boulder
[email protected]
## Abstract
Recent advances in NLP have led to a rise in inter-disciplinary and application-oriented research. While this demonstrates the growing real-world impact of the field, research papers frequently feature experiments that do not account for the complexities of realistic data and environments. To explore the extent of this gap, we investigate the relationship between the realworld motivations described in NLP papers and the models and evaluation which comprise the proposed solution. We first survey papers from the *NLP Applications* track from ACL 2020 and EMNLP 2020, asking which papers have differences between their stated motivation and their experimental setting, and if so, mention them.
We find that many papers fall short of considering real-world input and output conditions due to adopting simplified modeling or evaluation settings. As a case study, we then empirically show that the performance of an educational dialog understanding system deteriorates when used in a realistic classroom environment.
## 1 Introduction
Modern NLP systems, powered by large language models (LLMs), now have the ability to perform well at foundational natural language understanding and generation tasks (Wang et al., 2018; Brown et al., 2020). Such systems have also increased access and made inter-disciplinary contributions possible across fields such as medicine, law, education, and science. In NLP venues like ACL, the growth in applied and inter-disciplinary work can be witnessed in the NLP Applications track, which received the second-highest number of submissions at EMNLP 2022.
Recently published research from these tracks includes work on complex and important tasks such as synthesizing code for visualization (Chen et al., 2021), classifying operational risk in finance (Zhou et al., 2020), and verifying scientific claims (Wadden et al., 2020). However, the inherent complexities associated with real-world data distributions and workflows can lead to the actual problem being simplified into an artificial setting that does not realistically reflect the original motivation. For instance, systems may make assumptions about the input available (e.g., require providing pseudocode/docstrings for code generation), or only evaluate on manually curated clean data as opposed to noisier data such as automatic speech recognition (ASR) outputs.
Motivated by this observation and in line with the ACL 2023 theme track, **we set out to investigate the relationship between the motivation described in the introductions and the actual experiments in application-focused NLP papers.**
We survey papers from the *NLP Applications* tracks of ACL 2020 and EMNLP 2020. Specifically, we ask if there are gaps between motivation and experimentation, in the form of i) sub-tasks that are required for the application but have not been mentioned in the paper, and ii) data distributions that are expected in real-world conditions but have not been included in the paper's modeling or evaluation. We find that authors do not always explicitly mention the assumptions they make, and often operate in
| Question | Counts |
|----------------------------------------------------------------------------------|--------------------------|
| Does the paper comprehensively describe the use case for a reader to understand? | Yes: 15 |
| Is the paper dealing with an entire task or a subtask only? | Entire: 11; Subtask: 4 |
| Does the paper mention the other missing subtasks explicitly? | Yes: 1; No: 3 |
| Is the downstream evaluation realistic? | Yes: 7; No: 7; Unsure: 1 |
constrained scenarios highly different from their intended motivation.
To empirically demonstrate the severity of this problem, we then present a case study investigating the performance of an educational dialog system when the inputs are changed from manually transcribed data to transcripts from a state-of-the-art ASR system. The purpose of the system is to classify utterances made by a student in a classroom into *talkmoves* (Michaels and O'Connor, 2015; O'Connor and Michaels, 2019) that reflect the communication strategies they use, such as *making a claim* or *relating to another student*. We find that performance drops by 14.6 points (21.2%) when evaluating on Google ASR instead of human transcripts. However, ASR was not identified as a key component of the evaluation pipeline by the original work. We argue that as the field grows and NLP
models get better and better at simulated and constrained settings, it is important for us to explicitly consider additional complexities of our systems in practice. We then present suggestions for authors and organizers of conferences, towards this end.
## 2 Survey

## 2.1 Method
For the survey of application-oriented research papers, we look at all papers from the *NLP Applications* track of two recent NLP conferences, ACL 2020 and EMNLP 2020, which have a total of 115 papers. These conferences, which were conducted virtually, provide publicly available interfaces (https://virtual.2020.emnlp.org/index.html and https://virtual.2020.acl.org/index.html) that allow automatically filtering papers by the track they were submitted to.
We then manually filter papers to identify those that propose and work on *new tasks*. We choose these since papers that tackle existing tasks, such as fact checking, might be restricted to existing benchmarks and datasets that are established in a topic (Thorne et al., 2018). In contrast, papers that propose a new task, such as recommending fonts suitable for written text (Shirani et al., 2020),
can integrate considerations about the environment where the task will be used into their problem formulation and evaluation setup. We end up with 12 papers from EMNLP 2020 and 3 papers from ACL 2020 that deal with new tasks.
We then answer four questions about each paper:
1. *Does the paper comprehensively describe the* use case for a reader to understand? This question helps us establish that the motivations of the authors are clear to us before proceeding with the survey. We discard papers if the answer is no here.
2. *Is the paper dealing with an entire task or a* sub-task only? An example of the sub-task only would be if the desired application was assisting students with writing by providing feedback, but the actual task worked on was detecting errors in writing, with the task of formulating feedback being a sub-task for future work.
3. *Does the paper mention the other missing subtasks explicitly?* We investigate if the authors either mention existing systems that work on the other sub-tasks, or explicitly describe the remaining steps as future work. This is only collected when the answer to Q2 is "sub-task only".
4. *Is the downstream evaluation realistic?* An example of the answer being No, is if the expected use-case requires classifying spoken dialog in real-time, but the paper only evaluates on manually transcribed data.
The survey is conducted by three authors of this paper, who have all been working on NLP for 3+
years. In cases where agreement is not perfect, we report the majority answer. While all four questions take either yes or no for an answer, we optionally collect reasons for answering no on Questions 1 and 4. We only accept *unsure* as an answer when no decision can be made.
## 2.2 Findings
The results of the survey are presented in Table 1.
In response to the second question, we find that 4 out of 15 papers work on sub-tasks of the overall system; however, only one of these papers explicitly mentions the other sub-tasks as components of the pipeline. Overlooked are tasks such as machine translation, performing grammatical error correction, and performing document retrieval prior to classification. In response to the fourth question, we find that 7 out of 15 papers do not include evaluations that are realistic for the setting in which they might be deployed. Some comments provided by the annotators as evidence include "evaluating only on transcribed dialog and not on ASR", "evaluating only on data translated from the original language",
"not incorporating retrieval performance into evaluation pipeline" and "not checking the validity of integrated evidence." One of the responses to the last question is *unsure*, provided by two of the annotators, while the third annotator answered yes. One annotator's rationale for being unable to decide is that the output space modeled in the paper does not adequately reflect that seen by a user, while the second annotator claims that the task is highly subjective.
We compute inter-rater agreement using Krippendorff's α, used when there are more than two annotators (Artstein and Poesio, 2008). On Questions 2, 3, and 4, the α values are 0.39, 0.44, and 0.44. While the relatively low values reflect the subjective nature of assessing application-oriented work qualitatively, our three-way annotation process and majority voting reduce the effect of an overly strict or lenient annotator. Overall, our findings indicate that application-oriented papers display some gaps that need to be addressed before the intended application is viable. While this gap often occurs in the evaluation pipeline, we highlight the importance of adequately describing all components or sub-tasks essential for an application in practice.
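For reference, Krippendorff's α over three annotators can be computed as in the sketch below, using NLTK's agreement module; the annotation triples shown are invented for illustration and are not the actual survey annotations.

```python
# Sketch: Krippendorff's alpha for three annotators with NLTK.
from nltk.metrics.agreement import AnnotationTask

# Triples of (annotator, item, label); labels here are illustrative only.
data = [
    ("a1", "paper1", "yes"), ("a2", "paper1", "yes"), ("a3", "paper1", "no"),
    ("a1", "paper2", "no"),  ("a2", "paper2", "no"),  ("a3", "paper2", "no"),
    ("a1", "paper3", "yes"), ("a2", "paper3", "no"),  ("a3", "paper3", "yes"),
]
task = AnnotationTask(data=data)
print(task.alpha())
```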
## 3 Case Study
In this section, we present a case study of an application from the domain of education. The task involves classifying student utterances into *talk moves* (Michaels and O'Connor, 2015), which are strategies provided by the Academically Productive Talk framework (Michaels et al., 2008) that students and teachers use for maintaining productive and respectful discourse in a classroom. We empirically analyze the impact of evaluating this task only in a constrained, artificial environment, as opposed to a more realistic setting.
## 3.1 Dataset And Models
Dataset The data consists of conversations among middle school students performing collaborative work in science classrooms, documented in more detail in Southwell et al. (2022). Groups of 2-4 consenting students are seated at each table, and audio is collected through table-top Yeti Blue microphones. In total, 31 five-minute dialogue sessions are chosen for the *talk moves* analysis. Like most papers in our survey, we build a high-quality dataset for our application: samples were filtered and transcribed manually ("human" transcript) by a team of three annotators, resulting in 2003 student utterances. There are five student talk moves under the APT scheme, including Relating to another student, Asking for more info, Making a Claim, Providing evidence or reasoning, and *None*. We additionally include the label *Not enough context* when the annotators cannot make a decision. Examples of all labels can be found in Appendix A. Due to label imbalance, we cluster the labels into 3 categories (NONE, LEARNING COMMUNITY (LC) and OTHER) . Our clustering follows the higher-level grouping of talk moves into *Learning Community*,
Content Knowledge, and *Rigorous Thinking* as defined in (Resnick et al., 2018). The dataset is then divided by session into training/dev/test splits for our model.
Model Following the state-of-the-art model for classifying teacher *talk moves* (Suresh et al., 2022),
we build our student *talk moves* model by finetuning the RoBERTa-base (Liu et al., 2019) model for sequence classification. We use the previous N = 6 utterances as the context when predicting the *talkmove* label for the current utterance, after experimenting with multiple context windows (N)
on our development set. As a baseline, we develop a random classifier using the scikit-learn DummyClassifier (Pedregosa et al., 2011), that ignores input features and uses training label distributions to make a decision. Our models are trained and validated on cleaned human transcriptions. While we do not experiment with *training* on the ASR
transcripts for the current case study, results for this setting can be found in Cao et al. (2023).
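The sketch below illustrates these two components: a baseline that samples labels from the training label distribution, and the construction of a classifier input from the previous N utterances of context (we use N = 6). The function names, separator token, and example utterances (taken from Table 4) are illustrative choices, not the exact implementation.

```python
# Sketch: (1) a label-distribution random baseline and (2) building the
# classifier input from the previous N utterances plus the target utterance.
from sklearn.dummy import DummyClassifier

# (1) Baseline that ignores features and samples from the training label
# distribution (labels: 0 = NONE, 1 = LC, 2 = OTHER); "stratified" is one
# plausible strategy choice for this behaviour.
train_labels = [0, 1, 1, 2, 0, 1]
baseline = DummyClassifier(strategy="stratified")
baseline.fit([[0]] * len(train_labels), train_labels)
print(baseline.predict([[0], [0]]))

# (2) Context window of the previous N utterances.
def build_input(utterances, index, n_context=6, sep=" </s> "):
    context = utterances[max(0, index - n_context):index]
    return sep.join(context + [utterances[index]])

dialog = ["Press the button", "My bad", "You need to code that",
          "I don't understand number four"]
print(build_input(dialog, index=3, n_context=6))
```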
## 3.2 Distribution Shift: Human Vs. Asr
However, when deploying our models in the classroom, we do not have access to clean human transcripts, and instead need to work with the outputs of ASR systems. To compare the differences between both, we look at two state-of-the-art ASR systems:
Google (Google, 2023) and OpenAI Whisper (Radford et al., 2022).2 Table 2 shows the distribution shift between human and ASR transcripts. Because of the noisy small-group classroom setting, some student utterances are difficult to recognize, resulting in imperfect ASR transcriptions with incomplete or empty utterances. This causes the input distributions to vary between human and ASR transcripts. Additionally, when the empty utterances are filtered out, the label distribution also shifts across human and different ASRs. To provide as fair a comparison as possible with the original human transcripts, we create two versions of the ASR
data. The first version, denoted using the subscript
'filter' is filtered such that empty utterances are removed, which results in its size varying from the human transcripts. The second version, denoted by the subscript 'all', retains all ASR utterances where the corresponding human transcription is not empty, thus resulting in the same number of utterances as the original human transcripts.
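A minimal sketch of obtaining Whisper transcripts for per-utterance audio clips and building the 'filter' variant is shown below; it assumes the openai-whisper package and already-segmented clips with hypothetical file names, and the Google Speech-to-Text side is omitted.

```python
# Sketch: transcribing per-utterance clips with Whisper and building the
# "filter" variant by dropping empty transcriptions.
import whisper

model = whisper.load_model("base")  # illustrative; a larger checkpoint may be used

def transcribe(paths):
    return [model.transcribe(p)["text"].strip() for p in paths]

utterance_clips = ["clip_001.wav", "clip_002.wav"]  # hypothetical file names
asr_all = transcribe(utterance_clips)
asr_filtered = [t for t in asr_all if t]  # remove empty outputs
```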
## 3.3 Results
To show the performance gap caused by the above distribution shift, we evaluate our model on both human transcriptions and transcriptions from the two ASR systems. For each ASR transcript, we report both performances on their filtered version (Googlefilter, Whisperfilter) and the all version (Googleall, Whisperall).
2We select Google as it has been shown to work as well for children as adults (Rodrigues et al., 2019) and outperform similar services (Filippidou and Moussiades, 2020).
Table 3: Results on student talk move classification.
| Testing | macro F1 | NONE | LC | OTHER |
|-------------------|------------|--------|-------|---------|
| Random Baselines | | | | |
| Human | 0.316 | 0.393 | 0.353 | 0.201 |
| Googlefilter | 0.321 | 0.379 | 0.352 | 0.230 |
| Whisperfilter | 0.317 | 0.392 | 0.357 | 0.202 |
| Googleall | 0.306 | 0.385 | 0.344 | 0.190 |
| Whisperall | 0.312 | 0.390 | 0.354 | 0.193 |
| Training on Human | | | | |
| Human | 0.689 | 0.701 | 0.783 | 0.581 |
| Googlefilter | 0.591 | 0.555 | 0.635 | 0.581 |
| Whisperfilter | 0.614 | 0.625 | 0.601 | 0.617 |
| Googleall | 0.543 | 0.59 | 0.572 | 0.467 |
| Whisperall | 0.599 | 0.641 | 0.558 | 0.599 |
| Label | Human (train / dev) | Googlefilter (train / dev) | Whisperfilter (train / dev) |
|---|---|---|---|
| Non-Empty | 991 / 371 | 646 / 223 | 869 / 338 |
| NONE | 299 / 109 | 153 / 62 | 252 / 96 |
| LC | 515 / 194 | 361 / 108 | 450 / 176 |
| OTHER | 177 / 73 | 132 / 53 | 167 / 66 |

Table 2: Distribution shift between human and filtered ASR transcripts (utterance and label counts in the train and dev splits).
We report macro F1 as well as class-wise F1 for all models, as shown in Table 3. The top rows show performance of the random baseline. Because of the shift in label distributions, as described in Section 3.2, even the input-agnostic random baselines vary for the different versions. Looking at the model performances, we see that overall macro F1 drops by 8.91 points for Whisperall (a 12% drop) and 14.6 points
(a 21% drop) for Googleall when comparing across transcripts that have the same length.
When considering real-world deployment, the potential for such a dramatic drop in performance should be taken into account by both the designer
(including researchers) and the user (such as teachers). However, for similar applications based on classroom discourse analysis, such as classifying teacher talk moves (Suresh et al., 2022), predicting appropriate next teacher talk moves (Ganesh et al., 2021) or measuring teacher uptake of student ideas (Demszky et al., 2021), comparisons to ASR
transcriptions to illustrate real-world performance are rarely made, and, in many cases, ASR as a component is never mentioned.
## 4 Discussion
Through the above survey and case study, we qualitatively and quantitatively examine the gap between task-focused solutions in NLP research, and realistic use cases. We first acknowledge that there has existed a long-standing tradition in NLP to contextualize current research efforts through potential future applications. Looking at task-oriented dialog systems for example, early work such as Deutsch
(1975) was motivated by the need to design computational assistants to support humans in mechanical tasks, and discussed the construction of essential components such as discourse processors, despite missing key upstream and downstream components such as ASR or dialog generation. Investigating sub-problems and their respective solutions in environments that are distinct from real-word settings has largely been unavoidable and sometimes even desirable. However, we argue that with the growth of the field and with the progress enabled by LLMs and related advances, we now have the opportunity to examine how closely our experimental setups can reflect our long term goals. Additionally, for papers that are explicitly in the *Applications* track, which present new applications intended to satisfy a real-world user need, we believe it is even more important to consider the bigger picture, and accurately describe necessary next steps for making the application a reality.
To bridge this gap, we propose a few initial recommendations: i) we suggest including a question on the Responsible NLP Checklist3 pertinent to application-oriented papers, asking if the experimental setup has taken into account the real-world conditions of the application, ii) we recommend that authors describe any potential gaps between their motivation and proposed solution, and if so, state what is lost in the gap (such as ASR), and iii) we call for work to investigate ways to explicitly account for the gap, such as simulating noisy input data in cases where accessing the true distributions is not possible. We invite discussion from the research community on other ways forward.
## 5 Related Work
Our paper adds to a body of work on meta-analysis of NLP papers and the state of NLP research, particularly from the recently introduced theme tracks at *ACL conferences (Bianchi and Hovy, 2021; Bowman, 2022; Kann et al., 2022). Similarly to us in that the authors examine evaluation practices, Bowman and Dahl (2021) points out problems with benchmarking, while Rodriguez et al. (2021) proposes ways to improve leaderboards in order to truly track progress. Other papers that critically examine evaluation and leaderboards include Ribeiro et al. (2020); Dodge et al. (2019) and Ethayarajh and Jurafsky (2020). In contrast, we focus on discrepancies between proposed experimental settings and the stated motivation of research endeavours.
3https://aclrollingreview.org/responsibleNLPresearch/
In addition, Bowman (2022) discusses that, similar to problematic hype, underclaiming when talking about NLP models comes with risks, and Bianchi and Hovy (2021) highlights multiple concerning trends in NLP research. More broadly, Lipton and Steinhardt (2019) discuss concerns with ML scholarship, and Church (2020) draws attention to downward trends in reviewing quality and how these can potentially be mitigated.
## 6 Conclusions
We investigate the "gap" between the motivations of application-focused NLP papers and their actual experimental setting. Through a survey of NLP
Applications papers from two NLP conferences, we find that i) necessary components for the application get overlooked when papers focus on subtasks and ii) realistic input sources such as ASR
are not being considered in downstream evaluations. We further highlight the severity of the latter issue through a case study on a dialog understanding system intended for classrooms, showing the drop in performance when ASR input, expected in the real-world, is used. While we outline potential strategies to address this issue, we hope our work will spur further discussion about future steps.
## Limitations
One of the limitations of our survey is that it covers a limited sample space of 15 papers from EMNLP 2020 and ACL 2020. While a larger sample would be helpful in gathering more evidence, access to specific tracks is limited at NLP conferences, unless hosted online via a virtual or hybrid system.
With respect to our case study, we evaluate on the ASR utterances, but with labels corresponding to the original manual transcriptions. For a perfect comparison, the ASR utterances would need to be re-annotated as the talk move could change based on the severity of transcription errors.
## Acknowledgments
We thank the anonymous reviewers for their thoughtful feedback and suggestions. This research was supported by the NSF National AI Institute for Student-AI Teaming (iSAT) under grant DRL
2019805. The opinions expressed are those of the authors and do not represent views of the NSF.
## References
Ron Artstein and Massimo Poesio. 2008. Survey article:
Inter-coder agreement for computational linguistics.
Computational Linguistics, 34(4):555–596.
Federico Bianchi and Dirk Hovy. 2021. On the gap between adoption and understanding in NLP. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 3895–3901, Online.
Association for Computational Linguistics.
Samuel Bowman. 2022. The dangers of underclaiming: Reasons for caution when reporting how NLP
systems fail. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics
(Volume 1: Long Papers), pages 7484–7499, Dublin, Ireland. Association for Computational Linguistics.
Samuel R. Bowman and George Dahl. 2021. What will it take to fix benchmarking in natural language understanding? In *Proceedings of the 2021 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4843–4855, Online. Association for Computational Linguistics.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In *Advances in Neural Information Processing Systems*,
volume 33, pages 1877–1901. Curran Associates, Inc.
Jie Cao, Ananya Ganesh, Jon Cai, Rosy Southwell, Margaret Perkoff, Michael Regan, Katharina Kann, James Martin, Martha Palmer, and Sidney D'Mello.
2023. A comparative analysis of automatic speech recognition errors in small group classroom discourse. In Proceedings of the 31st ACM Conference on User Modeling, Adaptation and Personalization, UMAP
'23. Association for Computing Machinery.
Xinyun Chen, Linyuan Gong, Alvin Cheung, and Dawn Song. 2021. PlotCoder: Hierarchical decoding for synthesizing visualization code in programmatic context. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 2169–2181, Online. Association for Computational Linguistics.
Kenneth Ward Church. 2020. Emerging trends: Reviewing the reviewers (again). Natural Language Engineering, 26(2):245–257.
Dorottya Demszky, Jing Liu, Zid Mancenido, Julie Cohen, Heather Hill, Dan Jurafsky, and Tatsunori Hashimoto. 2021. Measuring conversational uptake:
A case study on student-teacher interactions. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)*, pages 1638–1653, Online. Association for Computational Linguistics.
Barbara G. Deutsch. 1975. Establishing context in task-oriented dialogs. *American Journal of Computational Linguistics*, pages 4–18. Microfiche 35.
Jesse Dodge, Suchin Gururangan, Dallas Card, Roy Schwartz, and Noah A. Smith. 2019. Show your work: Improved reporting of experimental results. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2185–
2194, Hong Kong, China. Association for Computational Linguistics.
Kawin Ethayarajh and Dan Jurafsky. 2020. Utility is in the eye of the user: A critique of NLP leaderboards.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing (EMNLP),
pages 4846–4853, Online. Association for Computational Linguistics.
Foteini Filippidou and Lefteris Moussiades. 2020. A
benchmarking of ibm, google and wit automatic speech recognition systems. In IFIP International Conference on Artificial Intelligence Applications and Innovations, pages 73–82. Springer.
Ananya Ganesh, Martha Palmer, and Katharina Kann.
2021. What would a teacher do? Predicting future talk moves. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 4739–4751, Online. Association for Computational Linguistics.
Google. 2023. Google speech-to-text. https://cloud.
google.com/speech-to-text/. [Online; accessed 20-Jan-2022].
Katharina Kann, Shiran Dudy, and Arya D. McCarthy.
2022. A major obstacle for nlp research: Let's talk about time allocation!
Zachary C. Lipton and Jacob Steinhardt. 2019. Troubling trends in machine learning scholarship: Some ml papers suffer from flaws that could mislead the public and stymie future research. *Queue*,
17(1):45–77.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692.
Sarah Michaels and Catherine O'Connor. 2015. Conceptualizing talk moves as tools: Professional development approaches for academically productive discussion. Socializing intelligence through talk and dialogue, 347:362.
Sarah Michaels, Catherine O'Connor, and Lauren B
Resnick. 2008. Deliberative discourse idealized and realized: Accountable talk in the classroom and in civic life. *Studies in philosophy and education*,
27(4):283–297.
Catherine O'Connor and Sarah Michaels. 2019. Supporting teachers in taking up productive talk moves:
The long road to professional learning at scale. *International Journal of Educational Research*, 97:166–
175.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*,
12:2825–2830.
Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2022.
Robust speech recognition via large-scale weak supervision. *arXiv preprint arXiv:2212.04356*.
Lauren B Resnick, Christa SC Asterhan, and Sherice N
Clarke. 2018. Accountable talk: Instructional dialogue that builds the mind. Geneva, Switzerland:
The International Academy of Education (IAE) and the International Bureau of Education (IBE) of the United Nations Educational, Scientific and Cultural Organization (UNESCO).
Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Behavioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4902–
4912, Online. Association for Computational Linguistics.
Ana Rodrigues, Rita Santos, Jorge Abreu, Pedro Beça, Pedro Almeida, and Sílvia Fernandes. 2019. Analyzing the performance of asr systems: The effects of noise, distance to the device, age and gender. In Proceedings of the XX International Conference on Human Computer Interaction, pages 1–8.
Pedro Rodriguez, Joe Barrow, Alexander Miserlis Hoyle, John P. Lalor, Robin Jia, and Jordan BoydGraber. 2021. Evaluation examples are not equally informative: How should that change NLP leaderboards? In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4486–4503, Online. Association for Computational Linguistics.
Amirreza Shirani, Franck Dernoncourt, Jose Echevarria, Paul Asente, Nedim Lipka, and Thamar Solorio.
2020. Let me choose: From verbal context to font selection. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 8607–8613, Online. Association for Computational Linguistics.
R. Southwell, S. Pugh, E.M. Perkoff, C. Clevenger, J. Bush, and S. D'Mello. 2022. Challenges and feasibility of automatic speech recognition for modeling student collaborative discourse in classrooms. In Proceedings of the 15th International Conference on Educational Data Mining. International Educational Data Mining Society.
Abhijit Suresh, Jennifer Jacobs, Margaret Perkoff, James H. Martin, and Tamara Sumner. 2022. Finetuning transformers with additional context to classify discursive moves in mathematics classrooms. In Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA
2022), pages 71–81, Seattle, Washington. Association for Computational Linguistics.
James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018.
FEVER: a large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long Papers), pages 809–819, New Orleans, Louisiana.
Association for Computational Linguistics.
David Wadden, Shanchuan Lin, Kyle Lo, Lucy Lu Wang, Madeleine van Zuylen, Arman Cohan, and Hannaneh Hajishirzi. 2020. Fact or fiction: Verifying scientific claims. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 7534–7550, Online. Association for Computational Linguistics.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE:
A multi-task benchmark and analysis platform for natural language understanding. In *Proceedings of the* 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics.
Fan Zhou, Shengming Zhang, and Yi Yang. 2020. Interpretable operational risk classification with semisupervised variational autoencoder. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, pages 846–852, Online.
Association for Computational Linguistics.
## A Talk Move And Label Clustering
Table 4 shows the original student *talk moves* in our dataset. We merged the two labels related to learning community as a single label LC, and then
| Label | Talk Move | Counts | Example |
|---|---|---|---|
| NONE | None | 299 | "OK", "Alright", "Let's do the next step." |
| LC | Relating to another student | 512 | "My bad", "Press the button", "You need to code that" |
| LC | Asking for more info | 3 | "I don't understand number four." |
| OTHER | Making a claim | 41 | "We should place the wire on P2.", "We could do a winky face next." |
| OTHER | Providing evidence or reasoning | 1 | "Because that's how loud our class usually is." |
| OTHER | Not enough context | 139 | "Here", "Do you mean [inaudible]" |
merged the two rare labels "Making a claim" and "Providing evidence or reasoning" with "Not Enough Context" to form a new label, OTHER.
## ACL 2023 Responsible NLP Checklist
## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
2
✗ B1. Did you cite the creators of artifacts you used?
Left blank.
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
✗ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C ✓ **Did You Run Computational Experiments?** 3
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
3.1 C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Not applicable. Left blank.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Left blank.
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Left blank.
✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Left blank.
✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Left blank. |
wang-etal-2023-distill | How to Distill your {BERT}: An Empirical Study on the Impact of Weight Initialisation and Distillation Objectives | https://aclanthology.org/2023.acl-short.157 | Recently, various intermediate layer distillation (ILD) objectives have been shown to improve compression of BERT models via Knowledge Distillation (KD). However, a comprehensive evaluation of the objectives in both task-specific and task-agnostic settings is lacking. To the best of our knowledge, this is the first work comprehensively evaluating distillation objectives in both settings. We show that attention transfer gives the best performance overall. We also study the impact of layer choice when initializing the student from the teacher layers, finding a significant impact on the performance in task-specific distillation. For vanilla KD and hidden states transfer, initialisation with lower layers of the teacher gives a considerable improvement over higher layers, especially on the task of QNLI (up to an absolute percentage change of 17.8 in accuracy). Attention transfer behaves consistently under different initialisation settings. We release our code as an efficient transformer-based model distillation framework for further studies. |
## How To Distill Your Bert: An Empirical Study On The Impact Of Weight Initialisation And Distillation Objectives
Xinpeng Wang∗ Leonie Weissweiler∗⋄ Hinrich Schütze∗⋄ **Barbara Plank**∗⋄
∗Center for Information and Language Processing (CIS), LMU Munich, Germany
⋄Munich Center for Machine Learning (MCML), Munich, Germany
{xinpeng, weissweiler, bplank}@cis.lmu.de
## Abstract
Recently, various intermediate layer distillation (ILD) objectives have been shown to improve compression of BERT models via Knowledge Distillation (KD). However, a comprehensive evaluation of the objectives in both taskspecific and task-agnostic settings is lacking.
To the best of our knowledge, this is the first work comprehensively evaluating distillation objectives in both settings. We show that attention transfer gives the best performance overall.
We also study the impact of layer choice when initializing the student from the teacher layers, finding a significant impact on the performance in task-specific distillation. For vanilla KD and hidden states transfer, initialisation with lower layers of the teacher gives a considerable improvement over higher layers, especially on the task of QNLI (up to an absolute percentage change of 17.8 in accuracy). Attention transfer behaves consistently under different initialisation settings. We release our code as an efficient transformer-based model distillation framework for further studies.1
## 1 Introduction
Large-scale pre-trained language models (PLMs)
have brought revolutionary advancements to natural language processing, such as BERT (Devlin et al., 2019), XLNet (Yang et al., 2019), ELECTRA
(Clark et al., 2020) and GPT-3 (Brown et al., 2020).
However, the enormous size of these models has led to difficulties in deploying them in resourceconstrained environments. Therefore significant interest has emerged in developing methods for reducing their size.
Knowledge Distillation (KD) (Hinton et al.,
2015) transfers the knowledge embedded in one model to another, which can be used for cross-lingual transfer, cross-modal transfer, and model compression. KD heavily depends on the distillation objective, which determines how knowledge is transferred.1 Many works have tried to design different distillation objectives for Transformer-based (Vaswani et al., 2017) model compression and successfully distilled PLMs into smaller models, either task-specifically (Sun et al., 2019a; Jiao et al., 2020) or task-agnostically—which differ in whether KD is performed at the pre-training stage or during task finetuning (Sanh et al., 2019; Sun et al., 2020b; Wang et al., 2020; Wang et al., 2021).

1https://github.com/mainlp/How-to-distill-your-BERT
Despite their impressive results, determining the best distillation objective is difficult due to their diverse comparison setups, such as data preprocessing, student model initialization, layer mapping strategies, task-specific/agnostic settings, and others. This breadth of choices and lack of code has led to comparison on unequal grounds and contradictory findings.2 This shows a substantial need to reproduce and evaluate distillation objectives within the same setting. Motivated by this gap, we conduct experiments on the most common distillation objectives and their combinations in taskspecific and task-agnostic settings. From our empirical evaluation, we show: (1) attention transfer performs consistently well in various initialisation settings, (2) initialisation with lower layers of the teacher gives a considerable improvement over higher layers in task-specific distillation.
In summary, our **contributions** are:
- We perform an evaluation of the effectiveness of different distillation objectives and the layer choice for initializing the student from the teacher layer.
- We make our code available as an efficient distillation framework.
- We provide practical guidance in terms of teacher layer choice for initialisation, distillation objectives and training parameters.
2For example, both Jiao et al. (2020) and Wang et al. (2020)
claimed to be the better method in their setting. See section 5 for detail.
## 2 Related Work
Task-specific Distillation Sun et al. (2019b) task-specifically compressed BERT by learning from every k-th layer of the teacher. To avoid leaving out some of the teacher layers, many follow-up works (Wu et al., 2020, Passban et al., 2021, Wu et al., 2021) designed new layer mapping strategies to fuse the teacher layers. Jiao et al. (2020)
used data augmentation to further improve the performance. Initialising the student model with pretrained weights is crucial for performance since the student learns from the teacher only shortly in downstream tasks. Common choices for initialization are: (1) task-agnostically distilling models first, (2) using publicly available distilled models, or (3) initializing with teacher layers. As part of this study, we examine how to maximize the benefits of initializing from teacher layers.
Task-agnostic Distillation In the field of taskagnostic distillation, one line of work is to compress the teacher model into a student model with the same depth but narrower blocks (Sun et al.,
2020b, Zhang et al., 2022). Another line of work is to distill the teacher into a student with fewer layers (Sanh et al., 2019, Jiao et al., 2020, Wang et al., 2020, Wang et al., 2021), which is our focus.
Comparative Studies Li et al. (2021) conducted out-of-domain and adversarial evaluation on three KD methods, which used hidden state transfer or data augmentation. Lu et al. (2022) is closely related to our work, where they also evaluated knowledge types and initialisation schemes. However, they did not consider layer choice when initialising from the teacher, and the evaluation was only for task-specific settings. Hence, our work complements theirs.
## 3 Distillation Objectives
Prediction Layer Transfer Prediction layer transfer minimizes the soft cross-entropy between the logits from the teacher and the student: $\mathcal{L}_{pred} = \mathrm{CE}(z^T/t,\, z^S/t)$, with $z^T$ and $z^S$ the logits from the teacher/student and $t$ the temperature value. Following the vanilla KD approach (Hinton et al., 2015), the final training loss is a combination of $\mathcal{L}_{pred}$ and the supervision loss $\mathcal{L}_{ce}$ (the masked language modelling loss $\mathcal{L}_{mlm}$ in the pre-training stage). We denote this objective as **vanilla KD**.
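As an illustration of the vanilla KD objective above, here is a minimal PyTorch-style sketch; the temperature `t` and the mixing weight `alpha` are our own illustrative parameters, not values prescribed by this paper.

```python
import torch.nn.functional as F

def vanilla_kd_loss(student_logits, teacher_logits, labels, t=2.0, alpha=0.5):
    # L_pred: soft cross-entropy between temperature-scaled teacher and student logits.
    soft_targets = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    l_pred = -(soft_targets * student_log_probs).sum(dim=-1).mean()
    # L_ce: supervision loss on gold labels (an MLM loss would take its place at pre-training time).
    l_ce = F.cross_entropy(student_logits, labels)
    return alpha * l_pred + (1 - alpha) * l_ce
```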
Hidden States Transfer Hidden states transfer penalizes the distance between the hidden states of specific layers from the teacher and the student. Common choices for the representation are the embedding of the [CLS] token (Sun et al.,
2019b) and the whole sequence embedding (Jiao et al., 2020). We use Mean-Squared-Error (MSE)
to measure the distance between the student and teacher embedding, which can be formulated as $\mathcal{L}_{hid} = \mathrm{MSE}(h^S W^h, h^T)$, where $h^S \in \mathbb{R}^{d}$ and $h^T \in \mathbb{R}^{d'}$ are the [CLS] token embeddings of a specific student and teacher layer, and $d$ and $d'$ are the hidden dimensions. The matrix $W^h \in \mathbb{R}^{d \times d'}$ is a learnable transformation. We denote this objective as **Hid-CLS**. In the case of transferring the sequence embedding, one can replace the token embeddings with sequence embeddings $H^S \in \mathbb{R}^{l \times d}$ and $H^T \in \mathbb{R}^{l \times d'}$, where $l$ is the sequence length. The objective that transfers the sequence embedding with MSE loss is denoted as **Hid-Seq**.
We also evaluated a contrastive representation learning method which transfers the hidden state representation from the teacher to the student with a contrastive objective (Sun et al., 2020a). We inherited their code for implementation and refer our readers to the original paper for details. We denote this objective as **Hid-CLS-Contrast**.
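A minimal sketch of the MSE-based hidden states transfer, covering both Hid-CLS and Hid-Seq depending on the shape of the inputs (PyTorch, with our own class and variable names):

```python
import torch.nn as nn

class HiddenStateTransfer(nn.Module):
    def __init__(self, d_student, d_teacher):
        super().__init__()
        # Learnable projection W^h mapping the student space (d) into the teacher space (d').
        self.proj = nn.Linear(d_student, d_teacher, bias=False)
        self.mse = nn.MSELoss()

    def forward(self, h_student, h_teacher):
        # Hid-CLS: inputs of shape (batch, d) / (batch, d') -- [CLS] embeddings.
        # Hid-Seq: inputs of shape (batch, seq_len, d) / (batch, seq_len, d').
        return self.mse(self.proj(h_student), h_teacher)
```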
Attention and Value Transfer The attention mechanism has been found to capture rich linguistic knowledge (Clark et al., 2019), and attention map transfer is widely used in transformer model distillation. To measure the similarity between the multi-head attention block of the teacher and the student, MSE and Kullback-Leibler divergence are the two standard loss functions. The objective using MSE is formulated as $\mathcal{L}_{att} = \frac{1}{h}\sum_{i=1}^{h} \mathrm{MSE}(A^S_i, A^T_i)$, where $h$ is the number of attention heads and the matrices $A_i \in \mathbb{R}^{l \times l}$ refer to the $i$-th attention head (before the softmax operation) in the multi-head attention block. We denote this objective as **Att-MSE**.

Since the attention after the softmax function is a distribution over the sequence, we can also use the KL-divergence to measure the distance: $\mathcal{L}_{att} = \frac{1}{TH}\sum_{t=1}^{T}\sum_{h=1}^{H} D_{KL}(a^T_{t,h} \,\|\, a^S_{t,h})$, where $T$ is the sequence length and $H$ is the number of attention heads. We will denote this objective as **Att-KL**. In addition to attention transfer, value-relation transfer was proposed by Wang et al. (2020), to which we refer our readers for details. The value-relation transfer objective will be denoted as **Val-KL**.
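The two attention-transfer losses above can be sketched as follows; this is a PyTorch-style illustration with assumed tensor shapes, not the authors' released implementation.

```python
import torch.nn.functional as F

def att_mse_loss(student_att, teacher_att):
    # Att-MSE: mean-squared error between pre-softmax attention scores,
    # averaged over all elements (batch, heads, query and key positions).
    return F.mse_loss(student_att, teacher_att)

def att_kl_loss(student_att, teacher_att):
    # Att-KL: KL divergence D_KL(teacher || student) between the post-softmax
    # attention distributions over the sequence, averaged over the batch.
    teacher_probs = F.softmax(teacher_att, dim=-1)
    student_log_probs = F.log_softmax(student_att, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
```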
| Objectives | QNLI (Acc) | SST-2 (Acc) | MNLI (Acc) | MRPC (F1) | QQP (Acc) | RTE (Acc) | CoLA (Mcc) | Avg |
|---|---|---|---|---|---|---|---|---|
| Vanilla KD | 66.5±1.49 | 84.7±0.16 | 75.1±0.05 | 71.2±0.80 | 81.9±0.10 | 54.0±1.24 | 69.1±0.00 | 71.8 |
| Hid-CLS-Contrast | 69.3±0.60 | 85.3±0.56 | 76.2±0.45 | 71.1±0.85 | 83.1±0.69 | 53.6±0.23 | 69.0±0.12 | 72.5 |
| Hid-CLS | 75.7±0.57 | 85.8±0.34 | 77.0±0.10 | 71.3±0.41 | 83.8±1.63 | 54.0±2.17 | 68.4±0.35 | 73.2 |
| Hid-Seq | 83.3±0.13 | 87.4±0.13 | 78.3±0.13 | **72.9**±0.50 | 87.6±0.00 | 51.8±1.10 | 69.2±0.55 | 75.8 |
| Att-MSE | 84.3±0.18 | 89.2±0.40 | 78.6±0.25 | 71.1±0.41 | 88.7±0.05 | 54.4±1.03 | 69.3±0.17 | 76.5 |
| +Hid-Seq | 84.6±0.29 | 89.2±0.21 | 78.9±0.10 | 71.8±0.51 | 88.8±0.00 | 54.0±0.93 | **69.5**±0.48 | 77.0 |
| Att-KL | 85.3±0.14 | 89.0±0.26 | 79.4±0.08 | 71.4±0.29 | 89.0±0.05 | 55.5±2.05 | 69.3±0.13 | 77.0 |
| +Hid-Seq | 84.6±0.21 | 89.1±0.46 | 79.5±0.17 | 72.4±0.39 | 89.0±0.06 | 57.2±0.86 | 69.3±0.21 | 77.3 |
| +Val-KL | **85.5**±0.24 | **89.6**±0.31 | **79.6**±0.10 | 72.2±0.39 | **89.1**±0.05 | **57.5**±0.70 | 69.2±0.15 | **77.5** |

Table 1: Distillation objective performance in the task-specific setting.
| Objectives | QNLI (Acc) | SST-2 (Acc) | MNLI (Acc) | MRPC (F1) | QQP (Acc) | RTE (Acc) | CoLA (Mcc) | Avg |
|---|---|---|---|---|---|---|---|---|
| DistilBERT⋆ | 89.2 | 91.3 | 82.2 | 87.5 | 88.5 | 59.9 | 51.3 | 78.5 |
| TinyBERT† | 90.5 | 91.6 | 83.5 | 88.4 | 90.6 | 72.2 | 42.8 | 79.9 |
| MiniLM§ | **91.0** | 92.0 | **84.0** | 88.4 | **91.0** | **71.5** | 49.2 | 81.0 |
| Vanilla KD⋆ | 88.6 | 91.4 | 82.4 | 86.5 | 90.6 | 61.0 | **54.4** | 79.3 |
| Hid-CLS | 86.5 | 90.6 | 79.3 | 73.0 | 89.7 | 61.0 | 33.9 | 73.4 |
| Hid-Seq | 89.2 | 91.5 | 82.3 | 89.2 | 90.3 | 67.2 | 48.2 | 79.7 |
| Att-MSE | 89.8 | 91.6 | 83.2 | 90.6 | 90.7 | 69.7 | 53.5 | **81.3** |
| +Hid-Seq† | 89.7 | **92.4** | 82.8 | 90.4 | 90.8 | 68.6 | 52.8 | 81.1 |
| Att-KL | 88.0 | 89.7 | 81.1 | 90.1 | 90.3 | 66.1 | 43.6 | 78.4 |
| +Hid-Seq | 88.9 | 91.6 | 82.4 | 90.0 | 90.5 | 66.8 | 47.9 | 79.7 |
| +Val-KL§ | 89.8 | 91.6 | 82.4 | **91.0** | 90.6 | 66.7 | 47.7 | 80.0 |

Table 2: Distillation objective performance in the task-agnostic setting. Matching superscript symbols (⋆, †, §) mark prior works and the corresponding objectives we evaluate.
## 4 Experimental Setup
We evaluate our model on the General Language Understanding Evaluation (GLUE) benchmark
(Wang et al., 2018) tasks, including linguistic acceptability (CoLA), sentiment analysis (SST-2), semantic equivalence (MRPC, QQP), and natural language inference (MNLI, QNLI, RTE).
For task-specific distillation, we distill a finetuned RoBERTaBASE (Liu et al., 2019) into a 3layer transformer model on each GLUE task, using the Fairseq (Ott et al., 2019) implementation and the recommended hyperparameters presented in Liu et al. (2019). We follow the training procedure from TinyBERT to perform *intermediate* layer and *prediction layer* distillation sequentially for 10 epochs each, freeing us from tuning the loss weights. For intermediate layer distillation, the student learns from the same teacher's layers that were used for initialising the student. In addition, we always initialise the embedding layer with the teacher's embedding layer.
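To make the layer-initialisation choices concrete, the sketch below copies selected teacher layers (e.g. {4, 8, 12}, {1, 8, 12} or {1, 2, 3}) plus the embedding layer into a smaller student; it assumes the Hugging Face `BertModel` layout purely for illustration, whereas the task-specific experiments here use the Fairseq RoBERTa implementation.

```python
import copy
from transformers import BertModel

def init_student_from_teacher(teacher: BertModel, layer_ids=(1, 2, 3)) -> BertModel:
    # Build a student whose depth equals the number of copied teacher layers.
    config = copy.deepcopy(teacher.config)
    config.num_hidden_layers = len(layer_ids)
    student = BertModel(config)
    # Always initialise the embedding layer with the teacher's embedding layer.
    student.embeddings.load_state_dict(teacher.embeddings.state_dict())
    # Copy the selected (1-indexed) teacher layers into the student, bottom to top.
    for student_idx, teacher_idx in enumerate(layer_ids):
        student.encoder.layer[student_idx].load_state_dict(
            teacher.encoder.layer[teacher_idx - 1].state_dict()
        )
    return student
```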
For task-agnostic distillation, we distill the uncased version of BERTbase into a 6-layer student model, based on the implementation by Izsak et al.
(2021). Here we perform last-layer knowledge transfer since we see no improvement when transferring multiple layers in our experiments. We train the student model for 100k steps with batch size 1024, a peak learning rate of 5e-4 and a maximum sequence length of 128. The distilled student model is then fine-tuned on the GLUE datasets with grid search over batch size {16, 32} and learning rate {1e-5, 3e-5, 5e-5, 8e-5}. We follow the original training corpus of BERT: English Wikipedia and BookCorpus (Zhu et al., 2015).
| Objectives | Init. | QNLI (Acc) | SST-2 (Acc) | MNLI (Acc) | MRPC (F1) | QQP (Acc) | RTE (Acc) | CoLA (Mcc) | Avg |
|---|---|---|---|---|---|---|---|---|---|
| Vanilla KD | 4,8,12 | 66.5±1.49 | 84.7±0.16 | 75.1±0.05 | 71.2±0.80 | 81.9±0.10 | 54.0±1.24 | 69.1±0.00 | 71.8 |
| | 1,8,12 | 82.9±0.31 | 88.5±0.51 | 76.6±0.08 | 71.2±0.88 | 87.8±0.06 | 55.5±1.07 | 70.8±0.29 | 76.2 |
| | 1,2,3 | **86.2**±0.35 | **90.4**±0.28 | **78.7**±0.18 | **78.6**±0.18 | **89.8**±0.05 | **57.1**±1.46 | **74.9**±0.54 | **79.4** |
| Hid-CLS-Contrast | 4,8,12 | 69.3±0.60 | 85.3±0.56 | 76.2±0.45 | 71.1±0.85 | 83.1±0.69 | 53.6±0.23 | 69.0±0.12 | 72.5 |
| | 1,8,12 | 82.9±0.36 | 88.6±0.29 | 77.0±0.58 | 72.8±0.61 | 88.0±0.13 | 55.4±0.75 | 70.4±0.30 | 76.4 |
| | 1,2,3 | **86.1**±0.22 | **89.6**±0.38 | **79.0**±0.12 | **73.9**±1.43 | **90.1**±0.10 | **55.1**±0.67 | **71.1**±1.09 | **77.8** |
| Hid-CLS | 4,8,12 | 75.7±0.57 | 85.8±0.34 | 77.0±0.10 | 71.3±0.41 | 83.8±1.63 | 54.0±2.17 | 68.4±0.35 | 73.2 |
| | 1,8,12 | 83.4±0.15 | 88.1±0.38 | 77.7±0.10 | 71.9±0.10 | 88.6±0.06 | 56.1±0.88 | 71.5±0.40 | 76.7 |
| | 1,2,3 | **85.7**±0.05 | **90.3**±0.29 | **78.6**±0.14 | **74.3**±1.00 | **90.1**±0.00 | **57.1**±1.37 | **73.6**±0.24 | **78.5** |
| Hid-Seq | 4,8,12 | 83.3±0.13 | 87.4±0.13 | 78.3±0.13 | 72.9±0.50 | 87.6±0.00 | 51.8±1.10 | 69.2±0.55 | 75.8 |
| | 1,8,12 | 84.3±0.10 | 88.6±0.28 | 78.2±0.08 | 72.0±0.70 | 88.6±0.10 | 55.2±1.40 | 71.6±0.37 | 77.6 |
| | 1,2,3 | **85.9**±0.24 | **90.7**±0.08 | **78.9**±0.10 | **75.5**±1.14 | **90.0**±0.05 | **56.6**±0.74 | **74.2**±0.45 | **78.8** |
| Att-KL | 4,8,12 | 85.3±0.14 | 89.0±0.26 | **79.4**±0.08 | 71.4±0.29 | 89.0±0.05 | 55.5±2.05 | 69.3±0.13 | 77.0 |
| | 1,8,12 | 84.7±0.26 | **89.6**±0.13 | 78.2±0.10 | **72.5**±0.24 | 88.6±0.08 | 56.5±0.44 | **70.4**±0.26 | 77.2 |
| | 1,2,3 | **86.2**±0.06 | 88.6±0.19 | 77.9±0.17 | 71.3±0.24 | **89.0**±0.05 | **61.2**±0.72 | 69.5±0.80 | **77.7** |
| Att-MSE | 4,8,12 | 84.3±0.18 | 89.2±0.40 | **78.6**±0.25 | 71.1±0.41 | 88.7±0.05 | 54.4±1.03 | 69.3±0.17 | 76.5 |
| | 1,8,12 | 84.3±0.25 | **89.8**±0.39 | 77.5±0.14 | **72.5**±1.36 | 88.4±0.05 | 57.2±0.96 | **70.6**±0.45 | 77.2 |
| | 1,2,3 | **86.2**±0.13 | 88.2±0.43 | 77.8±0.13 | 72.4±0.49 | **88.8**±0.00 | **60.3**±1.49 | 69.6±0.90 | **77.6** |

Table 3: Evaluation on GLUE development sets under different teacher layer choices for initialisation (task-specific distillation).
## 5 Results
Distillation Objectives Distillation objective performances are compared in Table 1 and Table 2 for task-specific and task-agnostic settings, respectively. In the task-specific setting, attention transfer is the best choice with initialisation from every k-th teacher layer. However, the performance of hidden states transfer and *vanilla KD* can be drastically improved under other initialisation settings, which we discuss in the next section.
In the task-agnostic setting, the *Att-MSE* objective outperforms *Att-KL*, which performs similarly to *vanilla KD* and hidden states transfer. This contradicts the observation in MiniLM (Wang et al.,
2020), where their *Att-KL* based objective outperforms TinyBERT (Jiao et al., 2020) with *Att-MSE*.
However, MiniLM has more training iterations and a larger batch size, which makes comparison difficult. The performance drop of *Att-KL* compared to Att-MSE is mainly due to its poor performance on CoLA (linguistic acceptability of a sentence), on which MiniLM also performs poorly. We hypothesise that MSE can transfer the linguistic knowledge embedded in the attention matrix more effectively because the MSE loss function gives more direct matching than KL-divergence, which was also concluded by Kim et al. (2021).
For reference, we report the results of 3 existing works that use the same objectives as in our experiments. The results of DistilBERT and MiniLM are taken from the respective papers. The result of TinyBERT is taken from Wang et al. (2020) for a fair comparison, since TinyBERT only reported task-specific distillation results with data augmentation.
We denote the prior works and the corresponding objective we evaluate with the same superscript symbol.
Initialisation We also studied the impact of the choice of teacher layers for initialising the student.
Evaluation score on GLUE task development sets under different teacher layer choices for initialisation are reported in Table 3 and Table 4 for taskspecific and task-agnostic distillation, respectively.
We observe that the initialisation of layers has a huge impact in the task-specific setting. The performance of *vanilla KD* and hidden states transfer was significantly improved when initialising from lower layers of the teacher (e.g. from 68.1% to 85.9% on QNLI for Vanilla KD). This explains the impressive result of PKD (Sun et al., 2019b),
which initialised the student with first k teacher layers. We believe this is an important observation that will motivate further research into investigating the effectiveness of the different layers of the pre-trained transformer model.
| Objectives | Init. | QNLI (Acc) | SST-2 (Acc) | MNLI (Acc) | MRPC (F1) | QQP (Acc) | RTE (Acc) | CoLA (Mcc) | Avg |
|---|---|---|---|---|---|---|---|---|---|
| Vanilla KD | random | **88.6** | **91.4** | **82.4** | 86.5 | 90.6 | 61.0 | 54.4 | 79.3 |
| | first 6 | 88.3 | 91.2 | 82.2 | **87.0** | 90.6 | **62.8** | **55.4** | **79.6** |
| Hid-CLS | random | 86.5 | 90.6 | 79.3 | 73.0 | 89.7 | 61.0 | 33.9 | 73.4 |
| | first 6 | **87.0** | **91.2** | **80.7** | **88.0** | **90.2** | **66.0** | **42.5** | **77.9** |
| Hid-Seq | random | **89.2** | 91.5 | 82.3 | 89.2 | 90.3 | **67.2** | 48.2 | 79.7 |
| | first 6 | 87.5 | 91.5 | 82.3 | **90.0** | **90.5** | 66.4 | **50.6** | **79.9** |
| Att-MSE | random | **89.8** | 91.6 | **83.2** | 90.6 | 90.7 | **69.7** | **53.5** | **81.3** |
| | first 6 | 89.5 | **91.7** | 82.8 | **91.0** | **90.8** | 66.1 | 53.4 | 80.8 |

Table 4: Task-agnostic distillation on GLUE development sets: random initialisation vs. initialisation from the first 6 teacher layers.

In the task-agnostic setting, we only observe considerable improvement with the objective *Hid-CLS*, which performs poorly when randomly initialized, compared to other objectives. This contradicts Sanh et al. (2019) with a *vanilla KD* objective, where they instead showed an improvement of 3 average score points when initialising from the teacher over random initialisation. However, our *vanilla-KD*
approach initialised with random weights outperforms their best result (79.3 vs 78.5). Therefore, we hypothesise that the advantage of pre-loading teacher layers over random initialisation diminishes as the student is fully distilled during pre-training.
Significance Test We conducted paired t-testing for all the distillation objectives in Table 1 and the three initialisation choices within each objective in Table 3. For Table 1, all the pairs of objectives are statistically significant (p < 0.05) except four: (AttKL, Att-MSE), (Att-KL, Att-KL + Hid-Seq), (AttKL, Att-MSE + Hid-Seq), (Att-MSE, Att-MSE +
Hid-Seq). This further supports our conclusion that when initialised from every K teacher layer, it is important to do attention transfer, and the specific objective matters less. For Table 3, all three initialisation choices are statistically significantly different from each other for all the objectives, except the pair (1,8,12, 1,2,3) for Att-KL and Att-MSE,
which indicates the robustness of attention transfer under different initialisation choices.
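Such paired t-tests can be reproduced with standard tooling; a minimal sketch, where the two aligned score lists are illustrative per-task values taken from Table 1 (the paper's actual test may pair individual runs instead):

```python
from scipy.stats import ttest_rel

# Per-task GLUE scores of two objectives, aligned task by task (values from Table 1).
att_kl  = [85.3, 89.0, 79.4, 71.4, 89.0, 55.5, 69.3]
att_mse = [84.3, 89.2, 78.6, 71.1, 88.7, 54.4, 69.3]

t_stat, p_value = ttest_rel(att_kl, att_mse)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # statistically significant if p < 0.05
```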
Training Time Since task-agnostic distillation is computationally expensive, we also focus on optimizing our distillation framework for faster training. Our training time is about 58 GPU hours on 40GB A100, compared to TinyBERT (576 GPU
hours on 16GB V100) and DistilBERT (720 GPU
hours on 16GB V100). This is achieved by using a shorter sequence length and an optimized transformer pre-training framework by Izsak et al.
(2021). We see no improvement when using a longer sequence length of 512.
Guidance To sum up, our observations, tradeoffs and recommendations are:
- For task-specific KD, we recommend attention transfer in general, due to its consistently high performance in various initialisation settings (Table 3). The exact attention distillation objective matters less (Table 1). Considering the excellent performance of the vanilla KD approach (Table 3) when initialising with lower teacher layers, we also recommend lower teacher layer initialisation with the vanilla KD approach for its shorter training time and simple implementation.
- For task-agnostic KD, attention transfer with Mean-Squared-Error is the best choice based on our result (Table 2, 4).
- We recommend readers to use our taskagnostic distillation framework and short sequence length for fast training.
## 6 Conclusion
We extensively evaluated distillation objectives for the transformer model and studied the impact of weight initialisation. We found that attention transfer performs consistently well in both task-specific and task-agnostic settings, regardless of the teacher layers chosen for student initialization. We also observed that initialising with lower teacher layers significantly improved task-specific distillation performance compared to higher layers. We release our code and hope this work motivates further research into developing better distillation objectives and compressing in-house models.
## 7 Limitations
We evaluated the most widely used distillation objectives, including prediction layer transfer, hidden states transfer and attention transfer. However, some objectives are not included in our evaluation due to missing implementation details in their papers. For example, we only implemented the contrastive intermediate layer distillation objective proposed by Sun et al. (2020a) in the task-specific setting, since code and implementation details are missing for the task-agnostic setting. New objectives are increasingly appearing for model compression in the field of computer vision, such as Wasserstein contrastive representation distillation (Chen et al., 2021) and distillation with Pearson correlation (Huang et al., 2022), which could be included to broaden the scope of the distillation objective evaluation.
This work empirically studied the impact of the teacher layer choice for initialization and training objectives, however, further analysis is needed to understand why lower teacher layers are essential for initialisation, and why attention transfer behaves consistently well under various teacher layer choices in the task-specific setting, while hidden state transfer does not.
## Acknowledgements
We thank the anonymous reviewers as well as the members of the MaiNLP research lab for their constructive feedback. This research is supported by ERC Consolidator Grant DIALECT 101043235.
## References
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing* systems, 33:1877–1901.
Liqun Chen, Dong Wang, Zhe Gan, Jingjing Liu, Ricardo Henao, and Lawrence Carin. 2021. Wasserstein contrastive representation distillation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 16296–16305.
Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT
look at? an analysis of BERT's attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP:
Analyzing and Interpreting Neural Networks for NLP,
pages 276–286, Florence, Italy. Association for Computational Linguistics.
Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015.
Distilling the knowledge in a neural network. *stat*,
1050:9.
Tao Huang, Shan You, Fei Wang, Chen Qian, and Chang Xu. 2022. Knowledge distillation from a stronger teacher. *arXiv preprint arXiv:2205.10536*.
Peter Izsak, Moshe Berchansky, and Omer Levy. 2021.
How to train BERT with an academic budget. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 10644–
10652, Online and Punta Cana, Dominican Republic.
Association for Computational Linguistics.
Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020.
TinyBERT: Distilling BERT for natural language understanding. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 4163–
4174, Online. Association for Computational Linguistics.
Taehyeon Kim, Jaehoon Oh, Nakyil Kim, Sangwook Cho, and Se-Young Yun. 2021. Comparing kullbackleibler divergence and mean squared error loss in knowledge distillation. In *IJCAI*.
Tianda Li, Ahmad Rashid, Aref Jafari, Pranav Sharma, Ali Ghodsi, and Mehdi Rezagholizadeh. 2021. How to select one among all ? an empirical study towards the robustness of knowledge distillation in natural language understanding. In *Findings of the Association for Computational Linguistics: EMNLP 2021*,
pages 750–762, Punta Cana, Dominican Republic.
Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Chengqiang Lu, Jianwei Zhang, Yunfei Chu, Zhengyu Chen, Jingren Zhou, Fei Wu, Haiqing Chen, and Hongxia Yang. 2022. Knowledge distillation of transformer-based language models revisited. arXiv preprint arXiv:2206.14366.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. *arXiv preprint arXiv:1904.01038*.
Peyman Passban, Yimeng Wu, Mehdi Rezagholizadeh, and Qun Liu. 2021. Alp-kd: Attention-based layer projection for knowledge distillation. In *Proceedings* of the AAAI Conference on Artificial Intelligence, volume 35, pages 13657–13665.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. *arXiv* preprint arXiv:1910.01108.
Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019a.
Patient knowledge distillation for BERT model compression. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4323–4332, Hong Kong, China. Association for Computational Linguistics.
Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019b.
Patient knowledge distillation for bert model compression. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4323–4332.
Siqi Sun, Zhe Gan, Yuwei Fang, Yu Cheng, Shuohang Wang, and Jingjing Liu. 2020a. Contrastive distillation on intermediate representations for language model compression. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 498–508, Online. Association for Computational Linguistics.
Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 2020b. MobileBERT: a compact task-agnostic BERT for resourcelimited devices. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2158–2170, Online. Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE:
A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics.
Wenhui Wang, Hangbo Bao, Shaohan Huang, Li Dong, and Furu Wei. 2021. MiniLMv2: Multi-head selfattention relation distillation for compressing pretrained transformers. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2140–2151, Online. Association for Computational Linguistics.
Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. Minilm: Deep selfattention distillation for task-agnostic compression of pre-trained transformers. *Advances in Neural Information Processing Systems*, 33:5776–5788.
Yimeng Wu, Peyman Passban, Mehdi Rezagholizadeh, and Qun Liu. 2020. Why skip if you can combine: A
simple knowledge distillation technique for intermediate layers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1016–1021, Online. Association for Computational Linguistics.
Yimeng Wu, Mehdi Rezagholizadeh, Abbas Ghaddar, Md Akmal Haidar, and Ali Ghodsi. 2021. UniversalKD: Attention-based output-grounded intermediate layer knowledge distillation. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 7649–7661, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019.
Xlnet: Generalized autoregressive pretraining for language understanding. *Advances in neural information processing systems*, 32.
Xiaofan Zhang, Zongwei Zhou, Deming Chen, and Yu Emma Wang. 2022. Autodistill: an end-to-end framework to explore and distill hardware-efficient language models. *arXiv preprint arXiv:2201.08539*.
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In *Proceedings of the IEEE international conference on computer vision*, pages 19–27.
## A Hyperparameters
Table 5 shows the hyperparameters we use for taskagnostic distillation.
| Hyperparameter | Our Model |
|-----------------------|-------------|
| Number of Layers | 6 |
| Hidden Size | 768 |
| FFN inner hidden size | 3072 |
| Attention heads | 12 |
| Attention head size | 64 |
| Learning Rate Decay | Linear |
| Weight Decay | 0.01 |
| Optimizer | AdamW |
| Adam ϵ | 1e-6 |
| Adam β1 | 0.9 |
| Adam β2 | 0.99 |
| Gradient Clipping | 0.0 |
| Warmup Proportion | 6% |
| Peak Learning Rate | 5e-4 |
| Batch size | 1024 |
| Max Steps | 100k |
Table 5: Hyperparameter used for distilling our student model in the pre-training stage.
| Hyperparameter | Search Space |
|------------------|--------------------------|
| Learning Rate | {1e-5, 3e-5, 5e-5, 8e-5} |
| Batch Size | {16, 32} |
Table 6: The hyperparameter space used for fine-tuning our distilled student model on GLUE benchmark tasks.
As the distillation in the pre-training stage is computationally expensive and unstable, we suggest readers to follow our settings to avoid additional costs. For example, we observed training loss divergence when using a higher learning rate
(1e-3).
Table 6 shows the search space of learning rate and batch size for fine-tuning the general-distilled student. We finetune for 10 epochs on each GLUE
task.
For task-specific distillation, we follow the suggested hyperparameters shown in the repository of RoBERTa (Liu et al., 2019).
| Model | Iteration Steps | Batch Size | Layer Matching | Initialisation | Max Sequence Length | GPU hours | Avg-score |
|---|---|---|---|---|---|---|---|
| DistilBERT | - | 4k | prediction layer | every second teacher layer | 512 | 720h on 16GB V100 | 78.5 |
| TinyBERT | - | 256 | every second hidden layer | random | 128 | 576h on 16GB V100⋆ | 79.9 |
| MiniLM | 400k | 1024 | last hidden layer | random | 512 | - | 81.0 |
| Ours | 100k | 1024 | last hidden layer | random | 128 | 58h on 40GB A100 | 81.3 |
Table 7: Comparison of hyperparameter choices and training time between ours and prior works. Empty entries indicate that the papers do not report those numbers. ⋆: Number according to their GitHub issue answer.
## B Comparison To Prior Works
Table 7 compares the settings and computational costs of three prior works: DistilBERT (Sanh et al.,
2019), TinyBERT (Jiao et al., 2020) and MiniLM
(Wang et al., 2020), with our best-performing objective. There are some differences between our settings and theirs, such as layer matching strategies (which teacher layers to transfer), initialisation choices, training steps and batch size. Comparatively, our framework requires less training time and can achieve comparable or better results. Our training takes 58 GPU hours on A100 compared to 720 GPU hours on V100 for training DistilBERT
(taking into consideration that an A100 GPU is about twice as fast as a V100).
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
7
✓ A2. Did you discuss any potential risks of your work?
appendix A
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 4
✓ B1. Did you cite the creators of artifacts you used?
4,5 B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Not applicable. Left blank.
## C ✓ **Did You Run Computational Experiments?** Left Blank.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
5
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
4
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
sedova-roth-2023-actc | {ACTC}: Active Threshold Calibration for Cold-Start Knowledge Graph Completion | https://aclanthology.org/2023.acl-short.158 | Self-supervised knowledge-graph completion (KGC) relies on estimating a scoring model over (entity, relation, entity)-tuples, for example, by embedding an initial knowledge graph. Prediction quality can be improved by calibrating the scoring model, typically by adjusting the prediction thresholds using manually annotated examples. In this paper, we attempt for the first time cold-start calibration for KGC, where no annotated examples exist initially for calibration, and only a limited number of tuples can be selected for annotation. Our new method ACTC finds good per-relation thresholds efficiently based on a limited set of annotated tuples. Additionally to a few annotated tuples, ACTC also leverages unlabeled tuples by estimating their correctness with Logistic Regression or Gaussian Process classifiers. We also experiment with different methods for selecting candidate tuples for annotation: density-based and random selection. Experiments with five scoring models and an oracle annotator show an improvement of 7{\%} points when using ACTC in the challenging setting with an annotation budget of only 10 tuples, and an average improvement of 4{\%} points over different budgets. |
## Actc: Active Threshold Calibration For Cold-Start Knowledge Graph Completion
Anastasiia Sedova1,2and **Benjamin Roth**1,3 1 Research Group Data Mining and Machine Learning, University of Vienna, Austria 2 UniVie Doctoral School Computer Science, University of Vienna, Austria 3 Faculty of Philological and Cultural Studies, University of Vienna, Austria
{anastasiia.sedova, benjamin.roth}@univie.ac.at
## Abstract
![0_Image_0.Png](0_Image_0.Png)
Self-supervised knowledge-graph completion
(KGC) relies on estimating a scoring model over (entity, relation, entity)-tuples, for example, by embedding an initial knowledge graph.
Prediction quality can be improved by calibrating the scoring model, typically by adjusting the prediction thresholds using manually annotated examples. In this paper, we attempt for the first time *cold-start* calibration for KGC,
where no annotated examples exist initially for calibration, and only a limited number of tuples can be selected for annotation.
Our new method **ACTC** finds good per-relation thresholds efficiently based on a limited set of annotated tuples. Additionally to a few annotated tuples, ACTC also leverages unlabeled tuples by estimating their correctness with Logistic Regression or Gaussian Process classifiers. We also experiment with different methods for selecting candidate tuples for annotation: density-based and random selection. Experiments with five scoring models and an oracle annotator show an improvement of 7%
points when using ACTC in the challenging setting with an annotation budget of only 10 tuples, and an average improvement of 4% points over different budgets.
## 1 Introduction
Knowledge graphs (KG) organize knowledge about the world as a graph where entities (nodes) are connected by different relations (edges). The knowledge-graph completion (KGC) task aims at adding new information in the form of (entity, relation, entity) triples to the knowledge graph.
The main objective is assigning to each triple a *plausibility score*, which defines how likely this triple belongs to the underlying knowledge base. These scores are usually predicted by the knowledge graph embedding (KGE) models. However, most KGC approaches do not make any binary decision and provide a ranking, not classification, which does not allow one to use them as-is to populate the KGs (Speranskaya et al.,
2020). To transform the scores into *predictions*
(i.e., how probable is it that this triple should be included in the KG), *decision thresholds* need to be estimated. Then, all triples with a plausibility score above the threshold are classified as positive and included in the KG; the others are predicted to be negatives and not added to the KG. Since the initial KG includes only positive samples and thus cannot be used for threshold calibration, the calibration is usually performed on a manually annotated set of positive and negative tuples (decision set). However, manual annotation is costly and limited, and, as most knowledge bases include dozens (Ellis et al., 2018), hundreds (Toutanova and Chen, 2015) or even thousands (Auer et al.,
2007) of different relation types, obtaining a sufficient amount of labeled samples for each relation may be challenging. This raises a question:
How to efficiently solve the cold-start thresholds calibration problem with *minimal human input*?
We propose a new method for Active Threshold Calibration **ACTC**1, which estimates the relation thresholds by leveraging unlabeled data additionally to human-annotated data. In contrast to already existing methods (Safavi and Koutra, 2020; Speranskaya et al., 2020) that use only the annotated samples, ACTC labels additional samples automatically with a trained predictor (Logistic Regression or Gaussian Process model) estimated on the KGE
model scores and available annotations. A graphical illustration of ACTC is provided in Figure 1.

1The code for ACTC can be found here: https://github.com/anasedova/ACTC
Our main contributions are:
- We are the first to study threshold tuning in a budget-constrained environment. This setting is more realistic and challenging in contrast to the previous works where large validation sets have been used for threshold estimation.
- We propose actively selecting examples for manual annotation, which is also a novel approach for the KGC setting.
- We leverage the unlabeled data to have more labels at a low cost without increasing the annotation budget, which is also a novel approach for the KGC setting.
Experiments on several datasets and with different KGE models demonstrate the efficiency of ACTC for different amounts of available annotated samples, even for as little as one.
## 2 Related Work
Knowledge graph embedding methods (Dettmers et al., 2017; Trouillon et al., 2016; Bordes et al.,
2013; Nickel et al., 2011) have been originally evaluated on ranking metrics, not on the actual task of triple classification, which would be necessary for KGC. More recent works have acknowledged this problem by creating data sets for evaluating KGC (instead of ranking) and proposed simple algorithms for finding prediction thresholds from annotated triples (Speranskaya et al., 2020; Safavi and Koutra, 2020). In our work, we study the setting where only a limited amount of such annotations can be provided, experiment with different selection strategies of samples for annotation, and analyze how to use them best. Ostapuk et al. (2019)
have studied active learning for selecting triples for training a scoring model for KG triples, but their method cannot perform the crucial step of calibration. They consequently only evaluate on ranking metrics, not measuring actual link prediction quality. In contrast, our approach focuses on selecting much fewer samples for optimal *calibration* of a scoring model (using positive, negative, and unlabeled samples).
## 3 Actc: Active Threshold Calibration
ACTC consists of three parts: selection of samples for manual annotation, automatic labeling of additional samples, and estimating the per-relation thresholds based on all available labels (manual and automatic ones).
The first step is selecting unlabeled samples for human annotation. In ACTC this can be done in two ways. One option is a *random* sampling from the set of all candidate tuples (ACTC_rndm; the pseudocode can be found in Algorithm 1). However, not all annotations are equally helpful and informative for estimation. To select the representative and informative samples that the system can profit the most from, especially with a small annotation budget, we also introduce *density-based* selection ACTC_dens, inspired by the density-based selective sampling method in active learning (Agarwal et al., 2020; Zhu et al., 2008) (the pseudocode can be found in Algorithm 2 in Appendix A). The sample density is measured by summing the squared distances between this sample's score (predicted by the KGE model) and the scores of other samples in the unlabeled dataset. The samples with the highest density are selected for human annotation.
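A literal NumPy reading of this density criterion, with variable names of our own (the tie-breaking and normalisation details of the released implementation may differ):

```python
import numpy as np

def density_select(scores: np.ndarray, budget: int) -> np.ndarray:
    # Density of a sample = sum of squared distances between its KGE score
    # and the scores of all other unlabeled samples.
    diffs = scores[:, None] - scores[None, :]
    density = (diffs ** 2).sum(axis=1)
    # Return the indices of the `budget` samples with the highest density values.
    return np.argsort(-density)[:budget]
```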
In a constrained-budget setting with a limited amount of manual annotations available, there are sometimes only a few samples annotated for some relations and not even one for others. To mitigate this negative effect and to obtain good thresholds even with limited manual supervision, ACTC labels more samples (in addition to the manual annotations) with a classifier trained on the manually annotated samples to predict the labels based on
Algorithm 1: ACTC_rndm
Input: unlabeled dataset X, annotation budget size l, minimal decision set size n, KGE model M, classifier C : R → [0, 1]
Output: set of per-relation thresholds T

\# Step 1: sample selection for human annotation
1: T ← a set of per-relational thresholds
2: X_gold ← randomly selected l samples from X
3: manually annotate X_gold with y_gold labels
4: for relation r do
5:   X_gold_r ← samples from X_gold with relation r
6:   y_gold_r ← manual labels for X_gold_r
7:   scores_gold_r ← KGE model scores for X_gold_r
8:   l_r ← |X_gold_r|
\# Step 2: automatically label additional samples
9:   if n > l_r then
10:    train a classifier C_r on scores_gold_r and y_gold_r
11:    X_auto_r ← randomly selected n − l_r samples from X
12:    scores_auto_r ← KGE model scores for X_auto_r
13:    predict y_auto_r = C_r(scores_auto_r)
14:    X_dec = (X_gold_r, y_gold_r) ∪ (X_auto_r, y_auto_r)
15:  else
16:    X_dec = (X_gold_r, y_gold_r)
![2_image_0.png](2_image_0.png)
22:    τ ← τ_i
24:  T.append(τ)
the KGE model scores. We experiment with two
![2_image_1.png](2_image_1.png)
classifiers: Logistic Regression (**ACTC-LR**) and Gaussian Processes (**ACTC-GP**). The amount of automatically labeled samples depends on hyperparameter n, which reflects the minimal amount of samples needed for estimating each threshold (see ablation study of different n values in Section 5).
If the number of samples annotated for a relation r (lr) is larger or equal to n, only these lr annotated samples are used for threshold estimation. If the amount of manually annotated samples is insufficient (i.e., less than n), the additional n − lr samples are randomly selected from the dataset and labeled by a LR or GP classifier. The automatically labeled and manually annotated samples build a per-relation threshold decision set, which contains at least n samples for a relation r with (manual or predicted) labels. The threshold for relation r is later optimized on this decision set.
The final part of the algorithm is the estimation of the relation-specific thresholds. Each sample score from the decision set is tried out as a potential threshold; the relation-specific thresholds that maximize the local accuracy (calculated for this decision set) are selected.
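A minimal sketch of this per-relation threshold search over one decision set (variable names are ours; tuples whose score equals the threshold are counted as positive here):

```python
import numpy as np

def select_threshold(decision_scores, decision_labels):
    # Try every score in the decision set as a candidate threshold and keep the
    # one that maximizes accuracy on this decision set (the local accuracy).
    scores = np.asarray(decision_scores)
    labels = np.asarray(decision_labels)
    best_tau, best_acc = scores[0], -1.0
    for tau in scores:
        preds = (scores >= tau).astype(int)
        acc = float((preds == labels).mean())
        if acc > best_acc:
            best_tau, best_acc = tau, acc
    return best_tau
```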
## 4 Experiments
We evaluate our method on two KGC benchmark datasets extracted from Wikidata and augmented with manually verified negative samples: CoDEx-s and CoDEx-m2(Safavi and Koutra, 2020). Some details on their organization are provided in Appendix B. The KGE models are trained on the training sets3. The ACTC algorithm is applied on the validation sets: the gold validation labels are taken as an oracle (*manual annotations*; in an interactive setting they would be presented to human annotators on-the-fly); the remaining samples are used unlabeled. The test set is not exploited during ACTC training and serves solely for testing purposes. The dataset statistics are provided in Table 1. We run our experiments with four KGE models: ComplEx (Trouillon et al., 2016), ConvE (Dettmers et al., 2017), TransE (Bordes et al., 2013), RESCAL (Nickel et al., 2011). More information is provided in Appendix C.
| Data | #Train | #Val | #Test | #Ent | #Rel |
|---|---|---|---|---|---|
| CoDEx-S | 32,888 | 3,654 | 3,656 | 2,034 | 42 |
| CoDEx-M | 185,584 | 20,620 | 20,622 | 17,050 | 51 |
Table 1: Datasets statistics. The training sets contain only positive triples. The ratio of positive to negative samples in validation and test sets is 1:1.
## 4.1 Baselines
ACTC is compared to three baselines. The first baseline **LocalOpt (Acc)** optimizes the per-relation thresholds towards the accuracy: for each relation, the threshold is selected from the embedding scores assigned to the samples with manual annotations that contain this relation, so that the *local* accuracy
(i.e., accuracy, which is calculated only for these samples) is maximized (Safavi and Koutra, 2020).
We also modified this approach into **LocalOpt (F1)**
by changing the maximization metric to the local F1 score. The third baseline is **GlobalOpt**, where the thresholds are selected by iterative search over a manually defined grid (Speranskaya et al., 2020).
The best thresholds are selected based on the *global* F1 score calculated for the whole dataset4. In all baselines, the samples for manual annotation are selected randomly.
2The third CoDEx dataset, CoDEx-L, is not used in our experiments as it does not provide negative samples.
3We use the trained models provided by dataset authors.
4Labels for samples that include relations for which thresholds have not yet been estimated are calculated using the default threshold of 0.5.
| Method | CoDEx-s ComplEx | CoDEx-s ConvE | CoDEx-s TransE | CoDEx-s RESCAL | CoDEx-m ComplEx | CoDEx-m ConvE | CoDEx-m TransE | CoDEx-m RESCAL | Avg |
|---|---|---|---|---|---|---|---|---|---|
| LocalOpt (Acc) (Safavi and Koutra, 2020) | 70±3 / 70±3 | 72±3 / 72±2 | 69±3 / 68±3 | 74±2 / 73±2 | 72±2 / 70±2 | 68±3 / 66±2 | 65±3 / 64±3 | 68±3 / 67±2 | 70 / 69 |
| LocalOpt (F1) | 67±3 / 69±3 | 69±3 / 70±2 | 65±3 / 67±3 | 70±2 / 71±2 | 70±2 / 69±2 | 66±2 / 66±2 | 63±3 / 64±3 | 66±3 / 67±2 | 67 / 68 |
| GlobalOpt (F1) (Speranskaya et al., 2020) | 70±2 / 74±2 | 74±1 / 77±2 | 68±2 / 71±2 | 76±1 / 79±1 | 73±1 / 75±2 | 68±1 / 70±2 | 65±2 / 68±2 | 68±1 / 71±2 | 70 / 73 |
| ACTC-LR_dens | 72±3 / 72±2 | **77**±1 / **78**±1 | 69±2 / 71±2 | 80±1 / 81±1 | 78±0 / 77±1 | 72±1 / 71±1 | 64±1 / 65±1 | 72±1 / 70±1 | 73 / 73 |
| ACTC-GP_dens | 72±3 / 72±2 | 76±1 / 78±1 | 69±1 / 71±2 | 80±1 / 80±1 | 78±0 / 77±0 | 72±1 / 70±1 | 64±2 / 65±2 | 73±2 / 71±1 | 73 / 73 |
| ACTC-LR_rndm | 74±3 / 74±2 | 77±2 / 77±2 | **73**±3 / **72**±3 | 79±1 / 79±1 | 78±1 / 78±1 | 72±2 / 72±2 | 69±3 / 69±2 | 73±2 / 73±2 | **74** / **74** |
| ACTC-GP_rndm | 74±3 / 74±2 | 77±2 / 77±2 | 73±3 / 72±3 | **81**±1 / **81**±1 | 77±1 / 77±1 | 71±2 / 71±2 | 67±3 / 66±3 | 72±2 / 71±2 | **74** / **74** |

Table 2: Accuracy and F1 (Acc / F1) on CoDEx-s and CoDEx-m for four KGE models, averaged over all experiments.

![3_image_0.png](3_image_0.png)
![3_image_1.png](3_image_1.png)
![3_image_2.png](3_image_2.png)
## 4.2 Results
We ran the experiments for the following numbers of manually annotated samples: 1, 2, 5, 10, 20, 50, 100, 200, 500, and 1000. Experimental setup details are provided in Appendix E. Table 2 provides the results averaged over all experiments (here and further, n = 500 for a fair comparison; see Section 5 for an analysis of the value of n), and our method ACTC outperforms the baselines in every setting we tried as well as on average. Figure 2a also demonstrates the improvement of ACTC_rndm over the baselines for every amount of manually annotated samples we tried on the example of the CoDEx-s dataset; the exact numbers for the experiments with different budgets are provided in Appendix F. The density-based selection, on the other hand, achieves considerably better results when only a few manually annotated samples are available (see Figure 2b). Indeed, choosing representative samples from highly connected clusters can be especially useful when annotations are scarce. LR_dens, which selects points from regions of high density, can be helpful for small annotation budgets since it selects samples that are similar to other samples. In contrast, when having
![3_image_3.png](3_image_3.png)
![3_image_4.png](3_image_4.png)
a sufficient annotation budget and after selecting a certain number of samples, dense regions are already sufficiently covered, and LR*rndm* provides a more unbiased sample from the entire distribution.
## 5 Ablation Study
A more detailed ablation study of different ACTC
settings is provided in Appendix D.
Global Thresholds. All methods described above calibrate the *per-relation thresholds*. Another option is to define a *uniform (uni)* threshold, which works as a generic threshold for all
![3_image_5.png](3_image_5.png)
tuples regardless of the relations involved. We implemented it as the ACTC-LR_uni method, where the additional samples are automatically labeled and used to build a decision dataset together with the manually annotated ones - in the same way as done for the relation-specific version, but only once for the whole dataset (thus significantly reducing the computational costs). We also applied the LocalOpt(Acc) and LocalOpt(F1) baselines in the uniform setting. Figure 3 demonstrates the results obtained with the ConvE KGE model and the random selection mechanism on the CoDEx-s dataset.
Although the universal versions generally perform worse than the relation-specific ones, ACTC_uni still outperforms the universal baselines and even the relation-specific ones for a small annotation budget.
Different n **values.** An important parameter in ACTC is n, the minimal sufficient amount of (manually or automatically) labeled samples needed to calibrate a threshold. An ablation study over different n values is provided in Figure 4 on the example of the ACTC-LR_dens setting, averaged across all annotation budgets. ACTC is quite stable with respect to the value of n. Even a configuration with a minimal value of n = 5 outperforms the baselines with a small annotation budget or even with a quite large one (e.g. for RESCAL).
![4_image_0.png](4_image_0.png)
## 6 Conclusion
In this work, we explored for the first time the problem of cold-start calibration of scoring models for knowledge graph completion. Our new method for active threshold calibration ACTC provides different strategies of selecting the samples for manual annotation and automatically labels additional tuples with Logistic Regression and Gaussian Processes classifiers trained on the manually annotated data. Experiments on datasets with oracle positive and negative triple annotations, and several KGE
models, demonstrate the efficiency of our method and the considerable increase in the classification performance even for tiny annotation budgets.
## 7 Limitations
A potential limitation of our experiments is the use of oracle validation labels instead of human manual annotation as in the real-world setting. However, all validation sets we used in our experiments were collected based on the manually defined seed set of entities and relations, carefully cleaned and augmented with manually labeled negative samples.
Moreover, we chose this more easy-to-implement setting to make our results easily reproducible and comparable with future work.
Another limitation of experiments that use established data sets and focus on isolated aspects of knowledge-graph construction is their detachment from the real-world scenarios. Indeed, in reality knowledge graph completion is done in a much more complicated environment, that involves a variety of stakeholders and aspects, such as data verification, requirements consideration, user management and so on. Nevertheless, we do believe that our method, even if studied initially in isolation, can be useful as one component in real world knowledge graph construction.
## 8 Ethics Statement
Generally, the knowledge graphs used in the experiments are biased towards the North American cultural background, and so are evaluations and predictions made on them. As a consequence, the testing that we conducted in our experiments might not reflect the completion performance for other cultural backgrounds. Due to the high costs of additional oracle annotation, we could not conduct our analysis on more diverse knowledge graphs. However, we have used the most established and benchmark dataset with calibration annotations, CoDEx, which has been collected with significant human supervision. That gives us hope that our results will be as reliable and trustworthy as possible.
While our method can lead to better and more helpful predictions from knowledge graphs, we cannot guarantee that these predictions are perfect and can be trusted as the sole basis for decisionmaking, especially in life-critical applications (e.g.
healthcare).
## Acknowledgement
This research has been funded by the Vienna Science and Technology Fund
(WWTF)[10.47379/VRG19008] and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) RO 5127/2-1.
## References
Sharat Agarwal, Himanshu Arora, Saket Anand, and Chetan Arora. 2020. Contextual diversity for active learning. In *Computer Vision - ECCV 2020*, pages 137–153, Cham. Springer International Publishing.
Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007.
Dbpedia: A nucleus for a web of open data. In The Semantic Web, pages 722–735, Berlin, Heidelberg.
Springer Berlin Heidelberg.
Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko.
2013. Translating embeddings for modeling multirelational data. In Advances in Neural Information Processing Systems, volume 26. Curran Associates, Inc.
Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2017. Convolutional 2d knowledge graph embeddings. *CoRR*, abs/1707.01476.
Joe Ellis, Jeremy Getman, and Stephanie Strassel. 2018.
TAC KBP English Entity Linking - Comprehensive Training and Evaluation Data 2009-2013.
Budiman Minasny and Alex. B. McBratney. 2005. The matérn function as a general model for soil variograms. *Geoderma*, 128(3):192–207. Pedometrics 2003.
Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th International Conference on International Conference on Machine Learning, ICML'11, page 809–816, Madison, WI, USA. Omnipress.
Natalia Ostapuk, Jie Yang, and Philippe Cudre-Mauroux. 2019. Activelink: Deep active learning for link prediction in knowledge graphs. In The World Wide Web Conference, WWW '19, page 1398–1408, New York, NY, USA. Association for Computing Machinery.
Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in python. *Journal of machine learning research*, 12(Oct):2825–2830.
Tara Safavi and Danai Koutra. 2020. CoDEx: A Comprehensive Knowledge Graph Completion Benchmark. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing
(EMNLP), pages 8328–8350, Online. Association for Computational Linguistics.
Marina Speranskaya, Martin Schmitt, and Benjamin Roth. 2020. Ranking vs. classifying: Measuring knowledge base completion quality. In *Conference* on Automated Knowledge Base Construction, AKBC
2020, Virtual, June 22-24, 2020.
Kristina Toutanova and Danqi Chen. 2015. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pages 57–66, Beijing, China. Association for Computational Linguistics.
Théo Trouillon, Johannes Welbl, Sebastian Riedel, Eric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In *Proceedings of The 33rd International Conference on Machine Learning*, volume 48 of *Proceedings of Machine Learning Research*, pages 2071–2080, New York, New York, USA. PMLR.
Jingbo Zhu, Huizhen Wang, Tianshun Yao, and Benjamin K Tsou. 2008. Active learning with sampling by uncertainty and density for word sense disambiguation and text classification. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 1137–1144, Manchester, UK. Coling 2008 Organizing Committee.
## A ACTCdens Pseudocode
Algorithm 2: ACTCdens

Input: unlabeled dataset $\mathcal{X}$, annotation budget size l, minimal decision set size n, KGE model M, classifier $C : \mathbb{R} \rightarrow [0, 1]$
Output: set of per-relation thresholds $\mathcal{T}$

\# Step 1: sample selection for human annotation
1: $\mathcal{T}$ ← a set of per-relation thresholds
2: for i = 0, 1, ..., $|\mathcal{X}|$ do
3:   $density_{x_i} = \sum_{j=0}^{|\mathcal{X}|} (score_j - score_i)^2$
4: $\mathcal{X}_{gold}$ ← top l samples with maximal $density_{x_i}$
5: manually annotate $\mathcal{X}_{gold}$ with $y_{gold}$ labels
6: [the rest is the same as in ACTCrndm, see Alg. 1]
\# Step 2: automatically label additional samples
7: [same as Step 2 in ACTCrndm, see Alg. 1]
\# Step 3: estimate per-relation threshold $\tau_r$
8: [same as Step 3 in ACTCrndm, see Alg. 1]
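For illustration, the density-based selection in Step 1 can be written as a short Python sketch. This is not the released implementation: the function name, the `scores` array (assumed to hold the KGE model scores of the unlabeled triples), and the toy usage are our own illustrative choices.

```python
import numpy as np

def select_for_annotation(scores: np.ndarray, budget: int) -> np.ndarray:
    """Step 1 of ACTC-dens: choose `budget` samples with maximal density.

    Following Algorithm 2, the density of sample i is the sum of squared
    differences between its KGE score and the scores of all other samples.
    """
    diffs = scores[None, :] - scores[:, None]      # pairwise score differences, shape (|X|, |X|)
    density = (diffs ** 2).sum(axis=1)             # density_{x_i} = sum_j (score_j - score_i)^2
    return np.argsort(-density)[:budget]           # indices of the top-`budget` samples

# toy usage: 1000 unlabeled triples scored by a KGE model, annotation budget l = 10
scores = np.random.default_rng(12345).normal(size=1000)
to_annotate = select_for_annotation(scores, budget=10)
```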
## B CoDEx Datasets
In our experiments, we use the benchmark CoDEx datasets (Safavi and Koutra, 2020). The datasets were collected from Wikidata in the following way: a seed set of entities and relations for 13 domains (medicine, science, sport, etc.) was defined and used to query Wikidata in order to retrieve entities, relations, and triples. After additional postprocessing (e.g., removal of inverse relations), the retrieved data was used to construct three datasets: CoDEx-S, CoDEx-M, and CoDEx-L. For the first two datasets, the authors additionally constructed hard negative samples (by manually annotating candidate triples that were generated using a pretrained embedding model), which allows us to use them in our experiments.
- An example of positive triple: *(Senegal, part* of, West Africa).
- An example of negative triple: *(Senegal, part* of, Middle East).
## C Embedding Models
We use four knowledge graph embedding models.
This section highlights their main properties and provides their scoring functions.
ComplEx (Trouillon et al., 2016) uses complex-valued embeddings and a diagonal relation embedding matrix to score triples; the scoring function is defined as $s(h, r, t) = \operatorname{Re}\!\left(\mathbf{e}_h^{\top} \operatorname{diag}(\mathbf{r}_r)\, \bar{\mathbf{e}}_t\right)$.
ConvE (Dettmers et al., 2017) represents a neural approach to KGE scoring and exploits non-linearities: $s(h, r, t) = f\!\left(\operatorname{vec}\!\left(f\left([\mathbf{e}_h; \mathbf{r}_r] \ast \omega\right)\right)\mathbf{W}\right)\mathbf{e}_t$.
TransE (Bordes et al., 2013) is an example of translational KGE models, where relations are treated as translations between entities; the embeddings are scored with $s(h, r, t) = -\left\|\mathbf{e}_h + \mathbf{r}_r - \mathbf{e}_t\right\|_p$.
RESCAL (Nickel et al., 2011) treats entities as vectors and relation types as matrices, and scores entity and relation embeddings with the following scoring function: $s(h, r, t) = \mathbf{e}_h^{\top}\mathbf{R}_r\,\mathbf{e}_t$.
These models were selected, first, following previous work (Safavi and Koutra, 2020; Speranskaya et al., 2020), and, second, to demonstrate the performance of our method across different KGE approaches: linear (ComplEx and RESCAL), translational (TransE), and neural (ConvE).
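For illustration, the three non-convolutional scoring functions above can be sketched in a few lines of NumPy. The embedding sizes, variable names, and random values are our own placeholders; ConvE is omitted because its score depends on model-specific reshaping and convolution.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                             # embedding dimension (illustrative)

# real-valued embeddings (TransE, RESCAL)
e_h, e_t, r_vec = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)
R = rng.normal(size=(d, d))                       # full relation matrix for RESCAL

# complex-valued embeddings (ComplEx)
e_h_c = rng.normal(size=d) + 1j * rng.normal(size=d)
e_t_c = rng.normal(size=d) + 1j * rng.normal(size=d)
r_c = rng.normal(size=d) + 1j * rng.normal(size=d)

# ComplEx: real part of a bilinear form with a diagonal complex relation matrix
score_complex = np.real(e_h_c @ np.diag(r_c) @ np.conj(e_t_c))

# TransE: negative L_p distance between the translated head and the tail (p = 2 here)
score_transe = -np.linalg.norm(e_h + r_vec - e_t, ord=2)

# RESCAL: bilinear form e_h^T R_r e_t with a full relation matrix
score_rescal = e_h @ R @ e_t
```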
## D Ablation Study
Optimization towards F1 score. Just as we converted the LocalOpt (Acc) baseline from Safavi and Koutra (2020) into a LocalOpt (F1) setting, we also converted ACTC into ACTC(F1). The only difference is the metric that the thresholds maximize: instead of accuracy, the threshold that provides the best F1 score is selected. Table 3 is an extended result table, which provides the ACTC(F1) numbers together with the standard ACTC (optimizing towards accuracy) and the baselines. As can be seen, there is no dramatic change in ACTC performance; naturally enough, the F1 test score for ACTC(F1) experiments is slightly better than the F1 test score
| CoDEx-s | CoDEx-m | Avg | | | | | | | | | | | |
|----------------------------|-----------|--------|--------|---------|-------|--------|--------|-------|-------|-------|----|----|-------|
| ComplEx | ConvE | TransE | RESCAL | ComplEx | ConvE | TransE | RESCAL | | | | | | |
| Acc F1 | AccF1 | AccF1 | Acc F1 | Acc F1 | AccF1 | AccF1 | Acc | F1 | AccF1 | | | | |
| LocalOpt (Acc) | 70 | 70 | 72 72 | 69 68 | 74 | 73 | 72 | 70 | 68 66 | 65 64 | 68 | 67 | 70 69 |
| (Safavi and Koutra, 2020) | ±3 | ±3 | ±3 ±2 | ±3 ±3 | ±2 | ±2 | ±2 | ±2 | ±3 ±2 | ±3 ±3 | ±3 | ±2 | |
| LocalOpt (F1) | 67 | 69 | 69 70 | 65 67 | 70 | 71 | 70 | 69 | 66 66 | 63 64 | 66 | 67 | 67 68 |
| ±3 | ±3 | ±3 ±2 | ±3 ±3 | ±2 | ±2 | ±2 | ±2 | ±2 ±2 | ±3 ±3 | ±3 | ±2 | | |
| GlobalOpt (F1) | 70 | 74 | 74 77 | 68 71 | 76 | 79 | 73 | 75 | 68 70 | 65 68 | 68 | 71 | 70 73 |
| (Speranskaya et al., 2020) | ±2 | ±2 | ±1 ±2 | ±2 ±2 | ±1 | ±1 | ±1 | ±2 | ±1 ±2 | ±2 ±2 | ±1 | ±2 | |
| ACT C − LRdens | 72 | 72 | 77 78 | 69 71 | 80 | 81 | 78 | 77 | 72 71 | 64 65 | 72 | 70 | 73 73 |
| ±3 | ±2 | ±1 ±1 | ±2 ±0 | ±1 | ±1 | ±0 | ±1 | ±1 ±1 | ±1 ±1 | ±1 | ±1 | | |
| ACT C − GPdens | 72 | 72 | 76 78 | 69 71 | 80 | 80 | 78 | 77 | 72 70 | 64 65 | 73 | 71 | 73 73 |
| ±3 | ±2 | ±1 ±1 | ±1 ±2 | ±1 | ±1 | ±0 | ±0 | ±1 ±1 | ±2 ±2 | ±2 | ±1 | | |
| ACT C − LRrndm | 74 | 74 | 77 77 | 73 72 | 79 | 79 | 78 | 78 | 72 72 | 69 69 | 73 | 73 | 74 74 |
| ±3 | ±2 | ±2 ±2 | ±3 ±3 | ±1 | ±1 | ±1 | ±1 | ±2 ±2 | ±3 ±2 | ±2 | ±2 | | |
| ACT C − GPrndm | 74 | 74 | 77 77 | 73 72 | 81 | 81 | 77 | 77 | 71 71 | 67 66 | 72 | 71 | 74 74 |
| ±3 | ±2 | ±2 ±2 | ±3 ±3 | ±1 | ±1 | ±1 | ±1 | ±2 ±2 | ±3 ±3 | ±2 | ±2 | | |
| ACT C − LRdens(F1) | 72 | 72 | 73 75 | 63 66 | 78 | 79 | 78 | 77 | 72 72 | 64 66 | 72 | 71 | 72 73 |
| ±3 | ±2 | ±0 ±0 | ±0 ±1 | ±0 | ±1 | ±1 | ±1 | ±1 ±2 | ±2 ±1 | ±3 | ±2 | | |
| ACT C − GPdens(F1) | 72 | 73 | 76 78 | 68 70 | 79 | 80 | 78 | 77 | 71 71 | 64 66 | 71 | 73 | 72 74 |
| ±2 | ±1 | ±1 ±1 | ±1 ±2 | ±1 | ±1 | ±1 | ±2 | ±2 ±1 | ±1 ±1 | ±1 | ±3 | | |
| ACT C − LRrndm(F1) | 73 | 74 | 77 78 | 72 74 | 79 | 80 | 76 | 75 | 69 70 | 66 67 | 70 | 70 | 73 74 |
| ±3 | ±2 | ±2 ±1 | ±3 ±2 | ±1 | ±1 | ±2 | ±1 | ±2 ±2 | ±3 ±2 | ±2 | ±2 | | |
| ACT C − GPrndm(F1) | 74 | 74 | 77 77 | 72 73 | 79 | 79 | 77 | 77 | 70 71 | 67 68 | 71 | 72 | 73 74 |
| ±1 | ±2 | ±2 ±2 | ±3 ±2 | ±1 | ±1 | ±3 | ±2 | ±1 ±2 | ±1 ±3 | ±1 | ±1 | | |
for experiments where the thresholds were selected based on the accuracy value.
Estimating All Samples. Apart from the automatic labeling of *additional* samples discussed in Section 3 (i.e., additional samples are labeled in case of insufficient manual annotations so that the size of the decision set built from manually annotated and automatically labeled samples equals n), we also experimented with annotating all samples: every sample that was not manually labeled is automatically labeled with a classifier. However, the performance was slightly better only for the middle budgets (i.e., for the settings with 5, 10, and 20 manually annotated samples) and became considerably worse for large budgets (i.e., 100, 200, etc.), especially in the density-selection setting. Based on that, we conclude that a large amount of automatically labeled additional data is not what the model profits from the most; the redundant labels (which are not gold and potentially contain mistakes) only amplify the errors and lead to worse algorithm performance.
Hard vs. Soft Labels. The classifier's predictions can either be used directly as real-valued *soft* labels or transformed into *hard* ones by selecting the class with the maximum probability. In most of our experiments, the performance of soft and hard labels was practically indistinguishable (with a slight advantage for the latter). All results provided in this paper were obtained with hard automatic labels.
## E Experimental Setting
As no validation data is available in our setting, the ACTC method does not require any hyperparameter tuning. We did not use a GPU for our experiments; one ACTC run takes, on average, 2 minutes. All results are reproducible with a seed value of 12345.
ACTC does not impose any restrictions on the classifier architecture. We experimented with two classifiers: a Logistic Regression classifier and a Gaussian Processes classifier. For both of them, we used the Scikit-learn implementation (Pedregosa et al., 2011). The Logistic Regression classifier was used in the default Scikit-learn setting, with an L2 penalty term and the inverse of regularization strength set to 100. For the Gaussian Processes classifier, we experimented with the following kernels:
- the squared exponential **RBF kernel** with length_scale = 10
- its generalized and smoothed version, the **Matérn** kernel, with length_scale = 0.1
- a mixture of different RBF kernels, the **RationalQuadratic** kernel, with length_scale = 0.1
All results for the Gaussian Processes classifier provided in this paper are obtained with the Matérn kernel (Minasny and McBratney, 2005), defined by the following kernel function:

$$k(x_i, x_j) = \frac{1}{\Gamma(\nu)\,2^{\nu-1}}\left(\frac{\sqrt{2\nu}}{l}\,d(x_i, x_j)\right)^{\nu} K_{\nu}\!\left(\frac{\sqrt{2\nu}}{l}\,d(x_i, x_j)\right),$$

where $K_{\nu}$ is a modified Bessel function and $\Gamma$ is the Gamma function.
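A minimal Scikit-learn sketch of this classifier setup is given below. The classifier maps a KGE score to [0, 1] (cf. Algorithm 2), so the inputs here are one-dimensional scores; the concrete feature values, labels, and variable names are illustrative placeholders rather than data from our experiments.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import Matern

# KGE scores of the manually annotated triples (1-d features) and their oracle labels
X_gold = np.array([[-1.3], [-0.5], [0.2], [0.9], [1.7]])
y_gold = np.array([0, 0, 0, 1, 1])

# Logistic Regression: default Scikit-learn setting, L2 penalty, C = 100
lr = LogisticRegression(penalty="l2", C=100).fit(X_gold, y_gold)

# Gaussian Processes classifier with a Matérn kernel (length_scale = 0.1)
gp = GaussianProcessClassifier(kernel=Matern(length_scale=0.1)).fit(X_gold, y_gold)

# label additional samples: soft labels = probabilities, hard labels = thresholded at 0.5
X_extra = np.array([[0.4], [-2.0]])
soft_gp = gp.predict_proba(X_extra)[:, 1]
hard_gp = (soft_gp > 0.5).astype(int)
soft_lr = lr.predict_proba(X_extra)[:, 1]
```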
## F Results For Different Annotation Budgets
Tables 4, 5, and 6 demonstrate the performance of the different ACTC settings for different annotation budgets (1, 10, and 50, respectively). The results are averaged over all settings; each setting was repeated 100 times. Table 4 demonstrates how useful the density-based selection methods are in a low-budget setting. However, the unbiased random selection works better with more manually annotated samples (e.g., 50).
| CoDEx-s | CoDEx-m | Avg | | | | | | | | | | | | |
|----------------------------|-----------|--------|--------|---------|-------|--------|--------|--------|-------|-----------|--------|----|-------|-------|
| ComplEx | ConvE | TransE | RESCAL | ComplEx | ConvE | TransE | RESCAL | | | | | | | |
| Acc | F1 | Acc F1 | Acc F1 | Acc F1 | Acc | F1 | Acc F1 | Acc F1 | Acc | F1 | Acc F1 | | | |
| LocalOpt (Acc)1 | 60 | 58 | 65 65 | 60 57 | 68 | 66 | 66 | 63 | 61 59 | 55 | 50 | 62 | 58 | 62 60 |
| (Safavi and Koutra, 2020) | ±1 | ±2 | ±1 ±1 | ±1 ±1 | ±1 | ±2 | ±1 | ±2 | ±1 ±1 | ±0 | ±2 | ±1 | ±2 | |
| LocalOpt (F1)1 | 60 | 58 | 65 65 | 60 57 | 68 | 66 | 66 | 63 | 61 59 | 55 | 50 | 62 | 58 | 62 60 |
| ±1 | ±2 | ±1 ±1 | ±1 ±1 | ±1 | ±2 | ±1 | ±2 | ±1 ±1 | ±0 | ±2 | ±1 | ±2 | | |
| GlobalOpt (F1)1 | 61 | 67 | 67 72 | 57 65 | 70 | 75 | 65 | 72 | 60 66 | 55 | 62 | 61 | 67 | 62 68 |
| (Speranskaya et al., 2020) | ±0 | (1.0) | ±0 ±0 | ±0 ±1 | ±0 | ±0 | ±1 | ±0 | ±0 ±0 | ±0 (1.0) | ±0 | ±0 | | |
| ACT C − LR1 dens | 67 | 68 | 76 77 | 65 66 | 78 | 79 | 71 | 75 | 69 66 | 57 | 65 | 68 | 62 | 69 70 |
| ±0 | ±0 | ±0 ±0 | ±0 ±0 | ±0 | ±0 | ±0 | ±0 | ±0 ±0 | ±0 | ±0 | ±0 | ±0 | | |
| 59 | 47 | 72 76 | 61 59 | 50 | 67 | 76 | 76 | 71 70 | 58 | 66 | 68 | 62 | 64 65 | |
| ACT C − GP1 dens | ±0 | ±0 | ±0 ±0 | ±1 ±1 | ±0 | ±0 | ±0 | ±0 | ±0 ±0 | ±0 | ±0 | ±0 | ±0 | |
| 63 | 60 | 70 69 | 61 58 | 76 | 75 | 69 | 67 | 63 62 | 57 | 52 | 63 | 60 | 67 63 | |
| ACT C − LR1 rndm | ±0 | ±1 | ±1 ±1 | ±1 ±1 | ±1 | ±1 | ±0 | ±2 | ±0 ±1 | ±0 | ±2 | ±0 | ±1 | |
| 63 | 61 | 71 70 | 61 59 | 76 | 76 | 74 | 73 | 64 65 | 56 | 64 | 65 | 64 | 66 67 | |
| ACT C − GP1 rndm | ±1 | ±1 | ±0 ±1 | ±1 ±1 | ±1 | ±1 | ±0 | ±0 | ±0 ±0 | ±0 (0.02) | ±0 | ±0 | | |
Table 4: ACTC results for l = 1, n = 500, averaged across 100 tries for each experiment and reported with the standard error of the mean
CoDEx-s CoDEx-m Avg
ComplEx ConvE TransE RESCAL ComplEx ConvE TransE RESCAL Acc F1 Acc F1 Acc F1 Acc F1 Acc F1 Acc F1 Acc F1 Acc F1 Acc F1
LocalOpt (Acc)10 62 62 65 64 60 59 67 66 67 65 63 59 57 57 63 60 63 62
(Safavi and Koutra, 2020) ±1 ±1 ±1 ±1 ±1 ±1 ±1 ±1 ±1 ±1 ±1 ±1 ±0 ±1 ±1 ±1
LocalOpt (F1)10 57 61 61 62 55 59 63 64 65 63 60 60 55 58 60 60 60 61
±1 ±1 ±1 ±1 ±1 ±1 ±1 ±1 ±1 ±1 ±1 ±1 ±1 ±1 ±1 ±1
GlobalOpt (F1)10 66 71 71 74 65 67 73 76 70 73 65 68 61 66 64 68 67 70
(Speranskaya et al., 2020) ±0 ±0 ±0 ±1 ±1 ±1 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±1 ±0 ±0
ACT C − LR10
dens
70 73 74 76 66 67 79 80 77 77 71 70 61 61 73 72 **71 72**
±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0
ACT C − GP10
dens
64 71 73 76 68 65 78 80 77 77 72 70 61 61 73 72 **71 72**
±0 ±0 ±0 ±0 ±1 ±1 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0
ACT C − LR10
rndm
70 70 74 73 68 66 77 77 74 73 67 66 62 62 67 66 70 70
±0 ±1 ±0 ±1 ±0 ±1 ±0 ±1 ±0 ±1 ±0 ±1 ±0 ±1 ±0 ±1
ACT C − GP10
rndm
71 70 73 73 68 65 77 77 75 74 68 67 62 61 68 67 70 70
±0 ±1 ±0 ±1 ±1 ±1 ±0 ±1 ±0 ±1 ±0 ±1 ±1 ±1 ±1 ±1
Table 5: ACTC results for l = 10, n = 500, averaged across 100 tries for each experiment and reported with the standard error of the mean
CoDEx-s CoDEx-m Avg
ComplEx ConvE TransE RESCAL ComplEx ConvE TransE RESCAL Acc F1 Acc F1 Acc F1 Acc F1 Acc F1 Acc F1 Acc F1 Acc F1 Acc F1
LocalOpt (Acc)50 72 73 73 74 71 71 74 74 73 72 70 69 67 68 71 70 71 71
(Safavi and Koutra, 2020) ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0
LocalOpt (F1)50 65 69 66 70 64 68 67 71 69 71 66 68 64 67 67 69 66 69
±1 ±0 ±1 ±0 ±1 ±0 ±1 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0
GlobalOpt (F1)50 73 76 75 78 71 73 77 79 74 76 70 72 67 71 69 72 72 75
(Speranskaya et al., 2020) ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0
ACT C − LR50
dens76 76 78 79 72 72 80 81 79 79 72 72 63 64 73 73 74 75
±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0
ACT C − GP50
dens75 78 78 80 77 78 77 78 79 78 72 72 64 64 74 74 75 75
±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0
ACT C − LR10
rndm76 78 79 79 76 77 80 80 78 78 71 71 69 70 73 73 **75 76**
±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0
ACT C − GP10
rndm75 78 79 80 77 78 80 80 78 78 72 71 69 70 73 74 **75 76**
±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0 ±0
Table 6: ACTC results for l = 50, n = 500, averaged across 100 tries for each experiment and reported with the standard error of the mean
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 7
✓ A2. Did you discuss any potential risks of your work?
Section 8
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 1-6
✓ B1. Did you cite the creators of artifacts you used?
Sections 1, 2, 4
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section 1 (footnote)
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 8
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Section 7, Appendix B
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 8
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 4
## C ✓ **Did You Run Computational Experiments?** Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix E
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix E
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Tables 1, 3, 4, 5, 6, Figures 2, 3, 4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix E
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
cheng-etal-2023-task | Task-Aware Specialization for Efficient and Robust Dense Retrieval for Open-Domain Question Answering | https://aclanthology.org/2023.acl-short.159 | Given its effectiveness on knowledge-intensive natural language processing tasks, dense retrieval models have become increasingly popular. Specifically, the de-facto architecture for open-domain question answering uses two isomorphic encoders that are initialized from the same pretrained model but separately parameterized for questions and passages. This biencoder architecture is parameter-inefficient in that there is no parameter sharing between encoders. Further, recent studies show that such dense retrievers underperform BM25 in various settings. We thus propose a new architecture, Task-Aware Specialization for dEnse Retrieval (TASER), which enables parameter sharing by interleaving shared and specialized blocks in a single encoder. Our experiments on five question answering datasets show that TASER can achieve superior accuracy, surpassing BM25, while using about 60{\%} of the parameters as bi-encoder dense retrievers. In out-of-domain evaluations, TASER is also empirically more robust than bi-encoder dense retrievers. Our code is available at \url{https://github.com/microsoft/taser}. | # Task-Aware Specialization For Efficient And Robust Dense Retrieval For Open-Domain Question Answering
Hao Cheng♠ Hao Fang♣ Xiaodong Liu♠ **Jianfeng Gao**♠
♠ Microsoft Research ♣ Microsoft Semantic Machines
{chehao,hafang,xiaodl,jfgao}@microsoft.com
## Abstract
Given its effectiveness on knowledge-intensive natural language processing tasks, dense retrieval models have become increasingly popular. Specifically, the *de-facto* architecture for open-domain question answering uses two isomorphic encoders that are initialized from the same pretrained model but separately parameterized for questions and passages. This biencoder architecture is parameter-inefficient in that there is no parameter sharing between encoders. Further, recent studies show that such dense retrievers underperform BM25 in various settings. We thus propose a new architecture, Task-Aware Specialization for dEnse Retrieval (TASER), which enables parameter sharing by interleaving shared and specialized blocks in a single encoder. Our experiments on five question answering datasets show that TASER can achieve superior accuracy, surpassing BM25, while using about 60% of the parameters as bi-encoder dense retrievers. In out-of-domain evaluations, TASER is also empirically more robust than bi-encoder dense retrievers. Our code is available at https:
//github.com/microsoft/taser.
## 1 Introduction
Empowered by learnable neural representations built upon pretrained language models, the dense retrieval framework has become increasingly popular for fetching external knowledge in various natural language processing tasks (Lee et al., 2019; Guu et al., 2020; Lewis et al., 2020). For opendomain question answering (ODQA), the *de-facto* dense retriever is the bi-encoder architecture (Lee et al., 2019; Karpukhin et al., 2020), consisting of a question encoder and a passage encoder. Typically, the two encoders are isomorphic but separately parameterized, as they are initialized from the same pretrained model and then fine-tuned on the task.
Despite its popularity, this bi-encoder architecture with fully decoupled parameterization has some open issues. First, from the efficiency perspective, the bi-encoder parameterization clearly results in a scaling bottleneck for both training and inference. Second, empirical results from recent studies show that such bi-encoder dense retrievers underperform their sparse counterpart BM25
(Robertson and Walker, 1994) in various settings.
For example, both Lee et al. (2019) and Karpukhin et al. (2020) suggest the inferior performance on SQuAD (Rajpurkar et al., 2016) is partially due to the high lexical overlap between questions and passages, which gives BM25 a clear advantage. Sciavolino et al. (2021) also find that bi-encoder dense retrievers are more sensitive to distribution shift than BM25, resulting in poor generalization on questions with rare entities.
In this paper, we develop Task-Aware Specialization for dEnse Retrieval, TASER, as a more parameter-efficient and robust architecture. Instead of using two isomorphic and fully decoupled Transformer (Vaswani et al., 2017) encoders, TASER
interleaves shared encoder blocks with specialized ones in a single encoder, motivated by recent success in using Mixture-of-Experts (MoE) to scale up Transformer (Fedus et al., 2021). For the shared encoder block, the entire network is used to encode both questions and passages. For the specialized encoder block, some sub-networks are task-specific and activated only for certain encoding tasks. To choose among task-specific sub-networks, TASER
uses an input-dependent routing mechanism, *i.e.,*
questions and passages are passed through separate dedicated sub-networks.
We carry out both in-domain and out-of-domain evaluations for TASER. For the in-domain evaluation, we use five popular ODQA datasets. Our best model outperforms BM25 and existing bi-encoder dense retrievers, while using far fewer parameters. It is worth noting that TASER can effectively close the performance gap on SQuAD between dense retrievers and BM25. One interesting finding from our experiments is that excluding SQuAD from the multi-set training is unnecessary, which was a suggestion made by Karpukhin et al. (2020) and adopted by most follow-up work.
Our out-of-domain evaluation experiments use EntityQuestions (Sciavolino et al., 2021) and BEIR
(Thakur et al., 2021). Consistent improvements over the doubly parameterized bi-encoder dense retriever are observed in these zero-shot evaluations as well. Our code is available at https:
//github.com/microsoft/taser.
## 2 Background
In this section, we provide necessary background about the bi-encoder architecture for dense passage retrieval which is widely used in ODQA (Lee et al.,
2019; Karpukhin et al., 2020) and is the primary baseline model in our experiments.
The bi-encoder architecture consists of a question encoder and a passage encoder, both of which are usually Transformer encoders (Vaswani et al.,
2017). A Transformer encoder is built up with a stack of Transformer blocks. Each block consists of a multi-head self-attention (MHA) sub-layer and a feed-forward network (FFN) sub-layer, with residual connections (He et al., 2016) and layer normalization (Ba et al., 2016) applied to both sub-layers. Given an input vector $\mathbf{h} \in \mathbb{R}^{d}$, the FFN sub-layer produces an output vector as follows:

$$\mathrm{FFN}(\mathbf{h}) = \mathbf{W}_2 \max\{0, \mathbf{W}_1\mathbf{h} + \mathbf{b}_1\} + \mathbf{b}_2, \quad (1)$$

where $\mathbf{W}_1 \in \mathbb{R}^{m \times d}$, $\mathbf{W}_2 \in \mathbb{R}^{d \times m}$, $\mathbf{b}_1 \in \mathbb{R}^{m}$, and $\mathbf{b}_2 \in \mathbb{R}^{d}$ are learnable parameters. For a sequence of N tokens, each Transformer block produces N corresponding vectors, together with a vector for the special prefix token [CLS] which can be used as the representation of the sequence. We refer readers to (Vaswani et al., 2017) for other details about the Transformer. Typically, the question encoder and passage encoder are initialized from a pretrained language model such as BERT (Devlin et al., 2019), but they are parameterized separately, i.e., their parameters would differ after training.
The bi-encoder model independently encodes questions and passages into d-dimensional vectors, using the final output vectors for [CLS] from the corresponding encoders, denoted as $\mathbf{q} \in \mathbb{R}^{d}$ and $\mathbf{p} \in \mathbb{R}^{d}$, respectively. The relevance between a question and a passage can then be measured in the vector space using the dot product, *i.e.,* $\mathrm{sim}(\mathbf{q}, \mathbf{p}) = \mathbf{q}^{\top}\mathbf{p}$. During training, the model is
![1_image_0.png](1_image_0.png)
optimized based on a contrastive learning objective,

$$L_{sim}=-\log\frac{\exp(\mathrm{sim}(\mathbf{q},\mathbf{p}^{+}))}{\sum_{\mathbf{p}^{\prime}\in\mathcal{P}\cup\{\mathbf{p}^{+}\}}\exp(\mathrm{sim}(\mathbf{q},\mathbf{p}^{\prime}))},\quad(2)$$

where $\mathbf{p}^{+}$ is the relevant (positive) passage for the given question, and $\mathcal{P}$ is the set of irrelevant (negative) passages. During inference, all passages are pre-converted into vectors using the passage encoder. Then, each incoming question is encoded using the question encoder, and a top-K list of the most relevant passages is retrieved based on their relevance scores with respect to the question.
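As a rough sketch of this training objective, written here in PyTorch for concreteness, the simplified in-batch-negative variant of Equation 2 can be implemented as follows; the function name, batch construction, and dimensions are illustrative and not the exact training code.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(q: torch.Tensor, p: torch.Tensor) -> torch.Tensor:
    """q: (B, d) question vectors; p: (B, d) passage vectors.

    Passage i is the positive for question i; the remaining passages in the
    batch serve as negatives, so the loss is the negative log-softmax of the
    positive similarity, mirroring Equation 2.
    """
    sim = q @ p.t()                                       # (B, B) dot-product similarities
    targets = torch.arange(q.size(0), device=q.device)    # diagonal entries are positives
    return F.cross_entropy(sim, targets)

# toy usage with random [CLS] vectors
loss = contrastive_loss(torch.randn(4, 768), torch.randn(4, 768))
```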
Although the bi-encoder dense retrieval architecture has achieved impressive results in ODQA,
little work has attempted to improve its parameter efficiency. Further, compared to the sparse vector space model BM25 (Robertson and Walker, 1994), such bi-encoder dense retrievers sometimes suffer from inferior generalization performance, *e.g.,* when the training data is extremely biased (Lebret et al., 2016; Karpukhin et al., 2020) or when there is a distribution shift (Sciavolino et al., 2021). In this paper, we conjecture that the unstable generalization performance is partially related to the unnecessarily large number of learnable parameters in the model. Therefore, we develop a task-aware specialization architecture for dense retrieval with parameter sharing between the question and passage encoders, which turns out to improve both parameter efficiency and generalization performance.
## 3 Proposed Model: Taser
As shown in Fig. 1, TASER interleaves shared Transformer blocks with specialized ones. The shared Transformer block is identical to the Transformer block used in the bi-encoder architecture, but the entire block is shared for both questions and passages. In the specialized block, we apply MoE-style task-aware specialization to the FFN
sub-layer, following (Fedus et al., 2021), where the router always routes the input to a single expert FFN sub-layer. In our experiments, we use a simple yet effective routing mechanism, which uses an expert sub-layer (Q-FFN) for questions and another
(P-FFN) for passages. The router determines the expert FFN sub-layer based on whether the input is a question or a passage. Other routing mechanisms are discussed in Appendix A.
TASER uses one specialized Transformer block after every T shared Transformer blocks in the stack, starting with a shared one at the bottom. Our preliminary study indicates that the model performance is not sensitive to the choice of T, so we use T = 2 for experiments in this paper.
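A minimal sketch of the specialized FFN sub-layer with Det-R routing is shown below: questions pass through one expert FFN and passages through another, while the rest of the block is shared. The class, dimensions, and the `is_question` flag are our own illustrative choices rather than the released implementation.

```python
import torch
import torch.nn as nn

class SpecializedFFN(nn.Module):
    """FFN sub-layer with two expert FFNs: one for questions, one for passages."""

    def __init__(self, d_model: int = 768, d_ff: int = 3072):
        super().__init__()
        def make_ffn():
            # two-layer FFN with ReLU, matching the form of Equation 1
            return nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.q_ffn = make_ffn()   # expert activated for questions
        self.p_ffn = make_ffn()   # expert activated for passages

    def forward(self, h: torch.Tensor, is_question: bool) -> torch.Tensor:
        # deterministic router: the input type decides which expert is activated
        return self.q_ffn(h) if is_question else self.p_ffn(h)

# usage: route hidden states for a question batch and a passage batch
ffn = SpecializedFFN()
h = torch.randn(2, 32, 768)           # (batch, seq_len, d_model)
out_q = ffn(h, is_question=True)
out_p = ffn(h, is_question=False)
```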
Similar to the bi-encoder architecture, TASER
is trained using the contrastive learning objective Lsim defined in Equation 2. Specifically, the objective needs to use a set of negative passages P
for each question. Following Xiong et al. (2020)
and Qu et al. (2021), we construct P via hard negatives mining (Appendix B). Our experiments use the *multi-set* training paradigm, *i.e.,* the model is trained by combining data from multiple datasets to obtain a model that works well across the board.
## 4 Experiments 4.1 In-Domain Evaluation
We carry out in-domain evaluations on five ODQA
datasets: NaturalQuestions (NQ; Kwiatkowski et al., 2019a), TriviaQA (TQ; Joshi et al., 2017),
WebQuestions (WQ; Berant et al., 2013), CuratedTrec (CT; Baudiš and Šedivý, 2015), and SQuAD (Rajpurkar et al., 2016). All data splits and the Wikipedia collection for retrieval used in our experiments are the same as Karpukhin et al.
(2020). The top-K retrieval accuracy (R@K) is used for evaluation, which evaluates whether any gold answer string is contained in the top K retrieved passages.
Besides BERT-base, coCondenser-Wiki (Gao and Callan, 2022) is also used to initialize TASER
models. We further present results of hybrid models that linearly combine the dense retrieval scores with the BM25 scores. See Appendix D for details. Evaluation results are summarized in Ta-
| NQ | TQ | WQ | CT | SQuAD | |
|------------------------------------|------|------|------|---------|------|
| BM25(1) | 62.9 | 76.4 | 62.4 | 80.7 | 71.1 |
| Multi-Set Training (without SQuAD) | | | | | |
| DPR(1) | 79.5 | 78.9 | 75.0 | 88.8 | 52.0 |
| DPRBM25 (1) | 82.6 | 82.6 | 77.3 | 90.1 | 75.1 |
| xMoCo(2) | 82.5 | 80.1 | 78.2 | 89.4 | 55.9 |
| (3) | 83.0 | 82.6 | 76.0 | 89.9 | 73.0 |
| SPARWiki SPARPAQ (4) | 82.7 | 82.5 | 76.3 | 90.3 | 72.9 |
| Multi-Set Training (with SQuAD) | | | | | |
| DPR† | 80.9 | 79.6 | 74.0 | 88.0 | 63.1 |
| DPR⋄ | 82.5 | 81.8 | 77.8 | 91.2 | 67.0 |
| DPR⋆ | 83.7 | 82.6 | 78.9 | 91.6 | 68.0 |
| TASER⋄ | 83.6 | 82.0 | 77.9 | 91.1 | 69.7 |
| TASER⋆ | 84.9 | 83.4 | 78.9 | 90.8 | 72.9 |
| TASER⋆ BM25 | 85.0 | 84.0 | 79.6 | 92.1 | 78.0 |
ble 1.
1 Note that the last five models in Table 1 are trained with the same hard negative mining.
All prior work excludes SQuAD from the multiset training, as suggested by Karpukhin et al.
(2020). We instead train models using all five datasets. Specifically, we observe that this would not hurt the overall performance, and it actually significantly improves the performance on SQuAD,
comparing DPR(1) with DPR†.
Comparing models initialized from BERT-base, TASER⋄ significantly outperforms xMoCo (Yang et al., 2021) and is slightly better than DPR⋄, while using around 60% of the parameters. SPAR (Chen et al., 2022) is also initialized from BERT-base, but it augments DPR with another dense lexical model trained on either Wikipedia or PAQ (Lewis et al., 2021), which doubles the model size (Table A3). TASER⋄ is mostly on par with SPAR-Wiki and SPAR-PAQ, except on SQuAD, but its model size is about a quarter of SPAR's.
Gao and Callan (2022) have shown that the coCondenser model outperforms DPR models initialized from BERT-base in the single-set training setting. Here, we show that using coCondenser-Wiki for initialization is also beneficial for TASER under the multi-set setting, especially for SQuAD where

1We also report R@100 scores in Table A2 and corresponding model sizes in Table A3.
| R@20 | nDCG@10 | | | | |
|----------|-----------|------|------|------|------|
| EQ | AA | DBP | FEV | HQA | |
| BM25 | 71.2 | 31.5 | 31.3 | 75.3 | 60.3 |
| DPRMulti | 56.7 | 17.5 | 26.3 | 56.2 | 39.1 |
| TASER⋄ | 64.7 | 32.8 | 31.4 | 59.6 | 50.7 |
| TASER⋆ | 66.7 | 30.5 | 31.6 | 58.8 | 54.5 |
R@20 is improved by 3.2 points. Notably, SQuAD is the only dataset among the five where DPR underperforms BM25, due to its higher lexical overlap between questions and passages. Nevertheless, TASER⋆ surpasses BM25 on all five datasets and is either on par with or better than state-of-the-art dense-only retriever models, demonstrating its superior parameter efficiency.

Consistent with previous work, combining BM25 with dense models can further boost the performance, particularly on SQuAD. However, the improvement is more pronounced for DPR as compared to TASER⋆, indicating that TASER⋆ is able to capture more lexical overlap features. Finally, TASER⋆BM25 sets new state-of-the-art performance on all five ODQA datasets.
We also compare the computation time needed for one epoch of training and validation. The baseline DPR model takes approximately 15 minutes, whereas TASER takes 11 minutes (a 26% improvement), both measured using 16 V100-32G GPUs.
## 4.2 Out-Of-Domain Evaluation
We use two benchmarks to evaluate the out-of-domain generalization ability of TASER⋄ and TASER⋆ from Table 1. EntityQuestions (EQ; Sciavolino et al., 2021) is used to measure the model sensitivity to entity distributions, as DPR is found to perform poorly on entity-centric questions containing rare entities. BEIR (Thakur et al., 2021) is used to study the model generalization ability in other genres of information retrieval tasks. Specifically, we focus on four datasets from BEIR where DPR underperforms BM25, *i.e.,* ArguAna (AA;
Wachsmuth et al., 2018), DBPedia (DBP; Hasibi et al., 2017), FEVER (FEV; Thorne et al., 2018),
and HotpotQA (HQA; Yang et al., 2018). Results are summarized in Table 2. For EntityQuestions, we report R@20 scores following Sciavolino et al.
(2021).2 For BEIR datasets, nDCG@10 scores are used following Thakur et al. (2021).
On EntityQuestions, both TASER⋄and TASER⋆
outperform the doubly parameterized DPRMulti
(Karpukhin et al., 2020), with TASER⋆ being slightly better. Similar to the in-domain evaluation results, TASER can effectively reduce the performance gap between the dense retrievers and BM25.
These results further support our hypothesis that more parameter sharing can improve the model robustness for dense retrievers.
On BEIR datasets, we also observe that TASER
models consistently improve over DPRMulti across the board. Notably, TASER⋄and TASER⋆can actually match the performance of BM25 on ArguAna and DBpedia. Interestingly, coCondenser pre-training has mixed results here, *i.e.,* TASER⋆
is only better than TASER⋄ on HotpotQA and on par or worse on other datasets.
## 5 Related Work
Recent seminal work on dense retrieval demonstrates its effectiveness using Transformer-based bi-encoder models by either continual pre-training with an inverse cloze task (Lee et al., 2019) or careful fine-tuning (Karpukhin et al., 2020). One line of follow-up work improves dense retrieval models via various continual pre-training approaches
(Guu et al., 2020; Chang et al., 2020; Izacard et al.,
2021; Gao and Callan, 2022; Oguz et al. ˘ , 2021).
Better contrastive learning objectives are also introduced (Xiong et al., 2020; Qu et al., 2021; Yang et al., 2021). Motivated by the success of augmenting dense models with sparse models, Chen et al.
(2022) combine the dense retriever with a dense lexical model that mimics sparse retrievers. All of the above work focuses on improving the accuracy of bi-encoder dense retrievers, whereas our work tackles the parameter efficiency issue.
Unlike most bi-encoder dense retrievers, which measure the similarity between a question and a passage using their corresponding [CLS] vectors, ColBERT (Khattab and Zaharia, 2020) develops a late-interaction paradigm and measures the similarity via a MaxSim operator that computes the maximum similarity between a token in a sequence and all tokens in the other sequence. Such an architecture has shown promising results in ODQA (Khattab et al., 2021) and on the BEIR benchmark (Santhanam et al., 2022). Our work instead focuses on improving the underlying text encoders, and the MaxSim operator introduced by ColBERT can be applied on top of TASER.

2The R@20 scores are averaged over all relations. More evaluation metrics are reported in Table A4.
Xiong et al. (2021) use the BERT-Siamese architecture for dense retrieval, where all Transformer blocks are shared. Compared with this architecture, TASER is a more effective and general way to increase the parameter efficiency, by interleaving specialized Transformer blocks with shared ones.
## 6 Conclusion
We propose a new parameterization framework, TASER, for improving the efficiency and robustness of dense retrieval for ODQA. It interleaves shared encoder blocks with specialized ones in a single encoder where some sub-networks are task-specific. As the specialized sub-networks are sparsely activated, TASER can provide better parameter efficiency with almost no additional computation cost. Experiments show that TASER substantially outperforms existing fully supervised biencoder dense retrievers on both in-domain and out-of-domain generalization.
## 7 Limitations
In this section, we point out several limitations in this work.
First, our in-domain evaluation experiments focus on passage retrieval for ODQA. While the dense retriever is mostly successful in ODQA, it can be also used in other types of retrieval tasks which may have different input and output format.
For example, the KILT benchmark (Petroni et al.,
2021) provides several knowledge-intensive tasks other than ODQA. The performance of TASER
models trained on such retrieval tasks remain unknown.
Second, compared with traditional sparse vector models like TF-IDF and BM25, the cost of training is an inherent issue of dense retrievers. Although TASER significantly reduces the number of model parameters, the training cost is still high.
Third, in our experiments, we show that the learned routing does not outperform the deterministic routing. This may suggest a better architecture and/or training algorithms for learned routing is needed to fully unleash the power of MoE.
Last, as observed in §4.2, there is still a gap between TASER and BM25 in out-of-domain evaluation. Therefore, how to close this gap will remain a critical topic for future work on dense retrievers.
## References
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. *arXiv:1607.06450* [stat.ML].
Petr Baudiš and Jan Šedivý. 2015. Modeling of the question answering task in the yodaqa system. In Proceedings of the 6th International Conference on Experimental IR Meets Multilinguality, Multimodality, and Interaction - Volume 9283, CLEF'15, page 222–228, Berlin, Heidelberg. Springer-Verlag.
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544, Seattle, Washington, USA. Association for Computational Linguistics.
Wei-Cheng Chang, Felix X. Yu, Yin-Wen Chang, Yiming Yang, and Sanjiv Kumar. 2020. Pre-training tasks for embedding-based large-scale retrieval. In *Proc.*
International Conference on Learning Representations (ICLR).
Xilun Chen, Kushal Lakhotia, Barlas Oğuz, Anchit Gupta, Patrick Lewis, Stan Peshterliev, Yashar Mehdad, Sonal Gupta, and Wen-tau Yih. 2022.
Salient phrase aware dense retrieval: Can a dense retriever imitate a sparse one? arXiv:2110.06918
[cs.CL].
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
William Fedus, Barret Zoph, and Noam Shazeer.
2021. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity.
arXiv:2101.03961 [cs.LG].
Luyu Gao and Jamie Callan. 2022. Unsupervised corpus aware language model pre-training for dense passage retrieval. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2843–2853, Dublin, Ireland. Association for Computational Linguistics.
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. 2020. Retrieval augmented language model pre-training. In *Proceedings of the* 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 3929–3938. PMLR.
Faegheh Hasibi, Fedor Nikolaev, Chenyan Xiong, Krisztian Balog, Svein Erik Bratsberg, Alexander Kotov, and Jamie Callan. 2017. Dbpedia-entity v2: A test collection for entity search. In *Proceedings of the* 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1265–1268, Shinjuku, Tokyo, Japan.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In *Proc. IEEE Conference on Computer Vision* and Pattern Recognition (CVPR), pages 770–778.
Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2021. Unsupervised dense information retrieval with contrastive learning. *arXiv:*
2112.09118 [cs.IR].
Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical reparameterization with gumbel-softmax.
arXiv:1611.01144 [stat.ML].
Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In *Proceedings of the 55th Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611, Vancouver, Canada. Association for Computational Linguistics.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics.
Omar Khattab, Christopher Potts, and Matei Zaharia.
2021. Relevance-guided supervision for OpenQA
with ColBERT. *Transactions of the Association for* Computational Linguistics, 9:929–944.
Omar Khattab and Matei Zaharia. 2020. ColBERT: Efficient and effective passage search via contextualized late interaction over BERT. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '20, page 39–48, New York, NY, USA. Association for Computing Machinery.
Diederik Kingma and Jimmy Ba. 2014. Adam: A
method for stochastic optimization. *arXiv preprint* arXiv:1412.6980.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019a. Natural questions: A benchmark for question answering research. *Transactions of the Association for Computational Linguistics*, 7:453–466.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019b. Natural questions: A benchmark for question answering research. *Transactions of the Association for Computational Linguistics*, 7:452–466.
Rémi Lebret, David Grangier, and Michael Auli. 2016.
Neural text generation from structured data with application to the biography domain. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1203–1213, Austin, Texas. Association for Computational Linguistics.
Kenton Lee, Ming-Wei Chang, and Kristina Toutanova.
2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086–6096. Association for Computational Linguistics.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020.
Retrieval-augmented generation for knowledgeintensive nlp tasks. In *Advances in Neural Information Processing Systems*, volume 33, pages 9459–
9474. Curran Associates, Inc.
Patrick Lewis, Yuxiang Wu, Linqing Liu, Pasquale Minervini, Heinrich Küttler, Aleksandra Piktus, Pontus Stenetorp, and Sebastian Riedel. 2021. PAQ: 65 million probably-asked questions and what you can do with them. *Transactions of the Association for Computational Linguistics*, 9:1098–1115.
Xuegang Ma, Kai Sun, Ronak Pradeep, and Jimmy Lin.
2021. A replication study of dense passage retriever.
arXiv:2104.05740 [cs.CL].
Barlas Oğuz, Kushal Lakhotia, Anchit Gupta, Patrick Lewis, Vladimir Karpukhin, Aleksandra Piktus, Xilun Chen, Sebastian Riedel, Wen-tau Yih, Sonal Gupta, and Yashar Mehdad. 2021. Domain-matched pre-training tasks for dense retrieval.
arXiv:2107.13602 [cs.CL].
Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vladimir Karpukhin, Jean Maillard, Vassilis Plachouras, Tim Rocktäschel, and Sebastian Riedel. 2021. KILT: a benchmark for knowledge intensive language tasks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2523–2544, Online.
Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, and Haifeng Wang. 2021. RocketQA: An optimized training approach to dense passage retrieval for opendomain question answering. In *Proceedings of the* 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5835–5847, Online. Association for Computational Linguistics.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics.
Stephen E. Robertson and Stephen Walker. 1994. Some simple effective approximations to the 2-Poisson model for probabilistic weighted retrieval. In Proc.
Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 232–241, Dublin, Ireland.
Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, Christopher Potts, and Matei Zaharia. 2022. ColBERTv2: Effective and efficient retrieval via lightweight late interaction. In *Proceedings of the* 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3715–3734, Seattle, United States. Association for Computational Linguistics.
Christopher Sciavolino, Zexuan Zhong, Jinhyuk Lee, and Danqi Chen. 2021. Simple entity-centric questions challenge dense retrievers. In *Proceedings of* the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6138–6148, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. 2021. BEIR:
A heterogeneous benchmark for zero-shot evaluation of information retrieval models. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).
James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018.
FEVER: a large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers),
pages 809–819, New Orleans, Louisiana.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Proc. Advances in Neural Information Processing Systems (NeurIPS)*, volume 30.
Henning Wachsmuth, Shahbaz Syed, and Benno Stein.
2018. Retrieval of the best counterargument without prior topic knowledge. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 241–251, Melbourne, Australia.
Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, and Arnold Overwijk. 2020. Approximate nearest neighbor negative contrastive learning for dense text retrieval.
Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N. Bennett, Junaid Ahmed, and Arnold Overwijk. 2021. Approximate nearest neighbor negative contrastive learning for dense text retrieval. In *Proc. International Conference on Learning Representations (ICLR)*.
Nan Yang, Furu Wei, Binxing Jiao, Daxing Jiang, and Linjun Yang. 2021. xMoCo: Cross momentum contrastive learning for open-domain question answering.
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6120–6129, Online. Association for Computational Linguistics.
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering.
In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 2369–2380, Brussels, Belgium.
## A More Routing Mechanisms
In the paper, only input-dependent routing is considered. Here, we provide a more comprehensive study of routing mechanisms. In particular, we introduce three routing mechanisms: the deterministic routing (Det-R) which is used in our main experiments, the sequence-based routing
(Seq-R), and the token-based routing (Tok-R).
Both Seq-R and Tok-R are learned jointly with the task-specific objective.
Specifically, Det-R is the input-dependent routing studied in the main paper where two expert FFN
sub-layers are needed for ODQA retrieval, one for questions and one for passages. In this case, the router determines the expert FFN sub-layer based on whether the input is a question or a passage.
For Seq-R and Tok-R, the router uses a parameterized routing function

$$R(\mathbf{u}) = \mathrm{GumbelSoftmax}(\mathbf{A}\mathbf{u} + \mathbf{c}), \quad (3)$$

where GumbelSoftmax (Jang et al., 2016) outputs an I-dimensional one-hot vector based on the linear projection parameterized by $\mathbf{A} \in \mathbb{R}^{d \times I}$ and $\mathbf{c} \in \mathbb{R}^{I}$, I is the number of expert FFN sub-layers in the specialized Transformer block, and $\mathbf{u} \in \mathbb{R}^{d}$ is the input of the routing function. Here, the routing function is jointly learned with all other parameters using the discrete reparameterization trick.
For Seq-R, routing is performed at the sequence level, and all tokens in a sequence share the same u, which is the FFN input vector h[CLS] representing the special prefix token [CLS]. For Tok-R, the router independently routes each token, *i.e.,* for the j-th token in the sequence, u is set to the corresponding FFN input vector hj .
For Seq-R and Tok-R, to avoid routing all inputs to the same expert FFN sub-layer, we further apply the entropic regularization
$$L_{ent}=-\sum_{i=1}^{I}P(i)\log P(i), \quad (4)$$

where $P(i) = \mathrm{Softmax}(\mathbf{A}\mathbf{h}+\mathbf{c})_i$ is the probability of the i-th expert FFN sub-layer being selected.
Hence, the joint training objective is
$$L_{joint} = L_{sim} + \beta L_{ent}, \quad (5)$$
where β is a scalar hyperparameter. In our work, we fix β = 0.01.
Also, all specialized Transformer blocks use the same number of expert FFN sub-layers for simplicity.
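For concreteness, the learned routing function of Equation 3 can be sketched as follows in PyTorch; the class and argument names are illustrative, and the entropic regularizer of Equation 4 is omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedRouter(nn.Module):
    """Routing function R(u) = GumbelSoftmax(Au + c) over I expert FFN sub-layers."""

    def __init__(self, d_model: int = 768, num_experts: int = 2):
        super().__init__()
        self.proj = nn.Linear(d_model, num_experts)   # learnable A and c

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # hard=True returns a one-hot routing vector with a straight-through gradient,
        # so the expert choice remains trainable end-to-end
        return F.gumbel_softmax(self.proj(u), hard=True)

# Seq-R routes once per sequence using the [CLS] FFN input vector;
# Tok-R would instead call the router on every token vector.
router = LearnedRouter(num_experts=2)
u_cls = torch.randn(8, 768)                # one [CLS] vector per sequence in the batch
routes = router(u_cls)                     # (8, 2) one-hot expert assignments
```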
| Model | I | # Params | Dev | Test |
|---|---|---|---|---|
| DPR | - | 218M | - | 78.4 |
| TASERShared | 1 | 109M | 78.2 | 79.3 |
| TASERDet-R | 2 | 128M | **79.2** | **80.7** |
| TASERSeq-R | 2 | 128M | **79.2** | 80.6 |
| TASERSeq-R | 4 | 166M | 78.4 | 80.1 |
| TASERTok-R | 2 | 128M | 78.5 | 79.8 |
| TASERTok-R | 4 | 166M | 78.5 | 79.8 |
| DPR† | - | 218M | - | 81.3 |
| TASERDet-R† | 2 | 128M | **82.4** | **83.7** |
## B Hard Negative Mining
Recall that in Equation 2 the objective Lsim needs to use a set of negative passages P for each question. There are several ways to construct P. In
(Karpukhin et al., 2020), the best setting uses two negative passages per question: one is the top passage retrieved by BM25 which does not contain the answer but matches most question tokens, and the other is chosen from the gold positive passages of other questions in the same mini-batch. Recent work shows that mining harder negative examples with iterative training can lead to better performance (Xiong et al., 2020; Qu et al., 2021). Hence, in this paper, we also train TASER with hard negative mining. Specifically, we first train a TASER model with negative passages P1, constructed in the same way as in Karpukhin et al. (2020). Then, we use this model to construct P2 by retrieving the top-100 ranked passages for each question, excluding the gold passage. In the single-set training, we combine P1 and P2 to train the final model. In the multi-set training, only P2 is used to train the final model for efficiency considerations.
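A rough sketch of this mining step is given below; the `retrieve` function (assumed to return passage ids ranked by the first-round model) and all other names are hypothetical placeholders, not part of the actual pipeline.

```python
def mine_hard_negatives(questions, gold_passage_ids, retrieve, k=100):
    """Construct P2: for each question, keep the top-k passages retrieved by the
    first-round model, excluding the gold (positive) passage."""
    hard_negatives = {}
    for question, gold_id in zip(questions, gold_passage_ids):
        ranked_ids = retrieve(question, top_k=k + 1)       # ids ranked by the first-round model
        hard_negatives[question] = [pid for pid in ranked_ids if pid != gold_id][:k]
    return hard_negatives
```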
## C Comparing TASER Variants
In this part, we compare different TASER variants discussed in §A by evaluating their performance on NQ under the single-set training setting. We use the bi-encoder dense passage retriever (DPR) from
(Karpukhin et al., 2020) as our baseline. All models
| Model | NQ @20 | NQ @100 | TriviaQA @20 | TriviaQA @100 | WebQ @20 | WebQ @100 | TREC @20 | TREC @100 | SQuAD @20 | SQuAD @100 |
|-------|--------|---------|--------------|---------------|----------|-----------|----------|-----------|-----------|------------|
| BM25(1) | 62.9 | 78.3 | 76.4 | 83.2 | 62.4 | 75.5 | 80.7 | 89.9 | 71.1 | 81.8 |
| *Single-Set Training* | | | | | | | | | | |
| DPR(2) | 78.4 | 85.4 | 79.4 | 85.0 | 73.2 | 81.4 | 79.8 | 89.1 | 63.2 | 77.2 |
| DPR-PAQ(3) | 84.7 | 89.2 | - | - | - | - | - | - | - | - |
| coCondenser(4) | 84.3 | 89.0 | 83.2 | 87.3 | - | - | - | - | - | - |
| *Multi-Set Training (without SQuAD)* | | | | | | | | | | |
| DPR(1) | 79.5 | 86.1 | 78.9 | 84.8 | 75.0 | 83.0 | 88.8 | 93.4 | 52.0 | 67.7 |
| DPR(1) + BM25 | 82.6 | 88.6 | 82.6 | 86.5 | 77.3 | 84.7 | 90.1 | 95.0 | 75.1 | 84.4 |
| xMoCo(5) | 82.5 | 86.3 | 80.1 | 85.7 | 78.2 | 84.8 | 89.4 | 94.1 | 55.9 | 70.1 |
| SPAR-Wiki(6) | 83.0 | 88.8 | 82.6 | 86.7 | 76.0 | 84.4 | 89.9 | 95.2 | 73.0 | 83.6 |
| SPAR-PAQ(6) | 82.7 | 88.6 | 82.5 | 86.9 | 76.3 | 85.2 | 90.3 | 95.4 | 72.9 | 83.7 |
| *Multi-Set Training (with SQuAD)* | | | | | | | | | | |
| DPR† | 80.9 | 86.8 | 79.6 | 85.0 | 74.0 | 83.4 | 88.0 | 94.1 | 63.1 | 77.2 |
| DPR⋄ | 82.5 | 88.0 | 81.8 | 86.4 | 77.8 | 84.7 | 91.2 | 95.5 | 67.1 | 79.8 |
| DPR⋆ | 83.7 | 88.7 | 82.6 | 86.7 | 78.9 | 85.3 | 91.6 | 95.1 | 68.0 | 80.2 |
| TASER⋄ | 83.6 | 88.6 | 82.0 | 86.6 | 77.9 | 85.4 | 91.1 | 95.7 | 69.7 | 81.2 |
| TASER⋄ + BM25 | 83.8 | 88.6 | 83.3 | 87.1 | 78.7 | 85.7 | 91.6 | 95.8 | 77.2 | 86.0 |
| TASER⋆ | 84.9 | 89.2 | 83.4 | 87.1 | 78.9 | 85.4 | 90.8 | 96.0 | 72.9 | 83.4 |
| TASER⋆ + BM25 | 85.0 | 89.2 | 84.0 | 87.5 | 79.6 | 85.8 | 92.1 | 96.0 | 78.0 | 87.0 |

Table A2: Top-20 and Top-100 retrieval accuracy (R@20 / R@100) on five ODQA datasets.
| Model | Num. Parameters |
|---------------------|-------------------|
| DPR | 218M |
| coCondenser | 218M |
| xMoCo | 218M |
| SPAR-Wiki; SPAR-PAQ | 436M |
| DPR-PAQ | 710M |
| TASER⋄ ; TASER⋆ | 128M |
Table A3: Number of parameters for models reported in Table A2.
All models, including DPR, are initialized from BERT-base (Devlin et al., 2019).³ All TASER models are fine-tuned for up to 40 epochs with Adam (Kingma and Ba, 2014) using a learning rate chosen from {3e−5, 5e−5}. Model selection is performed on the development set following Karpukhin et al. (2020).

³Without further specification, we only consider the uncased version throughout the paper.
Results are summarized in Table A1.
TASERShared is a variant without any task-aware specialization, *i.e.,* there is a single expert FFN sub-layer in the specialized Transformer block and the router is a no-op. As shown in Table A1, it outperforms DPR while using only 50% of the parameters.
Task-aware specialization brings extra improvements with little increase in model size. Comparing the two learned routing mechanisms, Seq-R achieves slightly better results than Tok-R, indicating that specializing FFNs based on sequence-level features such as sequence types is more effective for ODQA dense retrieval. This is consistent with the positive results for Det-R, which consists of two expert FFNs specialized for questions and passages, respectively. We also find that adding more expert FFNs does not necessarily bring extra gains, and I = 2 is sufficient for NQ. Consistent with the results on DPR, the hard negative mining described in §B can further boost TASERDet-R performance by 3.0 points in test set R@20.
| Model | Macro R@20 | Micro R@20 | Micro R@100 |
|----------|------------|------------|-------------|
| BM25 | 71.2 | 70.8 | 79.2 |
| DPRMulti | 56.7 | 56.6 | 70.1 |
| TASER⋄ | 64.7 | 64.3 | 76.2 |
| TASER⋆ | 66.7 | 66.2 | 77.9 |
Since Det-R achieves the best R@20, our subsequent experiments focus on this simple and effective specialization strategy. In the remainder of the paper, we drop the subscript and simply use TASER to denote models using Det-R.
## D Details About In-Domain Evaluations
All TASER models are fine-tuned for up to 40 epochs with Adam (Kingma and Ba, 2014) using a learning rate chosen from {3e−5, 5e−5}. In our experiments, hard negatives are mined from NQ, TriviaQA and WebQ. We combine the NQ and TriviaQA development sets for model selection.
We also present results of hybrid models that linearly combine the dense retrieval score with the BM25 score,

$$\mathrm{sim}(q, p) + \alpha \cdot \mathrm{BM25}(q, p). \quad (6)$$
We search the weight α in the range [0.5, 2.0] with an interval of 0.1 based on the combined development set mentioned above. Unlike Ma et al. (2021), we use a single α for all five datasets instead of dataset-specific weights, so that the resulting hybrid retriever still complies with the multi-set setting in a strict sense. The same normalization technique described in Ma et al. (2021) is used.
Similar to Karpukhin et al. (2020) and Ma et al. (2021), we separately retrieve K′ candidates from TASER and BM25, and then retain the top K based on the hybrid scores, though we use a smaller K′ = 100.
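The following sketch illustrates the hybrid scoring step. It assumes the two score dictionaries are already normalized (the normalization of Ma et al. (2021) is not reproduced here) and treats a passage missing from one candidate list as contributing a zero score, which is a simplification of ours.

```python
from typing import Dict, List


def hybrid_top_k(
    dense_scores: Dict[str, float],  # passage id -> sim(q, p), top-K' from the dense retriever
    bm25_scores: Dict[str, float],   # passage id -> BM25(q, p), top-K' from BM25
    alpha: float,
    k: int = 100,
) -> List[str]:
    """Rank the union of both candidate sets by sim(q, p) + alpha * BM25(q, p)."""
    candidates = set(dense_scores) | set(bm25_scores)
    scored = {
        pid: dense_scores.get(pid, 0.0) + alpha * bm25_scores.get(pid, 0.0)
        for pid in candidates
    }
    return sorted(scored, key=scored.get, reverse=True)[:k]


# The single weight alpha is selected on the combined dev set by a grid
# search over [0.5, 2.0] with step 0.1.
ALPHA_GRID = [round(0.5 + 0.1 * i, 1) for i in range(16)]  # 0.5, 0.6, ..., 2.0
```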
We used 16 V100-32GB GPUs and it took 9 hours to train our models.
## E Dataset Licenses And Intended Use
All datasets used in our experiments are English datasets. The datasets used in this paper are released under the following licenses.
- NaturalQuestions (Kwiatkowski et al., 2019b): CC-BY-SA 3.0 License
- TriviaQA (Joshi et al., 2017): non-commercial research purposes only
- WebQuestions (Berant et al., 2013): CC-BY 4.0 License
- SQuAD (Rajpurkar et al., 2016): CC-BY-SA 4.0 License
- EntityQuestions (Sciavolino et al., 2021): MIT License
- ArguAna (Wachsmuth et al., 2018): not specified
- DBPedia (Hasibi et al., 2017): not specified
- FEVER (Thorne et al., 2018): license terms described on the applicable Wikipedia article pages, and CC BY-SA 3.0 License
- HotpotQA (Yang et al., 2018): CC BY-SA 4.0 License
Our use is consistent with their intended use.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 7 A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4.
✓ B1. Did you cite the creators of artifacts you used?
Section 4.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Appendix E.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Appendix E.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
These are widely used datasets for benchmarking.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Appendix E.
✗ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
We followed the exact same setting as Karpukhin et al., (2020) and explicitly mentioned this in the paper.
## C ✓ **Did You Run Computational Experiments?** Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Table A1 and Table A3. Appendix D.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix D.
✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Our results are from a single run and it is transparent from the description.
✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
We will release the model and code.
## D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |