{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T06:02:59.125029Z" }, "title": "Compositional Generalization for Kinship Prediction through Data Augmentation", "authors": [ { "first": "Kangda", "middle": [], "last": "Wei", "suffix": "", "affiliation": { "laboratory": "", "institution": "UNC Chapel Hill", "location": {} }, "email": "kangda@live.unc.edu" }, { "first": "Sayan", "middle": [], "last": "Ghosh", "suffix": "", "affiliation": { "laboratory": "", "institution": "UNC Chapel Hill", "location": {} }, "email": "sayghosh@cs.unc.edu" }, { "first": "Shashank", "middle": [], "last": "Srivastava", "suffix": "", "affiliation": { "laboratory": "", "institution": "UNC Chapel Hill", "location": {} }, "email": "ssrivastava@cs.unc.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Transformer-based models have shown promising performance in numerous NLP tasks. However, recent work has exposed the limitations of such models in compositional generalization, which requires models to generalize to novel compositions of known concepts. In this work, we explore two strategies for compositional generalization on the task of kinship prediction from stories: (1) data augmentation and (2) predicting and using intermediate structured representations (in the form of kinship graphs). Our experiments show that data augmentation boosts generalization performance by around 20% on average relative to a baseline model from prior work that uses neither strategy. However, predicting and using intermediate kinship graphs deteriorates the generalization of kinship prediction by around 50% on average relative to models that only leverage data augmentation.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Transformer-based models have shown promising performance in numerous NLP tasks. 
However, recent work has exposed the limitations of such models in compositional generalization, which requires models to generalize to novel compositions of known concepts. In this work, we explore two strategies for compositional generalization on the task of kinship prediction from stories: (1) data augmentation and (2) predicting and using intermediate structured representations (in the form of kinship graphs). Our experiments show that data augmentation boosts generalization performance by around 20% on average relative to a baseline model from prior work that uses neither strategy. However, predicting and using intermediate kinship graphs deteriorates the generalization of kinship prediction by around 50% on average relative to models that only leverage data augmentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Transformer-based large language models (Vaswani et al., 2017) have achieved state-of-the-art results on numerous NLP tasks such as question answering, reading comprehension, and relational reasoning", "cite_spans": [ { "start": 40, "end": 62, "text": "(Vaswani et al., 2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "that require both syntactic and semantic understanding of language. However, recent works (Bahdanau et al., 2018; Lake and Baroni, 2018; Gururangan et al., 2018; Kaushik and Lipton, 2018) have shown that these transformer-based models struggle on tasks that require compositional generalization, as they often perform surface-level reasoning instead of understanding the underlying concepts and learning to generalize and reason over them. 
On the other hand, neural models that encode the structure of the data (such as Graph Attention Networks (Veli\u010dkovi\u0107 et al., 2017) ) instead of consuming it in an unstructured format Figure 1 : To improve the compositional generalization of models for the task of kinship prediction between a pair of queried entities (e.g. predicting the relation r 12 given the entities e 1 and e 2 ) from a story (S), we present two strategies: (1) data augmentation and (2) predicting and using an intermediate structured representation in the form of kinship graphs. For data augmentation (the first strategy), we utilize the existing ground-truth graph (G) to generate more pairs of target relations and query entities (such as predicting r 13 using e 1 and e 3 ) that do not need compositional inference to obtain the answer. In our second strategy, using our augmented data, we predict an intermediate kinship graph and reason over it jointly with the story to predict the relation between the queried pair of entities.", "cite_spans": [ { "start": 90, "end": 113, "text": "(Bahdanau et al., 2018;", "ref_id": "BIBREF1" }, { "start": 114, "end": 136, "text": "Lake and Baroni, 2018;", "ref_id": "BIBREF5" }, { "start": 137, "end": 161, "text": "Gururangan et al., 2018;", "ref_id": "BIBREF3" }, { "start": 162, "end": 187, "text": "Kaushik and Lipton, 2018)", "ref_id": "BIBREF4" }, { "start": 572, "end": 597, "text": "(Veli\u010dkovi\u0107 et al., 2017)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 650, "end": 658, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "show better compositional generalization (Sinha et al., 2019) .", "cite_spans": [ { "start": 41, "end": 61, "text": "(Sinha et al., 2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we explore two strategies to improve the compositional generalization of models for the task of kinship 
prediction from stories. In our first strategy, we explore the utility of data augmentation for compositional generalization. Recent works have shown data augmentation to be an effective strategy for improving model performance on different NLP tasks such as Neural Machine Translation (Fernando and Ranathunga, 2022) , semantic parsing (Yang et al., 2022) , and text summarization (Wan and Bansal, 2022) . Our data augmentation strategy focuses on improving a model's ability to extract relations that are explicitly mentioned in the text. In our second strategy, we explore the utility of predicting an intermediate structured representation of the story (as a kinship graph) and then jointly reasoning over it along with the story text for the task of kinship prediction. Figure 1 provides an example of this task and also illustrates the two strategies. The strategies are explained in detail in \u00a73.", "cite_spans": [ { "start": 407, "end": 438, "text": "(Fernando and Ranathunga, 2022)", "ref_id": "BIBREF2" }, { "start": 458, "end": 477, "text": "(Yang et al., 2022)", "ref_id": "BIBREF14" }, { "start": 503, "end": 525, "text": "(Wan and Bansal, 2022)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 896, "end": 902, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We evaluate the utility of our strategies on a kinship prediction benchmark, CLUTRR (Sinha et al., 2019) . Overall, we find that data augmentation is helpful and boosts generalization performance (accuracy of predicting the correct relation) by around 20% on average relative to a baseline not using these strategies. However, using intermediate kinship graphs deteriorates generalization performance by almost 50% compared to the model that only uses data augmentation. Our code is available at: https://github.com/WeiKangda/data-aug-clutrr. 
Figure 2: SSD model illustration: we first obtain the graph embedding and the text embedding separately using R-GCN and RoBERTa, respectively, then add the embeddings together and feed the result through a classification layer to get the final output.", "cite_spans": [ { "start": 84, "end": 104, "text": "(Sinha et al., 2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Each example in CLUTRR (Sinha et al., 2019 ) is a tuple of the form (S, G, e 1 , e 2 ), where S represents the story/passage describing the entities (fictional characters) and the relations between them, G represents the kinship graph, and e 1 and e 2 represent the pair of query entities (whose relationship is being queried). To clarify these notations, we have illustrated the values of (S, G, e 1 , e 2 ) corresponding to our running example in Figure 1 . Further, each kinship graph can be considered a collection of entity nodes (E) and relation edges (R) (as illustrated in Figure 1) , where E = (e 1 , e 2 , e 3 ) and R = (r 12 , r 13 , r 32 ). Note that the kinship graph mentions only the relationships clearly stated in the story. For example, in Figure 1 , the relationships of the entity pairs (e 1 , e 3 ) and (e 3 , e 2 ) are explicitly mentioned in story S. The learning task is to predict the relationship between the two query entities. This is framed as a classification task over 20 possible relationship types in the dataset. The number of composition operations/steps required to infer the relationship between the query entities is denoted by k. 
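As an illustrative sketch (not part of the original CLUTRR tooling; the edge-list format is an assumption), k can be computed as the shortest-path length between the query entities over the explicitly stated kinship edges:

```python
from collections import deque

def composition_steps(edges, source, target):
    """Number of composition operations k: the shortest-path length
    between two entities over the explicitly stated kinship edges
    (treated as undirected), or None if they are not connected."""
    adj = {}
    for head, tail, _relation in edges:
        adj.setdefault(head, set()).add(tail)
        adj.setdefault(tail, set()).add(head)
    seen, frontier = {source}, deque([(source, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == target:
            return dist
        for neighbor in adj.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, dist + 1))
    return None

# Running example of Figure 1: r13 (father) and r32 (sister) are explicit.
edges = [("e1", "e3", "father"), ("e3", "e2", "sister")]
print(composition_steps(edges, "e1", "e2"))  # 2
```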
For example, in Figure 1 , k = 2 for inferring the relationship between e 1 and e 2 , as there are 2 composition operations needed to get the final result.", "cite_spans": [ { "start": 23, "end": 42, "text": "(Sinha et al., 2019", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 448, "end": 456, "text": "Figure 1", "ref_id": null }, { "start": 586, "end": 595, "text": "Figure 1)", "ref_id": null }, { "start": 763, "end": 771, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Problem Setup", "sec_num": "2" }, { "text": "In this work, we empirically evaluate the utility of data augmentation and intermediate structured representations for compositional generalization on the task of kinship prediction from a story. Next, we formally describe our model, SSD, where SSD stands for Systematic Compositional Generalization with Symbolic Representation and Data Augmentation for Kinship Prediction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Setup", "sec_num": "2" }, { "text": "We first describe our base model, followed by a description of the two strategies explored in this work -(1) data augmentation and (2) predicting and using intermediate kinship graphs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3" }, { "text": "Our base model, SSD (base), is adapted from the RoBERTa-based (Liu et al., 2019) baseline presented in Sinha et al. (2019) . However, unlike Sinha et al. (2019) , we allow fine-tuning of the RoBERTa transformer layers. Grounded in the running example, given S, e 1 , and e 2 , SSD (base) predicts the relation r 12 between e 1 and e 2 using the following three steps: 1. Obtaining the story representation: This is the", "cite_spans": [ { "start": 61, "end": 79, "text": "(Liu et al., 2019)", "ref_id": null }, { "start": 102, "end": 121, "text": "Sinha et al. 
(2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3" }, { "text": "[CLS] representation obtained by doing a forward pass of RoBERTa on the story, S. 2. Obtaining entity representations: During training, each entity (such as e 1 , e 2 , etc.) is replaced by a unique number in the story (following Sinha et al. (2019) ). We obtain the representation for each entity by averaging the tokens from the last transformer layer of RoBERTa corresponding to the positions where the entity appears in the story. 3. Classifier for predicting the relation: This is a multiclass classification task (with the number of classes equal to the number of possible relationships in the dataset) using a linear classifier that takes as input the concatenation of the representations of the story and the two query entities.", "cite_spans": [ { "start": 221, "end": 240, "text": "Sinha et al. (2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3" }, { "text": "For each example in our training set, we augment the training set further by considering the pairs of entities for which the relation is explicitly mentioned in the story, thus requiring no composition operations. We illustrate this data augmentation procedure using our running example in Figure 1 . We add the query entity pairs (e 1 , e 3 ) and (e 2 , e 3 ) to the training set, as the relationships for these pairs of entities are explicitly mentioned in the story.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Augmentation", "sec_num": "3.1" }, { "text": "To predict the relation between the pair of entities mentioned in a query, the model has to operate in two stages: (1) extracting the relations mentioned explicitly in the story and (2) performing compositional reasoning over the extracted relations. 
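A minimal sketch of this augmentation step (the function and tuple layout are illustrative assumptions, not the released implementation):

```python
def augment_examples(story, gold_graph, query_pair, target_relation):
    """Expand one (S, G, e1, e2) example with zero-composition queries:
    one extra training example per edge of the gold graph, i.e., per
    entity pair whose relation is stated explicitly in the story."""
    examples = [(story, query_pair, target_relation)]
    for head, tail, relation in gold_graph:
        if (head, tail) != query_pair:
            examples.append((story, (head, tail), relation))
    return examples

# Running example of Figure 1: two explicitly stated relations are added.
story = "Jenny and her father Rob went to the park. Rob's sister Lisa ..."
gold_graph = [("e1", "e3", "father"), ("e3", "e2", "sister")]
augmented = augment_examples(story, gold_graph, ("e1", "e2"), "r12")
print(len(augmented))  # 3
```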
This data augmentation procedure helps ensure that the model becomes better at extracting the relations that are mentioned explicitly in the story, thus not propagating errors from the relation extraction stage to the compositional reasoning stage when predicting the target relation between the queried pair of entities. This model is denoted as SSD (data aug) henceforth. For inference using SSD (data aug), one needs to provide all the pairs of query entities whose relations can be extracted directly from the text of the story, in addition to the actual pair of query entities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Augmentation", "sec_num": "3.1" }, { "text": "Prior work has found that models using a structured representation of stories in the form of kinship graphs perform better on this task than transformer models trained only on stories. However, it is unreasonable to assume that we will always have access to gold kinship graphs during inference for the task of kinship prediction from narratives or stories. Hence, we empirically evaluate the utility of predicting an intermediate kinship graph and then jointly reasoning over the predicted graph and the input story to predict the relation between the queried pair of entities. We illustrate our strategy using the running example in Figure 1 . We form the intermediate kinship graph G by predicting the relations between the entities whose relations are explicitly mentioned in the story. We predict the relations to form this intermediate graph by using a linear layer over representations of the story and the pair of query entities obtained using a RoBERTa model. 
Next, we obtain two representations of the target relation based on (1) text: using a linear layer over representations of the story and the pair of query entities obtained using a RoBERTa model and (2) graph: using a linear layer over representations of the kinship graph and the query entities obtained using R-GCN (Schlichtkrull et al., 2017) (see Appendix for details). We concatenate these two target relation representations and use another linear layer to predict the target relation. This model is denoted as SSD (graph) henceforth.", "cite_spans": [ { "start": 1264, "end": 1292, "text": "(Schlichtkrull et al., 2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Intermediate Kinship Graphs", "sec_num": "3.2" }, { "text": "Similar to SSD (data aug), for inference using SSD (graph) one needs to provide all the pairs of query entities whose relations can be extracted directly from the text of the story, in addition to the actual pair of query entities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Intermediate Kinship Graphs", "sec_num": "3.2" }, { "text": "All models are trained using cross-entropy loss. Every model is trained for 40 epochs with a learning rate of 5e-6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "4" }, { "text": "We consider the RoBERTa-based model in Sinha et al. (2019) as our baseline. Note that in the baseline, the transformer layers of RoBERTa are not fine-tuned. For all our experiments, we report the accuracy of predicting the relation between the queried pair of entities. Further, following Sinha et al. (2019), we report the accuracy over multiple test sets, where each test set is characterized by k, the number of composition operations/steps required to find the relation between the queried pair of entities. For example, in Figure 1 , the number of composition steps (k) is 2. In the test sets of CLUTRR, k varies from 2 to 10. 
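This per-k evaluation can be sketched as follows (a hypothetical helper, not from the released code):

```python
def accuracy_by_k(predictions):
    """Relation-prediction accuracy grouped by k, the number of
    composition steps characterizing each CLUTRR test set (2..10).
    `predictions` holds (k, predicted_relation, gold_relation) triples."""
    correct, total = {}, {}
    for k, predicted, gold in predictions:
        total[k] = total.get(k, 0) + 1
        correct[k] = correct.get(k, 0) + (predicted == gold)
    return {k: correct[k] / total[k] for k in sorted(total)}

predictions = [(2, "aunt", "aunt"), (2, "uncle", "aunt"), (3, "sister", "sister")]
print(accuracy_by_k(predictions))  # {2: 0.5, 3: 1.0}
```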
Figure 3 shows the accuracy of different variants of SSD on the test sets of CLUTRR. We consider two settings, where SSD is trained on data with (1) k = 2, 3 and (2) k = 2, 3, 4. Irrespective of the training data complexity (in terms of k), we observe that SSD (data aug) outperforms the baseline. Notably, we see improvements even when k = 10 at test time, showing the utility of data augmentation for improving the generalizability of the models.", "cite_spans": [ { "start": 39, "end": 58, "text": "Sinha et al. (2019)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 624, "end": 632, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Baseline and Evaluation Metrics", "sec_num": "4.1" }, { "text": "While data augmentation shows promise, we do not see any improvements when predicting and reasoning jointly over the intermediate kinship graph. Rather, the performance of the models drops significantly when we predict the relation conditioned on the story and the intermediate kinship graph. This is counter-intuitive, as we hypothesized that the intermediate kinship graph (which is structured) would further aid the model in making compositions. As one possible reason for this, we hypothesize that our method of fusing representations from the two modalities, story and graph, might be sub-optimal, resulting in the failure. Future work can explicitly look into devising better techniques for this fusion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating compositional generalization", "sec_num": "4.2" }, { "text": "Generalization with noisy inputs: We also evaluate the models with noisy train and noisy test sets of CLUTRR, following Sinha et al. (2019) . We explore the following three noisy data settings shown in Figure 4 :", "cite_spans": [ { "start": 119, "end": 138, "text": "Sinha et al. 
(2019)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 201, "end": 209, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Evaluating compositional generalization", "sec_num": "4.2" }, { "text": "\u2022 Supporting facts: There are two reasoning paths, p c and p n , that can lead to the correct answer. These two paths have the same beginning and ending nodes, but p c is shorter than p n (smaller k). \u2022 Irrelevant facts: p n , the path that contains the irrelevant facts, shares the same beginning node with p c , which leads to the correct answer. p n can be seen as a branch of the graph that does not lead to the correct answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating compositional generalization", "sec_num": "4.2" }, { "text": "\u2022 Disconnected facts: p n , the path that contains the disconnected facts, can be treated as a separate graph that is disconnected from the main story containing the reasoning path p c , which leads to the correct answer. Table 1 shows the results of the different SSD variants when evaluated on the noisy test sets. Model performance decreases as the number of deduction steps required (k) increases, which is consistent with the other experiments' results. We also notice that the models SSD (base) and SSD (graph) tend to perform better on graphs that contain supporting facts, irrelevant facts, and disconnected facts compared to graphs that are free of noise but require the same number of composition operations (k) to predict the target relation. 
This suggests that SSD is good at identifying useful and relevant information from the graph, and that the extra information in the noisy inputs improves the models' performance.", "cite_spans": [], "ref_spans": [ { "start": 231, "end": 238, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Supporting Facts Irrelevant Facts Disconnected Facts", "sec_num": null }, { "text": "For data augmentation, and also for predicting the intermediate kinship graphs, we need additional annotation to identify entity pairs whose relationship is explicitly mentioned in the text. While there can be heuristic approaches to estimate such entity pairs (for example, the set of all distinct entity pairs that appear in the same sentence), in this work we re-purpose the gold kinship graphs to obtain this annotation. Realistically, having gold kinship graphs Table 1 : Testing SSD (base) and SSD (graph) performance when training on story graphs with or without noisy inputs. The integer after the period represents the number of steps required to infer the relationship between the query entities (k, as defined in \u00a72), and the integer before the period has the following meaning, given by the original CLUTRR paper (Sinha et al., 2019) : 1 = free of noise; 2 = with supporting facts; 3 = with irrelevant facts; 4 = with disconnected facts. Figure 5: Comparison of model performance when additional supervision (through data augmentation and intermediate kinship graphs) is only available for 1% and 10% of the data and the rest is trained without additional supervision.", "cite_spans": [ { "start": 833, "end": 853, "text": "(Sinha et al., 2019)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 458, "end": 465, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Varying the amount of additional annotation", "sec_num": "4.3" }, { "text": "for all the training data might not be feasible. 
In this section, we empirically explore how much performance improvement we would achieve if we had access to only 1% (and 10%) of the gold kinship graphs to obtain the additional annotation of entity pairs for data augmentation. Figure 5 shows that our assumption is reasonable, as allowing additional supervision for only 10% of the training data already achieves decent accuracy.", "cite_spans": [], "ref_spans": [ { "start": 274, "end": 282, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Varying the amount of additional annotation", "sec_num": "4.3" }, { "text": "Next, we study the effect of reducing the size of the training dataset and evaluate the effectiveness of our strategies in this setting. We reduce the training data size gradually by an order of magnitude at a time and form two smaller training splits with sizes of around 1000 and 100 samples. Figure 6 and Figure 7 show the results of our proposed model on the standard (no-noise) CLUTRR test datasets as we reduce the overall size of our training datasets with k=2,3 and k=2,3,4 respectively. We find that even in the low data regime, data augmentation leads to improvements.", "cite_spans": [], "ref_spans": [ { "start": 274, "end": 282, "text": "Figure 6", "ref_id": null }, { "start": 287, "end": 295, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "Low data regime", "sec_num": "4.4" }, { "text": "In this paper, we present SSD to empirically evaluate the utility of two strategies, (1) data augmentation and (2) predicting and using intermediate kinship graphs, for the compositional generalization of transformer-based models on the task of kinship prediction from a story. While data augmentation boosts the performance of our model, using intermediate kinship graphs leads to a drop in overall performance. Data augmentation is fruitful even when additional supervision in the form of ground-truth kinship graphs is present for only a limited set of examples. 
Future work can explore better methods to fuse the information from the intermediate kinship graph and the story, instead of the simple concatenation used in this work. Figure 6 : Low data regime performance of different settings for RoBERTa when trained on k=2,3. Use of augmented data from the ground-truth kinship graph boosts accuracy even when the overall size of the training data is reduced. Figure 7: Low data regime performance of different settings for RoBERTa when trained on k=2,3,4. Use of augmented data from the ground-truth kinship graph boosts accuracy even when the overall size of the training data is reduced.", "cite_spans": [], "ref_spans": [ { "start": 732, "end": 740, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [ { "text": "A Description of models used to encode and reason over the intermediate kinship graph", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix", "sec_num": null }, { "text": "The formula for Relational Graph Convolutional Networks (R-GCN) we used is h^{l+1}_i = \sigma ( W^l_0 h^l_i + \sum_{r \in R} \sum_{j \in N^r_i} (1 / c_{i,r}) W^l_r h^l_j ), where W^l_0 h^l_i gives special treatment to the self-connection, r represents the relation type, j ranges over the neighbor nodes of node i with relation r, c_{i,r} is a normalization constant, and W^l_r is the projection matrix for each relation type. In our setting, we have three R-GCN (Schlichtkrull et al., 2017) layers. h is the hidden representation of an entity in the graph and r is a kinship-relation type that belongs to the set R, which contains all possible relations.", "cite_spans": [ { "start": 314, "end": 342, "text": "(Schlichtkrull et al., 2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "A.1 R-GCN", "sec_num": null }, { "text": "We utilize highway connections (Srivastava et al., 2015) between R-GCN (Schlichtkrull et al., 2017) layers: h^{l+1}_i = g_i \odot \hat{h}_i + (1 - g_i) \odot h^l_i with gate g_i = \sigma(W_{hw} h^l_i), where W_{hw} is a linear layer, and \odot denotes element-wise multiplication. 
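A dependency-free sketch of one R-GCN layer followed by a highway connection (the sigmoid-gated form and the mean normalization follow the standard formulations in the cited papers; the toy identity weights and ReLU choice are illustrative assumptions):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def rgcn_layer(h, edges, W0, Wr):
    """One R-GCN layer: node i receives W0 @ h[i] for its self-connection
    plus, for each relation type r, the mean over its r-neighbors j of
    Wr[r] @ h[j] (the 1/c normalization), followed by a ReLU."""
    out = {}
    for i, hi in h.items():
        message = matvec(W0, hi)
        neighbors_by_rel = {}
        for head, tail, rel in edges:
            if tail == i:
                neighbors_by_rel.setdefault(rel, []).append(head)
        for rel, neighbors in neighbors_by_rel.items():
            for j in neighbors:
                contrib = matvec(Wr[rel], h[j])
                message = [m + c / len(neighbors) for m, c in zip(message, contrib)]
        out[i] = [max(0.0, m) for m in message]
    return out

def highway(h_prev, h_new, W_hw):
    """Highway connection between layers: per-node gate g = sigmoid(W_hw @ h),
    output g * h_new + (1 - g) * h_prev, all element-wise."""
    out = {}
    for i in h_prev:
        gate = [sigmoid(x) for x in matvec(W_hw, h_prev[i])]
        out[i] = [g * n + (1.0 - g) * p
                  for g, n, p in zip(gate, h_new[i], h_prev[i])]
    return out

# Toy usage: identity weight matrices and one "father" edge from e3 into e1.
I = [[1.0, 0.0], [0.0, 1.0]]
h = {"e1": [1.0, 0.0], "e3": [0.0, 1.0]}
h_new = rgcn_layer(h, [("e3", "e1", "father")], I, {"father": I})
print(h_new["e1"])  # [1.0, 1.0]
```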
h^l_i is the entity representation of a node in the graph from the previous layer, and \u0125_i is the entity representation of the node acquired by passing h^l_i to an R-GCN (Schlichtkrull et al., 2017) layer.", "cite_spans": [ { "start": 31, "end": 56, "text": "(Srivastava et al., 2015)", "ref_id": null }, { "start": 71, "end": 99, "text": "(Schlichtkrull et al., 2017)", "ref_id": null }, { "start": 361, "end": 389, "text": "(Schlichtkrull et al., 2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "A.2 Highway Connection", "sec_num": null } ], "bib_entries": { "BIBREF1": { "ref_id": "b1", "title": "Systematic generalization: what is required and can it be learned? arXiv preprint", "authors": [ { "first": "Shikhar", "middle": [], "last": "References Dzmitry Bahdanau", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Murty", "suffix": "" }, { "first": "Thien", "middle": [], "last": "Noukhovitch", "suffix": "" }, { "first": "Harm", "middle": [], "last": "Huu Nguyen", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "De Vries", "suffix": "" }, { "first": "", "middle": [], "last": "Courville", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1811.12889" ] }, "num": null, "urls": [], "raw_text": "References Dzmitry Bahdanau, Shikhar Murty, Michael Noukhovitch, Thien Huu Nguyen, Harm de Vries, and Aaron Courville. 2018. Systematic generalization: what is required and can it be learned? 
arXiv preprint arXiv:1811.12889.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Data augmentation to address out-of-vocabulary problem in low-resource sinhala-english neural machine translation", "authors": [ { "first": "Aloka", "middle": [], "last": "Fernando", "suffix": "" }, { "first": "Surangika", "middle": [], "last": "Ranathunga", "suffix": "" } ], "year": 2022, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.48550/ARXIV.2205.08722" ] }, "num": null, "urls": [], "raw_text": "Aloka Fernando and Surangika Ranathunga. 2022. Data augmentation to address out-of-vocabulary problem in low-resource sinhala-english neural ma- chine translation.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Annotation artifacts in natural language inference data", "authors": [ { "first": "Swabha", "middle": [], "last": "Suchin Gururangan", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Swayamdipta", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Levy", "suffix": "" }, { "first": "", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "R", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "Noah A", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1803.02324" ] }, "num": null, "urls": [], "raw_text": "Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R Bowman, and Noah A Smith. 2018. Annotation artifacts in natural language inference data. arXiv preprint arXiv:1803.02324.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "How much reading does reading comprehension require? 
a critical investigation of popular benchmarks", "authors": [ { "first": "Divyansh", "middle": [], "last": "Kaushik", "suffix": "" }, { "first": "", "middle": [], "last": "Zachary C Lipton", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1808.04926" ] }, "num": null, "urls": [], "raw_text": "Divyansh Kaushik and Zachary C Lipton. 2018. How much reading does reading comprehension re- quire? a critical investigation of popular bench- marks. arXiv preprint arXiv:1808.04926.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks", "authors": [ { "first": "Brenden", "middle": [], "last": "Lake", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2018, "venue": "International conference on machine learning", "volume": "", "issue": "", "pages": "2873--2882", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brenden Lake and Marco Baroni. 2018. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In In- ternational conference on machine learning, pages 2873-2882. 
PMLR.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Modeling relational data with graph convolutional networks", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.48550/ARXIV.1703.06103" ] }, "num": null, "urls": [], "raw_text": "Modeling relational data with graph convolu- tional networks.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "CLUTRR: A diagnostic benchmark for inductive reasoning from text", "authors": [ { "first": "Koustuv", "middle": [], "last": "Sinha", "suffix": "" }, { "first": "Shagun", "middle": [], "last": "Sodhani", "suffix": "" }, { "first": "Jin", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Joelle", "middle": [], "last": "Pineau", "suffix": "" }, { "first": "William", "middle": [ "L" ], "last": "Hamilton", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "4506--4515", "other_ids": { "DOI": [ "10.18653/v1/D19-1458" ] }, "num": null, "urls": [], "raw_text": "Koustuv Sinha, Shagun Sodhani, Jin Dong, Joelle Pineau, and William L. Hamilton. 2019. CLUTRR: A diagnostic benchmark for inductive reasoning from text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 4506-4515, Hong Kong, China. 
Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Graph attention networks", "authors": [ { "first": "Petar", "middle": [], "last": "Veli\u010dkovi\u0107", "suffix": "" }, { "first": "Guillem", "middle": [], "last": "Cucurull", "suffix": "" }, { "first": "Arantxa", "middle": [], "last": "Casanova", "suffix": "" }, { "first": "Adriana", "middle": [], "last": "Romero", "suffix": "" }, { "first": "Pietro", "middle": [], "last": "Lio", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1710.10903" ] }, "num": null, "urls": [], "raw_text": "Petar Veli\u010dkovi\u0107, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Factpegasus: Factuality-aware pre-training and fine-tuning for abstractive summarization", "authors": [ { "first": "David", "middle": [], "last": "Wan", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2022, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.48550/ARXIV.2205.07830" ] }, "num": null, "urls": [], "raw_text": "David Wan and Mohit Bansal. 2022. 
Factpegasus: Factuality-aware pre-training and fine-tuning for ab- stractive summarization.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Addressing resource and privacy constraints in semantic parsing through data augmentation", "authors": [ { "first": "Kevin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Olivia", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Charles", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Shin", "suffix": "" }, { "first": "Subhro", "middle": [], "last": "Roy", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2022, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.48550/ARXIV.2205.08675" ] }, "num": null, "urls": [], "raw_text": "Kevin Yang, Olivia Deng, Charles Chen, Richard Shin, Subhro Roy, and Benjamin Van Durme. 2022. Ad- dressing resource and privacy constraints in seman- tic parsing through data augmentation.", "links": null } }, "ref_entries": { "FIGREF1": { "text": "compositional generalization performance of different models when trained on k = 2, 3 and k = 2, 3, 4. Our presented strategies boost accuracy even when the number of composition steps (k) is 10.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "Categories of Noisy Inputs. The query is finding the relationship between entity A and entity C.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF3": { "text": ") with 0%, 1%, 10%, and 100% additional annotation of k=2, k=3, and k=4", "num": null, "uris": null, "type_str": "figure" }, "FIGREF5": { "text": "using 10% data of k=2, k=3, and k=4 RoBERTa SSD(base) SSD(data aug)", "num": null, "uris": null, "type_str": "figure" }, "TABREF0": { "num": null, "text": "Jenny and her father Rob went to the park. Rob's sister Lisa was also happy to see her home, back from college.", "type_str": "table", "content": "
Story (S): Jenny and her father Rob; Rob's sister Lisa. Graph G: e 1 = Jenny, e 3 = Rob, e 2 = Lisa; edges r 13 : father, r 32 : sister; query r 12 : ?
Task: S + (e 1 , e 2 ) -> r 12
Data Augmentation: S + (e 1 , e 2 ), (e 1 , e 3 ), (e 3 , e 2 ) -> r 12 , r 13 , r 32
Intermediate Kinship Graph: S + (e 1 , e 2 ), (e 1 , e 3 ), (e 3 , e 2 ) -> intermediate graph over e 1 , e 2 , e 3 -> r 12
", "html": null } } } }