{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:02:56.788318Z"
},
"title": "Uncovering Surprising Event Boundaries in Narratives",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington",
"location": {}
},
"email": "zhilinw@uw.edu"
},
{
"first": "Anna",
"middle": [],
"last": "Jafarpour",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington",
"location": {}
},
"email": "annaja@uw.edu"
},
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington",
"location": {}
},
"email": "msap@cs.washington.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "When reading stories, people can naturally identify sentences in which a new event starts, i.e., event boundaries, using their knowledge of how events typically unfold, but a computational model to detect event boundaries is not yet available. We characterize and detect sentences with expected or surprising event boundaries in an annotated corpus of short diary-like stories, using a model that combines commonsense knowledge and narrative flow features with a RoBERTa classifier. Our results show that, while commonsense and narrative features can help improve performance overall, detecting event boundaries that are more subjective remains challenging for our model. We also find that sentences marking surprising event boundaries are less likely to be causally related to the preceding sentence, but are more likely to express emotional reactions of story characters, compared to sentences with no event boundary.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "When reading stories, people can naturally identify sentences in which a new event starts, i.e., event boundaries, using their knowledge of how events typically unfold, but a computational model to detect event boundaries is not yet available. We characterize and detect sentences with expected or surprising event boundaries in an annotated corpus of short diary-like stories, using a model that combines commonsense knowledge and narrative flow features with a RoBERTa classifier. Our results show that, while commonsense and narrative features can help improve performance overall, detecting event boundaries that are more subjective remains challenging for our model. We also find that sentences marking surprising event boundaries are less likely to be causally related to the preceding sentence, but are more likely to express emotional reactions of story characters, compared to sentences with no event boundary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "When people read stories, they can easily detect the start of new events through changes in circumstances or in narrative development, i.e., event boundaries (Zacks et al., 2007; Bruni et al., 2014; Foster and Keane, 2015; Jafarpour et al., 2019b) . These event boundaries can be expected or surprising. For example, in the story in Figure 1 based on crowdsourced annotation, \"getting along with a dog who does not generally like new people\" marks a surprising new event, while \"their playing fetch together for a long time\" is an expected new event.",
"cite_spans": [
{
"start": 158,
"end": 178,
"text": "(Zacks et al., 2007;",
"ref_id": "BIBREF52"
},
{
"start": 179,
"end": 198,
"text": "Bruni et al., 2014;",
"ref_id": "BIBREF4"
},
{
"start": 199,
"end": 222,
"text": "Foster and Keane, 2015;",
"ref_id": "BIBREF9"
},
{
"start": 223,
"end": 247,
"text": "Jafarpour et al., 2019b)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 333,
"end": 341,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We aim to study whether machines can detect these surprising or expected event boundaries, using commonsense knowledge and narrative flow features. Characterizing features that are informative in detecting event boundaries can help determine how humans apply expectations on event relationships (Schank and Abelson, 1977; Kurby and Zacks, 2009; Radvansky et al., 2014; \u00dcnal Figure 1: Example story with sentences that contain either a surprising event boundary, no event boundary or an expected event boundary respectively. The annotations of reader perception are from the Hippocorpus dataset (Sap et al., 2022) . Zacks, 2020) . Furthermore, detection of sentences with event boundaries can also be useful when generating engaging stories with a good amount of surprises. (Yao et al., 2019; Rashkin et al., 2020; Ghazarian et al., 2021) .",
"cite_spans": [
{
"start": 295,
"end": 321,
"text": "(Schank and Abelson, 1977;",
"ref_id": "BIBREF40"
},
{
"start": 322,
"end": 344,
"text": "Kurby and Zacks, 2009;",
"ref_id": "BIBREF16"
},
{
"start": 345,
"end": 368,
"text": "Radvansky et al., 2014;",
"ref_id": "BIBREF30"
},
{
"start": 369,
"end": 373,
"text": "\u00dcnal",
"ref_id": "BIBREF45"
},
{
"start": 594,
"end": 612,
"text": "(Sap et al., 2022)",
"ref_id": "BIBREF37"
},
{
"start": 615,
"end": 627,
"text": "Zacks, 2020)",
"ref_id": "BIBREF51"
},
{
"start": 773,
"end": 791,
"text": "(Yao et al., 2019;",
"ref_id": "BIBREF50"
},
{
"start": 792,
"end": 813,
"text": "Rashkin et al., 2020;",
"ref_id": "BIBREF31"
},
{
"start": 814,
"end": 837,
"text": "Ghazarian et al., 2021)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To differentiate sentences with surprising event boundaries, expected event boundaries, and no event boundaries, we train a classifier using 3925 story sentences with human annotation of event boundaries from diary-like stories about people's everyday lives (Sap et al., 2022) . We extract various commonsense and narrative features on relationships between sentences of a story, which can predict the type of event boundaries. Commonsense features include the likelihood that adjacent sentences are linked by commonsense relations from the knowledge graphs Atomic (Sap et al., 2019a) and Glucose (Mostafazadeh et al., 2020) . Narrative features include Realis (Sims et al., 2019) that identifies the number of event-related words in a sentence, Sequentiality (Radford et al., 2019; Sap et al., 2022) based on the probability of generating a sentence with varying context and SimGen (Rosset, 2020), which measures the similarity between a sentence and the sentence that is most likely to 1 be generated given the previous sentence. We then combine the prediction based on these features with the prediction from a RoBERTa classifier (Liu et al., 2019) , to form overall predictions.",
"cite_spans": [
{
"start": 258,
"end": 276,
"text": "(Sap et al., 2022)",
"ref_id": "BIBREF37"
},
{
"start": 565,
"end": 584,
"text": "(Sap et al., 2019a)",
"ref_id": "BIBREF38"
},
{
"start": 597,
"end": 624,
"text": "(Mostafazadeh et al., 2020)",
"ref_id": "BIBREF23"
},
{
"start": 661,
"end": 680,
"text": "(Sims et al., 2019)",
"ref_id": "BIBREF42"
},
{
"start": 760,
"end": 782,
"text": "(Radford et al., 2019;",
"ref_id": "BIBREF29"
},
{
"start": 783,
"end": 800,
"text": "Sap et al., 2022)",
"ref_id": "BIBREF37"
},
{
"start": 1133,
"end": 1151,
"text": "(Liu et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We evaluate the performance of the classification model by measuring F1 of the predictions and compare various configurations of the model to a baseline RoBERTa model. We find that integrating narrative and commonsense features with RoBERTa leads to a significant improvement (+2.2% F1) over a simple RoBERTa classifier. There are also individual differences on the subjective judgment of which sentences contain a surprising or an expected event boundary, that is reflected in the detection model's performance. The performance of our model increases with increasing agreement across the human annotators. Additionally, by interpreting the trained parameters of our model, we find that the absence of causal links between sentences is a strong predictor of surprising event boundaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To further analyze how surprising event boundaries relate to deviation from commonsense understanding, we compare the performance of the classification model on the related task of ROC Story Cloze Test (Mostafazadeh et al., 2016) . This task concerns whether the ending sentence of a story follows/violates commonsense based on earlier sentences, which can be linked to whether sentences are expected or surprising. Our model performs significantly higher on the ROC Story Cloze Test (87.9% F1 vs 78.0% F1 on our task), showing that surprising event boundaries go beyond merely violating commonsense and therefore can be seen as more challenging to detect. Together, our results suggests that while detecting surprising event boundaries remains a challenging task for machines, a promising direction lies in utilizing commonsense knowledge and narrative features to augment language models.",
"cite_spans": [
{
"start": 202,
"end": 229,
"text": "(Mostafazadeh et al., 2016)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Events have been widely studied in Natural Language Processing. They have often been represented in highly structured formats with wordspecific triggers and arguments (Walker et al., 2006; Li et al., 2013; Chen et al., 2017; Sims et al., 2019; Mostafazadeh et al., 2020; Ahmad et al., 2021) or as Subject-Verb-Object-style (SVO) tuples extracted from syntactic parses (Chambers and Jurafsky, 2008; Martin et al., 2018; Rashkin et al., 2018; Sap et al., 2019a) . In narratives, events are represented as a continuous flow with multiple boundaries marking new events (Zacks et al., 2007; Graesser et al., 1981; Kurby and Zacks, 2008; Zacks, 2020) ; however, we lack a model to detect the boundary events that mark the meaningful segmentation of a continuous story into discrete events.",
"cite_spans": [
{
"start": 167,
"end": 188,
"text": "(Walker et al., 2006;",
"ref_id": null
},
{
"start": 189,
"end": 205,
"text": "Li et al., 2013;",
"ref_id": "BIBREF18"
},
{
"start": 206,
"end": 224,
"text": "Chen et al., 2017;",
"ref_id": "BIBREF6"
},
{
"start": 225,
"end": 243,
"text": "Sims et al., 2019;",
"ref_id": "BIBREF42"
},
{
"start": 244,
"end": 270,
"text": "Mostafazadeh et al., 2020;",
"ref_id": "BIBREF23"
},
{
"start": 271,
"end": 290,
"text": "Ahmad et al., 2021)",
"ref_id": "BIBREF0"
},
{
"start": 368,
"end": 397,
"text": "(Chambers and Jurafsky, 2008;",
"ref_id": "BIBREF5"
},
{
"start": 398,
"end": 418,
"text": "Martin et al., 2018;",
"ref_id": "BIBREF20"
},
{
"start": 419,
"end": 440,
"text": "Rashkin et al., 2018;",
"ref_id": "BIBREF32"
},
{
"start": 441,
"end": 459,
"text": "Sap et al., 2019a)",
"ref_id": "BIBREF38"
},
{
"start": 565,
"end": 585,
"text": "(Zacks et al., 2007;",
"ref_id": "BIBREF52"
},
{
"start": 586,
"end": 608,
"text": "Graesser et al., 1981;",
"ref_id": "BIBREF13"
},
{
"start": 609,
"end": 631,
"text": "Kurby and Zacks, 2008;",
"ref_id": "BIBREF17"
},
{
"start": 632,
"end": 644,
"text": "Zacks, 2020)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Event Boundary Detection Task",
"sec_num": "2"
},
{
"text": "In this work, we study stories from a cognitive angle to detect event boundaries. Such event boundaries relate to our narrative schema understanding (Schank and Abelson, 1977; Chambers and Jurafsky, 2008; Ryan, 2010) , commonsense knowledge (Sap et al., 2019a; Mostafazadeh et al., 2020) and world knowledge (Nematzadeh et al., 2018; Bisk et al., 2020) . Existing work has studied on salient (i.e. important/most report-able ) event boundaries within a story (Ouyang and McKeown, 2015; Otake et al., 2020; Wilmot and Keller, 2021) . However, missing from literature is how salient event boundary can either be surprising or expected based on the knowledge of how a flow of events should unfold. For example, events can be surprising when they deviate from commonsense in terms of what people would predict (e.g., if someone won something, they should not be sad; Sap et al., 2019a) . Surprising events can also be low likelihood events (Foster and Keane, 2015) such as seeing someone wear shorts outside in winter, or due to a rapid shift in emotional valence between events (Wilson and Gilbert, 2008 ) such as seeing a protagonist being defeated. Importantly, there are individual differences in how humans segment narratives into events (Jafarpour et al., 2019a) .",
"cite_spans": [
{
"start": 149,
"end": 175,
"text": "(Schank and Abelson, 1977;",
"ref_id": "BIBREF40"
},
{
"start": 176,
"end": 204,
"text": "Chambers and Jurafsky, 2008;",
"ref_id": "BIBREF5"
},
{
"start": 205,
"end": 216,
"text": "Ryan, 2010)",
"ref_id": "BIBREF35"
},
{
"start": 241,
"end": 260,
"text": "(Sap et al., 2019a;",
"ref_id": "BIBREF38"
},
{
"start": 261,
"end": 287,
"text": "Mostafazadeh et al., 2020)",
"ref_id": "BIBREF23"
},
{
"start": 308,
"end": 333,
"text": "(Nematzadeh et al., 2018;",
"ref_id": "BIBREF24"
},
{
"start": 334,
"end": 352,
"text": "Bisk et al., 2020)",
"ref_id": "BIBREF2"
},
{
"start": 459,
"end": 485,
"text": "(Ouyang and McKeown, 2015;",
"ref_id": "BIBREF26"
},
{
"start": 486,
"end": 505,
"text": "Otake et al., 2020;",
"ref_id": "BIBREF25"
},
{
"start": 506,
"end": 530,
"text": "Wilmot and Keller, 2021)",
"ref_id": "BIBREF47"
},
{
"start": 863,
"end": 881,
"text": "Sap et al., 2019a)",
"ref_id": "BIBREF38"
},
{
"start": 1075,
"end": 1100,
"text": "(Wilson and Gilbert, 2008",
"ref_id": "BIBREF48"
},
{
"start": 1239,
"end": 1264,
"text": "(Jafarpour et al., 2019a)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Event Boundary Detection Task",
"sec_num": "2"
},
{
"text": "We tackle event boundary detection as a threeway classification task that involves distinguishing surprising but plausible event boundaries in story sentences from expected event boundaries and no event boundaries. To mirror how humans read stories, we predict the event boundary label for a sentence using all of its preceding sentences in the story, as well as the general story topic as context. Surprising event boundaries are novel events that are unexpected given their context, such as a dog getting along with someone despite not typically liking new people. Expected event boundaries are novel events that are not surprising, such as a person playing a new game with a dog for a long time given that they like each other. In contrast, sentences with no event boundary typically continue or elaborate on the preceding event, such as a person liking a dog given that they get along with the dog (Figure 1 ). 3 Event-annotated Data",
"cite_spans": [],
"ref_spans": [
{
"start": 902,
"end": 911,
"text": "(Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Event Boundary Detection Task",
"sec_num": "2"
},
{
"text": "We use the English-based Event-annotated sentences from stories in the Hippocorpus dataset to study event boundaries. This dataset contains 240 diary-like crowdsourced stories about everyday life experiences, each containing an average of 16.4 sentences and are annotated at the sentence level (Sap et al., 2022) . Stories were inspected for the absence of offensive or person-identifying content. For the annotation, eight crowdworkers were shown a story sentence by sentence and were asked to mark whether each sentence contained a new surprising or expected event boundary, or no event boundary at all, based on their subjective judgment (Sap et al., 2022) . Summarized in Table 1 , based on the majoritarian vote, most sentences (57.5%) contain no event boundaries while 16.6% and 13.0% of sentences contains expected and surprising event boundaries, respectively.",
"cite_spans": [
{
"start": 294,
"end": 312,
"text": "(Sap et al., 2022)",
"ref_id": "BIBREF37"
},
{
"start": 641,
"end": 659,
"text": "(Sap et al., 2022)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [
{
"start": 676,
"end": 683,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Event Boundary Detection Task",
"sec_num": "2"
},
{
"text": "Due to the inherent subjectivity of the task, aggregating labels into a majority label yields low agreement (e.g., 61.7% for surprising event boundaries; Table 1 ). Therefore, at training time, we use the proportion of annotations for each event boundary type as the label instead of the majority vote, because such distributional information is a better reflection of the inherent disagreement among human judgements (Pavlick and Kwiatkowski, 2019) . At test time, we use the majority vote as a gold label, since measuring performance on distribution modelling is less intuitive to interpret, and subsequently break down performance by agreement level to take disagreements into account.",
"cite_spans": [
{
"start": 418,
"end": 449,
"text": "(Pavlick and Kwiatkowski, 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 154,
"end": 161,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Event Boundary Detection Task",
"sec_num": "2"
},
{
"text": "We first describe informative commonsense and narrative features that we extract for the event boundary detection model. Then, we describe how we integrate these features with a RoBERTa classifier in our model before detailing our experimental setup. Figure 2 depicts an overview of our model.",
"cite_spans": [],
"ref_spans": [
{
"start": 251,
"end": 259,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Event Boundary Detection Model",
"sec_num": "4"
},
{
"text": "We select a collection of commonsense features (Atomic and Glucose relations) and narrative flow features (Realis, Sequentiality and SimGen). A model is trained separately from our main model for Atomic relations, Glucose relations and Realis while models for Sequentiality and SimGen are used without further training. Features of story sentences are extracted as input into the main model. Because language modelling alone might not be sufficient to learn such features (Gordon and Van Durme, 2013; Sap et al., 2019a) , we provide the extracted features to the model instead of relying on the language models to learn them implicitly.",
"cite_spans": [
{
"start": 472,
"end": 500,
"text": "(Gordon and Van Durme, 2013;",
"ref_id": "BIBREF12"
},
{
"start": 501,
"end": 519,
"text": "Sap et al., 2019a)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.1"
},
{
"text": "Atomic relations are event relations from a social commonsense knowledge graph containing numerous events that can be related to one another (Sap et al., 2019a) . The event relations in this graph consists of:",
"cite_spans": [
{
"start": 141,
"end": 160,
"text": "(Sap et al., 2019a)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.1"
},
{
"text": "Emotional Reaction, The Effect of an event, Want to do after the event, What Needs to be done before an event, The Intention to do a certain event, What Attributes an event expresses. When an event affects the subject, the feature name is preceded by an x, while if it affects others, it has an o. o only applies to React, Effect and Want. For example, an xWant of a sentence PersonX pays PersonY a compliment is that PersonX will want to chat with PersonY, and an oWant is that PersonY will compliment PersonX back. We use Atomic relations because surprising event boundaries can involve breaches of commonsense understanding (Bosselut et al., 2019; Sap et al., 2019a; Mostafazadeh et al., 2020; Gabriel et al., 2021) . Furthermore, some Atomic relations (xReact and oReact) concern emotional affect and therefore can be used to capture changes in emotional valence, which can cause events to be seen as surprising (Wilson and Gilbert, 2008) . (Left) Our model involves a GRU to combine features from sentence pairs with three feature encoding modes, RoBERTa to consider story sentences and Event Boundary Detector to combine predictions made by the two components. S n and F n refer to sentence n and features n respectively, while P G and P R are predictions made by the GRU and RoBERTa. The output is a probability distribution over no event boundary, expected event boundary and surprising event boundary, which is used to update model parameters together with the label using the Kullback-Leibler Divergence loss function. (Right) Features (Atomic, Glucose, Realis, Sequentiality and SimGen) can be extracted as input into the GRU in three feature encoding modes: SEQUENTIAL (shown in Model Overview), ALLTOCURRENT and PREVIOUSONLY.",
"cite_spans": [
{
"start": 627,
"end": 650,
"text": "(Bosselut et al., 2019;",
"ref_id": "BIBREF3"
},
{
"start": 651,
"end": 669,
"text": "Sap et al., 2019a;",
"ref_id": "BIBREF38"
},
{
"start": 670,
"end": 696,
"text": "Mostafazadeh et al., 2020;",
"ref_id": "BIBREF23"
},
{
"start": 697,
"end": 718,
"text": "Gabriel et al., 2021)",
"ref_id": "BIBREF10"
},
{
"start": 916,
"end": 942,
"text": "(Wilson and Gilbert, 2008)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.1"
},
{
"text": "We train an Atomic relation classifier using a RoBERTa-base model (Liu et al., 2019) and the Atomic dataset to classify event-pairs into one of the nine possible relationship labels as well as a None label (to introduce negative samples). We achieved a validation F1 of 77.15%, which is high for a 10-way classification task. We describe training and other experimental details in the Appendix. When making inferences on the Event-annotated dataset, we predict the likelihood that a preceding sentence in a story will be related to the current sentence via each of the nine relationship labels. Because Atomic relations are directed relations (e.g., I ate some cake xEffect I am full is different from I am full xEffect I ate some cake), we also made the reverse inference in case commonsense relations between sentences exist in the reverse direction. Together, 9 forward atomic relation features and 9 reverse features (marked with'-r') are used.",
"cite_spans": [
{
"start": 66,
"end": 84,
"text": "(Liu et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.1"
},
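{
"text": "As a minimal sketch (our illustration, not the authors' released code), such a pairwise relation classifier can be assembled with HuggingFace Transformers as follows; the label order, helper names and example sentences are assumptions:

# Sketch of the 10-way Atomic relation classifier (9 relations + None).
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

RELATIONS = [\"xReact\", \"oReact\", \"xEffect\", \"oEffect\", \"xWant\",
             \"oWant\", \"xNeed\", \"xIntent\", \"xAttr\", \"None\"]

tokenizer = RobertaTokenizer.from_pretrained(\"roberta-base\")
model = RobertaForSequenceClassification.from_pretrained(
    \"roberta-base\", num_labels=len(RELATIONS))

def relation_likelihoods(prev_sentence, curr_sentence):
    # Probability that each relation label links the two sentences.
    inputs = tokenizer(prev_sentence, curr_sentence,
                       return_tensors=\"pt\", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1).squeeze(0)
    return {rel: probs[i].item() for i, rel in enumerate(RELATIONS)}

# Forward and reverse inference, since Atomic relations are directed.
forward = relation_likelihoods(\"I ate some cake.\", \"I am full.\")
reverse = relation_likelihoods(\"I am full.\", \"I ate some cake.\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.1"
},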
{
"text": "Glucose relations are event relations from another commonsense knowledge dataset containing relations between event-pairs in 10 dimensions (Mostafazadeh et al., 2020) . Glucose relation features are used to complement Atomic relation features in its coverage of commonsense relations. Dim-1 to 5 are described below while Dim-6 to 10 are the reverse/passive form of Dim-1 to 5 respectively.",
"cite_spans": [
{
"start": 139,
"end": 166,
"text": "(Mostafazadeh et al., 2020)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.1"
},
{
"text": "Dim-1: Event that causes/enables Dim-2: Emotion/human drive that motivates Dim-3: Change in location that enables Dim-4: State of possession that enables Dim-5: Other attribute that enables Glucose relation classifier was trained on a RoBERTa-base model to classify event-pairs from the Glucose dataset into one of ten possible relation labels as well as a None label. We used the specific version of Glucose events represented in natural language. As a result, we achieved a validation F1 of 80.94%. Training and other experimental details are in the Appendix. During inference on the Event-annotated dataset, we predict and use as features the likelihood that the current sentence will be related to a preceding sentence via each relation label.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.1"
},
{
"text": "Realis events are words that serve as triggers (i.e., head words) for structured event representations (Sims et al., 2019) . Realis event words denote concrete events that actually happened, meaning that a higher number of Realis event words suggests greater likelihood of the sentence containing a new event boundary (expected or surprising). We trained a BERT-base model (Devlin et al., 2019) on an annotated corpus of literary novel extracts (Sims et al., 2019) . We achieved a validation F1 of 81.85%, inspired by and on par with Sap et al. (2020) . Then, we use the trained model to make inference on story sentences in the Event-annotated dataset. Finally, we used the number of Realis words in each sentence as a feature. Training and other experimental details are in the Appendix.",
"cite_spans": [
{
"start": 103,
"end": 122,
"text": "(Sims et al., 2019)",
"ref_id": "BIBREF42"
},
{
"start": 373,
"end": 394,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 445,
"end": 464,
"text": "(Sims et al., 2019)",
"ref_id": "BIBREF42"
},
{
"start": 534,
"end": 551,
"text": "Sap et al. (2020)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.1"
},
{
"text": "Sequentiality is a measure of the difference in conditional negative log-likelihood of generating a sentence given the previous sentence or otherwise (Sap et al., 2020 (Sap et al., , 2022 . Sequentiality can be a predictor for unlikely events, which can cause surprise (Foster and Keane, 2015). We use GPT-2 (Radford et al., 2019) to measure this negative loglikelihood since it is a Left-to-Right model, which matches the order in which annotators were shown sentences in a story. NLL of each sentence was obtained in two different contexts. NLL_topic is based on the sentence alone with only the topic as prior context, while NLL_topic+prev uses the previous sentence as additional context to study the link between adjacent sentences. Finally, Sequentiality is obtained by taking their difference. Experimental details are in the Appendix.",
"cite_spans": [
{
"start": 150,
"end": 167,
"text": "(Sap et al., 2020",
"ref_id": "BIBREF36"
},
{
"start": 168,
"end": 187,
"text": "(Sap et al., , 2022",
"ref_id": "BIBREF37"
},
{
"start": 308,
"end": 330,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.1"
},
{
"text": "N LL topic = \u2212 1 |s i | log p LM (s i | T opic) N LL topic+prev = \u2212 1 |s i | log p LM (s i | T opic, s i\u22121 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.1"
},
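{
"text": "As a minimal sketch (assuming GPT-2 via HuggingFace; the function name and example sentences are ours), the two NLL terms can be computed by scoring only the tokens of sentence s_i under each context:

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")
model = GPT2LMHeadModel.from_pretrained(\"gpt2\").eval()

def nll(context, sentence):
    # Mean negative log-likelihood of the sentence tokens given the context.
    ctx = tokenizer(context, return_tensors=\"pt\").input_ids
    sent = tokenizer(\" \" + sentence, return_tensors=\"pt\").input_ids
    ids = torch.cat([ctx, sent], dim=1)
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = logits[:, :-1].log_softmax(dim=-1)
    token_ll = log_probs.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return -token_ll[:, ctx.size(1) - 1:].mean().item()  # sentence tokens only

topic, prev, curr = \"A dog\", \"Our dog hates strangers.\", \"He licked my friend's hand.\"
sequentiality = nll(topic, curr) - nll(topic + \" \" + prev, curr)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.1"
},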
{
"text": "SimGen is computed as the cosine similarity between each sentence and the most likely generated sentence given the previous sentence, under a large Left-to-Right language model (specifically, Turing-NLG; Rosset, 2020). Then, we separately converted the original sentence and generated sentence into sentence embeddings using a pre-trained MPnet-base model (Song et al., 2020) . Finally, the generated embeddings and the original embeddings are compared for cosine similarity, which is used as a feature. Experimental details are in the Appendix.",
"cite_spans": [
{
"start": 356,
"end": 375,
"text": "(Song et al., 2020)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.1"
},
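{
"text": "A minimal sketch of the SimGen computation follows; since Turing-NLG is not publicly available, GPT-2 stands in as the generator here, and the sampling settings mirror those reported in the Appendix (top-p = 0.85):

from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

generator = pipeline(\"text-generation\", model=\"gpt2\")  # stand-in for Turing-NLG
embedder = SentenceTransformer(\"all-mpnet-base-v2\")

def simgen(prev_sentence, actual_sentence):
    # Cosine similarity between the actual sentence and a generated one.
    out = generator(prev_sentence, max_new_tokens=30,
                    do_sample=True, top_p=0.85)[0][\"generated_text\"]
    generated = out[len(prev_sentence):].strip()
    emb = embedder.encode([actual_sentence, generated], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.1"
},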
{
"text": "We propose a model to integrate feature-based prediction with language-based prediction of event boundaries, illustrated in Figure 2 (left) . The predictions are independently made with extracted features using a gated recurrent unit (GRU) and with language (i.e., story sentences) using RoBERTa. Then these predictions are combined into a final predicted distribution for the three types of event boundaries. Our model is then trained using the Kullback-Leibler Divergence loss.",
"cite_spans": [],
"ref_spans": [
{
"start": 124,
"end": 139,
"text": "Figure 2 (left)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4.2"
},
{
"text": "GRU is used to combine features relating the current sentence i to prior sentences in a story. It sequentially considers information concerning prior sentences, which mimics the annotator's procedure of identifying event boundaries as they read one sentence at the time. As seen in Figure 2 (right), we use three feature encoding modes to determine the features that are used as input into the GRU, as inspired by literature on event segmentation (Pettijohn and Radvansky, 2016; Baldassano et al., 2018; Zacks, 2020) . These three modes represent different ways of facilitating information flow between sentences, which can have distinct effects on identifying event boundaries. The first mode, SEQUENTIAL, encodes features from all previous sentences in the story in a recurrent way (1 to 2, 2 to 3 ... i \u2212 1 to i) up until the current sentence i. The second mode, ALL-TOCURRENT, uses features from each of the previous sentences to the current sentence i (1 to i, 2 to i ... i \u2212 1 to i). The third mode, PREVIOUSONLY, (i \u2212 1 to i) only feeds into the GRU the features relating to the previous sentence. For all modes, the dimension of each time step input is K G , representing the total number of distinct features. We then project the final output of the GRU, h G \u2208 R K G , into a 3-dimensional vector space representing the unnormalized probability distribution over event boundary types.",
"cite_spans": [
{
"start": 479,
"end": 503,
"text": "Baldassano et al., 2018;",
"ref_id": "BIBREF1"
},
{
"start": 504,
"end": 516,
"text": "Zacks, 2020)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [
{
"start": 282,
"end": 290,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4.2"
},
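{
"text": "A minimal sketch of the three feature encoding modes and the GRU head (pair_features is a hypothetical helper returning the K_G-dimensional feature vector for a sentence pair; indices follow the 1-based notation above):

import torch
import torch.nn as nn

K_G = 33  # total number of distinct pairwise features

def encode(pair_features, i, mode):
    # Build the GRU input sequence for sentence i under one encoding mode.
    if mode == \"SEQUENTIAL\":      # (1,2), (2,3), ..., (i-1,i)
        pairs = [(j, j + 1) for j in range(1, i)]
    elif mode == \"ALLTOCURRENT\":  # (1,i), (2,i), ..., (i-1,i)
        pairs = [(j, i) for j in range(1, i)]
    else:                         # PREVIOUSONLY: (i-1,i)
        pairs = [(i - 1, i)]
    feats = torch.stack([pair_features(a, b) for a, b in pairs])
    return feats.unsqueeze(0)     # shape (batch=1, time, K_G)

gru = nn.GRU(input_size=K_G, hidden_size=K_G, batch_first=True)
to_logits = nn.Linear(K_G, 3)     # unnormalized distribution over boundary types

x = encode(lambda a, b: torch.randn(K_G), i=5, mode=\"SEQUENTIAL\")
_, h_G = gru(x)                   # final hidden state, shape (1, 1, K_G)
P_G = to_logits(h_G.squeeze(0))   # GRU prediction logits",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4.2"
},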
{
"text": "RoBERTa is used to make predictions based on text in story sentences. We use all story sentences up to sentence i inclusive. We then project the hidden state of the first token (also known as CLS token), h R \u2208 R K R , into a 3-dimensional space representing the unnormalized probability distribution over event boundary types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4.2"
},
{
"text": "Combining predictions We combine predictions made by the GRU (P G ) and RoBERTa (P R ) by concatenating their predictions and multiplying it with a linear classifier of size (6, 3) to output logits of size (3). The logits are then normalized using Softmax to give a distribution of the three types of event boundaries (P ). The weights of the linear classifier are initialized by concatenating two identity matrix of size 3 (I 3 ), which serves to perform elementwise addition between the predictions of the GRU and RoBERTa at early stages of the training process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4.2"
},
{
"text": "W := [I 3 ; I 3 ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4.2"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P := Sof tmax(W ([P G ; P R ]))",
"eq_num": "(2)"
}
],
"section": "Model Architecture",
"sec_num": "4.2"
},
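{
"text": "A minimal sketch of the combination layer with the identity initialization in Equation (1), so that training starts from element-wise addition of the two predictions (tensor shapes and example values are assumptions):

import torch
import torch.nn as nn

combiner = nn.Linear(6, 3, bias=False)
with torch.no_grad():
    # W := [I_3 ; I_3], so the layer initially adds P_G and P_R element-wise.
    combiner.weight.copy_(torch.cat([torch.eye(3), torch.eye(3)], dim=1))

def combine(P_G, P_R):
    # P := Softmax(W [P_G ; P_R]) over the three event boundary types.
    return combiner(torch.cat([P_G, P_R], dim=-1)).softmax(dim=-1)

P = combine(torch.tensor([[0.2, 0.5, 0.3]]), torch.tensor([[0.1, 0.6, 0.3]]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4.2"
},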
{
"text": "Loss function We use the Kullback-Leibler Divergence loss function to train the model. We use it over the standard Cross Entropy loss function because our training targets are in the form: proportion of annotations for each type of event boundary (e.g., 0.75, 0.125, 0.125 for no event, expected and surprising respectively). Including such distributional information in our training targets over using the majority annotation only can reflect the inherent disagreement among human judgements (Pavlick and Kwiatkowski, 2019) , which is important to capture for event boundaries given that they are subjective judgements.",
"cite_spans": [
{
"start": 493,
"end": 524,
"text": "(Pavlick and Kwiatkowski, 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4.2"
},
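{
"text": "As a short sketch of this training objective in PyTorch (the tensors are illustrative): KLDivLoss expects log-probabilities as the input and the annotation proportions as the target:

import torch
import torch.nn as nn

kl = nn.KLDivLoss(reduction=\"batchmean\")

logits = torch.randn(1, 3, requires_grad=True)  # combined model logits
target = torch.tensor([[0.75, 0.125, 0.125]])   # annotation proportions
loss = kl(logits.log_softmax(dim=-1), target)   # input must be log-probabilities
loss.backward()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4.2"
},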
{
"text": "We seek to predict the event-boundary annotation for each Hippocorpus story sentence, using preceding sentences in the story as context, as shown in Figure 2 . Additional training and experimental details are available in the Appendix.",
"cite_spans": [],
"ref_spans": [
{
"start": 149,
"end": 157,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.3"
},
{
"text": "K-fold Cross-validation Because of the limited size of the dataset (n=3925), we split the dataset in k-folds (k=10), using one fold (n=392) for validation and nine other folds combined for training. From each of the 10 models, we obtained the prediction for the validation set. Together, the validation sets for the 10 models combine to form predictions for the entire dataset, which we use to conduct significance testing in order to compare the performance of models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.3"
},
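{
"text": "A minimal sketch of this protocol (train_model and predict are hypothetical stand-ins for our training and inference routines):

import numpy as np
from sklearn.model_selection import KFold

def cross_validated_predictions(X, y, train_model, predict, k=10):
    # Pool each fold's validation predictions so that every sample
    # receives exactly one cross-validated prediction.
    preds = np.empty(len(X), dtype=int)
    for train_idx, val_idx in KFold(n_splits=k, shuffle=True,
                                    random_state=0).split(X):
        model = train_model(X[train_idx], y[train_idx])
        preds[val_idx] = predict(model, X[val_idx])
    return preds",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.3"
},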
{
"text": "GRU was accessed from PyTorch, with K G set to 33 and a hidden dimension of 33.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.3"
},
{
"text": "RoBERTa RoBERTa-base-uncased with 12layer, 768-hidden (K R ), 12-heads, 110M parameters, 0.1 dropout was used, accessed from Hug-gingFace Transformers library (Wolf et al., 2020) . When more than 10 prior sentences are available in a story, we use only the most recent 10 sentences due to RoBERTa input sequence length limitations.",
"cite_spans": [
{
"start": 159,
"end": 178,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.3"
},
{
"text": "Evaluation Metrics While capturing distributional information of subjective judgement labels (Pavlick and Kwiatkowski, 2019) is important for training, it can also be difficult to interpret for evaluation. Therefore, we decided to predict for the most likely label during evaluation and compare it against the majority label for each sample. Some 511 (13.0%) samples do not have a single majority label (e.g., equal number of expected and surprising annotations) and these samples were excluded. We use micro-averaged F1 as the metric. ",
"cite_spans": [
{
"start": 93,
"end": 124,
"text": "(Pavlick and Kwiatkowski, 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.3"
},
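{
"text": "A minimal sketch of the evaluation step (counts is assumed to hold per-sample annotation counts over the three labels):

import numpy as np
from sklearn.metrics import f1_score

def evaluate(counts, predictions):
    # Exclude samples without a unique most-voted label, then score
    # predictions against the majority label with micro-averaged F1.
    top = counts.max(axis=1, keepdims=True)
    has_majority = (counts == top).sum(axis=1) == 1
    gold = counts.argmax(axis=1)
    return f1_score(gold[has_majority], predictions[has_majority],
                    average=\"micro\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.3"
},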
{
"text": "We first quantify the performance of our model in detecting event boundaries, using a coarse-grained performance measure on F1 with respect to majority vote. Then, we investigate how the performance varies based on annotation subjectivity. Finally, we inspect the model parameters to identify commonsense and narrative features that are most informative in detecting event boundaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "Improving prediction of event boundaries As seen in Table 2 , RoBERTa alone performs fairly well in predicting event boundaries (F1 = 75.8%, within 2.2% F1 of our best performing model), but can be further supported by our commonsense and narrative features to improve its performance. In contrast, the commonsense and narrative features alone do not perform as well. 1 Overall, our best performing set-up is the Event Detector (PREVIOUSONLY) with F1 = 78.0%, which is significantly different from RoBERTa alone based on McNemar's test (p <0.05). 2 Its overall strong performance is largely contributed by its strong performance in detecting no event boundaries and expected event boundaries. F1 for no event boundary is higher than both surprising and expected event boundaries, likely because there are more sentences with no event boundaries as seen in Table 1 . The PREVIOUSONLY configuration performs best for 1 We also increased learning rate to 1e-3 for better performance given the absence of RoBERTa predictions in this ablation set-up.",
"cite_spans": [
{
"start": 915,
"end": 916,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 52,
"end": 59,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 856,
"end": 863,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "2 McNemar's test is used to determine whether samples that have been predicted accurately (or not) by one model overlap with those that have predicted accurately (or not) by another model. no event boundaries and expected event boundaries likely because determining whether the current sentence continues an expected event (or not) requires retaining the latest information in working memory (Jafarpour et al., 2019a) . However, the SEQUEN-TIAL configuration seems to perform the best in predicting surprising event boundaries. Compared to no/expected event boundaries, we hypothesize that predicting surprising event boundaries requires taking into account how the story developed prior to the previous sentence in setting up the context for the current sentence. This finding echoes results by Townsend (2018) that showed that surprising sentences take a long time to read because it requires changing our mental model formed from previous sentences.",
"cite_spans": [
{
"start": 392,
"end": 417,
"text": "(Jafarpour et al., 2019a)",
"ref_id": "BIBREF14"
},
{
"start": 796,
"end": 811,
"text": "Townsend (2018)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "F1 varies with majority agreement Since the annotations were subjective and did not always agree, we further examine our best model's performance (PREVIOUSONLY) with respect to annotation agreement. As shown in Figure 3 , F1 increases with majority label agreement (Pearson's r = 0.953, p < 0.05). Such positive correlations are observed across all event boundary labels (Pearson's r = 0.869-0.994) and is especially strong for surprising event boundaries (Pearson's r = 0.994, p < 0.001). This means that most errors are made on samples that have low agreement among annotators. For example to show this contrast, after \"She and I are very close so it was great to see her marrying someone she loves,\" 7 out of 8 annotators indicated that \"The most memorable moment was when I spilled champagne on my dress before the wedding\" was surprising. On the other hand, after \"It was a hot day in July that our community decided to paint a mural on an intersection for public art,\" only 4 out of 8 annotators indicated that \"I had decided to volunteer to help paint.\" was surprising. The results suggest that our model performance reflects the variability and agreements in human annotations of event boundaries. We hypothesize that the event boundaries with more agreement are based on features that are shared across the annotators, such as commonsense knowledge; therefore, the model performs well in detecting those. Whereas, our model struggles with detecting event boundaries that are more subjective.",
"cite_spans": [],
"ref_spans": [
{
"start": 211,
"end": 219,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "Predictive features By integrating a separate feature-based classifier, the Event Boudary Detector model allows us to examine the model parameters and determine features that are associated with surprising, expected or no event boundaries. First, we take the average of the GRU classifier weights for each of the 10 cross-validated models. Then, we plot these weights for each label in Figure 4 , and summarize the findings below.",
"cite_spans": [],
"ref_spans": [
{
"start": 386,
"end": 394,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "Features that relate to commonsense relations: oEffect, xEffect and Glucose Dim-6 (caused by) are most predictive of expected event boundaries. This can indicate that events that are an effect of/caused by a prior event can be expected by annotators, as also noted by Graesser et al. (1981) . An example of an expected event boundary is \"I told her we could go for coffee sometime.\", as an effect of \"We had a good time together.\" xNeed is least indicative of surprising event boundaries. This is likely because xNeed refers to what the subject needs to do before an activity, which is procedural and unlikely to cause surprise. An example is \"I was grocery shopping a few weeks ago.\" which is needed before \"I had purchased my items and was leaving the store.\"",
"cite_spans": [
{
"start": 268,
"end": 290,
"text": "Graesser et al. (1981)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "Features that explain unlikely events Realis is highest for surprising event boundaries, suggesting that surprising event boundaries tend to contain the most concrete event-words. Surprising event boundaries also have the highest likelihood when conditioned on the story topic (NLL_topic) while expected events are highest when conditioned based on the topic and the previous sentence (NLL_topic+prev). This suggests that surprising events are often inline with the story topic but not with the previous sentence. Therefore, the low likelihood of transitioning between the previous and current sentence is a strong predictor of surprising event boundaries, in line with findings by Foster and Keane (2015) on how the difficulty of linking two adjacent events is an important factor in causing surprise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "Features that explain changes in emotional valence Compared to sentences that contain no event boundaries, sentences that contain either expected or surprising event boundaries have higher xReact and oReact, which are emotional responses either by the subject or by others to an event. For example, this is the case for the surprising and emotional event boundary \"I remember it was like the 3rd or 4th game when something bad happened..\" This suggests that event boundaries are more likely when a sentence is more emotionally charged, echoing work by Dunsmoor et al. (2018) on how event segmentation is particularly frequent when the emotion of fear is triggered.",
"cite_spans": [
{
"start": 552,
"end": 574,
"text": "Dunsmoor et al. (2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "To better understand how surprising event boundaries relate to deviation from commonsense reasoning, we compare our Event Boundary Detection Task to the ROC Story Cloze Test (Mostafazadeh et al., 2016) . This test involves identifying whether a candidate ending sentence follows commonsense (commonsense ending) or deviates from commonsense (nonsense ending) given the first four sentences of a English short story. The ROC Story Cloze Test dataset contains 3142 samples with 1571 commonsense endings and 1571 nonsense endings. 3 We train a separate Event Boundary De-tector model on the ROC Story Cloze Test, using the same experimental setup as for event boundary detection, except the loss function; we use the cross-entropy loss since only one label is available for each sample. 4 overall nonsense commonsense F1 ending F1 ending F1 Table 3 . This indicates that detecting whether a story ending follows commonsense can be effectively approached using RoBERTa alone, setting this task might not be closely related to the Event Boundary Detection Task.",
"cite_spans": [
{
"start": 174,
"end": 201,
"text": "(Mostafazadeh et al., 2016)",
"ref_id": "BIBREF22"
},
{
"start": 528,
"end": 529,
"text": "3",
"ref_id": null
},
{
"start": 784,
"end": 785,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 838,
"end": 845,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Comparison with Story Cloze Test",
"sec_num": "6"
},
{
"text": "We tackle the task of identifying event boundaries in stories. We propose a model that combines predictions made using commonsense and narrative features with a RoBERTa classifier. We found that integrating commonsense and narrative features can significantly improve the prediction of surprising event boundaries through detecting violations to commonsense relations (especially relating to the absence of causality), low likelihood events, and changes in emotional valence. Our model is capable in detecting event boundaries with high annotator agreement but limited in detecting those with lower agreement. Compared to identifying commonsense and nonsense story endings in Story Cloze Test, our task is found to be only tagentially related. Our results suggest that considering commonsense knowledge and narrative features can be a promising direction towards characterizing and detecting event boundaries in stories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We use the Winter 2018 version, which contains a dev and a test set. As in previous work(Schwartz et al., 2017), we",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Training takes 20 minutes on an Nvidia P100 GPU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We used the train/dev/test splits from the original Atomic dataset (Sap et al., 2019a) . Negative samples are created by matching a Atomic event node to a corresponding tail event node from another sample based on the relationship involved. Sepcifically, negative sampling was performed on groups (['xWant', 'oWant', 'xNeed', ' xIntent'], ['xReact', 'oReact', ' xAttr'], ['xEffect', 'oEffect']) given that the tail event nodes in each group are more similar, creating more discriminating negative samples, as inspired by Sap et al. (2019b) . One negative sample is introduced every nine positive samples, since there are nine labels. We used a learning rate of 1e-4, batch size of 64, 8 epochs and AdamW optimizer. Training took 18 hours on a Nvidia P100 GPU.",
"cite_spans": [
{
"start": 67,
"end": 86,
"text": "(Sap et al., 2019a)",
"ref_id": "BIBREF38"
},
{
"start": 297,
"end": 327,
"text": "(['xWant', 'oWant', 'xNeed', '",
"ref_id": null
},
{
"start": 339,
"end": 361,
"text": "['xReact', 'oReact', '",
"ref_id": null
},
{
"start": 371,
"end": 394,
"text": "['xEffect', 'oEffect'])",
"ref_id": null
},
{
"start": 521,
"end": 539,
"text": "Sap et al. (2019b)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Atomic relations training details",
"sec_num": null
},
{
"text": "Because the Glucose dataset (Mostafazadeh et al., 2020) was not split initially, we randomly split the dataset into train/dev/test splits based on a 80/10/10 ratio. For each sample in Glucose, annotations share similar head event nodes in Dim-1 to 5 and similar tail event nodes in Dim-6 to 10. Therefore, our negative sampling strategy for Dim-1 to 5 involves randomly choosing a tail node from Dim-6 to 10 and vice-versa. As a result, one negative sample is introduced every five samples. During training, we used a learning rate of 1e-4, batch size of 64, 8 epochs and AdamW optimizer. Training took 15 hours on a Nvidia P100 GPU.",
"cite_spans": [
{
"start": 28,
"end": 55,
"text": "(Mostafazadeh et al., 2020)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Glucose relations training details",
"sec_num": null
},
{
"text": "We used the train/dev/test split from the Realis dataset (Sims et al., 2019) . During training, we used the AdamW optimizer, a learning rate of 2e-5, 3 epochs and batch size of 4, as inspired by (Sap et al., 2020) . Training took 1 hour on a Nvidia P100 GPU.",
"cite_spans": [
{
"start": 57,
"end": 76,
"text": "(Sims et al., 2019)",
"ref_id": "BIBREF42"
},
{
"start": 195,
"end": 213,
"text": "(Sap et al., 2020)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.3 Realis training details",
"sec_num": null
},
{
"text": "GPT2-small was accessed from HuggingFace Transformers library and used without further finetuning. It has 125M parameters, a context window of 1024, hidden state dimension of 768, 12 heads and dropout of 0.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.4 Sequentiality experimental details",
"sec_num": null
},
{
"text": "We used the Turing-NLG model without further fine-tuning. The model has 17B and we used it with top-p sampling (top-p=0.85), temperature=1.0 and max sequence length of 64 tokens. MPnetbase model was accessed from the Sentence-BERT library (Reimers and Gurevych, 2019) and used without further fine-tuning.",
"cite_spans": [
{
"start": 239,
"end": 267,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.5 SimGen experimental details",
"sec_num": null
},
{
"text": "AdamW optimizer was used with \u03b1 = 5 * 10 \u22126 , following a uniform search using F1 as the criterion at intervals of {2.5, 5, 7.5, 10} * 10 n ; \u22126 \u2264 n \u2264 \u22123.Learning rate was linearly decayed (8 epochs) with 100 warm-up steps. Batch size of 16 was used. Validation was done every 0.25 epochs during training.Training each model took around 30 minutes on an Nvidia P100 GPU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.6 Event Boundary Detection Model training details",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Gate: Graph attention transformer encoder for cross-lingual relation and event extraction",
"authors": [
{
"first": "Wasi Uddin",
"middle": [],
"last": "Ahmad",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wasi Uddin Ahmad, Nanyun Peng, and Kai-Wei Chang. 2021. Gate: Graph attention transformer en- coder for cross-lingual relation and event extraction.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Representation of real-world event schemas during narrative perception",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Baldassano",
"suffix": ""
},
{
"first": "Uri",
"middle": [],
"last": "Hasson",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [
"A"
],
"last": "Norman",
"suffix": ""
}
],
"year": 2018,
"venue": "The Journal of Neuroscience",
"volume": "38",
"issue": "45",
"pages": "9689--9699",
"other_ids": {
"DOI": [
"10.1523/jneurosci.0251-18.2018"
]
},
"num": null,
"urls": [],
"raw_text": "Christopher Baldassano, Uri Hasson, and Kenneth A. Norman. 2018. Representation of real-world event schemas during narrative perception. The Journal of Neuroscience, 38(45):9689-9699.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Experience grounds language",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Thomason",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Andreas",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Joyce",
"middle": [],
"last": "Chai",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
},
{
"first": "Angeliki",
"middle": [],
"last": "Lazaridou",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Aleksandr",
"middle": [],
"last": "Nisnevich",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Pinto",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Turian",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "8718--8735",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.703"
]
},
"num": null,
"urls": [],
"raw_text": "Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lap- ata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, and Joseph Turian. 2020. Experience grounds language. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8718-8735, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Comet: Commonsense transformers for automatic knowledge graph construction",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bosselut",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Rashkin",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Chaitanya",
"middle": [],
"last": "Malaviya",
"suffix": ""
},
{
"first": "Asli",
"middle": [],
"last": "\u00c7elikyilmaz",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chai- tanya Malaviya, Asli \u00c7elikyilmaz, and Yejin Choi. 2019. Comet: Commonsense transformers for au- tomatic knowledge graph construction. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Narrative cognition in interactive systems: Suspense-surprise and the p300 erp component",
"authors": [
{
"first": "Luis",
"middle": [
"Emilio"
],
"last": "Bruni",
"suffix": ""
},
{
"first": "Sarune",
"middle": [],
"last": "Baceviciute",
"suffix": ""
},
{
"first": "Mohammed",
"middle": [],
"last": "Arief",
"suffix": ""
}
],
"year": 2014,
"venue": "Interactive Storytelling",
"volume": "",
"issue": "",
"pages": "164--175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luis Emilio Bruni, Sarune Baceviciute, and Mo- hammed Arief. 2014. Narrative cognition in inter- active systems: Suspense-surprise and the p300 erp component. In Interactive Storytelling, pages 164- 175, Cham. Springer International Publishing.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Unsupervised learning of narrative event chains",
"authors": [
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "789--797",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathanael Chambers and Dan Jurafsky. 2008. Unsuper- vised learning of narrative event chains. In Proceed- ings of ACL-08: HLT, pages 789-797, Columbus, Ohio. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automatically labeled data generation for large scale event extraction",
"authors": [
{
"first": "Yubo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Shulin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "409--419",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1038"
]
},
"num": null,
"urls": [],
"raw_text": "Yubo Chen, Shulin Liu, Xiang Zhang, Kang Liu, and Jun Zhao. 2017. Automatically labeled data genera- tion for large scale event extraction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 409-419, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Event segmentation protects emotional memories from competing experiences encoded close in time",
"authors": [
{
"first": "Joseph",
"middle": [
"E"
],
"last": "Dunsmoor",
"suffix": ""
},
{
"first": "Marijn",
"middle": [
"C",
"W"
],
"last": "Kroes",
"suffix": ""
},
{
"first": "Caroline",
"middle": [
"M"
],
"last": "Moscatelli",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"D"
],
"last": "Evans",
"suffix": ""
},
{
"first": "Lila",
"middle": [],
"last": "Davachi",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [
"A"
],
"last": "Phelps",
"suffix": ""
}
],
"year": 2018,
"venue": "Nature Human Behaviour",
"volume": "2",
"issue": "4",
"pages": "291--299",
"other_ids": {
"DOI": [
"10.1038/s41562-018-0317-4"
]
},
"num": null,
"urls": [],
"raw_text": "Joseph E. Dunsmoor, Marijn C. W. Kroes, Caroline M. Moscatelli, Michael D. Evans, Lila Davachi, and Elizabeth A. Phelps. 2018. Event segmentation pro- tects emotional memories from competing experi- ences encoded close in time. Nature Human Be- haviour, 2(4):291-299.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Predicting surprise judgments from explanation graphs",
"authors": [
{
"first": "Meadhbh",
"middle": [
"I"
],
"last": "Foster",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"T"
],
"last": "Keane",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meadhbh I. Foster and Mark T. Keane. 2015. Predict- ing surprise judgments from explanation graphs.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Paragraph-level commonsense transformers with recurrent memory",
"authors": [
{
"first": "Saadia",
"middle": [],
"last": "Gabriel",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Vered",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Le Bras",
"suffix": ""
},
{
"first": "Maxwell",
"middle": [],
"last": "Forbes",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "35",
"issue": "",
"pages": "12857--12865",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saadia Gabriel, Chandra Bhagavatula, Vered Shwartz, Ronan Le Bras, Maxwell Forbes, and Yejin Choi. 2021. Paragraph-level commonsense transformers with recurrent memory. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14):12857- 12865.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Plot-guided adversarial example construction for evaluating open-domain story generation",
"authors": [
{
"first": "Sarik",
"middle": [],
"last": "Ghazarian",
"suffix": ""
},
{
"first": "Zixi",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Akash",
"middle": [],
"last": "S M",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "Aram",
"middle": [],
"last": "Galstyan",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "4334--4344",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-main.343"
]
},
"num": null,
"urls": [],
"raw_text": "Sarik Ghazarian, Zixi Liu, Akash S M, Ralph Weischedel, Aram Galstyan, and Nanyun Peng. 2021. Plot-guided adversarial example construction for evaluating open-domain story generation. In Pro- ceedings of the 2021 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4334-4344, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Reporting bias and knowledge acquisition",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Gordon",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Workshop on Automated Knowledge Base Construction, AKBC '13",
"volume": "",
"issue": "",
"pages": "25--30",
"other_ids": {
"DOI": [
"10.1145/2509558.2509563"
]
},
"num": null,
"urls": [],
"raw_text": "Jonathan Gordon and Benjamin Van Durme. 2013. Re- porting bias and knowledge acquisition. In Proceed- ings of the 2013 Workshop on Automated Knowledge Base Construction, AKBC '13, page 25-30, New York, NY, USA. Association for Computing Machin- ery.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Incorporating inferences in narrative representations: A study of how and why. Cognitive Psychology",
"authors": [
{
"first": "Arthur",
"middle": [
"C"
],
"last": "Graesser",
"suffix": ""
},
{
"first": "Scott",
"middle": [
"P"
],
"last": "Robertson",
"suffix": ""
},
{
"first": "Patricia",
"middle": [
"A"
],
"last": "Anderson",
"suffix": ""
}
],
"year": 1981,
"venue": "",
"volume": "13",
"issue": "",
"pages": "1--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arthur C Graesser, Scott P Robertson, and Patricia A Anderson. 1981. Incorporating inferences in narra- tive representations: A study of how and why. Cog- nitive Psychology, 13(1):1-26.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Event segmentation reveals working memory forgetting rate",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Jafarpour",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [
"A"
],
"last": "Buffalo",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"T"
],
"last": "Knight",
"suffix": ""
},
{
"first": "Anne",
"middle": [
"GE"
],
"last": "Collins",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Jafarpour, Elizabeth A Buffalo, Robert T Knight, and Anne GE Collins. 2019a. Event segmentation reveals working memory forgetting rate. Available at SSRN 3614120.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Medial orbitofrontal cortex, dorsolateral prefrontal cortex, and hippocampus differentially represent the event saliency",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Jafarpour",
"suffix": ""
},
{
"first": "Sandon",
"middle": [],
"last": "Griffin",
"suffix": ""
},
{
"first": "Jack",
"middle": [
"J"
],
"last": "Lin",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"T"
],
"last": "Knight",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of cognitive neuroscience",
"volume": "31",
"issue": "6",
"pages": "874--884",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Jafarpour, Sandon Griffin, Jack J Lin, and Robert T Knight. 2019b. Medial orbitofrontal cor- tex, dorsolateral prefrontal cortex, and hippocampus differentially represent the event saliency. Journal of cognitive neuroscience, 31(6):874-884.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Segmentation in the perception and memory of events",
"authors": [
{
"first": "C",
"middle": [
"A"
],
"last": "Kurby",
"suffix": ""
},
{
"first": "J",
"middle": [
"M"
],
"last": "Zacks",
"suffix": ""
}
],
"year": 2009,
"venue": "Trends in cognitive sciences",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "CA Kurby and JM Zacks. 2009. Segmentation in the perception and memory of events. Trends in cogni- tive sciences.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Segmentation in the perception and memory of events",
"authors": [
{
"first": "Christopher",
"middle": [
"A"
],
"last": "Kurby",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"M"
],
"last": "Zacks",
"suffix": ""
}
],
"year": 2008,
"venue": "Trends in cognitive sciences",
"volume": "12",
"issue": "2",
"pages": "72--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher A Kurby and Jeffrey M Zacks. 2008. Seg- mentation in the perception and memory of events. Trends in cognitive sciences, 12(2):72-79.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Joint event extraction via structured prediction with global features",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "73--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global fea- tures. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 73-82, Sofia, Bulgaria. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Event representations for automated story generation with deep neural nets",
"authors": [
{
"first": "Lara",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Prithviraj",
"middle": [],
"last": "Ammanabrolu",
"suffix": ""
},
{
"first": "Xinyu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Hancock",
"suffix": ""
},
{
"first": "Shruti",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Brent",
"middle": [],
"last": "Harrison",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Riedl",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "32",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lara Martin, Prithviraj Ammanabrolu, Xinyu Wang, William Hancock, Shruti Singh, Brent Harrison, and Mark Riedl. 2018. Event representations for auto- mated story generation with deep neural nets. Pro- ceedings of the AAAI Conference on Artificial Intel- ligence, 32(1).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Note on the sampling error of the difference between correlated proportions or percentages",
"authors": [
{
"first": "Quinn",
"middle": [],
"last": "Mcnemar",
"suffix": ""
}
],
"year": 1947,
"venue": "Psychometrika",
"volume": "12",
"issue": "2",
"pages": "153--157",
"other_ids": {
"DOI": [
"10.1007/bf02295996"
]
},
"num": null,
"urls": [],
"raw_text": "Quinn McNemar. 1947. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika, 12(2):153-157.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A corpus and cloze evaluation for deeper understanding of commonsense stories",
"authors": [
{
"first": "Nasrin",
"middle": [],
"last": "Mostafazadeh",
"suffix": ""
},
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vanderwende",
"suffix": ""
},
{
"first": "Pushmeet",
"middle": [],
"last": "Kohli",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Allen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "839--849",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1098"
]
},
"num": null,
"urls": [],
"raw_text": "Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A cor- pus and cloze evaluation for deeper understanding of commonsense stories. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 839-849, San Diego, California. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "GLUCOSE: GeneraLized and COntextualized story explanations",
"authors": [
{
"first": "Nasrin",
"middle": [],
"last": "Mostafazadeh",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Kalyanpur",
"suffix": ""
},
{
"first": "Lori",
"middle": [],
"last": "Moon",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Buchanan",
"suffix": ""
},
{
"first": "Lauren",
"middle": [],
"last": "Berkowitz",
"suffix": ""
},
{
"first": "Or",
"middle": [],
"last": "Biran",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Chu-Carroll",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "4569--4586",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.370"
]
},
"num": null,
"urls": [],
"raw_text": "Nasrin Mostafazadeh, Aditya Kalyanpur, Lori Moon, David Buchanan, Lauren Berkowitz, Or Biran, and Jennifer Chu-Carroll. 2020. GLUCOSE: GeneraL- ized and COntextualized story explanations. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4569-4586, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Evaluating theory of mind in question answering",
"authors": [
{
"first": "Aida",
"middle": [],
"last": "Nematzadeh",
"suffix": ""
},
{
"first": "Kaylee",
"middle": [],
"last": "Burns",
"suffix": ""
},
{
"first": "Erin",
"middle": [],
"last": "Grant",
"suffix": ""
},
{
"first": "Alison",
"middle": [],
"last": "Gopnik",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Griffiths",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2392--2400",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1261"
]
},
"num": null,
"urls": [],
"raw_text": "Aida Nematzadeh, Kaylee Burns, Erin Grant, Alison Gopnik, and Tom Griffiths. 2018. Evaluating theory of mind in question answering. In Proceedings of the 2018 Conference on Empirical Methods in Nat- ural Language Processing, pages 2392-2400, Brus- sels, Belgium. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Modeling event salience in narratives via barthes' cardinal functions",
"authors": [
{
"first": "Takaki",
"middle": [],
"last": "Otake",
"suffix": ""
},
{
"first": "Sho",
"middle": [],
"last": "Yokoi",
"suffix": ""
},
{
"first": "Naoya",
"middle": [],
"last": "Inoue",
"suffix": ""
},
{
"first": "Ryo",
"middle": [],
"last": "Takahashi",
"suffix": ""
},
{
"first": "Tatsuki",
"middle": [],
"last": "Kuribayashi",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1784--1794",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.160"
]
},
"num": null,
"urls": [],
"raw_text": "Takaki Otake, Sho Yokoi, Naoya Inoue, Ryo Takahashi, Tatsuki Kuribayashi, and Kentaro Inui. 2020. Mod- eling event salience in narratives via barthes' car- dinal functions. In Proceedings of the 28th Inter- national Conference on Computational Linguistics, pages 1784-1794, Barcelona, Spain (Online). Inter- national Committee on Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Modeling reportable events as turning points in narrative",
"authors": [
{
"first": "Jessica",
"middle": [],
"last": "Ouyang",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2149--2158",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1257"
]
},
"num": null,
"urls": [],
"raw_text": "Jessica Ouyang and Kathleen McKeown. 2015. Mod- eling reportable events as turning points in narrative. In Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing, pages 2149-2158, Lisbon, Portugal. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Inherent disagreements in human textual inferences",
"authors": [
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "677--694",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00293"
]
},
"num": null,
"urls": [],
"raw_text": "Ellie Pavlick and Tom Kwiatkowski. 2019. Inherent disagreements in human textual inferences. Transac- tions of the Association for Computational Linguis- tics, 7:677-694.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Narrative event boundaries, reading times, and expectation",
"authors": [
{
"first": "Kyle",
"middle": [
"A"
],
"last": "Pettijohn",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [
"A"
],
"last": "Radvansky",
"suffix": ""
}
],
"year": 2016,
"venue": "Memory & Cognition",
"volume": "44",
"issue": "7",
"pages": "1064--1075",
"other_ids": {
"DOI": [
"10.3758/s13421-016-0619-6"
]
},
"num": null,
"urls": [],
"raw_text": "Kyle A. Pettijohn and Gabriel A. Radvansky. 2016. Narrative event boundaries, reading times, and ex- pectation. Memory & Cognition, 44(7):1064-1075.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Different kinds of causality in event cognition",
"authors": [
{
"first": "Gabriel",
"middle": [
"A"
],
"last": "Radvansky",
"suffix": ""
},
{
"first": "Andrea",
"middle": [
"K"
],
"last": "Tamplin",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Armendarez",
"suffix": ""
},
{
"first": "Alexis",
"middle": [
"N"
],
"last": "Thompson",
"suffix": ""
}
],
"year": 2014,
"venue": "Discourse Processes",
"volume": "51",
"issue": "7",
"pages": "601--618",
"other_ids": {
"DOI": [
"10.1080/0163853X.2014.903366"
]
},
"num": null,
"urls": [],
"raw_text": "Gabriel A. Radvansky, Andrea K. Tamplin, Joseph Ar- mendarez, and Alexis N. Thompson. 2014. Differ- ent kinds of causality in event cognition. Discourse Processes, 51(7):601-618.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "PlotMachines: Outlineconditioned generation with dynamic plot state tracking",
"authors": [
{
"first": "Hannah",
"middle": [],
"last": "Rashkin",
"suffix": ""
},
{
"first": "Asli",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "4274--4295",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.349"
]
},
"num": null,
"urls": [],
"raw_text": "Hannah Rashkin, Asli Celikyilmaz, Yejin Choi, and Jianfeng Gao. 2020. PlotMachines: Outline- conditioned generation with dynamic plot state tracking. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 4274-4295, Online. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Event2mind: Commonsense inference on events, intents, and reactions",
"authors": [
{
"first": "Hannah",
"middle": [],
"last": "Rashkin",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Allaway",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2018,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hannah Rashkin, Maarten Sap, Emily Allaway, Noah A. Smith, and Yejin Choi. 2018. Event2mind: Commonsense inference on events, intents, and re- actions. In ACL.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Sentencebert: Sentence embeddings using siamese bertnetworks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- bert: Sentence embeddings using siamese bert- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Turing-nlg: A 17-billionparameter language model by microsoft",
"authors": [
{
"first": "Corby",
"middle": [],
"last": "Rosset",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Corby Rosset. 2020. Turing-nlg: A 17-billion- parameter language model by microsoft.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Narratology and cognitive science: A problematic relation",
"authors": [
{
"first": "Marie-Laure",
"middle": [],
"last": "Ryan",
"suffix": ""
}
],
"year": 2010,
"venue": "Style",
"volume": "44",
"issue": "4",
"pages": "469--495",
"other_ids": {
"DOI": [
"http://www.jstor.org/stable/10.5325/style.44.4.469"
]
},
"num": null,
"urls": [],
"raw_text": "Marie-Laure Ryan. 2010. Narratology and cognitive science: A problematic relation. Style, 44(4):469- 495.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Recollection versus imagination: Exploring human memory and cognition via neural language models",
"authors": [
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Horvitz",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "James",
"middle": [
"W"
],
"last": "Pennebaker",
"suffix": ""
}
],
"year": 2020,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maarten Sap, Eric Horvitz, Yejin Choi, Noah A Smith, and James W Pennebaker. 2020. Recollection ver- sus imagination: Exploring human memory and cog- nition via neural language models. In ACL.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Computational lens on cognition: Study of autobiographical versus imagined stories with largescale language models",
"authors": [
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Jafarpour",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "James",
"middle": [
"W"
],
"last": "Pennebaker",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Horvitz",
"suffix": ""
}
],
"year": 2022,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maarten Sap, Anna Jafarpour, Yejin Choi, Noah A. Smith, James W. Pennebaker, and Eric Horvitz. 2022. Computational lens on cognition: Study of autobiographical versus imagined stories with large- scale language models.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Atomic: An atlas of machine commonsense for if-then reasoning",
"authors": [
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Le Bras",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Allaway",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Lourie",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Rashkin",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "Roof",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "3027--3035",
"other_ids": {
"DOI": [
"10.1609/aaai.v33i01.33013027"
]
},
"num": null,
"urls": [],
"raw_text": "Maarten Sap, Ronan Le Bras, Emily Allaway, Chan- dra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, and Yejin Choi. 2019a. Atomic: An atlas of machine common- sense for if-then reasoning. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):3027- 3035.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Social IQa: Commonsense reasoning about social interactions",
"authors": [
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Rashkin",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Le Bras",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4463--4473",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1454"
]
},
"num": null,
"urls": [],
"raw_text": "Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019b. Social IQa: Com- monsense reasoning about social interactions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 4463- 4473, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Scripts, Plans, Goals, and Understanding",
"authors": [
{
"first": "R",
"middle": [
"C"
],
"last": "Schank",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Abelson",
"suffix": ""
}
],
"year": 1977,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R.C. Schank and R. Abelson. 1977. Scripts, Plans, Goals, and Understanding. Hillsdale, NJ: Earlbaum Assoc.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "The effect of different writing tasks on linguistic style: A case study of the roc story cloze task",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Konstas",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Zilles",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2017,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Schwartz, Maarten Sap, Ioannis Konstas, Li Zilles, Yejin Choi, and Noah A Smith. 2017. The effect of different writing tasks on linguistic style: A case study of the roc story cloze task. In CoNLL.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Literary event detection",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Sims",
"suffix": ""
},
{
"first": "Jong",
"middle": [
"Ho"
],
"last": "Park",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Bamman",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3623--3634",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1353"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Sims, Jong Ho Park, and David Bamman. 2019. Literary event detection. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 3623-3634, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Mpnet: Masked and permuted pretraining for language understanding",
"authors": [
{
"first": "Kaitao",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.09297"
]
},
"num": null,
"urls": [],
"raw_text": "Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie- Yan Liu. 2020. Mpnet: Masked and permuted pre- training for language understanding. arXiv preprint arXiv:2004.09297.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Stage salience and situational likelihood in the formation of situation models during sentence comprehension",
"authors": [
{
"first": "David",
"middle": [
"J"
],
"last": "Townsend",
"suffix": ""
}
],
"year": 2018,
"venue": "Lingua",
"volume": "206",
"issue": "",
"pages": "1--20",
"other_ids": {
"DOI": [
"10.1016/j.lingua.2018.01.002"
]
},
"num": null,
"urls": [],
"raw_text": "David J. Townsend. 2018. Stage salience and situa- tional likelihood in the formation of situation models during sentence comprehension. Lingua, 206:1-20.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "From event representation to linguistic meaning",
"authors": [
{
"first": "Ercenur",
"middle": [],
"last": "\u00dcnal",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Papafragou",
"suffix": ""
}
],
"year": 2019,
"venue": "Topics in Cognitive Science",
"volume": "13",
"issue": "1",
"pages": "224--242",
"other_ids": {
"DOI": [
"10.1111/tops.12475"
]
},
"num": null,
"urls": [],
"raw_text": "Ercenur \u00dcnal, Yue Ji, and Anna Papafragou. 2019. From event representation to linguistic meaning. Topics in Cognitive Science, 13(1):224-242.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Memory and knowledge augmented language models for inferring salience in long-form stories",
"authors": [
{
"first": "David",
"middle": [],
"last": "Wilmot",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "851--865",
"other_ids": {
"DOI": [
"10.18653/v1/2021.emnlp-main.65"
]
},
"num": null,
"urls": [],
"raw_text": "David Wilmot and Frank Keller. 2021. Memory and knowledge augmented language models for infer- ring salience in long-form stories. In Proceedings of the 2021 Conference on Empirical Methods in Nat- ural Language Processing, pages 851-865, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Explaining away: A model of affective adaptation",
"authors": [
{
"first": "Timothy",
"middle": [
"D"
],
"last": "Wilson",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"T"
],
"last": "Gilbert",
"suffix": ""
}
],
"year": 2008,
"venue": "Perspectives on Psychological Science",
"volume": "3",
"issue": "5",
"pages": "370--386",
"other_ids": {
"DOI": [
"10.1111/j.1745-6924.2008.00085.x"
],
"PMID": [
"26158955"
]
},
"num": null,
"urls": [],
"raw_text": "Timothy D. Wilson and Daniel T. Gilbert. 2008. Ex- plaining away: A model of affective adaptation. Per- spectives on Psychological Science, 3(5):370-386. PMID: 26158955.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [],
"last": "Le Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language pro- cessing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Planand-write: Towards better automatic storytelling",
"authors": [
{
"first": "Lili",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Dongyan",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2019,
"venue": "The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lili Yao, Nanyun Peng, Weischedel Ralph, Kevin Knight, Dongyan Zhao, and Rui Yan. 2019. Plan- and-write: Towards better automatic storytelling. In The Thirty-Third AAAI Conference on Artificial In- telligence (AAAI-19).",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Event perception and memory",
"authors": [
{
"first": "Jeffrey",
"middle": [
"M"
],
"last": "Zacks",
"suffix": ""
}
],
"year": 2020,
"venue": "Annual Review of Psychology",
"volume": "71",
"issue": "1",
"pages": "165--191",
"other_ids": {
"DOI": [
"10.1146/annurev-psych-010419-051101"
],
"PMID": [
"31905113"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey M. Zacks. 2020. Event perception and mem- ory. Annual Review of Psychology, 71(1):165-191. PMID: 31905113.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Event perception: a mind-brain perspective",
"authors": [
{
"first": "Jeffrey",
"middle": [
"M"
],
"last": "Zacks",
"suffix": ""
},
{
"first": "Nicole",
"middle": [
"K"
],
"last": "Speer",
"suffix": ""
},
{
"first": "Khena",
"middle": [
"M"
],
"last": "Swallow",
"suffix": ""
},
{
"first": "Todd",
"middle": [
"S"
],
"last": "Braver",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [
"R"
],
"last": "Reynolds",
"suffix": ""
}
],
"year": 2007,
"venue": "Psychological bulletin",
"volume": "133",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey M Zacks, Nicole K Speer, Khena M Swallow, Todd S Braver, and Jeremy R Reynolds. 2007. Event perception: a mind-brain perspective. Psychologi- cal bulletin, 133(2):273.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Figure 2: (Left) Our model involves a GRU to combine features from sentence pairs with three feature encoding modes, RoBERTa to consider story sentences and Event Boundary Detector to combine predictions made by the two components. S n and F n refer to sentence n and features n respectively, while P G and P R are predictions made by the GRU and RoBERTa. The output is a probability distribution over no event boundary, expected event boundary and surprising event boundary, which is used to update model parameters together with the label using the Kullback-Leibler Divergence loss function. (Right) Features (Atomic, Glucose, Realis, Sequentiality and SimGen) can be extracted as input into the GRU in three feature encoding modes: SEQUENTIAL (shown in Model Overview), ALLTOCURRENT and PREVIOUSONLY.",
"type_str": "figure"
},
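The Figure 2 caption above specifies the model's data flow precisely enough that a compact sketch may help make it concrete. The following is a minimal, hypothetical PyTorch rendering of that description, not the authors' released code: a GRU over the per-sentence-pair feature vectors (F_n), a RoBERTa encoder over the story sentences (S_n), and a detector head combining the two component predictions (P_G, P_R) into a distribution over no/expected/surprising event boundary, trained with a Kullback-Leibler divergence loss against the annotator label distribution. The feature dimension, hidden sizes, and the form of the combining head are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import RobertaModel

class EventBoundaryDetector(nn.Module):
    def __init__(self, feature_dim=5, gru_hidden=64, num_labels=3):
        super().__init__()
        # GRU over feature vectors (e.g. Atomic, Glucose, Realis,
        # Sequentiality, SimGen scores) for the preceding sentence pairs.
        self.gru = nn.GRU(feature_dim, gru_hidden, batch_first=True)
        self.gru_head = nn.Linear(gru_hidden, num_labels)  # produces P_G
        # RoBERTa over the story sentences themselves.
        self.roberta = RobertaModel.from_pretrained("roberta-base")
        self.roberta_head = nn.Linear(self.roberta.config.hidden_size, num_labels)  # produces P_R
        # Event Boundary Detector: combines the two component predictions.
        self.combine = nn.Linear(2 * num_labels, num_labels)

    def forward(self, input_ids, attention_mask, features):
        # features: (batch, num_pairs, feature_dim)
        _, h_n = self.gru(features)              # h_n: (1, batch, gru_hidden)
        p_g = self.gru_head(h_n.squeeze(0))
        out = self.roberta(input_ids=input_ids, attention_mask=attention_mask)
        p_r = self.roberta_head(out.last_hidden_state[:, 0])  # first-token summary
        return self.combine(torch.cat([p_g, p_r], dim=-1))

# Training step against a soft label distribution over the three classes,
# e.g. the proportion of annotators choosing each label:
# logits = model(input_ids, attention_mask, features)
# loss = F.kl_div(F.log_softmax(logits, dim=-1), label_dist, reduction="batchmean")
```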
"FIGREF1": {
"uris": null,
"num": null,
"text": "F1 by Event Detector (PREVIOUSONLY) against majority agreement, on all 10 folds. * means that Pearson's r is significant at p < 0.05 and ** at p < 0.001.",
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"num": null,
"text": "Feature weights towards each label in GRU component of Event Detector (PREVIOUSONLY)",
"type_str": "figure"
},
"TABREF1": {
"type_str": "table",
"num": null,
"text": "",
"html": null,
"content": "
: Descriptive Statistics for Event-annotated sen-tences. Majority label refers to the most common an-notation of a sample from 8 independent annotators. If there is a tie between 2 labels, it is categorized as tied. Majority agreement is the proportion of sample annota-tions for the majority label. |
"
},
"TABREF3": {
"type_str": "table",
"num": null,
"text": "",
"html": null,
"content": ": Event detection task: Performance of Event Detector compared to baseline model. *: overall F1 sig-nificant different from RoBERTa based on McNemar's test (p <0.05) (McNemar, 1947) |
"
},
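The significance marker in the table above relies on McNemar's (1947) test, which compares two classifiers scored on the same test items using only the discordant pairs. A minimal sketch of the exact form of the test, with hypothetical counts rather than the paper's numbers:

```python
# Exact McNemar test: b = items only classifier A labels correctly,
# c = items only classifier B labels correctly (hypothetical counts).
# Under H0 (equal error rates), discordant pairs split 50/50, so the
# test reduces to a two-sided binomial test on b out of b + c.
from scipy.stats import binomtest

b, c = 30, 12
p_value = binomtest(b, b + c, 0.5).pvalue
print(f"McNemar exact p-value: {p_value:.4f}")
```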
"TABREF5": {
"type_str": "table",
"num": null,
"text": "ROC Story Cloze Test Performance of Event Detector on ROC Story Cloze Test Our commonsense and narrative features do not seem to significantly improve upon RoBERTa's performance in the ROC Story Cloze Test (+0.2% F1), as observed in",
"html": null,
"content": ""
}
}
}
}