{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:03:01.061643Z"
},
"title": "GisPy: A Tool for Measuring Gist Inference Score in Text",
"authors": [
{
"first": "Pedram",
"middle": [],
"last": "Hosseini",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The George Washington University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Christopher",
"middle": [
"R"
],
"last": "Wolfe",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Miami University",
"location": {
"addrLine": "3 Meta AI"
}
},
"email": "[email protected]"
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The George Washington University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "David",
"middle": [
"A"
],
"last": "Broniatowski",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The George Washington University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Decision making theories such as Fuzzy-Trace Theory (FTT) suggest that individuals tend to rely on gist, or bottom-line meaning, in the text when making decisions. In this work, we delineate the process of developing GisPy, an opensource tool in Python for measuring the Gist Inference Score (GIS) in text. Evaluation of GisPy on documents in three benchmarks from the news and scientific text domains demonstrates that scores generated by our tool significantly distinguish low vs. high gist documents.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Decision making theories such as Fuzzy-Trace Theory (FTT) suggest that individuals tend to rely on gist, or bottom-line meaning, in the text when making decisions. In this work, we delineate the process of developing GisPy, an opensource tool in Python for measuring the Gist Inference Score (GIS) in text. Evaluation of GisPy on documents in three benchmarks from the news and scientific text domains demonstrates that scores generated by our tool significantly distinguish low vs. high gist documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "According to Fuzzy-Trace Theory (FTT) (Reyna, 2008 (Reyna, , 2012 , when individuals read text, they encode multiple mental representations of the text in parallel in their mind. These mental representations vary along a continuum ranging from 1) gist to 2) verbatim. While verbatim representations are related to surface-level information, gist represents the bottom-line meaning of the text, given its context. FTT sees the word gist in much the same way as everyday usage, as the essence or main part, the substance or pith of a matter. Gist representations are important to assess because they influence judgments and decision making more than verbatim representations (Reyna, 2021) . Knowing gist helps us measure the capability of a document (e.g., news article, social media post, etc.) in creating a clear and actionable mental representation in readers' mind and the degree to which a document can communicate its message.",
"cite_spans": [
{
"start": 38,
"end": 50,
"text": "(Reyna, 2008",
"ref_id": "BIBREF17"
},
{
"start": 51,
"end": 65,
"text": "(Reyna, , 2012",
"ref_id": "BIBREF18"
},
{
"start": 673,
"end": 686,
"text": "(Reyna, 2021)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The majority of existing Natural Language Processing (NLP) tools and models focus on measuring coherence, cohesion, and readability in text (Graesser et al., 2004; Lapata et al., 2005; Lin et al., 2011; Crossley et al., 2016; Liu et al., 2020; Laban et al., 2021; Duari and Bhatnagar, 2021) . It is worth mentioning that even though coherence promotes gist extraction, these two are not the same. And gist can be viewed as a mechanism that allows coherence apprehension (Glanemann et al., 2016) . To the best of our knowledge, there is no publicly available tool for directly measuring gist in text. Wolfe et al. (2019) ; Dandignac and Wolfe (2020) ; Wolfe et al. (2021) are the only studies that introduced a theoretically motivated method to measure Gist Inference Score (GIS) using a subset of Coh-Metrix indices. Coh-Metrix (Graesser et al., 2004) is a tool for producing linguistic and discourse representations of a text including measures of cohesion and readability. Coh-Metrix, even though useful and inspiring, has several limitations. For example, its public version does not allow batch processing of documents, is only available via a web interface, and its cohesion indices focus on local and overall cohesion (Crossley et al., 2016) . In this work, inspired by Wolfe et al. (2019) and definition of a subset of indices in Coh-Metrix, we develop a new open-source tool to automatically compute GIS for a collection of text documents.",
"cite_spans": [
{
"start": 140,
"end": 163,
"text": "(Graesser et al., 2004;",
"ref_id": "BIBREF5"
},
{
"start": 164,
"end": 184,
"text": "Lapata et al., 2005;",
"ref_id": "BIBREF8"
},
{
"start": 185,
"end": 202,
"text": "Lin et al., 2011;",
"ref_id": "BIBREF9"
},
{
"start": 203,
"end": 225,
"text": "Crossley et al., 2016;",
"ref_id": "BIBREF1"
},
{
"start": 226,
"end": 243,
"text": "Liu et al., 2020;",
"ref_id": "BIBREF10"
},
{
"start": 244,
"end": 263,
"text": "Laban et al., 2021;",
"ref_id": "BIBREF7"
},
{
"start": 264,
"end": 290,
"text": "Duari and Bhatnagar, 2021)",
"ref_id": "BIBREF3"
},
{
"start": 470,
"end": 494,
"text": "(Glanemann et al., 2016)",
"ref_id": "BIBREF4"
},
{
"start": 600,
"end": 619,
"text": "Wolfe et al. (2019)",
"ref_id": "BIBREF21"
},
{
"start": 622,
"end": 648,
"text": "Dandignac and Wolfe (2020)",
"ref_id": "BIBREF2"
},
{
"start": 651,
"end": 670,
"text": "Wolfe et al. (2021)",
"ref_id": "BIBREF22"
},
{
"start": 817,
"end": 851,
"text": "Coh-Metrix (Graesser et al., 2004)",
"ref_id": null
},
{
"start": 1224,
"end": 1247,
"text": "(Crossley et al., 2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We leverage the state-of-the-art NLP tools and models such as contextual language model embeddings to further improve the quality of indices in our tool.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions can be summarized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We introduce the first open-source and publicly available tool to measure Gist Inference Score in text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We unify and standardize three benchmarks for measuring gist in text and report improved baselines on these benchmarks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 By leveraging the explainability of indices in our tool, we investigate the role of individual indices in producing GIS for low vs. high gist documents across benchmarks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we explain how we implement each of the indices in GisPy and compute GIS. We start by explaining common implementation features among indices followed by specific details about each of them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2"
},
{
"text": "We have taken different approaches in implementing indices for which we need to compute the overlap between words or sentences (e.g., semantic similarity). In particular, these indices are computed in two settings: 1) local and 2) global. In the local setting, we only take into account consecutive/adjacent words/sentences whereas in the global setting, we consider all pairs not just consecutive ones. Moreover, we compute indices one time by separating the paragraphs in text and another time by disregarding the paragraph boundaries. For clarity, we use postfixes listed in Table 1 for these variations.",
"cite_spans": [],
"ref_spans": [
{
"start": 578,
"end": 585,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Local vs. Global Indices",
"sec_num": "2.1"
},
{
"text": "Local ignoring paragraph boundary * _a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Postfix Explanation * _1",
"sec_num": null
},
{
"text": "Global ignoring paragraph boundary * _1p",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Postfix Explanation * _1",
"sec_num": null
},
{
"text": "Local at paragraph-level * _ap",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Postfix Explanation * _1",
"sec_num": null
},
{
"text": "Global at paragraph-level We assume every document is broken into paragraphs {P 0 , P 1 , ..., P n }, separated by at least one newline character, each with one or more sentences {S 0,0 , S 0,1 , ..., S i,j } where each sentence has one or more tokens {t 0,0,0 , t 0,0,1 , ..., t i,j,k }. As an example, for a document with two paragraphs each with two and three sentences, respectively:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Postfix Explanation * _1",
"sec_num": null
},
{
"text": "P 0 \u2192 {S 0,0 , S 0,1 } P 1 \u2192 {S 1,0 , S 1,1 , S 1,2 }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Postfix Explanation * _1",
"sec_num": null
},
{
"text": "Where S i,j is the j th sentence of paragraph i, this is how we compute local and global versions of index X -assuming X measures the similarity among sentences and similarity is computed by \u2295:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Postfix Explanation * _1",
"sec_num": null
},
{
"text": "X_1 = mean(S 0,0 \u2295 S 0,1 , S 0,1 \u2295 S 1,0 , S 1,0 \u2295 S 1,1 , S 1,1 \u2295 S 1,2 ) X_a = mean(S 0,0 \u2295 S 0,1 , S 0,0 \u2295 S 1,0 , S 0,0 \u2295 S 1,1 , S 0,0 \u2295 S 1,2 , S 0,1 \u2295 S 1,0 , S 0,1 \u2295 S 1,1 , S 0,1 \u2295 S 1,2 , S 1,0 \u2295 S 1,1 , S 1,0 \u2295 S 1,2 , S 1,1 \u2295 S 1,2 ) X_1p = mean(S 0,0 \u2295 S 0,1 , S 1,0 \u2295 S 1,1 , S 1,1 \u2295 S 1,2 ) X_ap = mean(S 0,0 \u2295 S 0,1 , S 1,0 \u2295 S 1,1 , S 1,0 \u2295 S 1,2 , S 1,1 \u2295 S 1,2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Postfix Explanation * _1",
"sec_num": null
},
{
"text": "Referential Cohesion: This index (PCREFz in Coh-Metrix 1 ) reflects the overlap of words and ideas across sentences and the entire text. To measure this overlap, we leverage the Sentence Transformers (Reimers and Gurevych, 2019) 2 to compute the embeddings of all sentences in a document using the all-mpnet-base-v2 model. 3 We chose this model since it provides the best quality and has the highest average performance among all the other models introduced by Reimers and Gurevych (2019). Once we computed the embeddings, to measure the overlap across all sentences, we find the cosine similarity between embeddings of every pair of sentences one time at paragraph-level and another time with ignoring the paragraph boundaries. This process We additionally implement a new index based on coreference resolution in paragraphs in a document. In particular, using Stanford CoreNLP's coreference tagger (Manning et al., 2014) through Stanza's wrapper (Qi et al., 2020) , we first find the number of coreference chains (corefChain) to the number of sentences in each paragraph. Then we compute the mean value of all paragraphs as our index and call it CoREF. Deep Cohesion: This dimension reflects the degree to which a text contains causal and intentional connectives. To find the incidence of causal connectives, we first created a list of causal markers in text. In particular, using the intra-and inter-sentence causal cues introduced by Luo et al. 2016, we manually generated a list of regular expression patterns and used these patterns to find the causal connectives in a document. Then we computed the total number of causal connectives to the number of sentences in the document as deep cohesion score. We call this index PCDC. Verb Overlap: Based on FTT, abstract rather than concrete verb overlap across a text might help readers construct gist situation models. Wolfe et al. 
(2019) use two indices from Coh-Metrix to measure the verb overlaps in text including SMCAUSlsa and SMCAUSwn. Inspired by Coh-Metrix, we make some changes to further improve these indices. In particular, instead of Latent Semantic Analysis (LSA) vectors, we leverage contextualized Pretrained Language Models (PLMs) to get token vector embeddings to later compute the cosine similarity among verbs. Our hypothesis is that since PLMs have encoded contextual knowledge of words in a text, they may be a better choice than LSA for computing the vector representation of verbs in the text. We use spaCy's 4 transformer-based pipeline and the en_core_web_trf model -which is based on roberta-base (Liu et al., 2019)-to compute token vector embeddings and find Part-of-speech (POS) tags. Different forms of this index in GisPy follow the name pattern SMCAUSe_ * where e stands for language model embedding.",
"cite_spans": [
{
"start": 323,
"end": 324,
"text": "3",
"ref_id": null
},
{
"start": 948,
"end": 965,
"text": "(Qi et al., 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GisPy Indices Implementation",
"sec_num": "2.2"
},
{
"text": "To compute the WordNet verb overlap, we first find all synonym sets of verbs in a document in WordNet with POS tag VERB. Then for every pair of verbs, we check whether they belong to the same synonym set in WordNet or not. If yes, we assign score 1 to the verb pair, 0, otherwise. Then we compute the average of 1s to the total number of sentences. Different implementations of this index follow the name pattern SMCAUSwn_ * . Word Concreteness and Imageability: To compute word concreteness and imageability (PCCNC and WRDIMGc in Coh-Metrix) we use two different resources including 1) MRC Psycholinguistic Database Version 2 (Wilson, 1988) , a resource that is used by Coh-Metrix and 2) word concreteness and imageability prediction scores using a supervised method introduced by Ljube\u0161i\u0107 et al. (2018). 5 In each document, first we search tokens in these two resources based on their POS tags. Then we compute the average concreteness and imageability scores of all tokens in the document as the final scores. This process results in four scores in total named: PCCNC_mrc, WRDIMGc_mrc, PCCNC_megahr, WRDIMGc_megahr (two scores for each resource). Hypernymy Nouns & Verbs: This index shows the specificity of a word in a hierarchy. The idea is that words with more levels of hierarchy are less likely to help readers form gist inference than words with fewer levels (Wolfe et al., 2019). To compute this index, we first list all Nouns and Verbs in a document. Then for each word in the list, we find all synonym sets in the WordNet with the same part of speech tag (Noun or Verb). And, we compute the average hypernym path length of all synonym sets of a word. The reason we find all synonym sets of a word instead of only one is that every word can have more than one synonym sets with the same part of speech and there is no way to know which synonym set has the same meaning as the word in the document. 
As future work, it would be interesting to see how we can find the synonym set that is closest in meaning to a word in context.",
"cite_spans": [
{
"start": 627,
"end": 641,
"text": "(Wilson, 1988)",
"ref_id": "BIBREF20"
},
{
"start": 806,
"end": 807,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GisPy Indices Implementation",
"sec_num": "2.2"
},
{
"text": "Since indices can be on different scales, after computing all indices and before computing GIS which is a linear combination of these indices, we normalize all indices by converting them to z-scores. Then using the formula shown in Figure 2 , we compute the final GIS for every document. 6 Documents with scores greater than zero in the positive direction have higher, and smaller scores than zero in the negative direction have lower levels of gist, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 232,
"end": 240,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Computing GIS",
"sec_num": "2.3"
},
{
"text": "To test whether GisPy can correctly group and measure the level of gist in documents, we run our tool on a collection of datasets with known gist levels -low or high. We selected three benchmarks including two introduced by Wolfe et al. 2019and one introduced by Broniatowski et al. 2016to test the quality of scores in our tool. We give more detail about these benchmarks in the following subsections. Before running GisPy, we also run Coh-Metrix on each dataset and compute GIS using the original Coh-Metrix indices. Our goal for doing so is to: 1) make sure we have a reliable gold standard that we can compare GisPy scores with and 2) reproduce the results from Wolfe et al. (2019). Once we computed the GIS score using GisPy, to compare low vs. high gist groups, we compare the mean of their GIS scores. Moreover, we run a Student's t-test with the null hypothesis that there is no difference between the two groups in terms of the level of gist. The goal of running the t-test is to see whether our scores can significantly distinguish groups with lower and higher levels of gist.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "Also, since for five indices including Referential Cohesion, Verb Overlap based on Embeddings, Verb Overlap using WordNet, Concreteness, and Imageability we have multiple implementations, we compute the final GIS based on all possible combinations of these indices (320 sets of indices for each benchmark). Our goal is to find out what implementation of each index contributes better to distinguishing low vs. high gist documents. In a separate analysis, we also run two robustness tests to ensure our results are not biased by seeing all possible combinations of indices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "This benchmark includes 50 documents in two groups including 1) News Reports and 2) Editorials. Based on Wolfe et al. 2019, compared to News Editorials that provide a more coherent narrative, Reports are more focused on facts. As a result, News Reports tend to have a lower level of gist than Editorials.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "News Reports vs. Editorials",
"sec_num": "3.1.1"
},
{
"text": "This benchmark includes 25 pairs of Methods and Discussion sections (total 50 text documents) from the same peer-reviewed scientific psychology journal articles. Based on Wolfe et al. 2019, while Methods section provides enough detail so that results of an article could be replicated, the Discussion section emphasizes interpretation of results. Hence, Discussion section should produce a higher gist score than Methods. This approach also controls for a number of variables such as author, journal, and topic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Journal Article Methods vs. Discussion",
"sec_num": "3.1.2"
},
{
"text": "Disneyland Measles Outbreak Data introduced by Broniatowski et al. (2016) also annotates gist. Documents in this dataset are articles (e.g., news) that are manually annotated by Amazon Mechanical Turk. There are a total of 191 articles with gist annotation among which there are Gist-Yes: 147, Gist-No: 38, and unsure: 6 gist labels. We leave out the unsure labels. Since full text of articles in this dataset were not available and each article only had a URL associated with it, we retrieved the full texts using the provided URLs. For those URLs that were no longer available, we used Wayback Machine to find the most recent image of the URL. At the end, we manually cleaned all articles and fixed the paragraph boundaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disneyland Measles Outbreak Data",
"sec_num": "3.1.3"
},
{
"text": "Results of running GisPy on three benchmarks are shown in Tables 2, 3, and 4. For each benchmark, we listed the top 10 combinations that most significantly distinguish low vs. high gist documents. As can be seen, for indices that we have paragraph-level vs. non-paragraph-level implementations, in the majority of cases, paragraph-level indices achieve better results. We do not necessarily observe a strong difference between local vs. global implementations. Also, for concreteness and imageability indices, almost all the time we see better performance when we use megahr scores by Ljube\u0161i\u0107 et al. (2018) . We leveraged megahr as a replacement for MRC that was originally used by Coh-Metrix.",
"cite_spans": [
{
"start": 585,
"end": 607,
"text": "Ljube\u0161i\u0107 et al. (2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "Comparisons of individuals indices for low vs. high gist documents from the best combination on each benchmark are shown in Figures 3, 4 , and 5. In Table 5 , we also report the comparison of our best results on each benchmark with two other implementations including 1) GIS computed based on the indices of Coh-Metrix, and 2) GIS reported by Wolfe et al. (2019) (For Disney, since we are the first to create a gist benchmark and report a baseline on this dataset, there are no other baselines). As can be seen, on the Reports vs. Editorials and Methods vs. Discussion we achieved performance on par with and slightly better than Coh-Metrix. And we achieved a significantly better distinguishment of low vs. high gist documents than what was reported by Wolfe et al. (2019). And on Disney, GisPy significantly outperformed Coh-Metrix. These results show that we not only could replicate GIS indices, but in contrast to Coh-Metrix, we im- We hope this implementation transparency helps further improvement of these indices.",
"cite_spans": [],
"ref_spans": [
{
"start": 124,
"end": 136,
"text": "Figures 3, 4",
"ref_id": "FIGREF2"
},
{
"start": 149,
"end": 156,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "We did further testing to see whether our results are robust and generalize across all three benchmarks from the news and scientific text genres. Test 1: First, out of all combinations of indices, we separated those that significantly distinguished low and high gist groups in each benchmark resulting in 38, 281, 110 combinations for Report vs. Editorials, Methods vs. Discussion, and Disney benchmarks, respectively. We noticed that all combinations that are statistically significant in terms of t-test in Reports vs. Editorials benchmark are also statistically significant in the other two benchmarks. In other words, there are 38 different combinations of indices that significantly distinguish low and high gist documents in all benchmarks. This confirms the robustness of indices implementation and their generalization across the three benchmarks. Test 2: Second, we ran an extra experiment to ensure our best GIS scores on each benchmark are also robust when we do not know all possible combinations of indices to pick the best one. In particular, for each benchmark, using three different random seeds, we randomly split texts into a train and a test set each with balanced number of low and high gist documents. Then we computed GIS for documents in the train set and chose the best combination of indices that achieved the largest GIS distance between low and high gist groups. Then using that combination we computed GIS for documents in the test set. Results are reported in Tables 6. As can be seen in the table, in all three benchmarks, the best indices combination on the train set also significantly distinguished the low and high gist documents in the test set. This further confirms that our GisPy indices are also robust when tested on unseen documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Testing Robustness",
"sec_num": "4.1"
},
{
"text": "We also analyzed the individual indices from best combinations on the train set in robustness test 2. These combinations are listed in Table 7 . We noticed that for PCREF and SMCAUSe, in %83 of the experiments, zPCREF_ap and zSMCAUSe_1p are part of the best combination. Also, for these two indices, in %89 of the times we obtained a better result using paragraph-level implementations than when we ignore paragraph boundaries. In other words, we obtain a better result by computing referential cohesion and semantic verb overlap using word embeddings at paragraph-level most of the time. For PCCNC and WRDIMGc, in all experiments with the exception of only one case only for WRDIMGc, scores computed by megahr achieved the best performance. And finally for SMCAUSwn, in %67 of the experiments, the *_a implementation resulted in a better distinguishment between low and high gist documents than the local (*_1) implementation. Also, in only two experiments the paragraph-level SMCAUSwn worked better than its non-paragraph-level implementation. Additionally, we dug a little deeper to understand why there is a difference between local vs. global SMCAUSwn across benchmarks. We noticed that the local indices only perform better in the Methods vs. Discussion dataset. So we took a closer look to understand why this is the case. Interestingly, when we computed the ratio of the number of sentences to the number of paragraphs for all benchmarks, we observed that ratios for Reports vs. Editorials and Disney benchmarks, where global indices achieve a better performance, are 1.89 and 2.04, respectively. And for Methods vs. Discussion where local indices perform better, the ratio is 6.48 which is significantly greater than the other two benchmarks. This may suggest that the density of paragraphs in terms of the number of sentences in each paragraph is one factor we need to keep in mind when selecting what implementation we want to choose for a benchmark. 
It would be interesting to run this analysis on more documents to see how our observation generalizes across different datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 135,
"end": 142,
"text": "Table 7",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Testing Robustness",
"sec_num": "4.1"
},
{
"text": "Despite achieving significant improvements and solid results from robustness tests on three benchmarks from two domains, there is still great room to further improve the quality of GisPy indices. In this section, we list challenges in the current implementation of GisPy and explain what we think can be a proper next step and direction in addressing them. We hope these insights inspire the community to keep working on this exciting line of research. We did our best to bring three different benchmarks for measuring gist inference score to life by aggregating, standardizing, and making them very easy to use. However, since measuring gist is a relatively newer and less investigated topic compared to readability, coherence, or cohesion, there is still a need for having higher quality benchmarks from different domains. The benchmarks we have tested our tool with are mainly from the news and scientific text domains. It would be interesting to see how our tool can be tuned on not only more documents from these domains but also other genres of text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Next Steps and Future Work",
"sec_num": "5"
},
{
"text": "Also, our PCDC index, even though based on strong causal connective markers, mainly covers the explicit causal relations while not all causal relations are expressed explicitly in text. It would be interesting to think how we can enhance the quality of this index by also including implicit relations and disambiguating causal connectives that can also be non-causal (e.g., temporal markers such as since or after) or leveraging discourse parsers such as DiscoPy (Knaebel, 2021).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Next Steps and Future Work",
"sec_num": "5"
},
{
"text": "We initially hypothesized that utilizing coreference resolution chains (CoREF index) may also help us improve the referential cohesion index. By looking at the most significant combinations of indices in each benchmark, we noticed that CoREF appeared in 0/38, 53/281, 1/110 combinations for Report vs. Editorials, Methods vs. Discussion, and Disney benchmarks, respectively. As a followup, it would be interesting to see how coreference resolution can be leveraged in a different wayindividually or in combination with other implementations of referential cohesion-to further improve this index.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Next Steps and Future Work",
"sec_num": "5"
},
{
"text": "In this work, we introduced GisPy, a new opensource tool for measuring Gist Inference Score (GIS) in text. Evaluation of GisPy and robustness tests on three different benchmarks of low and high gist documents demonstrate that our tool can significantly distinguish documents with different levels of gist. We hope making GisPy publicly available inspires the research community to further improve indices of measuring gist inference in text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "To make comparison of our indices with Coh-Metrix easier, we mainly follow Coh-Metrix indices' names when naming our indices.2 https://github.com/UKPLab/ sentence-transformers 3 Model is available on HuggingFace hub by the name: sentence-transformers/all-mpnet-base-v2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://spacy.io/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/clarinsi/ megahr-crossling",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "To enable computation of weighted combination of indices or calculating GIS in a different way (e.g., by removing some indices,) we have defined a weight variable for each index that can be easily modified and multiplied by its associated index.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Effective vaccine communication during the disneyland measles outbreak",
"authors": [
{
"first": "Karen",
"middle": [
"M"
],
"last": "David A Broniatowski",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Hilyard",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2016,
"venue": "Vaccine",
"volume": "34",
"issue": "28",
"pages": "3225--3228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David A Broniatowski, Karen M Hilyard, and Mark Dredze. 2016. Effective vaccine communication during the disneyland measles outbreak. Vaccine, 34(28):3225-3228.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The tool for the automatic analysis of text cohesion (taaco): Automatic assessment of local, global, and text cohesion",
"authors": [
{
"first": "Scott",
"middle": [
"A"
],
"last": "Crossley",
"suffix": ""
},
{
"first": "Kristopher",
"middle": [],
"last": "Kyle",
"suffix": ""
},
{
"first": "Danielle",
"middle": [
"S"
],
"last": "McNamara",
"suffix": ""
}
],
"year": 2016,
"venue": "Behavior research methods",
"volume": "48",
"issue": "4",
"pages": "1227--1237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott A Crossley, Kristopher Kyle, and Danielle S Mc- Namara. 2016. The tool for the automatic analysis of text cohesion (taaco): Automatic assessment of local, global, and text cohesion. Behavior research methods, 48(4):1227-1237.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Gist inference scores predict gist memory for authentic patient education cancer texts",
"authors": [
{
"first": "Mitchell",
"middle": [],
"last": "Dandignac",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"R"
],
"last": "Wolfe",
"suffix": ""
}
],
"year": 2020,
"venue": "Patient Education and Counseling",
"volume": "103",
"issue": "8",
"pages": "1562--1567",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell Dandignac and Christopher R Wolfe. 2020. Gist inference scores predict gist memory for authen- tic patient education cancer texts. Patient Education and Counseling, 103(8):1562-1567.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Ffcd: A fast-and-frugal coherence detection method",
"authors": [
{
"first": "Swagata",
"middle": [],
"last": "Duari",
"suffix": ""
},
{
"first": "Vasudha",
"middle": [],
"last": "Bhatnagar",
"suffix": ""
}
],
"year": 2021,
"venue": "IEEE Access",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Swagata Duari and Vasudha Bhatnagar. 2021. Ffcd: A fast-and-frugal coherence detection method. IEEE Access.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Rapid apprehension of the coherence of action scenes",
"authors": [
{
"first": "Reinhild",
"middle": [],
"last": "Glanemann",
"suffix": ""
},
{
"first": "Pienie",
"middle": [],
"last": "Zwitserlood",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "B\u00f6lte",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Dobel",
"suffix": ""
}
],
"year": 2016,
"venue": "Psychonomic bulletin & review",
"volume": "23",
"issue": "5",
"pages": "1566--1575",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reinhild Glanemann, Pienie Zwitserlood, Jens B\u00f6lte, and Christian Dobel. 2016. Rapid apprehension of the coherence of action scenes. Psychonomic bulletin & review, 23(5):1566-1575.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Coh-metrix: Analysis of text on cohesion and language",
"authors": [
{
"first": "Arthur",
"middle": [
"C"
],
"last": "Graesser",
"suffix": ""
},
{
"first": "Danielle",
"middle": [
"S"
],
"last": "McNamara",
"suffix": ""
},
{
"first": "Max",
"middle": [
"M"
],
"last": "Louwerse",
"suffix": ""
},
{
"first": "Zhiqiang",
"middle": [],
"last": "Cai",
"suffix": ""
}
],
"year": 2004,
"venue": "Behavior research methods, instruments, & computers",
"volume": "36",
"issue": "2",
"pages": "193--202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arthur C Graesser, Danielle S McNamara, Max M Louwerse, and Zhiqiang Cai. 2004. Coh-metrix: Analysis of text on cohesion and language. Be- havior research methods, instruments, & computers, 36(2):193-202.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "discopy: A neural system for shallow discourse parsing",
"authors": [
{
"first": "Ren\u00e9",
"middle": [],
"last": "Knaebel",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2nd Workshop on Computational Approaches to Discourse",
"volume": "",
"issue": "",
"pages": "128--133",
"other_ids": {
"DOI": [
"10.18653/v1/2021.codi-main.12"
]
},
"num": null,
"urls": [],
"raw_text": "Ren\u00e9 Knaebel. 2021. discopy: A neural system for shallow discourse parsing. In Proceedings of the 2nd Workshop on Computational Approaches to Dis- course, pages 128-133, Punta Cana, Dominican Re- public and Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Can transformer models measure coherence in text: Re-thinking the shuffle test",
"authors": [
{
"first": "Philippe",
"middle": [],
"last": "Laban",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Lucas",
"middle": [],
"last": "Bandarkar",
"suffix": ""
},
{
"first": "Marti",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "1058--1064",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philippe Laban, Luke Dai, Lucas Bandarkar, and Marti A Hearst. 2021. Can transformer models mea- sure coherence in text: Re-thinking the shuffle test. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Lan- guage Processing (Volume 2: Short Papers), pages 1058-1064.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automatic evaluation of text coherence: Models and representations",
"authors": [
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2005,
"venue": "IJCAI",
"volume": "5",
"issue": "",
"pages": "1085--1090",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mirella Lapata, Regina Barzilay, et al. 2005. Automatic evaluation of text coherence: Models and represen- tations. In IJCAI, volume 5, pages 1085-1090. Cite- seer.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automatically evaluating text coherence using discourse relations",
"authors": [
{
"first": "Ziheng",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "997--1006",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2011. Au- tomatically evaluating text coherence using discourse relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Hu- man Language Technologies, pages 997-1006, Port- land, Oregon, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Evaluating text coherence at sentence and paragraph levels",
"authors": [
{
"first": "Sennan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shuang",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "1695--1703",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sennan Liu, Shuang Zeng, and Sujian Li. 2020. Evalu- ating text coherence at sentence and paragraph levels. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 1695-1703.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Predicting concreteness and imageability of words within and across languages via word embeddings",
"authors": [
{
"first": "Nikola",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
},
{
"first": "Darja",
"middle": [],
"last": "Fi\u0161er",
"suffix": ""
},
{
"first": "Anita",
"middle": [],
"last": "Peti-Stanti\u0107",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of The Third Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "217--222",
"other_ids": {
"DOI": [
"10.18653/v1/W18-3028"
]
},
"num": null,
"urls": [],
"raw_text": "Nikola Ljube\u0161i\u0107, Darja Fi\u0161er, and Anita Peti-Stanti\u0107. 2018. Predicting concreteness and imageability of words within and across languages via word embed- dings. In Proceedings of The Third Workshop on Representation Learning for NLP, pages 217-222, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Commonsense causal reasoning between short texts",
"authors": [
{
"first": "Zhiyi",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Yuchen",
"middle": [],
"last": "Sha",
"suffix": ""
},
{
"first": "Kenny",
"middle": [
"Q"
],
"last": "Zhu",
"suffix": ""
},
{
"first": "Seung-won",
"middle": [],
"last": "Hwang",
"suffix": ""
},
{
"first": "Zhongyuan",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "Fifteenth International Conference on the Principles of Knowledge Representation and Reasoning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiyi Luo, Yuchen Sha, Kenny Q Zhu, Seung-won Hwang, and Zhongyuan Wang. 2016. Commonsense causal reasoning between short texts. In Fifteenth International Conference on the Principles of Knowl- edge Representation and Reasoning.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The stanford corenlp natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "McClosky",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David Mc- Closky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguis- tics: system demonstrations, pages 55-60.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Stanza: A python natural language processing toolkit for many human languages",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yuhui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Bolton",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.07082"
]
},
"num": null,
"urls": [],
"raw_text": "Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D Manning. 2020. Stanza: A python natural language processing toolkit for many human languages. arXiv preprint arXiv:2003.07082.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Sentence-bert: Sentence embeddings using siamese bert-networks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A theory of medical decision making and health: fuzzy trace theory",
"authors": [
{
"first": "Valerie",
"middle": [
"F"
],
"last": "Reyna",
"suffix": ""
}
],
"year": 2008,
"venue": "Medical decision making",
"volume": "28",
"issue": "6",
"pages": "850--865",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valerie F Reyna. 2008. A theory of medical decision making and health: fuzzy trace theory. Medical deci- sion making, 28(6):850-865.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A new intuitionism: Meaning, memory, and development in fuzzy-trace theory",
"authors": [
{
"first": "Valerie",
"middle": [
"F"
],
"last": "Reyna",
"suffix": ""
}
],
"year": 2012,
"venue": "Judgment and Decision making",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valerie F Reyna. 2012. A new intuitionism: Mean- ing, memory, and development in fuzzy-trace theory. Judgment and Decision making.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A scientific theory of gist communication and misinformation resistance, with implications for health, education, and policy",
"authors": [
{
"first": "Valerie",
"middle": [
"F"
],
"last": "Reyna",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the National Academy of Sciences",
"volume": "118",
"issue": "15",
"pages": "e1912441117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valerie F Reyna. 2021. A scientific theory of gist com- munication and misinformation resistance, with im- plications for health, education, and policy. Pro- ceedings of the National Academy of Sciences, 118(15):e1912441117.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Mrc psycholinguistic database: Machine-usable dictionary, version 2.00",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 1988,
"venue": "Behavior research methods, instruments, & computers",
"volume": "20",
"issue": "1",
"pages": "6--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Wilson. 1988. Mrc psycholinguistic database: Machine-usable dictionary, version 2.00. Behavior research methods, instruments, & computers, 20(1):6- 10.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A theoretically motivated method for automatically evaluating texts for gist inferences",
"authors": [
{
"first": "Christopher",
"middle": [
"R"
],
"last": "Wolfe",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Dandignac",
"suffix": ""
},
{
"first": "Valerie",
"middle": [
"F"
],
"last": "Reyna",
"suffix": ""
}
],
"year": 2019,
"venue": "Behavior research methods",
"volume": "51",
"issue": "6",
"pages": "2419--2437",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher R Wolfe, Mitchell Dandignac, and Valerie F Reyna. 2019. A theoretically motivated method for automatically evaluating texts for gist inferences. Be- havior research methods, 51(6):2419-2437.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Gist inference scores predict cloze comprehension \"in your own words\" for native, not esl readers",
"authors": [
{
"first": "Christopher",
"middle": [
"R"
],
"last": "Wolfe",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Dandignac",
"suffix": ""
},
{
"first": "Cynthia",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Savannah",
"middle": [
"R"
],
"last": "Lowe",
"suffix": ""
}
],
"year": 2021,
"venue": "Health Communication",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher R Wolfe, Mitchell Dandignac, Cynthia Wang, and Savannah R Lowe. 2021. Gist inference scores predict cloze comprehension \"in your own words\" for native, not esl readers. Health Communi- cation, pages 1-8.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Overview of GisPy pipeline. e 1 , ..., e n are contextual embedding of tokens in a sentence.",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": "Gist Inference Score (GIS) formula by Wolfe et al. (2019) results in four indices of referential cohesion including: PCREF_1, PCREF_a, PCREF_1p, PCREF_ap.",
"type_str": "figure",
"num": null
},
"FIGREF2": {
"uris": null,
"text": "Indices of best GIS on Reports (Low Gist) vs. Editorials (High Gist). All values are z-scores.",
"type_str": "figure",
"num": null
},
"FIGREF3": {
"uris": null,
"text": "Indices of best GIS on Methods (Low Gist) vs. Discussion (High Gist). All values are z-scores.",
"type_str": "figure",
"num": null
},
"FIGREF4": {
"uris": null,
"text": "Table 3: Top 10 GIS scores computed for Methods (Low Gist) vs. Discussion (High Gist). ap: all pairs at paragraph-level, 1p: only consecutive/adjacent pairs at paragraph-level, a: all pairs in entire document, 1: only consecutive/adjacent pairs in entire document. * significant p-value (p \u2264 0.05). Indices of best GIS on Disney Gist=No (Low Gist) vs. Gist=Yes (High Gist). All values are z-scores.",
"type_str": "figure",
"num": null
},
"TABREF0": {
"text": "Local and global index postfixes",
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF3": {
"text": "Top 10 GIS scores computed for Reports (Low Gist) vs. Editorials (High Gist). ap: all pairs at paragraph-level, 1p: only consecutive/adjacent pairs at paragraph-level, a: all pairs in entire document, 1: only consecutive/adjacent pairs in entire document. * significant p-value (p \u2264 0.05)",
"content": "<table><tr><td colspan=\"9\">Indices Combination PCREF SMCAUSe SMCAUSwn PCCNC WRDIMGc Low Gist High Gist Distance t-statistic</td><td>p-value</td></tr><tr><td>ap ap ap a ap a 1p 1p a a</td><td>1p ap 1p 1p ap ap 1p ap 1p ap</td><td>1 1 1p 1 1p 1 1 1 1p 1p</td><td>megahr megahr megahr megahr megahr megahr megahr megahr megahr megahr</td><td>megahr megahr megahr megahr megahr megahr megahr megahr megahr megahr</td><td>-0.282 -0.576 -0.180 -1.203 -0.474 -1.497 -0.159 -0.453 -1.101 -1.395</td><td>4.730 4.414 4.701 3.678 4.386 3.362 4.670 4.355 3.649 3.333</td><td>5.012 4.991 4.881 4.881 4.860 4.860 4.829 4.808 4.750 4.729</td><td>7.188 6.528 7.829 7.424 6.883 6.460 6.989 6.328 7.820 6.594</td><td>* 4 \u00d7 10 \u22129 * 4 \u00d7 10 \u22128 * 4 \u00d7 10 \u221210 * 2 \u00d7 10 \u22129 * 10 \u22128 * 5 \u00d7 10 \u22128 * 8 \u00d7 10 \u22129 * 8 \u00d7 10 \u22128 * 4 \u00d7 10 \u221210 * 3 \u00d7 10 \u22128</td></tr></table>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF4": {
"text": "",
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF6": {
"text": "Top 10 GIS scores computed by GisPy for Gist=No vs. Gist=Yes articles in the Disney dataset.",
"content": "<table><tr><td>ap: all</td></tr></table>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF7": {
"text": "Comparison of GIS scores generated by GisPy vs. other methods for all benchmarks. * significant p-value (p \u2264 0.05)",
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF9": {
"text": "GisPy GIS scores for train and test sets on all benchmarks. * significant p-value (p \u2264 0.05)",
"content": "<table><tr><td colspan=\"6\">Benchmark (S/P Ratio) PCREF SMCAUSe SMCAUSwn PCCNC WRDIMGc</td></tr><tr><td/><td>ap</td><td>1</td><td>a</td><td>megahr</td><td>megahr</td></tr><tr><td/><td>ap</td><td>1p</td><td>a</td><td>megahr</td><td>megahr</td></tr><tr><td>Reports vs. Editorials</td><td>ap</td><td>1p</td><td>a</td><td>megahr</td><td>megahr</td></tr><tr><td>(1.89)</td><td>ap</td><td>1p</td><td>a</td><td>megahr</td><td>megahr</td></tr><tr><td/><td>ap</td><td>1p</td><td>a</td><td>megahr</td><td>mrc</td></tr><tr><td/><td>ap</td><td>1p</td><td>a</td><td>megahr</td><td>megahr</td></tr><tr><td/><td>ap</td><td>1p</td><td>1</td><td>megahr</td><td>megahr</td></tr><tr><td/><td>ap</td><td>ap</td><td>1</td><td>megahr</td><td>megahr</td></tr><tr><td>Methods vs. Discussion</td><td>ap</td><td>1p</td><td>1</td><td>megahr</td><td>megahr</td></tr><tr><td>(6.48)</td><td>a</td><td>1p</td><td>1p</td><td>megahr</td><td>megahr</td></tr><tr><td/><td>ap</td><td>ap</td><td>1p</td><td>megahr</td><td>megahr</td></tr><tr><td/><td>ap</td><td>1p</td><td>1</td><td>megahr</td><td>megahr</td></tr><tr><td/><td>ap</td><td>1p</td><td>a</td><td>megahr</td><td>megahr</td></tr><tr><td/><td>1p</td><td>1p</td><td>a</td><td>megahr</td><td>megahr</td></tr><tr><td>Disney</td><td>ap</td><td>1p</td><td>a</td><td>megahr</td><td>megahr</td></tr><tr><td>(2.04)</td><td>ap</td><td>1p</td><td>a</td><td>megahr</td><td>megahr</td></tr><tr><td/><td>ap</td><td>1p</td><td>a</td><td>megahr</td><td>megahr</td></tr><tr><td/><td>1p</td><td>1p</td><td>a</td><td>megahr</td><td>megahr</td></tr></table>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF10": {
"text": "Best combinations in robustness Test 2 on the train set for all experiments separated by benchmark.",
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null
}
}
}
}