{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T05:58:15.321073Z" }, "title": "Unsupervised Abstractive Dialogue Summarization with Word Graphs and POV Conversion", "authors": [ { "first": "Seongmin", "middle": [], "last": "Park", "suffix": "", "affiliation": { "laboratory": "", "institution": "ActionPower", "location": { "settlement": "Seoul", "country": "Republic of Korea" } }, "email": "seongmin.park@actionopwer.kr" }, { "first": "Jihwa", "middle": [], "last": "Lee", "suffix": "", "affiliation": { "laboratory": "", "institution": "ActionPower", "location": { "settlement": "Seoul", "country": "Republic of Korea" } }, "email": "jihwa.lee@actionopwer.kr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We advance the state-of-the-art in unsupervised abstractive dialogue summarization by utilizing multi-sentence compression graphs. Starting from well-founded assumptions about word graphs, we present simple but reliable path-reranking and topic segmentation schemes. Robustness of our method is demonstrated on datasets across multiple domains, including meetings, interviews, movie scripts, and day-today conversations. We also identify possible avenues to augment our heuristicbased system with deep learning. We opensource our code 1 , to provide a strong, reproducible baseline for future research into unsupervised dialogue summarization. bring home the clothes that are hanging outside boris 'll tell brian to take care of that Keywords: 'care', 'clothes', 'home', 'thanks' Megan: Are we going to take a taxi to the opera? Joseph: No, I'll take my car. Megan: Great, more convenient are we going to take a taxi to the opera ? no , joseph 'll take my car", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "We advance the state-of-the-art in unsupervised abstractive dialogue summarization by utilizing multi-sentence compression graphs. Starting from well-founded assumptions about word graphs, we present simple but reliable path-reranking and topic segmentation schemes. Robustness of our method is demonstrated on datasets across multiple domains, including meetings, interviews, movie scripts, and day-today conversations. We also identify possible avenues to augment our heuristicbased system with deep learning. We opensource our code 1 , to provide a strong, reproducible baseline for future research into unsupervised dialogue summarization. bring home the clothes that are hanging outside boris 'll tell brian to take care of that Keywords: 'care', 'clothes', 'home', 'thanks' Megan: Are we going to take a taxi to the opera? Joseph: No, I'll take my car. Megan: Great, more convenient are we going to take a taxi to the opera ? no , joseph 'll take my car", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Compared to traditional text summarization, dialogue summarization introduces a unique challenge: conversion of first-and second-person speech into third-person reported speech. Such discrepancy between the observed text and expected model output puts greater emphasis on abstractive transduction than in traditional summarization tasks. 
The difficulty is further exacerbated by the fact that each of many diverse dialogue types calls for a different form of transduction -short dialogues require terse abstractions, while meeting transcripts require summaries organized by agenda.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Thus, despite the steady emergence of dialogue summarization datasets, the field of dialogue summarization is still bottlenecked by a scarcity of training data. To train a truly robust dialogue summarization model, one requires transcript-summary pairs not only across diverse dialogue domains, but also across multiple dialogue types. The lack of diverse annotated summarization data is especially pronounced in low-resource languages. Given this state of the literature, we identify a need for unsupervised dialogue summarization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1 https://github.com/seongminp/graph-dialogue-summary Our method builds upon previous research on unsupervised summarization using word graphs. Starting from the simple assumption that a good summary sentence is at least as informative as any single input sentence, we develop novel schemes for path extraction from word graphs. Our contributions are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. We present a novel scheme for path reranking in graph-based summarization. We show that, in practice, simple keyword counting performs better than complex baselines. For longer texts, we present an optional topic segmentation scheme.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. We introduce a point-of-view (POV) conversion module to convert semi-extractive summaries into fully abstractive summaries. The new module by itself improves all scores on baseline methods as well as on our own.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3. Finally, we verify our model on datasets beyond those traditionally used in the literature, to provide a strong baseline for future research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "With just an off-the-shelf part-of-speech (POS) tagger and a list of stopwords, our model can be applied across different types of dialogue summarization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Pioneered by Filippova (2010) , a Multi-Sentence Compression Graph (MSCG) is a graph whose nodes are words from the input text and whose edges are co-occurrence statistics between adjacent words. (Figure 2: Construction of the word graph. Red nodes and edges denote the selected summary path. The node highlighted in purple (\"Poodles\") is the only non-stopword node included in the k-core subgraph of the word graph. We use nodes from the k-core subgraph as keyword nodes. All original sentences from the unabridged input are present as paths from v_bos to v_eos. Paths that contain more information than those original paths are extracted as summaries.)", "cite_spans": [ { "start": 13, "end": 29, "text": "Filippova (2010)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 91, "end": 99, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Multi-sentence compression graphs", "sec_num": "2.1" }, { "text": "During preprocessing, the words \"bos\" (beginning-of-sentence) and \"eos\" (end-of-sentence) are prepended and appended, respectively, to every input sentence. Thus, every sentence from the input is represented in the graph as a single path from the bos node (v_bos) to the eos node (v_eos). Overlapping words among sentences create intersecting paths within the MSCG, creating new paths from v_bos to v_eos that are unseen in the original text. Capturing these possibly shorter but informative paths is the key to performant summarization with MSCGs. Ganesan et al. (2010) introduce an abstractive sentence generation method over word graphs to produce opinion summaries. Tixier et al. (2016) show that nodes with maximal neighbors -a concept captured by graph degeneracy -likely belong to important keywords of the document. Shortest paths from v_bos to v_eos are scored according to how many keyword nodes they contain. Subsequently, a budget-maximization scheme is introduced to find the set of paths that maximizes the score sum within a designated word count (Tixier et al., 2017) . We also adopt graph degeneracy to identify keyword nodes in the MSCG.", "cite_spans": [ { "start": 645, "end": 666, "text": "Ganesan et al. (2010)", "ref_id": "BIBREF10" }, { "start": 766, "end": 786, "text": "Tixier et al. (2016)", "ref_id": "BIBREF28" }, { "start": 1156, "end": 1177, "text": "(Tixier et al., 2017)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Multi-sentence compression graphs", "sec_num": "2.1" }, 
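{ "text": "As a toy illustration of why such graphs yield novel sentences, consider two inputs that share a single word. The following sketch is our own illustration, not the authors' released code; it assumes the networkx library and collapses nodes by surface form only:\n\nimport networkx as nx\n\n# Two tokenized sentences sharing the word 'take'\ns1 = ['bos', 'i', 'will', 'take', 'my', 'car', 'eos']\ns2 = ['bos', 'we', 'take', 'a', 'taxi', 'eos']\n\ng = nx.DiGraph()\nfor sent in (s1, s2):\n    nx.add_path(g, sent)  # each sentence becomes one bos -> eos path\n\n# Besides the two originals, fused paths such as\n# bos -> i -> will -> take -> a -> taxi -> eos now exist.\nfor path in nx.all_simple_paths(g, 'bos', 'eos'):\n    print(' '.join(path[1:-1]))\n\nSuch fused paths are exactly the candidate summary sentences that the rest of the pipeline scores and filters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-sentence compression graphs", "sec_num": null }, 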
{ "text": "Aside from MSCGs, unsupervised dialogue summarization usually employs end-to-end neural architectures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Abstractive Dialogue Summarization", "sec_num": "2.2" }, { "text": "In the following subsections, we outline our proposed summarization process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summarization strategy", "sec_num": "3" }, { "text": "First, we assemble a word graph G from the input text. We use a modified version of Filippova (2010)'s algorithm for graph construction:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word graph construction", "sec_num": "3.1" }, { "text": "\u2022 Let SW be a set of stopwords and T = s_0, s_1, ... be the sequence of sentences in the input text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word graph construction", "sec_num": "3.1" }, { "text": "\u2022 Decompose every s_i \u2208 T into a sequence of POS-tagged words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word graph construction", "sec_num": "3.1" }, { "text": "s_i = (\"bos\", \"meta\"), (w_{i,0}, pos_{i,0}), ..., (w_{i,n\u22121}, pos_{i,n\u22121}), (\"eos\", \"meta\") (1) \u2022 For every (w_{i,j}, pos_{i,j}) \u2208 s_i such that w_{i,j} \u2209 SW and s_i \u2208 T, add a node v to G. If a node v with the same lowercased word w_{i,k} and tag pos_{i,k} such that j \u2260 k already exists, map (w_{i,j}, pos_{i,j}) to v instead of creating a new node. If multiple such matches exist, select the node with maximal overlapping context (w_{i,j\u22121} and w_{i,j+1}).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word graph construction", "sec_num": "3.1" }, { "text": "\u2022 Add stopword nodes -(w_{i,j}, pos_{i,j}) \u2208 s_i such that w_{i,j} \u2208 SW and s_i \u2208 T -to G with the algorithm described above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word graph construction", "sec_num": "3.1" }, { "text": "\u2022 For all s_i \u2208 T, add a directed edge between node pairs that correspond to consecutive words. The edge weight w between nodes v_1 and v_2 is calculated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word graph construction", "sec_num": "3.1" }, { "text": "w' = (freq(v_1) + freq(v_2)) / \u03a3_{s_i \u2208 T} diff(i, v_1, v_2) (2) w'' = freq(v_1) * freq(v_2) (3) w = w' / w'' (4) freq(v) is the number of words from the original text mapped to node v. diff(i, v_1, v_2) is the absolute difference between the word positions of v_1 and v_2 within s_i: diff(i, v_1, v_2) = |k \u2212 j| (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word graph construction", "sec_num": "3.1" }, { "text": "where w_{i,j} and w_{i,k} are the words in s_i that correspond to nodes v_1 and v_2, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word graph construction", "sec_num": "3.1" }, { "text": "In the edge weight calculation, w' favors edges with strong co-occurrence, while w''^{\u22121} favors edges with greater salience, as measured by word frequency.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word graph construction", "sec_num": "3.1" }, { "text": "It follows from the above that exactly one bos node and one eos node exist once the graph is completed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word graph construction", "sec_num": "3.1" }, 
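{ "text": "The construction above can be made concrete in a few lines. The sketch below is our own minimal illustration under simplifying assumptions (networkx, pre-tagged input, one node per lowercased (word, tag) pair without the maximal-context tie-breaking, and diff computed only over adjacent pairs):\n\nfrom collections import defaultdict\nimport networkx as nx\n\ndef build_word_graph(tagged_sentences, stopwords):\n    # tagged_sentences: list of sentences, each a list of (word, pos) pairs\n    g = nx.DiGraph()\n    freq = defaultdict(int)  # freq(v): number of source words mapped to node v\n    for sent in tagged_sentences:\n        tokens = [('bos', 'meta')] + list(sent) + [('eos', 'meta')]\n        prev = None\n        for word, pos in tokens:\n            node = (word.lower(), pos)  # simplification: merge by (word, tag)\n            freq[node] += 1\n            g.add_node(node)\n            if prev is not None:\n                if g.has_edge(prev, node):\n                    g[prev][node]['diff_sum'] += 1  # adjacent words: diff = 1\n                else:\n                    g.add_edge(prev, node, diff_sum=1)\n            prev = node\n    # Edge weights following equations (2)-(4)\n    for u, v, data in g.edges(data=True):\n        w1 = (freq[u] + freq[v]) / data['diff_sum']  # eq. (2)\n        w2 = freq[u] * freq[v]                       # eq. (3)\n        data['weight'] = w1 / w2                     # eq. (4)\n    return g", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word graph construction", "sec_num": null }, 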
{ "text": "The graph resulting from the previous step is a structure that captures syntactic importance. Traditional approaches utilize centrality measures to identify important nodes within word graphs (Mihalcea and Tarau, 2004; Erkan and Radev, 2004) . In this work we use graph degeneracy to extract keyword nodes. In a k-degenerate word graph, words that belong to the k-core of the graph are considered keywords. We collect KW, the set of nodes belonging to the k-core subgraph. The k-core of a graph is its maximal subgraph in which every node has degree at least k.", "cite_spans": [ { "start": 221, "end": 243, "text": "Erkan and Radev, 2004)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Keyword extraction", "sec_num": "3.2" }, 
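{ "text": "A minimal sketch of this keyword extraction step, under our assumptions (networkx, nodes keyed by (word, POS) pairs as in the construction sketch above, stopwords supplied as a set, and the main core obtained by leaving k unspecified):\n\nimport networkx as nx\n\ndef extract_keywords(g, stopwords):\n    # k-core is computed on an undirected view without self-loops\n    h = nx.Graph(g)\n    h.remove_edges_from(nx.selfloop_edges(h))\n    core = nx.k_core(h)  # maximal subgraph with minimum degree >= k\n    banned = stopwords | {'bos', 'eos'}\n    # KW: non-stopword word nodes that survive in the k-core subgraph\n    return {node for node in core.nodes if node[0] not in banned}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Keyword extraction", "sec_num": null }, 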
{ "text": "Once keyword nodes are identified, we score every path from v_bos to v_eos that corresponds to a sentence from the original text. In contrast to previous research into word-graph-based summarization, we use a simple keyword coverage score for every path:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Path threshold calculation", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Score_i = |V_i \u2229 KW| / |KW|", "eq_num": "(6)" } ], "section": "Path threshold calculation", "sec_num": "3.3" }, { "text": "where V_i is the set of all nodes in path p_i, the representation of sentence s_i \u2208 T within the word graph. We calculate the path threshold t as the mean score of all sentences in the original text. Later, when summaries are extracted from the word graph, candidates with a path score less than t are discarded. We also experimented with setting t to the minimum or maximum of all original path scores, but such configurations yielded inferior summaries influenced by outlier path scores. Our path score function is reminiscent of the diversity reward function in Shang et al. (2018) . However, we use the function as a measure of coverage instead of diversity. More importantly, we utilize the score as a means to extract a threshold based on all input sentences, which differs significantly from Shang et al. (2018)'s use of the function as a monotonically increasing scorer in submodularity maximization.", "cite_spans": [ { "start": 564, "end": 583, "text": "Shang et al. (2018)", "ref_id": "BIBREF27" }, { "start": 801, "end": 820, "text": "Shang et al. (2018)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Path threshold calculation", "sec_num": "3.3" }, { "text": "For long texts, we apply an optional topic segmentation step, and our summarization algorithm is applied separately to each segment. Similar to path ranking in the next section, topics are determined according to keyword frequency: each sentence receives a topic coverage vector c over the extracted keywords. Every transition between sentences is a potential topic boundary. Since each sentence (and its corresponding path) has an associated topic coverage vector, we quantify the topic distance d between a sentence and the next as the negative cosine similarity of their topic vectors:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic segmentation", "sec_num": "3.4" }, { "text": "d_{i,i+1} = \u2212 (c_i \u2022 c_{i+1}) / (\u2016c_i\u2016 \u2016c_{i+1}\u2016) (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic segmentation", "sec_num": "3.4" }, { "text": "If p is a hyperparameter representing the total number of topics, one can segment the original text at the p \u2212 1 sentence boundaries with the greatest topic distance. Alternatively, sentence boundaries with topic distance greater than a designated threshold can be selected as topic boundaries. For simplicity, we proceed with the former (top-p boundary) setup when necessary, as sketched below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic segmentation", "sec_num": "3.4" }, 
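{ "text": "A minimal sketch of the top-p segmentation (ours; it assumes tokenized sentences and a set of lowercased keywords, with raw keyword counts as the topic coverage vector):\n\nimport numpy as np\n\ndef segment_topics(sentences, keywords, p):\n    if p <= 1:\n        return [sentences]\n    kw = sorted(keywords)\n    # Topic coverage vector per sentence: keyword counts\n    vecs = np.array([[sent.count(k) for k in kw] for sent in sentences], float)\n    def topic_distance(a, b):  # eq. (7): negative cosine similarity\n        denom = np.linalg.norm(a) * np.linalg.norm(b)\n        return 0.0 if denom == 0 else -float(a @ b) / denom\n    d = [topic_distance(vecs[i], vecs[i + 1]) for i in range(len(sentences) - 1)]\n    # Cut at the p - 1 boundaries with the greatest topic distance\n    cuts = sorted(np.argsort(d)[len(d) - (p - 1):] + 1)\n    segments, start = [], 0\n    for c in cuts:\n        segments.append(sentences[start:c])\n        start = c\n    segments.append(sentences[start:])\n    return segments", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic segmentation", "sec_num": null }, 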
{ "text": "We generate a summary per speaker. Our construction of the word graph allows fast extraction of subgraphs containing only nodes pertaining to utterances from a single speaker. For each speaker subgraph, we generate summary sentences as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary path extraction", "sec_num": "3.5" }, { "text": "1. We obtain the k shortest paths from v_bos to v_eos by applying the k-shortest-paths algorithm (Yen, 1971) to our word graph.", "cite_spans": [ { "start": 93, "end": 104, "text": "(Yen, 1971)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Summary path extraction", "sec_num": "3.5" }, { "text": "2. Iterating from the shortest path upward, we collect any path whose keyword coverage score exceeds the threshold calculated in Section 3.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary path extraction", "sec_num": "3.5" }, { "text": "3. For each path found, we track the set of encountered keywords in KW. We stop our search once all keywords in KW have been encountered, or once a pre-defined number of iterations (the search depth) is reached.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary path extraction", "sec_num": "3.5" }, { "text": "A good summary has to be both concise and informative. Intuitively, the edge weights of the proposed word graph capture the former, while keyword thresholding prioritizes the latter.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary path extraction", "sec_num": "3.5" }, 
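{ "text": "The three steps combine directly, as in the following sketch (ours; networkx exposes Yen's algorithm as shortest_simple_paths, the keyword set and threshold come from Sections 3.2 and 3.3, and the default search depth is an arbitrary placeholder):\n\nfrom itertools import islice\nimport networkx as nx\n\ndef extract_summary_paths(g, keywords, threshold, search_depth=200):\n    # Step 1: k shortest paths, generated in ascending order of total weight\n    candidates = islice(\n        nx.shortest_simple_paths(g, ('bos', 'meta'), ('eos', 'meta'),\n                                 weight='weight'),\n        search_depth)\n    summary, seen = [], set()\n    for path in candidates:\n        covered = set(path) & keywords\n        # Step 2: keep paths whose coverage score (eq. 6) clears the threshold\n        if len(covered) / len(keywords) >= threshold:\n            summary.append(path)\n            seen |= covered\n        # Step 3: stop once every keyword has been encountered\n        if seen == keywords:\n            break\n    return summary", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary path extraction", "sec_num": null }, 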
(2020)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "POV conversion", "sec_num": "3.6" }, { "text": "In this work, we apply four conversion rules:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POV conversion", "sec_num": "3.6" }, { "text": "1. Change pronouns from first person to third person.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POV conversion", "sec_num": "3.6" }, { "text": "2. Change modal verbs can, may, and must to could, might, and had to, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POV conversion", "sec_num": "3.6" }, { "text": "3. Convert questions into a pre-defined template: asks .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POV conversion", "sec_num": "3.6" }, { "text": "4. Fix subject-verb agreement after applying rules above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POV conversion", "sec_num": "3.6" }, { "text": "We notably omit prepend rules suggested in (Lee et al., 2020) , because the input domain of our summarization system is unbounded, unlike with taskoriented spoken commands for virtual assistants. We also leave tense conversion for future research.", "cite_spans": [ { "start": 43, "end": 61, "text": "(Lee et al., 2020)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "POV conversion", "sec_num": "3.6" }, { "text": "We test our model on dialogue summarization datasets across multiple domains: McCowan et al., 2005) , ICSI (Janin et al., 2003) 2. Day-to-day conversations: DialogSum (Chen et al., 2021b) , SAMSum (Gliwa et al., 2019) 3. Interview: MediaSum (Zhu et al., 2021) 4. Screenplay: SummScreen (Chen et al., 2021a) 5. Debate: ADS (Fabbri et al., 2021) Table 1 provides detailed statistics and descriptions for each dataset.", "cite_spans": [ { "start": 78, "end": 99, "text": "McCowan et al., 2005)", "ref_id": "BIBREF22" }, { "start": 107, "end": 127, "text": "(Janin et al., 2003)", "ref_id": "BIBREF14" }, { "start": 167, "end": 187, "text": "(Chen et al., 2021b)", "ref_id": "BIBREF4" }, { "start": 197, "end": 217, "text": "(Gliwa et al., 2019)", "ref_id": "BIBREF11" }, { "start": 241, "end": 259, "text": "(Zhu et al., 2021)", "ref_id": "BIBREF32" }, { "start": 286, "end": 306, "text": "(Chen et al., 2021a)", "ref_id": null } ], "ref_spans": [ { "start": 344, "end": 351, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "1. Meetings: AMI (", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "For AMI and ICSI, we conduct several ablation experiments with different components of our model omitted: semi-extractive summarization without POV conversion is compared with fully-abstractive summarization with POV conversion; utilization of pre-segmented text provided by Shang et al. (2018) is compared with application of topic segmentation suggested in this paper.", "cite_spans": [ { "start": 275, "end": 294, "text": "Shang et al. (2018)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "For meeting summaries, we compare our method with previous research on unsupervised dialogue summarization. Along with Filippova (2010), Shang et al. (2018) , and Fu et al. (2021), we select Boudin and Morin (2013) and Mehdad et al. (2013) as our baselines. All but Fu et al. 
{ "text": "We test our model on dialogue summarization datasets across multiple domains: 1. Meetings: AMI (McCowan et al., 2005) , ICSI (Janin et al., 2003) 2. Day-to-day conversations: DialogSum (Chen et al., 2021b) , SAMSum (Gliwa et al., 2019) 3. Interview: MediaSum (Zhu et al., 2021) 4. Screenplay: SummScreen (Chen et al., 2021a) 5. Debate: ADS (Fabbri et al., 2021) Table 1 provides detailed statistics and descriptions for each dataset.", "cite_spans": [ { "start": 78, "end": 99, "text": "McCowan et al., 2005)", "ref_id": "BIBREF22" }, { "start": 107, "end": 127, "text": "(Janin et al., 2003)", "ref_id": "BIBREF14" }, { "start": 167, "end": 187, "text": "(Chen et al., 2021b)", "ref_id": "BIBREF4" }, { "start": 197, "end": 217, "text": "(Gliwa et al., 2019)", "ref_id": "BIBREF11" }, { "start": 241, "end": 259, "text": "(Zhu et al., 2021)", "ref_id": "BIBREF32" }, { "start": 286, "end": 306, "text": "(Chen et al., 2021a)", "ref_id": null } ], "ref_spans": [ { "start": 344, "end": 351, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "For AMI and ICSI, we conduct several ablation experiments with different components of our model omitted: semi-extractive summarization without POV conversion is compared with fully abstractive summarization with POV conversion, and utilization of the pre-segmented text provided by Shang et al. (2018) is compared with application of the topic segmentation suggested in this paper.", "cite_spans": [ { "start": 275, "end": 294, "text": "Shang et al. (2018)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "For meeting summaries, we compare our method with previous research on unsupervised dialogue summarization. Along with Filippova (2010), Shang et al. (2018) , and Fu et al. (2021), we select Boudin and Morin (2013) and Mehdad et al. (2013) as our baselines. All but Fu et al. (2021) are word-graph-based summarizers.", "cite_spans": [ { "start": 137, "end": 156, "text": "Shang et al. (2018)", "ref_id": "BIBREF27" }, { "start": 191, "end": 214, "text": "Boudin and Morin (2013)", "ref_id": "BIBREF0" }, { "start": 219, "end": 239, "text": "Mehdad et al. (2013)", "ref_id": "BIBREF23" }, { "start": 266, "end": 282, "text": "Fu et al. (2021)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.2" }, { "text": "For all other categories, we choose LEAD-3 as our unsupervised baseline. LEAD-3 selects the first three sentences of a document as the summary. (Table 3: Results on day-to-day, interview, screenplay, and debate summarization datasets. All reported scores are F-1 measures. In our method, topic segmentation is applied to datasets with average transcript length greater than 5,000 characters (MediaSum, SummScreen), and POV conversion is applied to all datasets.)", "cite_spans": [], "ref_spans": [ { "start": 92, "end": 99, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Baselines", "sec_num": "4.2" }, { "text": "Because summary distributions in several document types tend to be front-heavy (Grenander et al., 2019; Zhu et al., 2021) , LEAD-3 provides a competitive extractive baseline with negligible computational burden.", "cite_spans": [ { "start": 131, "end": 155, "text": "(Grenander et al., 2019;", "ref_id": "BIBREF12" }, { "start": 156, "end": 173, "text": "Zhu et al., 2021)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.2" }, { "text": "We evaluate the quality of generated system summaries against reference summaries using standard ROUGE scores (Lin, 2004) . Specifically, we use ROUGE-1 (R1), ROUGE-2 (R2), and ROUGE-L (RL) scores, which respectively measure unigram, bigram, and longest-common-subsequence coverage. Table 2 records experimental results on the AMI and ICSI datasets. In all categories, our method or a baseline augmented with our POV conversion module outperforms the previous state of the art.", "cite_spans": [ { "start": 110, "end": 121, "text": "(Lin, 2004)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 281, "end": 288, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Evaluation", "sec_num": "4.3" }, 
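{ "text": "As an illustration of the metric computation (an assumption on our part: the paper does not name its ROUGE implementation; here we use Google's rouge-score package):\n\nfrom rouge_score import rouge_scorer\n\nscorer = rouge_scorer.RougeScorer(['rouge1', 'rouge2', 'rougeL'],\n                                  use_stemmer=True)\nreference = 'megan asks if they will take a taxi to the opera'\nprediction = 'joseph says he will take his car to the opera'\nscores = scorer.score(reference, prediction)  # score(target, prediction)\nprint({name: round(s.fmeasure, 3) for name, s in scores.items()})\n\nWe report the F-1 measure (fmeasure above) throughout, matching Table 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": null }, 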
{ "text": "Our proposed path reranking without POV conversion yields semi-extractive output summaries competitive with abstractive summarization baselines. Segmenting raw transcripts into topic groups with our method generally yields higher F-measures than using pre-segmented transcripts in semi-extractive summarization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of suggested path reranking", "sec_num": "5.1.1" }, { "text": "Summarizing pre-segmented dialogue transcripts results in higher R2, while applying our topic segmentation method results in higher R1 and RL. This observation is in line with our method's emphasis on keyword extraction, in contrast to the keyphrase extraction seen in several baselines (Boudin and Morin, 2013; Shang et al., 2018) . Models that preserve token adjacency achieve higher R2, while models that preserve token presence achieve higher R1. RL additionally penalizes incorrect token order, but token order in extracted summaries tends to be well preserved in word-graph-based summarization schemes.", "cite_spans": [ { "start": 283, "end": 307, "text": "(Boudin and Morin, 2013;", "ref_id": "BIBREF0" }, { "start": 308, "end": 327, "text": "Shang et al., 2018)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Effect of topic segmentation", "sec_num": "5.1.2" }, { "text": "Our POV conversion module improves benchmark scores on all tested baselines, as well as on our own system. It is only natural that a conversion module that translates text from semi-extractive to abstractive will raise scores on abstractive benchmarks. However, applying our POV module to already abstractive summarization systems also resulted in higher scores in all cases. We attribute this to the fact that previous abstractive summarization systems do not generate sufficiently reportive summaries; past research either emphasizes other linguistic aspects like hyponym conversion (Shang et al., 2018) , or treats POV conversion as a byproduct of an end-to-end summarization pipeline (Fu et al., 2021) .", "cite_spans": [ { "start": 579, "end": 599, "text": "(Shang et al., 2018)", "ref_id": "BIBREF27" }, { "start": 681, "end": 698, "text": "(Fu et al., 2021)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Effect of POV conversion module", "sec_num": "5.1.3" }, { "text": "5.2 Day-to-day, interview, screenplay, and debate summarization", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of POV conversion module", "sec_num": "5.1.3" }, { "text": "Our method outperforms the LEAD-3 baseline on most benchmarks (Table 3) . The model shows consistent performance across multiple domains in R1 and RL, but shows greater inconsistency in R2. Variance in the latter metric can be attributed, as in 5.1.2, to our model's tendency to optimize for single keywords rather than keyphrases. The robustness of our model, as measured by the consistency of ROUGE measures across multiple datasets, is shown in Figure 4 . Notably, our method falters on the MediaSum benchmark. Compared to other benchmarks, MediaSum's reference summaries display a heavy positional bias towards the beginning of the transcript, which benefits the LEAD-3 approach. It is also the only dataset whose reference summaries are not generated for the purpose of summary evaluation, but are scraped from source news providers. 
Reference summaries for MediaSum utilize less reported speech than those of other datasets, and thus our POV module fails to boost the precision of summaries generated by our model. (Figure 4: Normalized standard deviation (also called the coefficient of variation) of R1, R2, and RL scores across all datasets. Normalized standard deviation is calculated as \u03c3/x\u0304, where \u03c3 is the standard deviation and x\u0304 is the mean.) (Example transcript and reference summary from SAMSum (Gliwa et al., 2019) : Maya: Bring home the clothes that are hanging outside / Maya: All of them should be dry already and it looks like it's going to rain / Boris: I'm not home right now / Boris: I'll tell Brian to take care of that / Maya: Fine, thanks)", "cite_spans": [ { "start": 981, "end": 1001, "text": "(Gliwa et al., 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 62, "end": 71, "text": "(Table 3)", "ref_id": null }, { "start": 440, "end": 448, "text": "Figure 4", "ref_id": null }, { "start": 1004, "end": 1012, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Effect of POV conversion module", "sec_num": "5.1.3" }, { "text": "This paper improves upon previous work on multi-sentence compression graphs for summarization. We find that simpler and more adaptive path-reranking schemes can boost summarization quality. We also demonstrate a promising possibility for integrating point-of-view conversion into summarization pipelines.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion 6.1 Improving MSCG summarization", "sec_num": "6" }, { "text": "Compared to previous research, our model still falls short in keyphrase and bigram preservation. This phenomenon is captured by inconsistent R2 scores across benchmarks. We believe incorporating findings from keyphrase-based summarizers (Riedhammer et al., 2010; Boudin and Morin, 2013) can mitigate such shortcomings.", "cite_spans": [ { "start": 240, "end": 265, "text": "(Riedhammer et al., 2010;", "ref_id": "BIBREF26" }, { "start": 266, "end": 289, "text": "Boudin and Morin, 2013)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion 6.1 Improving MSCG summarization", "sec_num": "6" }, { "text": "While our method demonstrates improved benchmark results, its mostly heuristic nature leaves much room for enhancement through the integration of statistical models. POV conversion in particular can benefit from deep learning-based approaches (Lee et al., 2020) . With recent advances in unsupervised sequence-to-sequence transduction (Li et al., 2020; He et al., 2020) , we expect further research into more advanced POV conversion techniques to improve unsupervised dialogue summarization.", "cite_spans": [ { "start": 239, "end": 257, "text": "(Lee et al., 2020)", "ref_id": "BIBREF16" }, { "start": 331, "end": 348, "text": "(Li et al., 2020;", "ref_id": "BIBREF18" }, { "start": 349, "end": 365, "text": "He et al., 2020)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Avenues for future research", "sec_num": "6.2" }, { "text": "Another possibility for augmenting our research with deep learning is to employ graph networks (Cui et al., 2020) to represent MSCGs. With graph networks, each word node and edge can be represented as a contextualized vector. Such schemes would enable more flexible and interpolatable manipulation of the syntax captured by traditional word graphs.", "cite_spans": [ { "start": 99, "end": 117, "text": "(Cui et al., 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Avenues for future research", "sec_num": "6.2" }, { "text": "One notable shortcoming of our system is the generation of summaries that lack grammatical coherence or fluency (Table 4) . We intentionally leave out complex path filters that gauge linguistic validity or factual correctness, and only minimally inspect our summaries to check for the inclusion of verb nodes, as in Filippova (2010) . 
Our system can easily be augmented with such additional filters, which we leave for future work.", "cite_spans": [ { "start": 311, "end": 327, "text": "Filippova (2010)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 112, "end": 121, "text": "(Table 4)", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Avenues for future research", "sec_num": "6.2" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Keyphrase extraction for n-best reranking in multi-sentence compression", "authors": [ { "first": "Florian", "middle": [], "last": "Boudin", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Morin", "suffix": "" } ], "year": 2013, "venue": "North American Chapter of the Association for Computational Linguistics (NAACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Florian Boudin and Emmanuel Morin. 2013. Keyphrase extraction for n-best reranking in multi-sentence compression. In North American Chapter of the Association for Computational Linguistics (NAACL).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Generating sentences from a continuous space", "authors": [ { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Vilnis", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Rafal", "middle": [], "last": "Jozefowicz", "suffix": "" }, { "first": "Samy", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2016, "venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "10--21", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 10-21, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Few-shot learning for opinion summarization", "authors": [ { "first": "Arthur", "middle": [], "last": "Bra\u017einskas", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "4119--4135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arthur Bra\u017einskas, Mirella Lapata, and Ivan Titov. 2020. Few-shot learning for opinion summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4119-4135, Online. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "
SummScreen: A dataset for abstractive screenplay summarization", "authors": [ { "first": "Mingda", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Zewei", "middle": [], "last": "Chu", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Wiseman", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2104.07091" ] }, "num": null, "urls": [], "raw_text": "Mingda Chen, Zewei Chu, Sam Wiseman, and Kevin Gimpel. 2021a. SummScreen: A dataset for abstractive screenplay summarization. arXiv preprint arXiv:2104.07091.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Dialogsum challenge: Summarizing real-life scenario dialogues", "authors": [ { "first": "Yulong", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 14th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "308--313", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yulong Chen, Yang Liu, and Yue Zhang. 2021b. Dialogsum challenge: Summarizing real-life scenario dialogues. In Proceedings of the 14th International Conference on Natural Language Generation, pages 308-313.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Enhancing extractive text summarization with topic-aware graph neural networks", "authors": [ { "first": "Peng", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Le", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Yuanchao", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "5360--5371", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peng Cui, Le Hu, and Yuanchao Liu. 2020. Enhancing extractive text summarization with topic-aware graph neural networks. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5360-5371, Barcelona, Spain (Online). International Committee on Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Lexrank: Graph-based lexical centrality as salience in text summarization", "authors": [ { "first": "G\u00fcnes", "middle": [], "last": "Erkan", "suffix": "" }, { "first": "Dragomir", "middle": [ "R" ], "last": "Radev", "suffix": "" } ], "year": 2004, "venue": "Journal of artificial intelligence research", "volume": "22", "issue": "", "pages": "457--479", "other_ids": {}, "num": null, "urls": [], "raw_text": "G\u00fcnes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. 
Journal of Artificial Intelligence Research, 22:457-479.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "ConvoSumm: Conversation summarization benchmark and improved abstractive summarization with argument mining", "authors": [ { "first": "Alexander", "middle": [], "last": "Fabbri", "suffix": "" }, { "first": "Faiaz", "middle": [], "last": "Rahman", "suffix": "" }, { "first": "Imad", "middle": [], "last": "Rizvi", "suffix": "" }, { "first": "Borui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Haoran", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yashar", "middle": [], "last": "Mehdad", "suffix": "" }, { "first": "Dragomir", "middle": [], "last": "Radev", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "6866--6880", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Fabbri, Faiaz Rahman, Imad Rizvi, Borui Wang, Haoran Li, Yashar Mehdad, and Dragomir Radev. 2021. ConvoSumm: Conversation summarization benchmark and improved abstractive summarization with argument mining. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6866-6880, Online. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Multi-sentence compression: Finding shortest paths in word graphs", "authors": [ { "first": "Katja", "middle": [], "last": "Filippova", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "322--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katja Filippova. 2010. Multi-sentence compression: Finding shortest paths in word graphs. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 322-330, Beijing, China. Coling 2010 Organizing Committee.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Repsum: Unsupervised dialogue summarization based on replacement strategy", "authors": [ { "first": "Xiyan", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Yating", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Tianyi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xiaozhong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Changlong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Zhenglu", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "6042--6051", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiyan Fu, Yating Zhang, Tianyi Wang, Xiaozhong Liu, Changlong Sun, and Zhenglu Yang. 2021. Repsum: Unsupervised dialogue summarization based on replacement strategy. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6042-6051.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Opinosis: A graph based approach to abstractive summarization of highly redundant opinions", "authors": [ { "first": "Kavita", "middle": [], "last": "Ganesan", "suffix": "" }, { "first": "Chengxiang", "middle": [], "last": "Zhai", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Han", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "340--348", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kavita Ganesan, ChengXiang Zhai, and Jiawei Han. 2010. Opinosis: A graph based approach to abstractive summarization of highly redundant opinions. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 340-348, Beijing, China. Coling 2010 Organizing Committee.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "SAMSum corpus: A human-annotated dialogue dataset for abstractive summarization", "authors": [ { "first": "Bogdan", "middle": [], "last": "Gliwa", "suffix": "" }, { "first": "Iwona", "middle": [], "last": "Mochol", "suffix": "" }, { "first": "Maciej", "middle": [], "last": "Biesek", "suffix": "" }, { "first": "Aleksander", "middle": [], "last": "Wawer", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Workshop on New Frontiers in Summarization", "volume": "", "issue": "", "pages": "70--79", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. 2019. SAMSum corpus: A human-annotated dialogue dataset for abstractive summarization. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 70-79, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Countering the effects of lead bias in news summarization via multi-stage training and auxiliary losses", "authors": [ { "first": "Matt", "middle": [], "last": "Grenander", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Jackie Chi Kit", "middle": [], "last": "Cheung", "suffix": "" }, { "first": "Annie", "middle": [], "last": "Louis", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "6019--6024", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matt Grenander, Yue Dong, Jackie Chi Kit Cheung, and Annie Louis. 2019. Countering the effects of lead bias in news summarization via multi-stage training and auxiliary losses. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6019-6024, Hong Kong, China. 
Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A probabilistic formulation of unsupervised text style transfer", "authors": [ { "first": "Junxian", "middle": [], "last": "He", "suffix": "" }, { "first": "Xinyi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Taylor", "middle": [], "last": "Berg-Kirkpatrick", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junxian He, Xinyi Wang, Graham Neubig, and Taylor Berg-Kirkpatrick. 2020. A probabilistic formulation of unsupervised text style transfer. In International Conference on Learning Representations.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The ICSI meeting corpus", "authors": [ { "first": "Adam", "middle": [], "last": "Janin", "suffix": "" }, { "first": "Don", "middle": [], "last": "Baron", "suffix": "" }, { "first": "Jane", "middle": [], "last": "Edwards", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Ellis", "suffix": "" }, { "first": "David", "middle": [], "last": "Gelbart", "suffix": "" }, { "first": "Nelson", "middle": [], "last": "Morgan", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Peskin", "suffix": "" }, { "first": "Thilo", "middle": [], "last": "Pfau", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Shriberg", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 2003, "venue": "2003 IEEE International Conference on Acoustics, Speech, and Signal Processing", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Janin, Don Baron, Jane Edwards, Dan Ellis, David Gelbart, Nelson Morgan, Barbara Peskin, Thilo Pfau, Elizabeth Shriberg, Andreas Stolcke, et al. 2003. The ICSI meeting corpus. In 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, Proceedings (ICASSP '03), volume 1, pages I-I. IEEE.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Auto-encoding variational Bayes", "authors": [ { "first": "Diederik", "middle": [ "P" ], "last": "Kingma", "suffix": "" }, { "first": "Max", "middle": [], "last": "Welling", "suffix": "" } ], "year": 2014, "venue": "2nd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Max Welling. 2014. Auto-encoding variational Bayes. 
In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Converting the point of view of messages spoken to virtual assistants", "authors": [ { "first": "Gunhee", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Vera", "middle": [], "last": "Zu", "suffix": "" }, { "first": "Sai Srujana", "middle": [], "last": "Buddi", "suffix": "" }, { "first": "Dennis", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Purva", "middle": [], "last": "Kulkarni", "suffix": "" }, { "first": "Jack", "middle": [], "last": "FitzGerald", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "154--163", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gunhee Lee, Vera Zu, Sai Srujana Buddi, Dennis Liang, Purva Kulkarni, and Jack FitzGerald. 2020. Converting the point of view of messages spoken to virtual assistants. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 154-163, Online. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Direct speech and indirect speech: A functional study", "authors": [ { "first": "Charles", "middle": [ "N" ], "last": "Li", "suffix": "" } ], "year": 2011, "venue": "Direct and indirect speech", "volume": "", "issue": "", "pages": "29--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles N Li. 2011. Direct speech and indirect speech: A functional study. In Direct and indirect speech, pages 29-46. De Gruyter Mouton.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Optimus: Organizing sentences via pre-trained modeling of a latent space", "authors": [ { "first": "Chunyuan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xiujun", "middle": [], "last": "Li", "suffix": "" }, { "first": "Baolin", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Yizhe", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2020, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chunyuan Li, Xiang Gao, Yuan Li, Xiujun Li, Baolin Peng, Yizhe Zhang, and Jianfeng Gao. 2020. Optimus: Organizing sentences via pre-trained modeling of a latent space. In EMNLP.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "ROUGE: A package for automatic evaluation of summaries", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Text Summarization Branches Out", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. 
Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A combined extractive with abstractive model for summarization", "authors": [ { "first": "Wenfeng", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yaling", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Jinming", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yuzhen", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2021, "venue": "IEEE Access", "volume": "9", "issue": "", "pages": "43970--43980", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenfeng Liu, Yaling Gao, Jinming Li, and Yuzhen Yang. 2021. A combined extractive with abstractive model for summarization. IEEE Access, 9:43970-43980.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Dyle: Dynamic latent extraction for abstractive long-input summarization", "authors": [ { "first": "Ziming", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Chen", "middle": [ "Henry" ], "last": "Wu", "suffix": "" }, { "first": "Ansong", "middle": [], "last": "Ni", "suffix": "" }, { "first": "Yusen", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Budhaditya", "middle": [], "last": "Deb", "suffix": "" }, { "first": "Chenguang", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Ahmed", "middle": [ "H" ], "last": "Awadallah", "suffix": "" }, { "first": "Dragomir", "middle": [], "last": "Radev", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2110.08168" ] }, "num": null, "urls": [], "raw_text": "Ziming Mao, Chen Henry Wu, Ansong Ni, Yusen Zhang, Rui Zhang, Tao Yu, Budhaditya Deb, Chenguang Zhu, Ahmed H Awadallah, and Dragomir Radev. 2021. Dyle: Dynamic latent extraction for abstractive long-input summarization. arXiv preprint arXiv:2110.08168.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "The AMI meeting corpus", "authors": [ { "first": "Iain", "middle": [], "last": "McCowan", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Carletta", "suffix": "" }, { "first": "Wessel", "middle": [], "last": "Kraaij", "suffix": "" }, { "first": "Simone", "middle": [], "last": "Ashby", "suffix": "" }, { "first": "S", "middle": [], "last": "Bourban", "suffix": "" }, { "first": "M", "middle": [], "last": "Flynn", "suffix": "" }, { "first": "M", "middle": [], "last": "Guillemot", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Hain", "suffix": "" }, { "first": "J", "middle": [], "last": "Kadlec", "suffix": "" }, { "first": "Vasilis", "middle": [], "last": "Karaiskos", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 5th international conference on methods and techniques in behavioral research", "volume": "88", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iain McCowan, Jean Carletta, Wessel Kraaij, Simone Ashby, S Bourban, M Flynn, M Guillemot, Thomas Hain, J Kadlec, Vasilis Karaiskos, et al. 2005. The AMI meeting corpus. In Proceedings of the 5th international conference on methods and techniques in behavioral research, volume 88, page 100. 
Citeseer.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Abstractive meeting summarization with entailment and fusion", "authors": [ { "first": "Yashar", "middle": [], "last": "Mehdad", "suffix": "" }, { "first": "Giuseppe", "middle": [], "last": "Carenini", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Tompa", "suffix": "" }, { "first": "Raymond", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 14th European Workshop on Natural Language Generation", "volume": "", "issue": "", "pages": "136--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yashar Mehdad, Giuseppe Carenini, Frank Tompa, and Raymond Ng. 2013. Abstractive meeting summa- rization with entailment and fusion. In Proceed- ings of the 14th European Workshop on Natural Lan- guage Generation, pages 136-146.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "TextRank: Bringing order into text", "authors": [ { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Tarau", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "404--411", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Lan- guage Processing, pages 404-411, Barcelona, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "The syntax and semantics of quotation", "authors": [ { "first": "Barbara", "middle": [], "last": "Partee", "suffix": "" } ], "year": 1973, "venue": "A Festschrift for Morris Halle", "volume": "", "issue": "", "pages": "410--418", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barbara Partee. 1973. The syntax and semantics of quo- tation. In S. R. Anderson and P. Kiparsky, editors, A Festschrift for Morris Halle, pages 410-418. New York: Holt, Reinehart and Winston.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Long story short-global unsupervised models for keyphrase based meeting summarization", "authors": [ { "first": "Korbinian", "middle": [], "last": "Riedhammer", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Benoit Favre", "suffix": "" }, { "first": "", "middle": [], "last": "Hakkani-T\u00fcr", "suffix": "" } ], "year": 2010, "venue": "Speech Communication", "volume": "52", "issue": "10", "pages": "801--815", "other_ids": {}, "num": null, "urls": [], "raw_text": "Korbinian Riedhammer, Benoit Favre, and Dilek Hakkani-T\u00fcr. 2010. Long story short-global unsu- pervised models for keyphrase based meeting sum- marization. 
Speech Communication, 52(10):801-815.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Unsupervised abstractive meeting summarization with multi-sentence compression and budgeted submodular maximization", "authors": [ { "first": "Guokan", "middle": [], "last": "Shang", "suffix": "" }, { "first": "Wensi", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Zekun", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Antoine", "middle": [ "Jean-Pierre" ], "last": "Tixier", "suffix": "" }, { "first": "Polykarpos", "middle": [], "last": "Meladianos", "suffix": "" }, { "first": "Michalis", "middle": [], "last": "Vazirgiannis", "suffix": "" }, { "first": "Jean-Pierre", "middle": [], "last": "Lorr\u00e9", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1805.05271" ] }, "num": null, "urls": [], "raw_text": "Guokan Shang, Wensi Ding, Zekun Zhang, Antoine Jean-Pierre Tixier, Polykarpos Meladianos, Michalis Vazirgiannis, and Jean-Pierre Lorr\u00e9. 2018. Unsupervised abstractive meeting summarization with multi-sentence compression and budgeted submodular maximization. arXiv preprint arXiv:1805.05271.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "A graph degeneracy-based approach to keyword extraction", "authors": [ { "first": "Antoine", "middle": [], "last": "Tixier", "suffix": "" }, { "first": "Fragkiskos", "middle": [], "last": "Malliaros", "suffix": "" }, { "first": "Michalis", "middle": [], "last": "Vazirgiannis", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 conference on empirical methods in natural language processing", "volume": "", "issue": "", "pages": "1860--1870", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antoine Tixier, Fragkiskos Malliaros, and Michalis Vazirgiannis. 2016. A graph degeneracy-based approach to keyword extraction. In Proceedings of the 2016 conference on empirical methods in natural language processing, pages 1860-1870.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Combining graph degeneracy and submodularity for unsupervised extractive summarization", "authors": [ { "first": "Antoine", "middle": [], "last": "Tixier", "suffix": "" }, { "first": "Polykarpos", "middle": [], "last": "Meladianos", "suffix": "" }, { "first": "Michalis", "middle": [], "last": "Vazirgiannis", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the workshop on new frontiers in summarization", "volume": "", "issue": "", "pages": "48--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antoine Tixier, Polykarpos Meladianos, and Michalis Vazirgiannis. 2017. Combining graph degeneracy and submodularity for unsupervised extractive summarization. In Proceedings of the workshop on new frontiers in summarization, pages 48-58.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Finding the k shortest loopless paths in a network", "authors": [ { "first": "Jin", "middle": [ "Y" ], "last": "Yen", "suffix": "" } ], "year": 1971, "venue": "Management Science", "volume": "17", "issue": "11", "pages": "712--716", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jin Y Yen. 1971. Finding the k shortest loopless paths in a network.
Management Science, 17(11):712-716.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Unsupervised abstractive dialogue summarization for tete-a-tetes", "authors": [ { "first": "Xinyuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Ruiyi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Manzil", "middle": [], "last": "Zaheer", "suffix": "" }, { "first": "Amr", "middle": [], "last": "Ahmed", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "35", "issue": "16", "pages": "14489--14497", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinyuan Zhang, Ruiyi Zhang, Manzil Zaheer, and Amr Ahmed. 2021. Unsupervised abstractive dialogue summarization for tete-a-tetes. Proceedings of the AAAI Conference on Artificial Intelligence, 35(16):14489-14497.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "MediaSum: A large-scale media interview dataset for dialogue summarization", "authors": [ { "first": "Chenguang", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Mei", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Zeng", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "5927--5934", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chenguang Zhu, Yang Liu, Jie Mei, and Michael Zeng. 2021. MediaSum: A large-scale media interview dataset for dialogue summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5927-5934, Online. Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Unsupervised summarization for chat logs with topic-oriented ranking and context-aware auto-encoders", "authors": [ { "first": "Yicheng", "middle": [], "last": "Zou", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Lujun", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Yangyang", "middle": [], "last": "Kang", "suffix": "" }, { "first": "Zhuoren", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Changlong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Xiaozhong", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "35", "issue": "16", "pages": "14674--14682", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yicheng Zou, Jun Lin, Lujun Zhao, Yangyang Kang, Zhuoren Jiang, Changlong Sun, Qi Zhang, Xuanjing Huang, and Xiaozhong Liu. 2021. Unsupervised summarization for chat logs with topic-oriented ranking and context-aware auto-encoders. Proceedings of the AAAI Conference on Artificial Intelligence, 35(16):14674-14682.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "Our summarization pipeline." }, "FIGREF1": { "num": null, "type_str": "figure", "uris": null, "text": "Topic segmentation on AMI meeting ID ES2005b. Green bars indicate sentence boundaries with highest topic distance."
}, "TABREF2": { "content": "", "num": null, "html": null, "text": "Statistics for benchmark datasets. All character-level and word-level statistics are averaged over the test set and rounded to the nearest whole number.", "type_str": "table" }, "TABREF4": { "content": "
", "num": null, "html": null, "text": "", "type_str": "table" }, "TABREF6": { "content": "
", "num": null, "html": null, "text": "Summarizing the SAMSum corpus", "type_str": "table" } } } }