{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T06:02:55.585443Z" }, "title": "GPT-2-based Human-in-the-loop Theatre Play Script Generation *", "authors": [ { "first": "Rudolf", "middle": [], "last": "Rosa", "suffix": "", "affiliation": { "laboratory": "", "institution": "Charles University", "location": { "settlement": "Prague", "region": "Czechia" } }, "email": "rosa@ufal.mff.cuni.cz" }, { "first": "Patr\u00edcia", "middle": [], "last": "Schmidtov\u00e1", "suffix": "", "affiliation": { "laboratory": "", "institution": "Charles University", "location": { "settlement": "Prague", "region": "Czechia" } }, "email": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "", "affiliation": { "laboratory": "", "institution": "Charles University", "location": { "settlement": "Prague", "region": "Czechia" } }, "email": "" }, { "first": "Tom\u00e1\u0161", "middle": [], "last": "Musil", "suffix": "", "affiliation": { "laboratory": "", "institution": "Charles University", "location": { "settlement": "Prague", "region": "Czechia" } }, "email": "" }, { "first": "David", "middle": [], "last": "Mare\u010dek", "suffix": "", "affiliation": { "laboratory": "", "institution": "Charles University", "location": { "settlement": "Prague", "region": "Czechia" } }, "email": "" }, { "first": "Saad", "middle": [], "last": "Obaid", "suffix": "", "affiliation": { "laboratory": "", "institution": "Charles University", "location": { "settlement": "Prague", "region": "Czechia" } }, "email": "" }, { "first": "Marie", "middle": [], "last": "Nov\u00e1kov\u00e1", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Kl\u00e1ra", "middle": [], "last": "Voseck\u00e1", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Academy of Performing Arts", "location": { "settlement": "Prague, Prague", "region": "Czechia" } }, "email": "" }, { "first": "Josef", "middle": [], "last": "Dole\u017eal", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Academy of Performing Arts", "location": { "settlement": "Prague, Prague", "region": "Czechia" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We experiment with adapting generative language models for the generation of long coherent narratives in the form of theatre plays. Since fully automatic generation of whole plays is not currently feasible, we created an interactive tool that allows a human user to steer the generation somewhat while minimizing intervention. We pursue two approaches to long-text generation: a flat generation with summarization of context, and a hierarchical text-to-text two-stage approach, where a synopsis is generated first and then used to condition generation of the final script. Our preliminary results and discussions with theatre professionals show improvements over vanilla language model generation, but also identify important limitations of our approach.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "We experiment with adapting generative language models for the generation of long coherent narratives in the form of theatre plays. Since fully automatic generation of whole plays is not currently feasible, we created an interactive tool that allows a human user to steer the generation somewhat while minimizing intervention. 
We pursue two approaches to long-text generation: a flat generation with summarization of context, and a hierarchical text-to-text two-stage approach, where a synopsis is generated first and then used to condition generation of the final script. Our preliminary results and discussions with theatre professionals show improvements over vanilla language model generation, but also identify important limitations of our approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Natural language generation (NLG) is currently dominated by large pre-trained language models, such as GPT-3 (Brown et al., 2020) . The models show especially strong performance in generating short to medium length in-domain texts, such as news stories, which fit into the window size of the model (e.g. 512 or 1,024 subword tokens). Successfully handling significantly larger and/or out-of-domain documents is a matter of ongoing research Zaheer et al., 2020; Gururangan et al., 2020; Chen et al., 2020) .", "cite_spans": [ { "start": 103, "end": 129, "text": "GPT-3 (Brown et al., 2020)", "ref_id": null }, { "start": 440, "end": 460, "text": "Zaheer et al., 2020;", "ref_id": "BIBREF32" }, { "start": 461, "end": 485, "text": "Gururangan et al., 2020;", "ref_id": "BIBREF11" }, { "start": 486, "end": 504, "text": "Chen et al., 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the THEaiTRE project, we focus on generating theatre play scripts. This task combines the challenges of narrative generation (Riedl, 2016) and dialogue generation (Wen et al., 2016) , and could be seen either as generating dialogues with a very large context, or as generating a narrative in the form of a dialogue. Additional challenges include the complex structure of the theatre scripts (including setting descriptions, dialogue lines with character names, and stage directions), their very * Archival WNU submission. large length, their pseudo-multi-author nature (as lines pertaining to different characters use different styles and represent different standpoints), or the low availability of large in-domain datasets.", "cite_spans": [ { "start": 128, "end": 141, "text": "(Riedl, 2016)", "ref_id": "BIBREF22" }, { "start": 166, "end": 184, "text": "(Wen et al., 2016)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We investigate the capabilities of current NLG approaches on this task. Specifically, we use and adapt current large pre-trained neural language models and employ other relevant natural language processing (NLP) techniques to adjust the existing approaches and tools to the theatrical script domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our aim is to produce a mostly automatically generated play, with minimal human-in-the-loop interventions, and have the generated play rehearsed and staged by a theatre. We build upon our previous work (Rosa et al., 2021) , where we produced a generated play by using vanilla GPT-2 and generated individual, loosely connected scenes, but now aim at full play generation. In order to do so, we explore a two-phase hierarchical text-to-text approach, where a synopsis is generated first and then used as a basis for subsequent generation of scenes. 
We compare this method to a flat generation approach with summarization, which is similar to our previous work (Rosa et al., 2021) . We use models finetuned on in-domain theatre or movie scripts to better fit the domain, and we allow minimal but precise human intervention using a custom-built web-based interface: regenerating a line, choosing the next character to speak, deleting or inserting a generated or a human-written line into the script. All human interventions are recorded. A simplified demo version of the tool used for the generation is freely available online. 1 We include preliminary intrinsic evaluation and discuss qualitative feedback given by the theatre professionals. Our results support finetuning and more precise human intervention; however, the two-stage hierarchical approach shows difficulties following the pre-generated synopsis.", "cite_spans": [ { "start": 202, "end": 221, "text": "(Rosa et al., 2021)", "ref_id": null }, { "start": 658, "end": 677, "text": "(Rosa et al., 2021)", "ref_id": null }, { "start": 1124, "end": 1125, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our approach is inspired by the work of Fan et al. (2018) and Fan et al. (2019) , who propose a hierarchical system for story generation. A similar idea has been explored by Rashkin et al. (2020) , who generate a story conditioned on a given outline. Tan et al. 2021approach long text generation by generating domain-specific words first and then iteratively refining it until whole sentences are formed. Unlike these works, we generate scripts rather than stories, i.e. not prose but dialogues, which are also longer than typical stories. For dialogue generation, Xu et al. (2021) 's work is close to our baseline flat approach (Section 3) in that they generate long dialogues by using summarization.", "cite_spans": [ { "start": 40, "end": 57, "text": "Fan et al. (2018)", "ref_id": "BIBREF7" }, { "start": 62, "end": 79, "text": "Fan et al. (2019)", "ref_id": "BIBREF8" }, { "start": 174, "end": 195, "text": "Rashkin et al. (2020)", "ref_id": "BIBREF20" }, { "start": 565, "end": 581, "text": "Xu et al. (2021)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "A few works also investigate human-machine interaction during text generation, with different aims from ours: Roemmele (2021) investigates how automatically generated texts can inspire human writing. Akoury et al. (2020) use the amount of required human post-editing as a story quality metric.", "cite_spans": [ { "start": 200, "end": 220, "text": "Akoury et al. (2020)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "A number of language generation tools is available online, both free and paid, typically based on GPT-2 and GPT-3 language models (Radford et al., 2019; Brown et al., 2020) , sometimes trained or fine-tuned for a specific domain or task. Prominent examples include news generators such as Grover 2 by Zellers et al. (2019) or News You Can't Use 3 by Geitgey (2019) , the text adventure game AI Dungeon, 4 the code completion tools GitHub Copilot 5 or Deep Tabnine, 6 and chatbots such as AI|Writer or Project December. 
7 However, to the best of our knowledge, no generation tool has been released specifically for theatre scripts.", "cite_spans": [ { "start": 130, "end": 152, "text": "(Radford et al., 2019;", "ref_id": "BIBREF19" }, { "start": 153, "end": 172, "text": "Brown et al., 2020)", "ref_id": "BIBREF4" }, { "start": 301, "end": 322, "text": "Zellers et al. (2019)", "ref_id": "BIBREF33" }, { "start": 350, "end": 364, "text": "Geitgey (2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "There have been several other projects using automatically generated scripts, including Beyond the Fence, a musical based on suggestions from several automated tools (Colton et al., 2016) , Sunspring, a short sci-fi movie with an LSTM-generated script (Benjamin et al., 2016) , Lifestyle of the Richard and Family, a theatre play written with the help of a next word suggestion tool (Helper, 2018) , or the performances of the Improbotics group who improvise on stage with real-time GPT-3-generated lines (Mathewson and Mirowski, 2017) . However, the tools used in these projects are not publicly available online, and often there is little transparency about the particularities of the exact design and usage of the tools. Moreover, these projects typically use substantial human curation.", "cite_spans": [ { "start": 166, "end": 187, "text": "(Colton et al., 2016)", "ref_id": "BIBREF6" }, { "start": 252, "end": 275, "text": "(Benjamin et al., 2016)", "ref_id": "BIBREF3" }, { "start": 383, "end": 397, "text": "(Helper, 2018)", "ref_id": "BIBREF12" }, { "start": 505, "end": 535, "text": "(Mathewson and Mirowski, 2017)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The flat generation variant is based on our previous approach (Rosa et al., 2021) of using a standard generative model but employing extractive text summarization to deal with the limited window (1,024 tokens for GPT-2) so that longer scripts can be generated without the loss of the global context. Instead of using a vanilla GPT-2 model as in our previous work, we finetune our models on a large collection of ca. 12k theatre and movie scripts. The domains and volumes of data can be found in Table 1 . 
The theatre plays and TV shows scripts were scraped from various websites, the movie section comes from (Lison and Meena, 2016) .", "cite_spans": [ { "start": 62, "end": 81, "text": "(Rosa et al., 2021)", "ref_id": null }, { "start": 610, "end": 633, "text": "(Lison and Meena, 2016)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 495, "end": 503, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Flat Generation with Summarization", "sec_num": "3" }, { "text": "The operation of flat generation looks as follows: the user inputs a scene setting, character names and their first lines, from which we construct the input prompt in the following format:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Flat Generation with Summarization", "sec_num": "3" }, { "text": "Scene setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Flat Generation with Summarization", "sec_num": "3" }, { "text": "Character Name: Character line.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Flat Generation with Summarization", "sec_num": "3" }, { "text": "Character Name: Character line.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Flat Generation with Summarization", "sec_num": "3" }, { "text": "The model then generates a continuation of the script line by line (see Figure 2 ). 8 At each step, the user can choose whether they want to regenerate the last generated line (i.e. generate a different continuation), or whether they want to continue by generating a further line. They can also choose the next character and let the model generate their line, or insert/delete lines within the generated text. A screenshot of this tool is presented in Figure 1 . Figure 1 : A screenshot of the tool used for the generation. The cross generates an alternative continuation starting with the given line. The arrow creates an alternative line while keeping the script continuation. The plus symbol generates and inserts a line, while the scissors symbol deletes it without any changes to the continuation in both cases. Finally, the triangle symbol allows for human input that prompts the regeneration of the continuation.", "cite_spans": [], "ref_spans": [ { "start": 72, "end": 80, "text": "Figure 2", "ref_id": "FIGREF0" }, { "start": 452, "end": 460, "text": "Figure 1", "ref_id": null }, { "start": 463, "end": 471, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Flat Generation with Summarization", "sec_num": "3" }, { "text": "Polonius speaks to the king. Enter Hamlet. Polonius: I hear him coming; let's hide, sir. Hamlet:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Flat Generation with Summarization", "sec_num": "3" }, { "text": "To be or not to be; that is the question. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Flat Generation with Summarization", "sec_num": "3" }, { "text": "Our second, newly developed approach is a twophase text-to-text hierarchical generation approach:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Two-phase Hierarchical Generation", "sec_num": "4" }, { "text": "(1) a synopsis is generated from a user-supplied play title, (2) the play dialogue is generated, conditioned on the title and a part of the synopsis. 
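To make the flat generation loop of Section 3 concrete, a minimal Python sketch follows; the model path, decoding settings, and the newline-based line handling are illustrative assumptions, not the exact configuration of our tool.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in for the GPT-2 model finetuned on theatre and movie scripts.
tokenizer = AutoTokenizer.from_pretrained("gpt2-medium")
model = AutoModelForCausalLM.from_pretrained("gpt2-medium")

def build_prompt(setting, lines, next_speaker=None):
    # Scene setting followed by "Character Name: line" turns;
    # optionally force which character speaks next.
    prompt = setting + "\n" + "\n".join(f"{name}: {text}" for name, text in lines)
    if next_speaker:
        prompt += f"\n{next_speaker}:"
    return prompt

def generate_next_line(setting, lines, next_speaker=None):
    prompt = build_prompt(setting, lines, next_speaker)
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    output = model.generate(
        input_ids,
        do_sample=True, top_p=0.9, temperature=0.8,
        max_new_tokens=60,
        pad_token_id=tokenizer.eos_token_id,
    )
    continuation = tokenizer.decode(output[0][input_ids.shape[1]:],
                                    skip_special_tokens=True)
    # Keep only the first generated line; the user can accept, regenerate,
    # or delete it, or pick the next speaker and generate again.
    return continuation.strip().split("\n")[0]

lines = [("Polonius", "I hear him coming; let's hide, sir."),
         ("Hamlet", "To be or not to be; that is the question.")]
print(generate_next_line("Polonius speaks to the king. Enter Hamlet.", lines,
                         next_speaker="Polonius"))

In the full tool, extractive summarization compresses older context so that the prompt stays within GPT-2's 1,024-token window (Section 3); the sketch above omits that step.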
Both phases use specific models finetuned on our indomain datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Two-phase Hierarchical Generation", "sec_num": "4" }, { "text": "The input for the first phase is the title of the play, from which the synopsis of the play is generated ( Figure 3 , Section 4.1). At each step, The user has the option of continuing generation, regenerating or deleting lines (roughly corresponding to sentences). Once the user is satisfied with the generated synopsis (or the generation ends by the model generating the endoftext token), the synopsis is used as input for the second phase.", "cite_spans": [], "ref_spans": [ { "start": 107, "end": 115, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Two-phase Hierarchical Generation", "sec_num": "4" }, { "text": "In the second phase, shown in Figure 4 , the play script is generated from the synopsis (see Sec-A notebook full of ideas was stolen from an unbuttoned backpack by a mailman in a blue vest.", "cite_spans": [], "ref_spans": [ { "start": 30, "end": 38, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Two-phase Hierarchical Generation", "sec_num": "4" }, { "text": "A man named Tom, a man in a trench coat, arrives to deliver the notebook. Tom tries to leave, but is interrupted by the arrival of his mailman brother, Jerry, in his mail truck. Jerry offers to take Tom to his house, but Tom is determined to deliver the notebook. As Tom drives through the cornfield, he accidentally stops at a house, which he mistakenly assumes is inhabited by the owner, a widowed woman named Marjorie. She tells him she is waiting for Tom to come home, and she and Tom go into the house together. Jerry arrives and finds Tom's truck with the notebook, having accidentally left it in the truck while searching for Jerry, and is surprised and angry to find Marjorie there. Figure 3 : An example for hierarchical generation 1st phase: title to synopsis (input title shown above the dividing line, the play follows below). tion 4.2). The user is now provided with a set of options similar to the flat approach: at each step choosing between generating a character line (and potentially also choosing which character should speak the line) or moving on to the next part of the generated synopsis.", "cite_spans": [], "ref_spans": [ { "start": 691, "end": 699, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Two-phase Hierarchical Generation", "sec_num": "4" }, { "text": "The goal of this phase is to generate a synopsis based on a user-specified play title. 
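Structurally, the two phases chain together roughly as sketched below; the prompt formats, model paths, and the naive sentence-based chunking are assumptions made for the example, not the exact setup described in the following subsections.

from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2-medium")   # stand-ins for the two finetuned models
synopsis_model = AutoModelForCausalLM.from_pretrained("gpt2-medium")
script_model = AutoModelForCausalLM.from_pretrained("gpt2-medium")

def complete(model, prompt, max_new_tokens):
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, do_sample=True, top_p=0.9,
                         max_new_tokens=max_new_tokens,
                         pad_token_id=tok.eos_token_id)
    return tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True)

def generate_play(title):
    # Phase 1: title -> synopsis (in the tool, the user may regenerate or
    # delete individual synopsis lines at this point).
    synopsis = complete(synopsis_model, title + "\n", max_new_tokens=200)
    # Split the synopsis into sentence-sized chunks (naive splitter for the sketch).
    chunks = [s.strip() + "." for s in synopsis.split(".") if s.strip()]
    # Phase 2: condition each script chunk on one synopsis chunk.
    return [complete(script_model, chunk + "\n", max_new_tokens=150) for chunk in chunks]

for scene in generate_play("A notebook full of ideas was stolen from an unbuttoned backpack."):
    print(scene, "\n---")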
For this, we finetune pretrained language models on a dataset consisting of synopses of theatre plays (scraped by us from Wikipedia), movies (Robischon, 2018; Kar et al., 2018) , TV series (scraped by us from various fan wiki pages) and books ( A notebook full of ideas was stolen from an unbuttoned backpack by a mailman in a blue vest.", "cite_spans": [ { "start": 228, "end": 245, "text": "(Robischon, 2018;", "ref_id": "BIBREF23" }, { "start": 246, "end": 263, "text": "Kar et al., 2018)", "ref_id": "BIBREF13" }, { "start": 330, "end": 331, "text": "(", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "1st Phase: Synopsis generation", "sec_num": "4.1" }, { "text": "A man named Tom, a man in a trench coat, arrives to deliver the notebook.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1st Phase: Synopsis generation", "sec_num": "4.1" }, { "text": "Tom: We've got an urgent message to deliver to your office. Man: That's impossible! Why'd you bring me here if you were planning to rob the post office? Figure 4 : An example for hierarchical generation 2nd phase: synopsis to script. The script generated in the bottom section is conditioned on the human-written prompt and a line from the generated synopsis, shown in the top section. The user has the option to continue generating automatically, or to control the next character speaking (choose from the previously used ones or input manually).", "cite_spans": [], "ref_spans": [ { "start": 153, "end": 161, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "1st Phase: Synopsis generation", "sec_num": "4.1" }, { "text": "Smith, 2017). The final dataset contains over 50k title-synopsis pairs. We finetuned three different models on our dataset for 15 epochs -GPT2-medium, Pegasus (Zhang et al., 2019) , and DistilBART (Shleifer and Rush, 2020) . Some basic statistics of all the models are shown in Table 2 , comparing to a vanilla GPT2 baseline. We can see that all models show similar scores. To choose the best synopsis model, we performed a small-scale human evaluation with 6 lay annotators rating 12 synopses generated by each model.", "cite_spans": [ { "start": 159, "end": 179, "text": "(Zhang et al., 2019)", "ref_id": "BIBREF34" }, { "start": 197, "end": 222, "text": "(Shleifer and Rush, 2020)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 278, "end": 285, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "1st Phase: Synopsis generation", "sec_num": "4.1" }, { "text": "The annotators were shown one story at a time and were asked to answer the following questions using a 1 (worst) to 5 (best) Likert scale rating:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1st Phase: Synopsis generation", "sec_num": "4.1" }, { "text": "1. Is the text coherent? 2. Are the characters consistent? 3. Is the text original and/or interesting? 4. Is the title relevant to the story? 5. How much did you enjoy reading this text? Based on the results of this evaluation (Table 3) , we picked out GPT2-medium 9 as the best one due to its highest overall impression score (Question 5) and strong performance in the remaining evaluated aspects.", "cite_spans": [], "ref_spans": [ { "start": 227, "end": 236, "text": "(Table 3)", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "1st Phase: Synopsis generation", "sec_num": "4.1" }, { "text": "In the second phase, we generate the play script from a pre-generated synopsis. 
As operating on the whole potentially very long synopsis, let alone the whole script, is beyond the capabilities of current models, we split the synopsis into smaller chunks, and consecutively take each of the chunks as input for generating a part of the script. 10", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2nd Phase: Script generation", "sec_num": "4.2" }, { "text": "A major challenge is obtaining the training dataset. Ideally, we would use a set of theatre scripts and corresponding synopses. However, due to licensing and copyright issues, such data are not available to us, except for a modest number of mostly very old plays. Therefore, we use a near-domain Script-Base corpus (Gorinski and Lapata, 2018), which contains movie scripts and their synopses. 11 Both synopses and scripts in ScriptBase are split Algorithm 1 Scene alignment.", "cite_spans": [ { "start": 393, "end": 395, "text": "11", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Data preparation and alignment", "sec_num": null }, { "text": "Input:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data preparation and alignment", "sec_num": null }, { "text": "{c i } N 1 \u25b7 Script SBERT embeddings Input: {m j } M 1 \u25b7 Synopsis SBERT embeddings s 1,j \u2190 cos(c 1 , m j ) \u25b7 Forward pass for i \u2208 {2, . . . , N }, j \u2208 {1, . . . , M } do s i,j \u2190 cos(c j , m j )+max{s i\u22121,j\u22121 , s i\u22121,j } end for a N \u2190 M \u25b7 Backward pass for i \u2208 {N, . . . , 2} do a i\u22121 \u2190 arg max j\u2208{a i \u22121,a i } s i\u22121,j end for return {a i } N 1 \u25b7 Each c i aligned to m a i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data preparation and alignment", "sec_num": null }, { "text": "into scenes, but the granularity is different. The scripts are divided into many very short scenes, sometimes consisting of only one utterance or scenic remark, and a scene synopsis often corresponds to tens of script scenes. We thus use the synopsis scenes, and align script scenes to them in a many-to-one fashion. The resulting dataset contains pairs of synopsis scenes and their aligned script scenes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data preparation and alignment", "sec_num": null }, { "text": "First, we process the scripts by removing short one-line scenes or merging them with adjacent scenes: If the line is uttered by a character also present in the previous scene (preferably) or the subsequent scene, we merge the two scenes. Otherwise, we remove the scene; this includes scenes consisting only of a scenic remark. 12 We then represent each script scene i and each synopsis scene j with its SBERT embeddings (Reimers and Gurevych, 2019 ) c i or m j , and align each script scene to the synopsis scene a i using dynamic programming with Algorithm 1. In the forward pass, the algorithm computes a scene pair alignment score s i,j as the cosine similarity of the embeddings, plus the score of the best candidate alignment for aligning the preceding script scene (i\u22121) to either the same synopsis scene (j) or to the preceding synopsis scene (j \u2212 1). 
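Read as code, the scene alignment (Algorithm 1) is a short dynamic program over cosine similarities of the SBERT embeddings. The numpy sketch below follows the prose description: the forward pass scores each candidate alignment, and the backward trace-back (described next) recovers a monotone many-to-one alignment. It is an illustrative re-implementation, not our exact code.

import numpy as np

def align_scenes(script_emb, synopsis_emb):
    # script_emb: (N, d) SBERT vectors of script scenes,
    # synopsis_emb: (M, d) SBERT vectors of synopsis scenes.
    # Returns a length-N array a with a[i] = index (0-based) of the synopsis
    # scene that script scene i is aligned to.
    c = script_emb / np.linalg.norm(script_emb, axis=1, keepdims=True)
    m = synopsis_emb / np.linalg.norm(synopsis_emb, axis=1, keepdims=True)
    sim = c @ m.T                     # cosine similarities, shape (N, M)
    N, M = sim.shape

    # Forward pass: cumulative score of aligning script scene i to synopsis
    # scene j, where the previous script scene sits on the same or the
    # previous synopsis scene.
    s = np.full((N, M), -np.inf)
    s[0] = sim[0]
    for i in range(1, N):
        for j in range(M):
            prev = s[i - 1, j] if j == 0 else max(s[i - 1, j - 1], s[i - 1, j])
            s[i, j] = sim[i, j] + prev

    # Backward pass: the last script scene is aligned to the last synopsis
    # scene; walk back, staying on the same synopsis scene or stepping back.
    a = np.zeros(N, dtype=int)
    a[N - 1] = M - 1
    for i in range(N - 1, 0, -1):
        candidates = [a[i]] if a[i] == 0 else [a[i] - 1, a[i]]
        a[i - 1] = max(candidates, key=lambda j: s[i - 1, j])
    return a

# Toy usage with random vectors in place of SBERT embeddings:
rng = np.random.default_rng(0)
print(align_scenes(rng.normal(size=(8, 16)), rng.normal(size=(3, 16))))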
The final alignment is computed in the backward pass, assuming the alignment of the last scenes to each other, and iteratively taking the best candidate alignment (a i or a i \u2212 1) for the preceding script scene (i \u2212 1).", "cite_spans": [ { "start": 327, "end": 329, "text": "12", "ref_id": null }, { "start": 420, "end": 447, "text": "(Reimers and Gurevych, 2019", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Data preparation and alignment", "sec_num": null }, { "text": "Furthermore, we filter the alignments by a threshold on SBERT cosine similarity of 0.3 (determined empirically). We thus create two versions of train- ing data for the script generation models (see Table 4 for details).", "cite_spans": [], "ref_spans": [ { "start": 198, "end": 205, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Data preparation and alignment", "sec_num": null }, { "text": "We use the GPT2-medium model finetuned for flat script generation (see Section 3) and finetune it further for the task of generating a script chunk from a synopsis chunk, using both dataset variants created in the previous subsection. For each synopsis scene as the input prompt, we train the model to generate the corresponding script scene. The model uses a 1e \u22125 learning rate for 10 epochs with warm up. A basic comparison using intrinsic statistics (scripts generated based on 6 identical prompts) is shown in Table 5 . While the scripts generated by the Hierarchical variant are shorter on average, they tend to be more variable, using a more varied vocabulary and showing higher entropy and perplexity, which points at less repetitiveness.", "cite_spans": [], "ref_spans": [ { "start": 515, "end": 522, "text": "Table 5", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Script generation model", "sec_num": null }, { "text": "Generating theatre play scripts is a complex task presenting many interesting challenges, many of which we have not yet been able to satisfactorily address, as we are continually being informed by theatre professionals.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Limitations", "sec_num": "5" }, { "text": "The main weakness of all our approaches is the inability to differentiate between individual characters to ensure their lines are cohesive while being distinct from other characters in the play. The theatre professionals consider it difficult to portray characters missing a consistent personality and motives behind the lines. While our past as well as ongoing experiments, employing natural language inference, line masking, and character pseudonymization, have shown promising results, they only seem to constitute partial superficial remedies for a deep and complex issue. In the future, we intend to approach the problem by adapting and employing current NLG personalization techniques (Yang and Flek, 2021 Another serious problem, identified by the theatre professionals while working with our hierarchical setup, is the fact that the script generation often strays away from the synopsis. So far, we have been only operating with flat textual representations of script parts in the hierarchical setup, aligning parts of the script to parts of its synopsis. 
While we believe the currently available data leave us no other option, a more adequate approach should probably operate with theatrological abstractions over the script, such as the notion of dramatic situations of Polti (1921) ; we have performed some small-scale annotations of 50 play scripts in this respect, but our exploratory experiments on the resulting dataset showed that we would require a much larger dataset to be able to employ current machine learning techniques, which is beyond our budget. Unfortunately, corpora of theatrical texts, even unannotated ones, are virtually non-existent, and while we managed to acquire a modest dataset, copyright and licensing issues limit us from releasing most of it.", "cite_spans": [ { "start": 691, "end": 711, "text": "(Yang and Flek, 2021", "ref_id": "BIBREF31" }, { "start": 1280, "end": 1292, "text": "Polti (1921)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Limitations", "sec_num": "5" }, { "text": "The use of extractive summarization and hierarchical generation allows us to generate mediumlength texts (one or several scenes), but a fulllength script is still somewhat out of our reach. We believe further improvements could be brought by employing abstractive summarization (Paulus et al., 2018) , specifically trained for theatre play scripts.", "cite_spans": [ { "start": 278, "end": 299, "text": "(Paulus et al., 2018)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Limitations", "sec_num": "5" }, { "text": "We created an interactive tool for human-in-theloop generation of theatre play scripts, with the aim of producing a stageable play with minimal human intervention. We pursue two different approaches, both based on finetuned GPT-2 models -flat generation with extractive summarization to maintain coherence, and a hierarchical two-stage approach, which first generates a textual synopsis and then generates individual scenes, conditioning on chunks of the synopsis. We release an online demo of our tool for interactive generation of the-atre play scripts. We are able to improve upon previous approaches using vanilla models, but our models still are not able to generate consistent personality or follow the synopsis accurately without human intervention.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "A demo of our interactive tool and its source codes are available online. 13 In future work, we plan to incorporate natural language inference checks (Welleck et al., 2019) or experiment with dialogue act semantic representations (Kumar et al., 2018) in order to increase coherence. To improve character consistency, we plan to follow percharacter personalization approaches (Yang and Flek, 2021) .", "cite_spans": [ { "start": 74, "end": 76, "text": "13", "ref_id": null }, { "start": 150, "end": 172, "text": "(Welleck et al., 2019)", "ref_id": "BIBREF28" }, { "start": 230, "end": 250, "text": "(Kumar et al., 2018)", "ref_id": "BIBREF14" }, { "start": 375, "end": 396, "text": "(Yang and Flek, 2021)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "A Generated texts A.1 Baseline Vanilla GPT-2 XL Lucy: Well, it's just a matter of time. 
I've got a little more than enough to last until next summer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Betty: But what if I want more?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Lucy What do you want more?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Betty: More vegetables! Lucy: Yes, that would be nice. I can get some carrots and celery, but there are still a lot of things to do around here. I think we can manage for now. Betty: What if you need to buy something for yourself? Lucy: What if I want to buy something for myself? Betty: Well, that would be nice, too. I'll just get the vegetables, but you have to take some of my meat and potatoes with you. Lucy: Okay, that would be nice. I'll go with you.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "The Doctor:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.2 Finetuned Flat GPT-2 Medium", "sec_num": null }, { "text": "But that's impossible! You've got all the features. Doctor:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.2 Finetuned Flat GPT-2 Medium", "sec_num": null }, { "text": "No, but you can still do things that you wouldn't be able to if you'd only been programmed as I was. Cyber-Planner: It's the last thing we need, my lord.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.2 Finetuned Flat GPT-2 Medium", "sec_num": null }, { "text": "Why not just kill us all and start again? Cyberman 1:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clara:", "sec_num": null }, { "text": "You have not heard me. We will never cease our attack until the human race is extinct. Cyber-Planner : We are too weak to stop it, Cybermen Rigsy:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clara:", "sec_num": null }, { "text": "We can't let you destroy us all, Doctor. We've had enough.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clara:", "sec_num": null }, { "text": "But Master, what can I do? He finally realizes he's done. And it takes him all of 5 seconds to explain to the two other humans why they've been replaced by the older model. He points at one of the humans as he explains: MATHESON:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ROBOT:", "sec_num": null }, { "text": "That was the mistake. This robot is never going to return to the factory. The older ROBOT then goes to a console and activates it. The HUMAN COMPUTER lights up with an awesome display of its past performance. 
As it plays through variousince it was destroyed, we see how things have changed:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ROBOT:", "sec_num": null }, { "text": "https://theaitre.com/demo", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://rowanzellers.com/grover/ 3 https://newsyoucantuse.com/ 4 https://play.aidungeon.io/ 5 https://copilot.github.com/ 6 https://www.tabnine.com/ 7 https://projectdecember.net/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The GPT-2 model sticks to the input format and generates a dialogue-like output; this is mostly true even for vanilla models, let alone a model finetuned specifically for this task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Trained with a 1e \u22125 learning rate with warm up. 10 This is motivated by the notion of a theatre script being split into individual scenes, which are partially independent. However, we do not guarantee that our chunks actually correspond to individual scenes, as we have not trained a scene splitter for synopses; therefore, we simply split the synopsis into individual sentences with a sentence splitter.11 Another option could be GraphMovie(Zhu et al., 2020), a similar dataset with better annotations but only available in Chinese.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "According to our cursory checks, this does not have a dramatic impact on overall coherence, as such scenes are usually not logically connected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Demo: https://theaitre.com/demo, sources: https://github.com/ufal/theaitrobot", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The project TL03000348 THEaiTRE: Um\u011bl\u00e1 inteligence autorem divadeln\u00ed hry is co-financed with the state support of Technological Agency of the Czech Republic within the ETA 3 Programme. The work described herein has been using data, tools and services provided by the LINDAT/CLARIAH-CZ Research Infrastructure (https://lindat.cz), supported by the Ministry of Education, Youth and Sports of the Czech Republic (Project No. LM2018101).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "Today is the first day of my factory training. I have achieved my primary objective: becoming A pillar of the community I am part of.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "HUMAN COMPUTING VOICE:", "sec_num": null }, { "text": "Leopold: I will speak to him.Leopold, in a white cape and black hat, steps into a wooden hut, then turns to his brothers. They stand, waiting, as:Katsumoto: Are you ready?Leopold: This is the one I'm seeking. Katsumoto: We seek only Wisdom beyond understanding. He holds out the bird. They gather it in their hands, looking at it, impressed. Katsumoto: This bird has knowledge we do not have. It can show us the way to our death. 
He holds it up, smiling at them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.4 Finetuned Hierarchical Filtered GPT-2 Medium", "sec_num": null }, { "text": "It speaks?Katsumoto: It teaches us.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algren:", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "STO-RIUM: A Dataset and Evaluation Platform for Machine-in-the-Loop Story Generation", "authors": [ { "first": "Nader", "middle": [], "last": "Akoury", "suffix": "" }, { "first": "Shufan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Josh", "middle": [], "last": "Whiting", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Hood", "suffix": "" }, { "first": "Nanyun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "6470--6484", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.525" ] }, "num": null, "urls": [], "raw_text": "Nader Akoury, Shufan Wang, Josh Whiting, Stephen Hood, Nanyun Peng, and Mohit Iyyer. 2020. STO- RIUM: A Dataset and Evaluation Platform for Machine-in-the-Loop Story Generation. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6470-6484, Online. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Cmu book summary dataset", "authors": [ { "first": "David", "middle": [], "last": "Bamman", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.34740/KAGGLE/DSV/6662" ] }, "num": null, "urls": [], "raw_text": "David Bamman and Noah Smith. 2017. Cmu book summary dataset.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Longformer: The long-document transformer", "authors": [ { "first": "Iz", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "E", "middle": [], "last": "Matthew", "suffix": "" }, { "first": "Arman", "middle": [], "last": "Peters", "suffix": "" }, { "first": "", "middle": [], "last": "Cohan", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.05150" ] }, "num": null, "urls": [], "raw_text": "Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Sunspring, a sci-fi short film starring Thomas Middleditch", "authors": [ { "first": "A", "middle": [ "I" ], "last": "Benjamin", "suffix": "" }, { "first": "Oscar", "middle": [], "last": "Sharp", "suffix": "" }, { "first": "Ross", "middle": [], "last": "Goodwin", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "AI Benjamin, Oscar Sharp, and Ross Goodwin. 2016. 
Sunspring, a sci-fi short film starring Thomas Mid- dleditch.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Language models are few-shot learners", "authors": [ { "first": "Benjamin", "middle": [], "last": "Tom B Brown", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Mann", "suffix": "" }, { "first": "Melanie", "middle": [], "last": "Ryder", "suffix": "" }, { "first": "Jared", "middle": [], "last": "Subbiah", "suffix": "" }, { "first": "Prafulla", "middle": [], "last": "Kaplan", "suffix": "" }, { "first": "Arvind", "middle": [], "last": "Dhariwal", "suffix": "" }, { "first": "Pranav", "middle": [], "last": "Neelakantan", "suffix": "" }, { "first": "Girish", "middle": [], "last": "Shyam", "suffix": "" }, { "first": "Amanda", "middle": [], "last": "Sastry", "suffix": "" }, { "first": "", "middle": [], "last": "Askell", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.14165" ] }, "num": null, "urls": [], "raw_text": "Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Few-shot NLG with pre-trained language model", "authors": [ { "first": "Zhiyu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Harini", "middle": [], "last": "Eavani", "suffix": "" }, { "first": "Wenhu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yinyin", "middle": [], "last": "Liu", "suffix": "" }, { "first": "William", "middle": [ "Yang" ], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "183--190", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.18" ] }, "num": null, "urls": [], "raw_text": "Zhiyu Chen, Harini Eavani, Wenhu Chen, Yinyin Liu, and William Yang Wang. 2020. Few-shot NLG with pre-trained language model. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 183-190, Online. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The Beyond the Fence musical and Computer Says Show documentary", "authors": [ { "first": "Simon", "middle": [], "last": "Colton", "suffix": "" }, { "first": "Maria", "middle": [ "Teresa" ], "last": "Llano", "suffix": "" }, { "first": "Rose", "middle": [], "last": "Hepworth", "suffix": "" }, { "first": "John", "middle": [], "last": "Charnley", "suffix": "" }, { "first": "Catherine", "middle": [ "V" ], "last": "Gale", "suffix": "" }, { "first": "Archie", "middle": [], "last": "Baron", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Pachet", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Roy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Seventh International Conference on Computational Creativity", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simon Colton, Maria Teresa Llano, Rose Hepworth, John Charnley, Catherine V. Gale, Archie Baron, Fran\u00e7ois Pachet, Pierre Roy, Pablo Gerv\u00e1s, Nick Collins, Bob Sturm, Tillman Weyde, Daniel Wolff, and James Robert Lloyd. 2016. The Beyond the Fence musical and Computer Says Show documen- tary. 
In Proceedings of the Seventh International Conference on Computational Creativity.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Hierarchical neural story generation", "authors": [ { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yann", "middle": [], "last": "Dauphin", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "889--898", "other_ids": { "DOI": [ "10.18653/v1/P18-1082" ] }, "num": null, "urls": [], "raw_text": "Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889-898, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Strategies for structuring story generation", "authors": [ { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yann", "middle": [], "last": "Dauphin", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2650--2660", "other_ids": { "DOI": [ "10.18653/v1/P19-1254" ] }, "num": null, "urls": [], "raw_text": "Angela Fan, Mike Lewis, and Yann Dauphin. 2019. Strategies for structuring story generation. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 2650- 2660, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "What's this movie about? A joint neural network architecture for movie content analysis", "authors": [ { "first": "Philip", "middle": [], "last": "John Gorinski", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2018, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "1770--1781", "other_ids": { "DOI": [ "10.18653/v1/N18-1160" ] }, "num": null, "urls": [], "raw_text": "Philip John Gorinski and Mirella Lapata. 2018. What's this movie about? A joint neural network architec- ture for movie content analysis. In Proceedings of NAACL-HLT, pages 1770-1781, New Orleans, Louisiana.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Don't stop pretraining: Adapt language models to domains and tasks", "authors": [ { "first": "Ana", "middle": [], "last": "Suchin Gururangan", "suffix": "" }, { "first": "Swabha", "middle": [], "last": "Marasovi\u0107", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Swayamdipta", "suffix": "" }, { "first": "Iz", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Doug", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Downey", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8342--8360", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.740" ] }, "num": null, "urls": [], "raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. 
Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342-8360, Online. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Lifestyle of the Richard and family", "authors": [ { "first": "Roslyn", "middle": [], "last": "Helper", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roslyn Helper. 2018. Lifestyle of the Richard and fam- ily.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "MPST: A corpus of movie plot synopses with tags", "authors": [ { "first": "Sudipta", "middle": [], "last": "Kar", "suffix": "" }, { "first": "A", "middle": [], "last": "Suraj Maharjan", "suffix": "" }, { "first": "Thamar", "middle": [], "last": "L\u00f3pez-Monroy", "suffix": "" }, { "first": "", "middle": [], "last": "Solorio", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sudipta Kar, Suraj Maharjan, A. Pastor L\u00f3pez-Monroy, and Thamar Solorio. 2018. MPST: A corpus of movie plot synopses with tags. In Proceedings of the Eleventh International Conference on Language Re- sources and Evaluation (LREC 2018), Paris, France. European Language Resources Association (ELRA).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Dialogue-act-driven Conversation Model : An Experimental Study", "authors": [ { "first": "Harshit", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Arvind", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Sachindra", "middle": [], "last": "Joshi", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1246--1256", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harshit Kumar, Arvind Agarwal, and Sachindra Joshi. 2018. Dialogue-act-driven Conversation Model : An Experimental Study. In Proceedings of the 27th Inter- national Conference on Computational Linguistics, pages 1246-1256, Santa Fe, New Mexico, USA. As- sociation for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Automatic Turn Segmentation for Movie & TV Subtitles", "authors": [ { "first": "Pierre", "middle": [], "last": "Lison", "suffix": "" }, { "first": "Raveesh", "middle": [], "last": "Meena", "suffix": "" } ], "year": 2016, "venue": "IEEE Workshop on Spoken Language Technology. IEEE conference proceedings", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pierre Lison and Raveesh Meena. 2016. Automatic Turn Segmentation for Movie & TV Subtitles. In 2016 IEEE Workshop on Spoken Language Technology. 
IEEE conference proceedings.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Improvised theatre alongside artificial intelligences", "authors": [ { "first": "W", "middle": [], "last": "Kory", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Mathewson", "suffix": "" }, { "first": "", "middle": [], "last": "Mirowski", "suffix": "" } ], "year": 2017, "venue": "Thirteenth Artificial Intelligence and Interactive Digital Entertainment Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kory W Mathewson and Piotr Mirowski. 2017. Im- provised theatre alongside artificial intelligences. In Thirteenth Artificial Intelligence and Interactive Dig- ital Entertainment Conference.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A deep reinforced model for abstractive summarization", "authors": [ { "first": "Romain", "middle": [], "last": "Paulus", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2018, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive sum- marization. In International Conference on Learning Representations.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The thirty-six dramatic situations", "authors": [ { "first": "Georges", "middle": [], "last": "Polti", "suffix": "" }, { "first": "; Jk", "middle": [], "last": "Reeve", "suffix": "" } ], "year": 1921, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Georges Polti. 1921. The thirty-six dramatic situations. JK Reeve.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Techni- cal report, OpenAI.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "PlotMachines: Outlineconditioned generation with dynamic plot state tracking", "authors": [ { "first": "Asli", "middle": [], "last": "Hannah Rashkin", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Celikyilmaz", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Choi", "suffix": "" }, { "first": "", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "4274--4295", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.349" ] }, "num": null, "urls": [], "raw_text": "Hannah Rashkin, Asli Celikyilmaz, Yejin Choi, and Jianfeng Gao. 2020. PlotMachines: Outline- conditioned generation with dynamic plot state track- ing. 
In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing (EMNLP), pages 4274-4295, Online. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2019, "venue": "2019 Conference on Empirical Methods in Natural Language Processing (EMNLP) and 9th International Joint Conference on Natural Language Processing (IJCNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence Embeddings using Siamese BERT- Networks. In 2019 Conference on Empirical Meth- ods in Natural Language Processing (EMNLP) and 9th International Joint Conference on Natural Lan- guage Processing (IJCNLP), Hong Kong.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Computational narrative intelligence: A human-centered goal for artificial intelligence", "authors": [ { "first": "O", "middle": [], "last": "Mark", "suffix": "" }, { "first": "", "middle": [], "last": "Riedl", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1602.06484" ] }, "num": null, "urls": [], "raw_text": "Mark O Riedl. 2016. Computational narrative intelli- gence: A human-centered goal for artificial intelli- gence. arXiv preprint arXiv:1602.06484.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Wikipedia movie plots", "authors": [ { "first": "Justin", "middle": [], "last": "Robischon", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Justin Robischon. 2018. Wikipedia movie plots.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Inspiration through observation: Demonstrating the influence of automatically generated text on creative writing", "authors": [ { "first": "Melissa", "middle": [], "last": "Roemmele", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Melissa Roemmele. 2021. Inspiration through obser- vation: Demonstrating the influence of automati- cally generated text on creative writing. CoRR, abs/2107.04007.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Kl\u00e1ra Voseck\u00e1, Tom\u00e1\u0161 Studen\u00edk, and Petr \u017dabka. 2021. 
THEaiTRE 1.0: Interactive generation of theatre play scripts", "authors": [ { "first": "Rudolf", "middle": [], "last": "Rosa", "suffix": "" }, { "first": "Tom\u00e1\u0161", "middle": [], "last": "Musil", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Dominik", "middle": [], "last": "Jurko", "suffix": "" }, { "first": "Patr\u00edcia", "middle": [], "last": "Schmidtov\u00e1", "suffix": "" }, { "first": "David", "middle": [], "last": "Mare\u010dek", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Kocmi", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Hrbek", "suffix": "" }, { "first": "David", "middle": [], "last": "Ko\u0161t'\u00e1k", "suffix": "" }, { "first": "Martina", "middle": [], "last": "Kinsk\u00e1", "suffix": "" }, { "first": "Marie", "middle": [], "last": "Nov\u00e1kov\u00e1", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Dole\u017eal", "suffix": "" } ], "year": null, "venue": "Proceedings of the Text2Story'21 Workshop", "volume": "2860", "issue": "", "pages": "71--76", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rudolf Rosa, Tom\u00e1\u0161 Musil, Ond\u0159ej Du\u0161ek, Dominik Jurko, Patr\u00edcia Schmidtov\u00e1, David Mare\u010dek, Ond\u0159ej Bojar, Tom Kocmi, Daniel Hrbek, David Ko\u0161t'\u00e1k, Martina Kinsk\u00e1, Marie Nov\u00e1kov\u00e1, Josef Dole\u017eal, Kl\u00e1ra Voseck\u00e1, Tom\u00e1\u0161 Studen\u00edk, and Petr \u017dabka. 2021. THEaiTRE 1.0: Interactive generation of the- atre play scripts. In Proceedings of the Text2Story'21 Workshop, volume 2860 of CEUR Workshop Pro- ceedings, pages 71-76, Aachen, Germany. RWTH Aachen University.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Pre-trained summarization distillation", "authors": [ { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sam Shleifer and Alexander M. Rush. 2020. Pre-trained summarization distillation. CoRR, abs/2010.13002.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Progressive Generation of Long Text with Pretrained Language Models", "authors": [ { "first": "Zichao", "middle": [], "last": "Bowen Tan", "suffix": "" }, { "first": "", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ai-", "middle": [], "last": "Maruan", "suffix": "" }, { "first": "Eric", "middle": [ "P" ], "last": "Shedivat", "suffix": "" }, { "first": "Zhiting", "middle": [], "last": "Xing", "suffix": "" }, { "first": "", "middle": [], "last": "Hu", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2006.15720[cs].ArXiv:2006.15720" ] }, "num": null, "urls": [], "raw_text": "Bowen Tan, Zichao Yang, Maruan AI-Shedivat, Eric P. Xing, and Zhiting Hu. 2021. Progressive Genera- tion of Long Text with Pretrained Language Models. arXiv:2006.15720 [cs]. 
", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Dialogue Natural Language Inference", "authors": [ { "first": "Sean", "middle": [], "last": "Welleck", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Arthur", "middle": [], "last": "Szlam", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3731--3741", "other_ids": { "DOI": [ "10.18653/v1/P19-1363" ] }, "num": null, "urls": [], "raw_text": "Sean Welleck, Jason Weston, Arthur Szlam, and Kyunghyun Cho. 2019. Dialogue Natural Language Inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3731-3741, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Multi-domain neural network language generation for spoken dialogue systems", "authors": [ { "first": "Milica", "middle": [], "last": "Tsung-Hsien Wen", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Ga\u0161i\u0107", "suffix": "" }, { "first": "Lina", "middle": [ "M" ], "last": "Mrk\u0161i\u0107", "suffix": "" }, { "first": "Pei-Hao", "middle": [], "last": "Rojas-Barahona", "suffix": "" }, { "first": "David", "middle": [], "last": "Su", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Vandyke", "suffix": "" }, { "first": "", "middle": [], "last": "Young", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "120--129", "other_ids": { "DOI": [ "10.18653/v1/N16-1015" ] }, "num": null, "urls": [], "raw_text": "Tsung-Hsien Wen, Milica Ga\u0161i\u0107, Nikola Mrk\u0161i\u0107, Lina M. Rojas-Barahona, Pei-Hao Su, David Vandyke, and Steve Young. 2016. Multi-domain neural network language generation for spoken dialogue systems. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 120-129, San Diego, California. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Beyond goldfish memory: Long-term open-domain conversation", "authors": [ { "first": "Jing", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Arthur", "middle": [], "last": "Szlam", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.48550/ARXIV.2107.07567" ] }, "num": null, "urls": [], "raw_text": "Jing Xu, Arthur Szlam, and Jason Weston. 2021. Beyond goldfish memory: Long-term open-domain conversation.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Towards user-centric text-to-text generation: A survey", "authors": [ { "first": "Diyi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Lucie", "middle": [], "last": "Flek", "suffix": "" } ], "year": 2021, "venue": "Text, Speech, and Dialogue", "volume": "", "issue": "", "pages": "3--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diyi Yang and Lucie Flek. 2021. Towards user-centric text-to-text generation: A survey. In Text, Speech, and Dialogue, pages 3-22, Cham.
Springer International Publishing.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Big bird: Transformers for longer sequences", "authors": [ { "first": "Manzil", "middle": [], "last": "Zaheer", "suffix": "" }, { "first": "Guru", "middle": [], "last": "Guruganesh", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Kumar Avinava Dubey", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Ainslie", "suffix": "" }, { "first": "Santiago", "middle": [], "last": "Alberti", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Ontanon", "suffix": "" }, { "first": "Anirudh", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Qifan", "middle": [], "last": "Ravula", "suffix": "" }, { "first": "Li", "middle": [], "last": "Wang", "suffix": "" }, { "first": "", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2020, "venue": "NeurIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. 2020. Big bird: Transformers for longer sequences. In NeurIPS.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Defending against neural fake news", "authors": [ { "first": "Rowan", "middle": [], "last": "Zellers", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Holtzman", "suffix": "" }, { "first": "Hannah", "middle": [], "last": "Rashkin", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Bisk", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Farhadi", "suffix": "" }, { "first": "Franziska", "middle": [], "last": "Roesner", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "9054--9065", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 9054-9065. Curran Associates, Inc.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Pegasus: Pre-training with extracted gap-sentences for abstractive summarization", "authors": [ { "first": "Jingqing", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yao", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Saleh", "suffix": "" }, { "first": "Peter", "middle": [ "J" ], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. 2019.
Pegasus: Pre-training with extracted gap-sentences for abstractive summarization.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "ScriptWriter: Narrative-guided script generation", "authors": [ { "first": "Yutao", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Ruihua", "middle": [], "last": "Song", "suffix": "" }, { "first": "Zhicheng", "middle": [], "last": "Dou", "suffix": "" }, { "first": "Jian-Yun", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Jin", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8647--8657", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.765" ] }, "num": null, "urls": [], "raw_text": "Yutao Zhu, Ruihua Song, Zhicheng Dou, Jian-Yun Nie, and Jin Zhou. 2020. ScriptWriter: Narrative-guided script generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8647-8657, Online. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Flat script generation example. The initial human-written prompt is shown above the dividing line; the generated outputs follow below.", "num": null, "uris": null, "type_str": "figure" }, "TABREF1": { "num": null, "content": "
A brief overview of the script dataset we use for finetuning.
", "text": "", "html": null, "type_str": "table" }, "TABREF3": { "num": null, "content": "
Model                  Avg. # Sentences  Avg. # Words  Vocab Size  Entropy
Vanilla GPT-2          38.10             285.80        1,371       1.72
Finetuned GPT-2        29.32             536.74        1,995       3.48
Finetuned PEGASUS      14.80             281.40        1,416       2.65
Finetuned DistilBART   27.00             526.33        1,182       2.43
", "text": "Bamman and Model Avg. # Sentences Avg. # Words Vocab Size Entropy", "html": null, "type_str": "table" }, "TABREF4": { "num": null, "content": "
Model                  Coherence  Consistency  Originality  Relevance  Overall Impression
Vanilla GPT-2          2.7        2.8          2.6          2.7        2.6
Finetuned GPT-2        3.0        3.1          3.1          2.6        3.2
Finetuned PEGASUS      2.8        2.8          3.0          2.1        2.8
Finetuned DistilBART   1.9        2.0          3.2          2.0        2.9
", "text": "Basic characteristics of synopsis generation model outputs (average output lengths in terms of sentences and words, total number of distinct words used on the output, Shannon entropy over all outputs).", "html": null, "type_str": "table" }, "TABREF5": { "num": null, "content": "", "text": "Results of human evaluation of synopsis generation models (1 to 5 points, higher is better). The presented values are the average values across the annotator scores.", "html": null, "type_str": "table" }, "TABREF7": { "num": null, "content": "
", "text": "Statistics of aligned synopsis-script scenes used for hierarchical generation (script-synopsis ratio is the average number of script scenes aligned to a single synopsis scene).", "html": null, "type_str": "table" }, "TABREF8": { "num": null, "content": "
Model                            Avg. # Lines  Avg. # Sentences  Avg. # Words  Vocab Size  Entropy  Perplexity
Vanilla GPT-2                    7.33          203.00            500.83        863         2.71     5.19
Finetuned GPT-2: Flat            5.67          94.33             724.50        981         3.09     6.30
Finetuned GPT-2: Hier./Base      5.00          68.00             769.50        1,336       2.93     9.77
Finetuned GPT-2: Hier./Filtered  5.67          61.50             678.00        1,335       2.72     21.87
", "text": ").", "html": null, "type_str": "table" }, "TABREF9": { "num": null, "content": "", "text": "A basic statistics comparison for script generation by different model variants. Cf.Table 2for metrics details; perplexity is measured using vanilla GPT2-XL.", "html": null, "type_str": "table" } } } }