# Dataset Card for Taskmaster-3

- **Repository:** https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020
- **Paper:** https://aclanthology.org/2021.acl-long.55.pdf
- **Leaderboard:** None
- **Who transforms the dataset:** Qi Zhu (zhuq96 at gmail dot com)

### Dataset Summary

The Taskmaster-3 (aka TicketTalk) dataset consists of 23,789 movie ticketing dialogs (located in Taskmaster/TM-3-2020/data/). By "movie ticketing" we mean conversations where the customer's goal is to purchase tickets after deciding on theater, time, movie name, number of tickets, and date, or to opt out of the transaction.

This collection was created using the "self-dialog" method: a single crowd-sourced worker is paid to write a conversation, composing the turns for both speakers, i.e. the customer and the ticketing agent. In order to gather a wide range of conversational scenarios and linguistic phenomena, workers were given both open-ended and highly structured conversational tasks; in all, we used over three dozen sets of instructions while building this corpus. The "instructions" field in data.json provides the exact scenario workers were given to complete each dialog. In this way, conversations follow a wide variety of paths: the customer may decide on a movie based on genre, their location, or current releases, or may already have one in mind. Dialogs also include error handling with respect to repairs (e.g. "No, I said Tom Cruise."), clarifications (e.g. "Sorry. Did you want the AMC 16 or Century City 16?"), and other common conversational hiccups.

In some cases the instructions are completely open-ended, e.g. "Pretend you are taking your friend to a movie in Salem, Oregon. Create a conversation where you end up buying two tickets after finding out what is playing in at least two local theaters. Make sure the ticket purchase includes a confirmation of the details by the agent before the purchase, including date, time, movie, theater, and number of tickets." In other cases we restrict the conversational content and structure by offering a partially completed conversation that the workers must finalize or fill in based on certain parameters. These partially completed dialogs are labeled "Auto template" in the "scenario" field shown for each conversation in the data.json file. In some cases we provided a small KB from which workers would choose movies, theaters, etc., but in most cases (pre-pandemic) workers were told to use the internet to get accurate, current details for their dialogs. In any case, all relevant entities are annotated.

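A minimal sketch of peeking at these fields is shown below. It assumes a single data.json file as described above; in the actual repository the conversations may be sharded across several JSON files under TM-3-2020/data/, so adjust the path accordingly.

```python
import json

# Peek at the "instructions" and "scenario" fields described above.
# Assumes a single JSON file holding a list of conversation objects; the
# original data may be sharded into several files under TM-3-2020/data/.
with open("Taskmaster/TM-3-2020/data/data.json", encoding="utf-8") as f:
    dialogs = json.load(f)

first = dialogs[0]
print(sorted(first.keys()))       # which fields a conversation carries
print(first.get("scenario"))      # e.g. whether it is an "Auto template" dialog
print(first.get("instructions"))  # the exact worker-facing scenario
```
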
- **How to get the transformed data from original data** (scripted in the first sketch after this list):
  - Download [master.zip](https://github.com/google-research-datasets/Taskmaster/archive/refs/heads/master.zip).
  - Run `python preprocess.py` in the current directory.
- **Main changes of the transformation** (illustrated in the second sketch after this list):
  - Remove dialogs that are empty or only contain one speaker.
  - Split the dialogs of each domain into train/validation/test sets randomly (8:1:1).
  - Merge consecutive turns by the same speaker (ignoring repeated turns).
  - Annotate `dialogue acts` according to the original segment annotations, adding an `intent` annotation (always `inform`). The type of a `dialogue act` is set to `non-categorical` unless the `slot` is `description.other` or `description.plot`, in which case the type is set to `binary` (and the `value` is empty). If multiple spans overlap, only the shortest one is kept, since we found that this simple strategy reduces annotation noise.
  - Add `domain` and `intent` descriptions.
  - Rename `api` to `db_results`.
  - Add `state` by accumulating `non-categorical dialogue acts` in the order in which they appear.
- **Annotations:**
  - dialogue acts, state, db_results.

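The two reproduction steps above can be scripted roughly as follows. This sketch simply places master.zip in the current directory before invoking preprocess.py, since that is all the steps above specify.

```python
import subprocess
import urllib.request

# Fetch master.zip into the current directory, then run preprocess.py.
# (Whether preprocess.py reads the zip directly or expects it extracted is
# not specified in this card.)
URL = "https://github.com/google-research-datasets/Taskmaster/archive/refs/heads/master.zip"
urllib.request.urlretrieve(URL, "master.zip")
subprocess.run(["python", "preprocess.py"], check=True)
```
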
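The main transformation steps can also be illustrated with a simplified sketch. This is not the actual preprocess.py: the turn and span structures below are hypothetical stand-ins, used only to show the turn-merging, overlapping-span, and state-accumulation logic described above.

```python
def merge_consecutive_turns(turns):
    """Merge consecutive turns by the same speaker, skipping exact repeats."""
    merged = []
    for speaker, text in turns:
        if merged and merged[-1][0] == speaker:
            if text != merged[-1][1]:                  # ignore repeated turns
                merged[-1] = (speaker, merged[-1][1] + " " + text)
        else:
            merged.append((speaker, text))
    return merged


def keep_shortest_overlapping(spans):
    """Among overlapping (start, end) character spans, keep only the shortest."""
    kept = []
    for start, end in sorted(spans, key=lambda s: s[1] - s[0]):  # shortest first
        if all(end <= k[0] or start >= k[1] for k in kept):      # no overlap with kept spans
            kept.append((start, end))
    return sorted(kept)


def accumulate_state(turn_acts):
    """Accumulate non-categorical (slot, value) acts into a running state."""
    state, states = {}, []
    for acts in turn_acts:                             # one list of acts per turn
        for slot, value in acts:
            state[slot] = value
        states.append(dict(state))                     # state after this turn
    return states


turns = [("user", "Two tickets please."), ("user", "For tonight."),
         ("assistant", "Sure, which theater?")]
print(merge_consecutive_turns(turns))
```
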
### Supported Tasks and Leaderboards

NLU, DST, Policy, NLG, E2E

### Languages

English

### Data Splits

| split      | dialogues | utterances | avg_utt | avg_tokens | avg_domains | cat slot match (state) | cat slot match (goal) | cat slot match (dialogue act) | non-cat slot span (dialogue act) |
|------------|-----------|------------|---------|------------|-------------|------------------------|-----------------------|-------------------------------|----------------------------------|
| train      | 18997     | 380646     | 20.04   | 10.48      | 1           | -                      | -                     | -                             | 100                              |
| validation | 2380      | 47531      | 19.97   | 10.38      | 1           | -                      | -                     | -                             | 100                              |
| test       | 2380      | 48849      | 20.52   | 10.12      | 1           | -                      | -                     | -                             | 100                              |
| all        | 23757     | 477026     | 20.08   | 10.43      | 1           | -                      | -                     | -                             | 100                              |

1 domain: ['movie']
- **cat slot match**: the percentage of categorical slot values that appear among the possible values listed in the ontology.
- **non-cat slot span**: the percentage of non-categorical slot values that have span annotations (see the sketch below).

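As a rough illustration, the non-cat slot span figure could be computed as below. The `turns` / `dialogue_acts` / `non-categorical` / `start` / `end` field names are assumptions about the transformed data format, not something documented in this card.

```python
# Percentage of non-categorical dialogue-act values that carry a character span.
# Field names below ("turns", "dialogue_acts", "non-categorical", "start", "end")
# are assumptions about the transformed format, not documented in this card.
def non_cat_slot_span(dialogues):
    total = with_span = 0
    for dialogue in dialogues:
        for turn in dialogue.get("turns", []):
            for act in turn.get("dialogue_acts", {}).get("non-categorical", []):
                total += 1
                if "start" in act and "end" in act:
                    with_span += 1
    return 100.0 * with_span / total if total else 0.0
```
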
### Citation

```
@inproceedings{byrne-etal-2021-tickettalk,
    title = "{T}icket{T}alk: Toward human-level performance with end-to-end, transaction-based dialog systems",
    author = "Byrne, Bill and
      Krishnamoorthi, Karthik and
      Ganesh, Saravanan and
      Kale, Mihir",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-long.55",
    doi = "10.18653/v1/2021.acl-long.55",
    pages = "671--680",
}
```

### Licensing Information

[**CC BY 4.0**](https://creativecommons.org/licenses/by/4.0/)