conversational_weather / conversational_weather.json
Sebastian Gehrmann
{
"overview": {
"where": {
"has-leaderboard": "no",
"leaderboard-url": "N/A",
"leaderboard-description": "N/A",
"data-url": "[Github](https://github.com/facebookresearch/TreeNLG)",
"paper-url": "[ACL Anthology](https://aclanthology.org/P19-1080)",
"paper-bibtext": "```\n@inproceedings{balakrishnan-etal-2019-constrained,\n title = \"Constrained Decoding for Neural {NLG} from Compositional Representations in Task-Oriented Dialogue\",\n author = \"Balakrishnan, Anusha and\n Rao, Jinfeng and\n Upasani, Kartikeya and\n White, Michael and\n Subba, Rajen\",\n booktitle = \"Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics\",\n month = jul,\n year = \"2019\",\n address = \"Florence, Italy\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/P19-1080\",\n doi = \"10.18653/v1/P19-1080\",\n pages = \"831--844\"\n}\n```",
"contact-name": "Kartikeya Upasani",
"contact-email": "[email protected]"
},
"languages": {
"is-multilingual": "no",
"license": "cc-by-nc-4.0: Creative Commons Attribution Non Commercial 4.0 International",
"task-other": "N/A",
"task": "Data-to-Text",
"license-other": "N/A",
"communicative": "Producing a text that is a response to a weather query as per the discourse structure and data attributes specified in the input meaning representation.",
"language-names": [
"English"
],
"intended-use": "This dataset is intended to help develop conversational agents that exhibit human-like properties such as matching the framing of the response with the query or contrasting relevant data attributes."
},
"credit": {
"organization-type": [
"industry"
],
"organization-names": "Facebook",
"creators": "Anusha Balakrishnan, Jinfeng Rao, Kartikeya Upasani, Michael White, Rajen Subba (Facebook Conversational AI)",
"funding": "Facebook",
"gem-added-by": "Vipul Raheja (Grammarly)"
},
"structure": {
"data-fields": "- `gem_id`: (string): GEM-formatted row id\n- `id`: (string): Row id in the original data\n- `user_query`: (string): Natural language weather query from humans\n- `tree_str_mr`: (string): Synthetically-added user context (datetime and location) in the form of a tree-structured MR\n- `response`: (string): A tree-structured annotation of the response.\n",
"structure-example": "```\n{'gem_id': 'weather-train-11',\n'id': '1108963',\n 'synthetic_user_context': '[__DG_INFORM__ [__ARG_TASK__ get_forecast ] '\n '[__ARG_TEMP__ 37 ] [__ARG_TEMP_UNIT__ fahrenheit ] '\n '[__ARG_CLOUD_COVERAGE__ partly cloudy ] '\n '[__ARG_DATE_TIME__ [__ARG_COLLOQUIAL__ currently ] '\n '] [__ARG_LOCATION__ [__ARG_CITY__ Oakland ] '\n '[__ARG_COUNTRY__ United States ] [__ARG_REGION__ '\n 'California ] ] ] [__DG_INFORM__ [__ARG_TASK__ '\n 'get_forecast ] [__ARG_TEMP_SUMMARY__ mid 40s ] '\n '[__ARG_DATE_TIME_RANGE__ [__ARG_COLLOQUIAL__ This '\n 'afternoon ] ] [__ARG_LOCATION__ [__ARG_CITY__ '\n 'Oakland ] [__ARG_COUNTRY__ United States ] '\n '[__ARG_REGION__ California ] ] ] [__DG_INFORM__ '\n '[__ARG_TASK__ get_forecast ] '\n '[__ARG_CLOUD_COVERAGE__ mostly sunny ] '\n '[__ARG_DATE_TIME_RANGE__ [__ARG_COLLOQUIAL__ This '\n 'afternoon ] ] [__ARG_LOCATION__ [__ARG_CITY__ '\n 'Oakland ] [__ARG_COUNTRY__ United States ] '\n '[__ARG_REGION__ California ] ] ]',\n 'tree_str_mr': \"[__DG_INFORM__ It's [__ARG_DATE_TIME__ [__ARG_COLLOQUIAL__ \"\n 'currently ] ] [__ARG_CLOUD_COVERAGE__ partly cloudy ] and '\n '[__ARG_TEMP__ __ARG_TEMP__ ] [__ARG_TEMP_UNIT__ '\n '__ARG_TEMP_UNIT__ ] [__ARG_LOCATION__ in [__ARG_CITY__ '\n '__ARG_CITY__ ] , [__ARG_REGION__ __ARG_REGION__ ] , '\n '[__ARG_COUNTRY__ __ARG_COUNTRY__ ] ] . ] [__DG_INFORM__ '\n '[__ARG_DATE_TIME_RANGE__ [__ARG_COLLOQUIAL__ This afternoon ] '\n \"] , it'll be [__ARG_CLOUD_COVERAGE__ mostly sunny ] ] \"\n '[__DG_INFORM__ with temperatures in the [__ARG_TEMP_SUMMARY__ '\n 'mid <number> ] ]',\n 'user_query': 'Show weather forecast for Oakland, CA. '}\n```",
"structure-splits": "- Standard Splits: Train/Validation/Test\n- Additional Split: Disc_Test (a more challenging subset of the test set that contains discourse relations) ",
"structure-splits-criteria": "The test set contains 3,121 examples, of which 1.1K (35%) have unique MRs that have never been seen in the training set.",
"structure-outlier": "```\n{'gem_id': 'weather-train-13333', 'data_id': '1260610', 'user_query': 'Sundown', 'tree_str_mr': '[__DG_INFORM__ [__ARG_TASK__ get_weather_attribute ] [__ARG_SUNSET_TIME_DATE_TIME__ [__ARG_TIME__ 05:04 PM ] ] ]', 'response': '[__DG_INFORM__ The sun will go down at [__ARG_SUNSET_TIME_DATE_TIME__ [__ARG_TIME__ __ARG_TIME__ ] ] ]'}\n```"
},
"what": {
"dataset": "The purpose of this dataset is to assess how well a model can learn a template-like structure in a very low data setting. The task here is to produce a response to a weather-related query. The reply is further specified through the data attributes and discourse structure in the input. The output contains both the lexicalized text and discourse markers for attributes (e.g., `_ARG_TEMP_ 34`). "
}
},
"curation": {
"original": {
"rationale": "The dataset was curated to develop a weather bot that exhibits human-like properties such as matching the framing of the response with the query or contrasting relevant data attributes. To achieve this, the dataset contains rich tree-structured meaning representations that are specified using several data arguments and discourse acts, the input natural language queries, and annotations for the responses. ",
"communicative": "Producing a text that is a response to a weather query as per the discourse structure and data attributes specified in the input meaning representation.",
"is-aggregated": "no",
"aggregated-sources": "N/A"
},
"language": {
"obtained": [
"Crowdsourced",
"Machine-generated"
],
"found": [],
"crowdsourced": [
"Other crowdworker platform"
],
"created": "N/A",
"machine-generated": "N/A",
"validated": "validated by crowdworker",
"is-filtered": "hybrid",
"pre-processed": "Please refer to Appendix D of the original paper for details.",
"filtered-criteria": "Please refer to Appendix C of the original paper for details.",
"topics": "The dataset is focused on the weather domain: Weather was the first successful case of NLG put into production back in the 80s (Reiter & Dale, 1997). This domain offers significant complexity for NLG. Weather forecast summaries in particular can be very long, and require reasoning over several disjoint pieces of information."
},
"annotations": {
"origin": "none",
"rater-number": "N/A",
"rater-qualifications": "N/A",
"rater-training-num": "N/A",
"rater-test-num": "N/A",
"rater-annotation-service-bool": "no",
"rater-annotation-service": [],
"values": "N/A",
"quality-control": [],
"quality-control-details": "N/A"
},
"consent": {
"has-consent": "no",
"consent-policy": "N/A",
"consent-other": "N/A",
"no-consent-justification": "Annotation was done as work for hire and contains no PII."
},
"pii": {
"has-pii": "no PII",
"no-pii-justification": "Data is simulated and not specific to annotator.",
"is-pii-identified": "N/A",
"pii-identified-method": "N/A",
"is-pii-replaced": "N/A",
"pii-replaced-method": "N/A",
"pii-categories": []
},
"maintenance": {
"has-maintenance": "no",
"description": "N/A",
"contact": "N/A",
"contestation-mechanism": "N/A",
"contestation-link": "N/A",
"contestation-description": "N/A"
}
},
"gem": {
"rationale": {
"contribution": "The dataset was curated to develop a weather bot that exhibits human-like properties such as matching the framing of the response with the query or contrasting relevant data attributes. \n\nThe dataset offers rich tree-based meaning representations that offer fine-grained control over the response, e.g. by specifying which two attributes are to be contrasted. The natural language input queries are also provided to model the coherence of the response based on the input. The output response is annotated with the input meaning components using special bracketing tokens, which enables developing new techniques such as constrained decoding to improve quality of output responses",
"sole-task-dataset": "no",
"distinction-description": "N/A",
"model-ability": "Adequately expressing CONTRAST and JUSTIFY discourse relations with appropriate grouping of arguments; adequately generalizing to many combinations of arguments."
},
"curation": {
"has-additional-curation": "yes",
"modification-types": [
"data points removed"
],
"modification-description": "The original repo contained a challenge set disc_test.tsv, which is a subset of the test set consisting of discourse relations (CONTRAST and JUSTIFY) , but also contained JOIN relations.\nThis discrepancy has been rectified in the GEM version. The rectified version has been added in the `challenge_sets` ",
"has-additional-splits": "no",
"additional-splits-description": "N/A",
"additional-splits-capacicites": "N/A"
},
"starting": {}
},
"results": {
"results": {
"other-metrics-definitions": "Tree accuracy: It measures whether the tree structure in the prediction matches that of the input MR exactly (modulo repeated arguments that need only appear once).",
"has-previous-results": "no",
"current-evaluation": "N/A",
"previous-results": "N/A",
"model-abilities": "Adequately expressing CONTRAST and JUSTIFY discourse relations with appropriate grouping of arguments; adequately generalizing to many combinations of arguments.",
"metrics": [
"BLEU",
"Other: Other Metrics"
],
"original-evaluation": "Automatic metrics are evaluated on the raw model predictions (which have de-lexicalized fields):\n* Tree accuracy: Measures whether the tree structure in the prediction matches that of the input MR exactly. \n* BLEU-4: A word overlap metric commonly used for evaluating NLG systems.\n\nAuthors also performed human evaluation studies by asking annotators to evaluate the quality of responses produced by different models. Annotators provided binary ratings on the following dimensions:\n\u2022 Grammaticality: Measures fluency of the responses. \n\u2022 Correctness: Measures semantic correctness of the responses."
}
},
"considerations": {
"pii": {
"risks-description": "Annotation was done as work for hire and contains no PII. Annotated data is simulated and not specific to annotator.\n"
},
"licenses": {
"dataset-restrictions-other": "N/A",
"data-copyright-other": "N/A"
},
"limitations": {
"data-unsuited-applications": "An imperfect model used to convey actual weather data could mislead users about weather conditions?"
}
},
"context": {
"previous": {
"is-deployed": "no",
"described-risks": "N/A",
"changes-from-observation": "N/A"
},
"underserved": {
"helps-underserved": "no",
"underserved-description": "N/A"
},
"biases": {
"has-biases": "unsure",
"bias-analyses": "N/A",
"speaker-distibution": "Grammatical evaluations performed with the data to date have used norms from informal Standard American English. These prescriptive notions of grammaticality potentially serve to perpetuate systemic power imbalances as they\u2019re conveyed by language. \n\nSince the data only contains informal Standard American English, its use to train a model may not be appropriate depending on the potential use case."
}
}
}
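The tree accuracy metric described in this card can be illustrated with a short sketch. This is a hypothetical simplification, not the authors' implementation: it extracts only the bracketed nonterminal structure from a tree-structured MR or response string (as shown in the structure examples above), requires an exact structural match, and does not handle the caveat about repeated arguments that need only appear once.

```python
# Hypothetical sketch of a simplified "tree accuracy" check (not the
# authors' code): parse the bracketed tree structure out of an MR or
# response string and compare structures, ignoring lexical tokens.

def parse_structure(tree_str):
    """Return a nested (label, children) tree of the bracketed nonterminals."""
    root = ("ROOT", [])
    stack = [root]
    for tok in tree_str.split():
        if tok.startswith("["):        # e.g. "[__DG_INFORM__", "[__ARG_TEMP__"
            node = (tok[1:], [])
            stack[-1][1].append(node)
            stack.append(node)
        elif tok == "]":               # closes the most recently opened bracket
            stack.pop()
        # all other tokens are lexical content and do not affect the structure
    return root

def tree_accuracy(prediction, mr):
    """True iff the prediction realizes exactly the tree structure of the MR."""
    return parse_structure(prediction) == parse_structure(mr)

# A de-lexicalized prediction matches the MR's structure even though the
# surface tokens differ:
mr = "[__DG_INFORM__ [__ARG_TEMP__ 37 ] [__ARG_TEMP_UNIT__ fahrenheit ] ]"
pred = "[__DG_INFORM__ It's [__ARG_TEMP__ __ARG_TEMP__ ] [__ARG_TEMP_UNIT__ __ARG_TEMP_UNIT__ ] ]"
```

Here `tree_accuracy(pred, mr)` holds because both strings reduce to the same nonterminal tree, while a prediction that drops the `__ARG_TEMP_UNIT__` argument would fail the check.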