---
language:
- multilingual
license:
- cc-by-4.0
multilinguality:
- multilingual
source_datasets:
- nluplusplus
task_categories:
- text-classification
pretty_name: multi3-nlu
---
# Dataset Card for Multi<sup>3</sup>NLU++
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contact](#contact)
## Dataset Description
- **Paper:** [arXiv](https://arxiv.org/abs/2212.10455)
### Dataset Summary
Please access the dataset using:
```
git clone https://huggingface.co/datasets/uoe-nlp/multi3-nlu/
```
Multi<sup>3</sup>NLU++ consists of 3,080 utterances per language, representing the challenges of building multilingual, multi-intent, multi-domain task-oriented dialogue systems. The domains include banking and hotels. There are 62 unique intents.
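Once cloned, the per-language JSON files can be read directly. Below is a minimal loading sketch; the file name `mr.json` is an illustrative placeholder, so adjust it to the actual layout of the cloned repository:
```
import json

# Hypothetical path: substitute the actual file from the cloned repository.
with open("multi3-nlu/mr.json", encoding="utf-8") as f:
    examples = json.load(f)

print(len(examples))           # number of utterances
print(examples[0]["intents"])  # multi-label intent annotation
```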
### Supported Tasks and Leaderboards
- multi-label intent detection
- slot filling
- cross-lingual language understanding for task-oriented dialogue
### Languages
The dataset covers four languages in addition to the source English dataset:
Spanish, Turkish, Marathi, and Amharic.
Please find the source English dataset [here](https://github.com/PolyAI-LDN/task-specific-datasets/tree/master/nlupp/data).
## Dataset Structure
### Data Instances
Each data instance contains the following features: _text_, _intents_, _uid_, _lang_, and occasionally _slots_ and _values_.
See the [Multi<sup>3</sup>NLU++ corpus viewer](https://huggingface.co/datasets/uoe-nlp/multi3-nlu/viewer/uoe-nlp--multi3-nlu/train) to explore more examples.
An example from Multi<sup>3</sup>NLU++ looks like the following:
```
{
"text": "माझे उद्याचे रिझर्वेशन मला रद्द का करता येणार नाही?",
"intents": [
"why",
"booking",
"cancel_close_leave_freeze",
"wrong_notworking_notshowing"
],
"slots": {
"date_from": {
"text": "उद्याचे",
"span": [
5,
12
],
"value": {
"day": 16,
"month": 3,
"year": 2022
}
}
},
"uid": "hotel_1_1",
"lang": "mr"
}
```
### Data Fields
- 'text': a string containing the utterance for which the intent needs to be detected
- 'intents': the corresponding intent labels
- 'uid': unique identifier per language
- 'lang': the language of the utterance
- 'slots': annotation of the span that needs to be extracted for value extraction, together with its label and _value_
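The _span_ offsets index Unicode code points in _text_ and appear to be end-exclusive, as in the Marathi instance above. A quick sanity check under that assumption:
```
# Slot from the example instance above; spans are assumed to be
# end-exclusive character (code point) offsets into the utterance.
text = "माझे उद्याचे रिझर्वेशन मला रद्द का करता येणार नाही?"
slot_text, (start, end) = "उद्याचे", (5, 12)

assert text[start:end] == slot_text  # the offsets line up with the slot text
```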
### Data Splits
Experiments are run under several k-fold validation setups, and the dataset provides multiple types of data splits. Please see Section 4 of the paper for details.
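For illustration, one possible k-fold setup looks like the sketch below; the fold count and seed are placeholders, not the protocol from Section 4:
```
from sklearn.model_selection import KFold

examples = list(range(3080))  # stand-in for the 3,080 utterances of one language
kf = KFold(n_splits=10, shuffle=True, random_state=0)  # placeholder settings

for fold, (train_idx, test_idx) in enumerate(kf.split(examples)):
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test")
```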
## Dataset Creation
### Curation Rationale
Existing task-oriented dialogue datasets are 1) predominantly limited to detecting a single intent, 2) focused on a single domain, and 3) include a small set of slot types. Furthermore, the success of task-oriented dialogue is 4) often evaluated on a small set of higher-resource languages (i.e., typically English) which does not test how generalisable systems are to the diverse range of the world's languages.
Our proposed dataset addresses all of these limitations.
### Source Data
#### Initial Data Collection and Normalization
Please see Section 3 of the paper.
#### Who are the source language producers?
The source language producers are the authors of the [NLU++ dataset](https://arxiv.org/abs/2204.13021). The dataset was professionally translated into our four chosen languages; we recruited the translators through Blend Express and Proz.com.
### Personal and Sensitive Information
None. Names are fictional.
### Discussion of Biases
We have carefully vetted the examples to exclude problematic ones.
### Other Known Limitations
The dataset comprises utterances extracted from real dialogues between users and conversational agents as well as synthetic human-authored utterances constructed with the aim of introducing additional combinations of intents and slots. The utterances therefore lack the wider context that would be present in a complete dialogue. As such the dataset cannot be used to evaluate systems with respect to discourse-level phenomena present in dialogue.
## Additional Information
Baseline models:
Our MLP and QA models are based on the Hugging Face Transformers library.
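As a rough sketch of the multi-label intent classification setup: an MLP over fixed sentence embeddings with one independent sigmoid output per intent. The dimensions and threshold below are illustrative assumptions, not the paper's exact configuration:
```
import torch
import torch.nn as nn

# Multi-label intent classifier over fixed sentence embeddings.
# Embedding/hidden sizes and the 0.5 threshold are illustrative.
class IntentMLP(nn.Module):
    def __init__(self, embed_dim=768, hidden_dim=512, num_intents=62):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_intents),
        )

    def forward(self, x):
        return self.net(x)  # raw logits; sigmoid is applied by the loss / at inference

model = IntentMLP()
embeddings = torch.randn(4, 768)               # stand-in sentence embeddings
targets = torch.zeros(4, 62)                   # multi-hot intent labels
loss = nn.BCEWithLogitsLoss()(model(embeddings), targets)
preds = torch.sigmoid(model(embeddings)) > 0.5 # independent per-intent decisions
```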
### QA
We use the following code snippet for our QA experiments. Please refer to the paper for more details.
```
# run_qa.py comes from the Transformers question-answering examples:
# https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_qa.py
python run_qa.py config_qa.json
```
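When given a JSON file, `run_qa.py` reads standard Transformers arguments from it. A config along the following lines could be used; all values below are illustrative assumptions, not the contents of the actual `config_qa.json`:
```
{
  "model_name_or_path": "xlm-roberta-base",
  "train_file": "train_qa.json",
  "validation_file": "dev_qa.json",
  "output_dir": "output/qa",
  "do_train": true,
  "do_eval": true,
  "num_train_epochs": 3,
  "per_device_train_batch_size": 16
}
```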
### Licensing Information
The dataset is licensed under Creative Commons Attribution 4.0 International (CC BY 4.0).
### Citation Information
Coming soon
### Contact
[Nikita Moghe](mailto:[email protected]), [Evgeniia Razumovskaia](mailto:[email protected]), and [Liane Guillou](mailto:[email protected])
Dataset card based on [Allociné](https://huggingface.co/datasets/allocine) |