Commit f0efeef

Update files from the datasets library (from 1.16.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.16.0

Files changed:
- .gitattributes +27 -0
- README.md +210 -0
- cmu_hinglish_dog.py +189 -0
- dataset_infos.json +1 -0
- dummy/0.0.0/dummy_data.zip +3 -0
.gitattributes
ADDED
@@ -0,0 +1,27 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,210 @@
---
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
languages:
- en
- hi
licenses:
- cc-by-sa-3-0
- gfdl-1-3-or-later
multilinguality:
- multilingual
- translation
pretty_name: CMU Document Grounded Conversations
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- conditional-text-generation
task_ids:
- machine-translation
---
# Dataset Card for CMU Document Grounded Conversations

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)
## Dataset Description

- **Homepage:** [CMU Hinglish DoG](http://festvox.org/cedar/data/notyet/)
- **Repository:** [CMU Document Grounded Conversations (English version)](https://github.com/festvox/datasets-CMU_DoG)
- **Paper:** [A Dataset for Document Grounded Conversations](https://arxiv.org/pdf/1809.07358.pdf)
- **Point of Contact:**
### Dataset Summary

This is a collection of text conversations in Hinglish (code-mixing between Hindi and English) and their corresponding English versions. It can be used for translation between the two. The dataset was provided by Prof. Alan Black's group from CMU.

### Supported Tasks and Leaderboards

- `abstractive-mt`

### Languages

The conversations are parallel: Hinglish (code-mixed Hindi-English, written in Roman script) under the `hi_en` key, and English under the `en` key.
## Dataset Structure

### Data Instances

A typical data point comprises a Hinglish text, with key `hi_en`, and its English version, with key `en`. The `docIdx` field holds the index of the section of the wiki document being discussed when the utterance was made; there are in total 4 sections for each document. The `uid` field holds the user id of the utterance's author.

An example from the CMU_Hinglish_DoG train set looks as follows:
```
{'rating': 2,
 'wikiDocumentIdx': 13,
 'utcTimestamp': '2018-03-16T17:48:22.037Z',
 'uid': 'user2',
 'date': '2018-03-16T17:47:21.964Z',
 'uid2response': {'response': [1, 2, 3, 5], 'type': 'finish'},
 'uid1LogInTime': '2018-03-16T17:47:21.964Z',
 'user2_id': 'USR664',
 'uid1LogOutTime': '2018-03-16T18:02:29.072Z',
 'whoSawDoc': ['user1', 'user2'],
 'status': 1,
 'docIdx': 0,
 'uid1response': {'response': [1, 2, 3, 4], 'type': 'finish'},
 'translation': {'en': 'The director is Zack Snyder, 27% Rotten Tomatoes, 4.9/10.',
  'hi_en': 'Zack Snyder director hai, 27% Rotten Tomatoes, 4.9/10.'}}
```
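
A minimal loading sketch with the `datasets` library, assuming this script is reachable under the `cmu_hinglish_dog` name:

```python
from datasets import load_dataset

# Load all three splits defined by the script in this commit.
dataset = load_dataset("cmu_hinglish_dog")

# Every example carries a parallel Hinglish/English pair under "translation".
example = dataset["train"][0]
print(example["translation"]["hi_en"])  # Hinglish utterance
print(example["translation"]["en"])     # English utterance
```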

### Data Fields

- `date`: the time the file was created, as a string
- `docIdx`: the index of the section of the wiki document being discussed when the utterance was made. There are in total 4 sections for each document.
- `translation`:
  - `hi_en`: the text in Hinglish
  - `en`: the text in English
- `uid`: the user id of the utterance's author
- `utcTimestamp`: the server UTC timestamp of the utterance, as a string
- `rating`: an integer from 1 to 3; a larger number means a higher-quality conversation
- `status`: the conversation status, as an integer
- `uid1LogInTime`: optional login time of user 1, as a string
- `uid1LogOutTime`: optional logout time of user 1, as a string
- `uid1response`: a JSON object containing the status and response of the user after finishing the conversation. Fields in the object include:
  - `type`: one of ['finish', 'abandon', 'abandonWithouAnsweringFeedbackQuestion']. 'finish' means the user successfully finished the conversation, either by completing 12 or 15 turns or because the other user left the conversation first. 'abandon' means the user abandoned the conversation midway but still reached the feedback page. 'abandonWithouAnsweringFeedbackQuestion' means the user simply disconnected or closed the web page without providing feedback.
  - `response`: the answers to the post-conversation questions. The worker can choose several of them. The options presented to the user are as follows.
    For type 'finish':
    1: The conversation is understandable.
    2: The other user is actively responding to me.
    3: The conversation goes smoothly.
    For type 'abandon':
    1: The other user is too rude.
    2: I don't know how to proceed with the conversation.
    3: The other user is not responding to me.
    For users given the document:
    4: I have watched the movie before.
    5: I have not watched the movie before.
    For users without the document:
    4: I will watch the movie after the other user's introduction.
    5: I will not watch the movie after the other user's introduction.
- `uid2response`: same as `uid1response`, for user 2
- `user2_id`: the generated user id of user 2
- `whoSawDoc`: one of ['user1'], ['user2'], ['user1', 'user2'], indicating which user(s) read the document
- `wikiDocumentIdx`: the index of the wiki document
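
To make the feedback encoding concrete, here is a small sketch that maps 'finish'-type response codes back to their option text; the helper and its mapping dict are illustrative, not part of the dataset:

```python
# Illustrative mapping of 'finish'-type response codes to their option text,
# following the list above. Codes 4 and 5 are role-dependent: they refer to
# having watched the movie (document readers) or intending to (the others).
FINISH_OPTIONS = {
    1: "The conversation is understandable.",
    2: "The other user is actively responding to me.",
    3: "The conversation goes smoothly.",
    4: "Watched the movie before / will watch it after the introduction.",
    5: "Has not watched the movie / will not watch it after the introduction.",
}


def describe_finish_feedback(response_obj):
    """Return readable feedback strings for a 'finish'-type response object."""
    if response_obj.get("type") != "finish":
        return []
    return [FINISH_OPTIONS[code] for code in response_obj["response"]]


# Example with the uid2response value from the data instance above:
print(describe_finish_feedback({"response": [1, 2, 3, 5], "type": "finish"}))
```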

### Data Splits

| name    | train | validation | test |
|---------|------:|-----------:|-----:|
| CMU DoG |  8060 |        942 |  960 |
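
The split sizes above can be checked directly once the dataset is loaded; a quick sketch:

```python
from datasets import load_dataset

# Expected sizes per the table above: train 8060, validation 942, test 960.
dataset = load_dataset("cmu_hinglish_dog")
for split_name, split in dataset.items():
    print(split_name, split.num_rows)
```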

## Dataset Creation

[More Information Needed]

### Curation Rationale

[More Information Needed]

### Source Data

The Hinglish dataset is derived from the original CMU DoG (Document Grounded Conversations Dataset). More information about it can be found in the [repo](https://github.com/festvox/datasets-CMU_DoG).

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

[More Information Needed]

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

The purpose of this dataset is to help develop better machine translation systems for code-mixed Hinglish-English.

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

The dataset was initially created by Prof. Alan W Black's group at CMU.

### Licensing Information

[More Information Needed]

### Citation Information

```bibtex
@inproceedings{cmu_dog_emnlp18,
  title={A Dataset for Document Grounded Conversations},
  author={Zhou, Kangyan and Prabhumoye, Shrimai and Black, Alan W},
  year={2018},
  booktitle={Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing}
}
```

### Contributions

Thanks to [@Ishan-Kumar2](https://github.com/Ishan-Kumar2) for adding this dataset.
cmu_hinglish_dog.py
ADDED
@@ -0,0 +1,189 @@
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import json
import os
import re

import datasets


_CITATION = """\
@inproceedings{cmu_dog_emnlp18,
    title={A Dataset for Document Grounded Conversations},
    author={Zhou, Kangyan and Prabhumoye, Shrimai and Black, Alan W},
    year={2018},
    booktitle={Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing}
}

@inproceedings{khanuja-etal-2020-gluecos,
    title = "{GLUEC}o{S}: An Evaluation Benchmark for Code-Switched {NLP}",
    author = "Khanuja, Simran and
      Dandapat, Sandipan and
      Srinivasan, Anirudh and
      Sitaram, Sunayana and
      Choudhury, Monojit",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.acl-main.329",
    pages = "3575--3585"
}
"""

_DESCRIPTION = """\
This is a collection of text conversations in Hinglish (code mixing between Hindi-English) and their corresponding English only versions. Can be used for translating between the two.
"""

_HOMEPAGE = "http://festvox.org/cedar/data/notyet/"
_URL_HINGLISH = "http://festvox.org/cedar/data/notyet/CMUHinglishDoG.zip"
_URL_ENGLISH = "https://github.com/festvox/datasets-CMU_DoG/archive/master/Conversations.zip"


class CMUHinglishDoG(datasets.GeneratorBasedBuilder):
    """Load the CMU Hinglish DoG Data for MT"""

    def _info(self):
        features = datasets.Features(
            {
                "date": datasets.Value("string"),
                "docIdx": datasets.Value("int64"),
                "translation": datasets.Translation(languages=["en", "hi_en"]),
                "uid": datasets.Value("string"),
                "utcTimestamp": datasets.Value("string"),
                "rating": datasets.Value("int64"),
                "status": datasets.Value("int64"),
                "uid1LogInTime": datasets.Value("string"),
                "uid1LogOutTime": datasets.Value("string"),
                "uid1response": {
                    "response": datasets.Sequence(datasets.Value("int64")),
                    "type": datasets.Value("string"),
                },
                "uid2response": {
                    "response": datasets.Sequence(datasets.Value("int64")),
                    "type": datasets.Value("string"),
                },
                "user2_id": datasets.Value("string"),
                "whoSawDoc": datasets.Sequence(datasets.Value("string")),
                "wikiDocumentIdx": datasets.Value("int64"),
            }
        )
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            supervised_keys=None,
            homepage=_HOMEPAGE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """The linking part between Hinglish data and English data is inspired by the implementation in GLUECoS.
        Refer here for the original script https://github.com/microsoft/GLUECoS/blob/7fdc51653e37a32aee17505c47b7d1da364fa77e/Data/Preprocess_Scripts/preprocess_mt_en_hi.py"""

        eng_path = dl_manager.download_and_extract(_URL_ENGLISH)
        data_dir_en = os.path.join(eng_path, "datasets-CMU_DoG-master", "Conversations")

        hi_en_path = dl_manager.download_and_extract(_URL_HINGLISH)
        data_dir_hi_en = os.path.join(hi_en_path, "CMUHinglishDoG", "Conversations_Hinglish")

        hi_en_dirs = {
            "train": os.path.join(data_dir_hi_en, "train"),
            "valid": os.path.join(data_dir_hi_en, "valid"),
            "test": os.path.join(data_dir_hi_en, "test"),
        }

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "hi_en_dir": hi_en_dirs["train"],
                    "data_dir_en": data_dir_en,
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "hi_en_dir": hi_en_dirs["test"],
                    "data_dir_en": data_dir_en,
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={
                    "hi_en_dir": hi_en_dirs["valid"],
                    "data_dir_en": data_dir_en,
                },
            ),
        ]

    def _generate_examples(self, hi_en_dir, data_dir_en):
        """Yields examples."""
        english_files_train = os.listdir(os.path.join(data_dir_en, "train"))
        english_files_val = os.listdir(os.path.join(data_dir_en, "valid"))
        english_files_test = os.listdir(os.path.join(data_dir_en, "test"))

        hinglish_files = os.listdir(hi_en_dir)
        key = 0
        for f in hinglish_files:
            en_file_path = f.split(".json")[0] + ".json"
            found = True
            # Looks for the corresponding english file in all 3 splits
            if en_file_path in english_files_train:
                en = json.load(open(os.path.join(os.path.join(data_dir_en, "train"), en_file_path)))
            elif en_file_path in english_files_val:
                en = json.load(open(os.path.join(os.path.join(data_dir_en, "valid"), en_file_path)))
            elif en_file_path in english_files_test:
                en = json.load(open(os.path.join(os.path.join(data_dir_en, "test"), en_file_path)))
            else:
                found = False
            if found:
                hi_en = json.load(open(os.path.join(hi_en_dir, f)))

                assert len(en["history"]) == len(hi_en["history"])

                for x, y in zip(en["history"], hi_en["history"]):
                    assert x["docIdx"] == y["docIdx"]
                    assert x["uid"] == y["uid"]
                    assert x["utcTimestamp"] == y["utcTimestamp"]

                    # Normalize whitespace so each utterance is a single line
                    x["text"] = re.sub("\t|\n", " ", x["text"])
                    y["text"] = re.sub("\t|\n", " ", y["text"])
                    line = {
                        "date": hi_en["date"],
                        "uid": x["uid"],
                        "docIdx": x["docIdx"],
                        "utcTimestamp": x["utcTimestamp"],
                        "translation": {"hi_en": y["text"], "en": x["text"]},
                        "rating": hi_en["rating"],
                        "status": hi_en["status"],
                        "uid1LogOutTime": hi_en.get("uid1LogOutTime"),
                        "uid1LogInTime": hi_en["uid1LogInTime"],
                        "uid1response": {
                            "response": hi_en["uid1response"]["response"] if "uid1response" in hi_en else [],
                            "type": hi_en["uid1response"]["type"] if "uid1response" in hi_en else None,
                        },
                        "uid2response": {
                            "response": hi_en["uid2response"]["response"] if "uid2response" in hi_en else [],
                            "type": hi_en["uid2response"]["type"] if "uid2response" in hi_en else None,
                        },
                        "user2_id": hi_en["user2_id"],
                        "whoSawDoc": hi_en["whoSawDoc"],
                        "wikiDocumentIdx": hi_en["wikiDocumentIdx"],
                    }

                    yield key, line
                    key += 1
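
The lookup in `_generate_examples` pairs each Hinglish conversation file with the same-named English file across the three English split directories. A standalone sketch of that matching step, with a hypothetical `en_root` path standing in for the directory that `dl_manager` extracts:

```python
import json
import os


def find_english_match(filename, en_root):
    """Look for the English conversation file with the same name in any split.

    Mirrors the lookup in _generate_examples above; returns the parsed JSON
    or None when no English counterpart exists. `en_root` is a hypothetical
    path to the extracted datasets-CMU_DoG "Conversations" directory.
    """
    for split in ("train", "valid", "test"):
        candidate = os.path.join(en_root, split, filename)
        if os.path.exists(candidate):
            with open(candidate) as fp:
                return json.load(fp)
    return None
```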
dataset_infos.json
ADDED
@@ -0,0 +1 @@
{"default": {"description": "This is a collection of text conversations in Hinglish (code mixing between Hindi-English) and their corresponding English only versions. Can be used for Translating between the two.\n", "citation": "@inproceedings{cmu_dog_emnlp18,\n title={A Dataset for Document Grounded Conversations},\n author={Zhou, Kangyan and Prabhumoye, Shrimai and Black, Alan W},\n year={2018},\n booktitle={Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing}\n}\n\n@inproceedings{khanuja-etal-2020-gluecos,\n title = \"{GLUEC}o{S}: An Evaluation Benchmark for Code-Switched {NLP}\",\n author = \"Khanuja, Simran and\n Dandapat, Sandipan and\n Srinivasan, Anirudh and\n Sitaram, Sunayana and\n Choudhury, Monojit\",\n booktitle = \"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics\",\n month = jul,\n year = \"2020\",\n address = \"Online\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/2020.acl-main.329\",\n pages = \"3575--3585\"\n}\n", "homepage": "http://festvox.org/cedar/data/notyet/", "license": "", "features": {"date": {"dtype": "string", "id": null, "_type": "Value"}, "docIdx": {"dtype": "int64", "id": null, "_type": "Value"}, "translation": {"languages": ["en", "hi_en"], "id": null, "_type": "Translation"}, "uid": {"dtype": "string", "id": null, "_type": "Value"}, "utcTimestamp": {"dtype": "string", "id": null, "_type": "Value"}, "rating": {"dtype": "int64", "id": null, "_type": "Value"}, "status": {"dtype": "int64", "id": null, "_type": "Value"}, "uid1LogInTime": {"dtype": "string", "id": null, "_type": "Value"}, "uid1LogOutTime": {"dtype": "string", "id": null, "_type": "Value"}, "uid1response": {"response": {"feature": {"dtype": "int64", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "type": {"dtype": "string", "id": null, "_type": "Value"}}, "uid2response": {"response": {"feature": {"dtype": "int64", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "type": {"dtype": "string", "id": null, "_type": "Value"}}, "user2_id": {"dtype": "string", "id": null, "_type": "Value"}, "whoSawDoc": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "wikiDocumentIdx": {"dtype": "int64", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "cmu_hinglish_do_g", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 3142398, "num_examples": 8060, "dataset_name": "cmu_hinglish_do_g"}, "test": {"name": "test", "num_bytes": 379521, "num_examples": 960, "dataset_name": "cmu_hinglish_do_g"}, "validation": {"name": "validation", "num_bytes": 368726, "num_examples": 942, "dataset_name": "cmu_hinglish_do_g"}}, "download_checksums": {"https://github.com/festvox/datasets-CMU_DoG/archive/master/Conversations.zip": {"num_bytes": 8162724, "checksum": "87e390c091f2114a09160aaa96ca45136d12b3ffd8ec82f1513a81251af0ac32"}, "http://festvox.org/cedar/data/notyet/CMUHinglishDoG.zip": {"num_bytes": 586961, "checksum": "2bb5cee3c7ca60e2e2ed25e2775e4025790623fe079d2e9d1831fbf6f6fc8086"}}, "download_size": 8749685, "post_processing_size": null, "dataset_size": 3890645, "size_in_bytes": 12640330}}
dummy/0.0.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:51122d6c7f702c95b8403bd993187975d9cd4f45133fd91f1169f3b737ed00fd
size 34763