Commit 59befff (0 parents)
Committed by system (HF staff)

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,193 @@
+ ---
+ annotations_creators:
+ - found
+ language_creators:
+ - expert-generated
+ - found
+ languages:
+   en:
+   - en
+   zh:
+   - zh
+ licenses:
+ - unknown
+ multilinguality:
+ - monolingual
+ size_categories:
+ - n>1K
+ source_datasets:
+ - original
+ task_categories:
+ - question-answering
+ task_ids:
+ - closed-domain-qa
+ ---
+
+ # Dataset Card for MedDialog
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://github.com/UCSD-AI4H/Medical-Dialogue-System
+ - **Repository:** Hosted at [this link](https://drive.google.com/drive/folders/1r09_i8nJ9c1nliXVGXwSqRYqklcHd9e2) for Chinese and [this link](https://drive.google.com/drive/folders/1g29ssimdZ6JzTST6Y8g6h-ogUNReBtJD) for English.
+ - **Paper:** Details about the dataset can be found in [this arXiv paper](https://arxiv.org/abs/2004.03329).
+ - **Leaderboard:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ The MedDialog dataset (Chinese) contains conversations (in Chinese) between doctors and patients. It has 1.1 million dialogues and 4 million utterances. The data is continuously growing and more dialogues will be added. The raw dialogues are from haodf.com. All copyrights of the data belong to haodf.com.
+
+ The MedDialog dataset (English) contains conversations (in English) between doctors and patients. It has 0.26 million dialogues. The data is continuously growing and more dialogues will be added. The raw dialogues are from healthcaremagic.com and icliniq.com. All copyrights of the data belong to healthcaremagic.com and icliniq.com.
+
+ Directions for using the pre-trained BERT-based model with PyTorch are available on the homepage.
+
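+ Both configurations require a manual download from the Google Drive links above. A minimal sketch of loading the English configuration once the archive has been unzipped (the local path below is illustrative, not part of the dataset):
+
+ ```
+ import datasets
+
+ # The loading script expects the manually downloaded folder via data_dir;
+ # the path below is an assumption about where the unzipped data lives.
+ ds = datasets.load_dataset(
+     "medical_dialog",
+     name="en",
+     data_dir="~/Downloads/Medical-Dialogue-Dataset-English",
+ )
+ print(ds["train"][0])  # one dialogue: file_name, dialogue_id, dialogue_url, dialogue_turns
+ ```
+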
+ ### Supported Tasks and Leaderboards
+
+ Closed-domain question answering.
+
+ ### Languages
+
+ Each configuration is monolingual: the datasets are in English (EN) and Chinese (ZH).
+
+ ## Dataset Structure
+
+ ### Data Instances
+ #### For English:
+
+ Each consultation consists of the following:
+ - ID
+ - URL
+ - Description of patient’s medical condition
+ - Dialogue
+
+ The dataset is built from [icliniq.com](https://www.icliniq.com/), [healthcaremagic.com](https://www.healthcaremagic.com/), and [healthtap.com](https://www.healthtap.com/), and all copyrights of the data belong to these websites.
+
+ #### For Chinese:
+
+ Each consultation consists of the following:
+ - ID
+ - URL
+ - Description of patient’s medical condition
+ - Dialogue
+ - (Optional) Diagnosis and suggestions
+
+ The dataset is built from [Haodf.com](https://www.haodf.com/) and all copyrights of the data belong to [Haodf.com](https://www.haodf.com/).
+
+ One example from the Chinese configuration:
+
+ ```
+ {'dialogue_id': 2,
+  'dialogue_turns': [{'speaker': '病人',
+    'utterance': '孩子哭闹时,鸡鸡旁边会肿起,情绪平静时肿块会消失,去一个私人诊所看过,说是疝气.如果确定是疝气,是不是一定要手术治疗?我孩子只有1岁10月,自愈的可能性大吗?如果一定要手术,这么小的孩子风险大吗?术后的恢复困难吗?谢谢.'},
+   {'speaker': '医生', 'utterance': '南方医的B超说得不清楚,可能是鞘膜积液,可到我医院复查一个B超。'}],
+  'dialogue_url': 'https://www.haodf.com/doctorteam/flow_team_6477251152.htm',
+  'file_name': '2020.txt'}
+ ```
+
+
+ ### Data Fields
+
+ For generating the QA dataset, only the below fields have been considered:
+ - ID: Consultation identifier (restarts for each file)
+ - URL: The URL of the extracted conversation
+ - Dialogue: The conversation between the doctor and the patient
+
+ These are arranged as below in the prepared dataset. Each item is represented with these fields:
+
+ - "file_name": string - the file from which the conversation was extracted
+ - "dialogue_id": int32 - the dialogue id
+ - "dialogue_url": string - the URL of the conversation
+ - "dialogue_turns": datasets.Sequence - the sequence of turns between the patient and the doctor; each turn consists of a "speaker" ClassLabel(names=["病人", "医生"]) (ClassLabel(names=["Patient", "Doctor"]) for English) and an "utterance" (string)
+
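+ As a sketch of consuming these fields (assuming a dataset loaded as shown earlier), the "speaker" values are ClassLabel integers that can be mapped back to their names:
+
+ ```
+ train = ds["train"]
+ # Names of the speaker labels, e.g. ["Patient", "Doctor"] for the en config.
+ speaker_names = train.features["dialogue_turns"].feature["speaker"].names
+
+ # dialogue_turns is stored as a dict of aligned lists per example.
+ turns = train[0]["dialogue_turns"]
+ for speaker_id, utterance in zip(turns["speaker"], turns["utterance"]):
+     print(f"{speaker_names[speaker_id]}: {utterance}")
+ ```
+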
+
+ ### Data Splits
+
+ There are no predefined data splits; the original data is loaded as a single train split.
+
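+ If separate splits are needed, they can be derived after loading; for example, a seeded 90/10 split with the library's built-in splitter:
+
+ ```
+ # Carve a 10% test set out of the single train split.
+ splits = ds["train"].train_test_split(test_size=0.1, seed=42)
+ train_ds, test_ds = splits["train"], splits["test"]
+ ```
+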
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ Medical dialogue systems are promising in assisting in telemedicine to increase access to healthcare services, improve the quality of patient care, and reduce medical costs.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+ @article{chen2020meddiag,
+   title={MedDialog: a large-scale medical dialogue dataset},
+   author={Chen, Shu and Ju, Zeqian and Dong, Xiangyu and Fang, Hongchao and Wang, Sicheng and Yang, Yue and Zeng, Jiaqi and Zhang, Ruisi and Zhang, Ruoyu and Zhou, Meng and Zhu, Penghui and Xie, Pengtao},
+   journal={arXiv preprint arXiv:2004.03329},
+   year={2020}
+ }
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"en": {"description": "The MedDialog dataset (English) contains conversations (in English) between doctors and patients. It has 0.26 million dialogues. The data is continuously growing and more dialogues will be added. The raw dialogues are from healthcaremagic.com and icliniq.com.\nAll copyrights of the data belong to healthcaremagic.com and icliniq.com.\n", "citation": "@article{chen2020meddiag,\n title={MedDialog: a large-scale medical dialogue dataset},\n author={Chen, Shu and Ju, Zeqian and Dong, Xiangyu and Fang, Hongchao and Wang, Sicheng and Yang, Yue and Zeng, Jiaqi and Zhang, Ruisi and Zhang, Ruoyu and Zhou, Meng and Zhu, Penghui and Xie, Pengtao},\n journal={arXiv preprint arXiv:2004.03329},\n year={2020}\n}\n", "homepage": "https://github.com/UCSD-AI4H/Medical-Dialogue-System", "license": "", "features": {"file_name": {"dtype": "string", "id": null, "_type": "Value"}, "dialogue_id": {"dtype": "int32", "id": null, "_type": "Value"}, "dialogue_url": {"dtype": "string", "id": null, "_type": "Value"}, "dialogue_turns": {"feature": {"speaker": {"num_classes": 2, "names": ["Patient", "Doctor"], "names_file": null, "id": null, "_type": "ClassLabel"}, "utterance": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "medical_dialog", "config_name": "en", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 0, "num_examples": 0, "dataset_name": "medical_dialog"}}, "download_checksums": {}, "download_size": 0, "post_processing_size": null, "dataset_size": 0, "size_in_bytes": 0}, "zh": {"description": "The MedDialog dataset (Chinese) contains conversations (in Chinese) between doctors and patients. It has 1.1 million dialogues and 4 million utterances. The data is continuously growing and more dialogues will be added. The raw dialogues are from haodf.com.\nAll copyrights of the data belong to haodf.com.\n", "citation": "@article{chen2020meddiag,\n title={MedDialog: a large-scale medical dialogue dataset},\n author={Chen, Shu and Ju, Zeqian and Dong, Xiangyu and Fang, Hongchao and Wang, Sicheng and Yang, Yue and Zeng, Jiaqi and Zhang, Ruisi and Zhang, Ruoyu and Zhou, Meng and Zhu, Penghui and Xie, Pengtao},\n journal={arXiv preprint arXiv:2004.03329},\n year={2020}\n}\n", "homepage": "https://github.com/UCSD-AI4H/Medical-Dialogue-System", "license": "", "features": {"file_name": {"dtype": "string", "id": null, "_type": "Value"}, "dialogue_id": {"dtype": "int32", "id": null, "_type": "Value"}, "dialogue_url": {"dtype": "string", "id": null, "_type": "Value"}, "dialogue_turns": {"feature": {"speaker": {"num_classes": 2, "names": ["\u75c5\u4eba", "\u533b\u751f"], "names_file": null, "id": null, "_type": "ClassLabel"}, "utterance": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "medical_dialog", "config_name": "zh", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 0, "num_examples": 0, "dataset_name": "medical_dialog"}}, "download_checksums": {}, "download_size": 0, "post_processing_size": null, "dataset_size": 0, "size_in_bytes": 0}}
dummy/en/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9bcacc16962d17aad0f8594c807098f1c2836653691a553677bec2b5a23147ea
+ size 15185
dummy/zh/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2ce905874bdd26967b6f7e1641cf5d4c72c6ea4183d47dc14efb31cb14f67d5a
+ size 9241
medical_dialog.py ADDED
@@ -0,0 +1,275 @@
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Medical dialog dataset in English and Chinese."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import copy
+ import os
+ import re
+
+ import datasets
+
+
+ _CITATION = """\
+ @article{chen2020meddiag,
+   title={MedDialog: a large-scale medical dialogue dataset},
+   author={Chen, Shu and Ju, Zeqian and Dong, Xiangyu and Fang, Hongchao and Wang, Sicheng and Yang, Yue and Zeng, Jiaqi and Zhang, Ruisi and Zhang, Ruoyu and Zhou, Meng and Zhu, Penghui and Xie, Pengtao},
+   journal={arXiv preprint arXiv:2004.03329},
+   year={2020}
+ }
+ """
+
+
+ _DESCRIPTION = """\
+ The MedDialog dataset (English) contains conversations (in English) between doctors and patients.\
+ It has 0.26 million dialogues. The data is continuously growing and more dialogues will be added. \
+ The raw dialogues are from healthcaremagic.com and icliniq.com.\
+
+ All copyrights of the data belong to healthcaremagic.com and icliniq.com.
+ """
+
+ _HOMEPAGE = "https://github.com/UCSD-AI4H/Medical-Dialogue-System"
+
+ _LICENSE = ""
+
+
+ class MedicalDialog(datasets.GeneratorBasedBuilder):
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="en", description="The dataset of medical dialogs in English.", version=VERSION),
+         datasets.BuilderConfig(name="zh", description="The dataset of medical dialogs in Chinese.", version=VERSION),
+     ]
+
+     @property
+     def manual_download_instructions(self):
+         return """\
+         \n For English:\nYou need to go to https://drive.google.com/drive/folders/1g29ssimdZ6JzTST6Y8g6h-ogUNReBtJD?usp=sharing,\
+         and manually download the dataset from Google Drive. Once it is completed,
+         a file named Medical-Dialogue-Dataset-English-<timestamp-info>.zip will appear in your Downloads folder (
+         or whichever folder your browser chooses to save files to). Unzip the folder to obtain
+         a folder named "Medical-Dialogue-Dataset-English" containing several text files.
+
+         Now, you can specify the path to this folder for the data_dir argument in the
+         datasets.load_dataset(...) option.
+         The <path/to/folder> can e.g. be "/Downloads/Medical-Dialogue-Dataset-English".
+         The data can then be loaded using the below command:\
+         `datasets.load_dataset("medical_dialog", name="en", data_dir="/Downloads/Medical-Dialogue-Dataset-English")`.
+
+         \n For Chinese:\nFollow the above process. Change the 'name' to 'zh'. The download link is https://drive.google.com/drive/folders/1r09_i8nJ9c1nliXVGXwSqRYqklcHd9e2
+
+         **NOTE**
+         - A caution while downloading from Google Drive: it is better to download single files, since creating a zip might not include files <500 MB. This has been observed multiple times.
+         - After downloading the files and adding them to the appropriate folder, the path of the folder can be given as input to the data_dir argument.
+         """
+
+     def _info(self):
+         if self.config.name == "zh":
+             features = datasets.Features(
+                 {
+                     "file_name": datasets.Value("string"),
+                     "dialogue_id": datasets.Value("int32"),
+                     "dialogue_url": datasets.Value("string"),
+                     "dialogue_turns": datasets.Sequence(
+                         {
+                             "speaker": datasets.ClassLabel(names=["病人", "医生"]),
+                             "utterance": datasets.Value("string"),
+                         }
+                     ),
+                 }
+             )
+
+         if self.config.name == "en":
+             features = datasets.Features(
+                 {
+                     "file_name": datasets.Value("string"),
+                     "dialogue_id": datasets.Value("int32"),
+                     "dialogue_url": datasets.Value("string"),
+                     "dialogue_turns": datasets.Sequence(
+                         {
+                             "speaker": datasets.ClassLabel(names=["Patient", "Doctor"]),
+                             "utterance": datasets.Value("string"),
+                         }
+                     ),
+                 }
+             )
+
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             # Homepage of the dataset for documentation
+             homepage=_HOMEPAGE,
+             # License for the dataset if available
+             license=_LICENSE,
+             # Citation for the dataset
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         path_to_manual_file = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))
+         if not os.path.exists(path_to_manual_file):
+             raise FileNotFoundError(
+                 "{} does not exist. Make sure you insert a manual dir via `datasets.load_dataset('medical_dialog', data_dir=...)`. Manual download instructions: {})".format(
+                     path_to_manual_file, self.manual_download_instructions
+                 )
+             )
+
+         filepaths = [
+             os.path.join(path_to_manual_file, txt_file_name)
+             for txt_file_name in sorted(os.listdir(path_to_manual_file))
+             if txt_file_name.endswith("txt")
+         ]
+
+         return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepaths": filepaths})]
+
+     def _generate_examples(self, filepaths):
+         """Yields examples. Iterates over each file and creates the corresponding features.
+
+         NOTE:
+         - The code makes some assumptions about the structure of the raw .txt files.
+         - There are some checks to separate different ids. Hopefully, these should not cause further issues later when more txt files are added.
+         """
+         data_lang = self.config.name
+         id_ = -1
+         for filepath in filepaths:
+             with open(filepath, encoding="utf-8") as f_in:
+                 # Parameters to just "sectionize" the raw data
+                 last_part = ""
+                 last_dialog = {}
+                 last_list = []
+                 last_user = ""
+                 check_list = []
+
+                 # These flags are present to have a single function address both Chinese and English data.
+                 # English data is a little haphazard (i.e. a sentence can span multiple lines),
+                 # Chinese is compact with one line for doctor and patient.
+                 conv_flag = False
+                 des_flag = False
+
+                 while True:
+                     line = f_in.readline()
+                     if not line:
+                         break
+
+                     # Extracting the dialog id
+                     if line[:2] == "id":  # Hardcode alert!
+                         # Handling ID references that may come in the description.
+                         # These were observed in the Chinese dataset and were not
+                         # followed by numbers.
+                         try:
+                             dialogue_id = int(re.findall(r"\d+", line)[0])
+                         except IndexError:
+                             continue
+
+                     # Extracting the url
+                     if line[:4] == "http":  # Hardcode alert!
+                         dialogue_url = line.rstrip()
+
+                     # Extracting the patient info from the description.
+                     if line[:11] == "Description":  # Hardcode alert!
+                         last_part = "description"
+                         last_dialog = {}
+                         last_list = []
+                         last_user = ""
+                         last_conv = {"speaker": "", "utterance": ""}
+                         while True:
+                             line = f_in.readline()
+                             if (not line) or (line in ["\n", "\n\r"]):
+                                 break
+                             else:
+                                 if data_lang == "zh":  # Condition for Chinese
+                                     if line[:5] == "病情描述:":  # Hardcode alert!
+                                         last_user = "病人"
+                                         sen = f_in.readline().rstrip()
+                                         des_flag = True
+
+                                 if data_lang == "en":
+                                     last_user = "Patient"
+                                     sen = line.rstrip()
+                                     des_flag = True
+
+                                 if des_flag:
+                                     if sen == "":
+                                         continue
+                                     if sen in check_list:
+                                         last_conv["speaker"] = ""
+                                         last_conv["utterance"] = ""
+                                     else:
+                                         last_conv["speaker"] = last_user
+                                         last_conv["utterance"] = sen
+                                         check_list.append(sen)
+                                     des_flag = False
+                                     break
+                     # Extracting the conversation info from the dialogue.
+                     elif line[:8] == "Dialogue":  # Hardcode alert!
+                         if last_part == "description" and len(last_conv["utterance"]) > 0:
+                             last_part = "dialogue"
+                             if data_lang == "zh":
+                                 last_user = "病人"
+
+                             if data_lang == "en":
+                                 last_user = "Patient"
+
+                             while True:
+                                 line = f_in.readline()
+                                 if (not line) or (line in ["\n", "\n\r"]):
+                                     conv_flag = False
+                                     last_user = ""
+                                     last_list.append(copy.deepcopy(last_conv))
+                                     # To ensure the conversation closes cleanly, only an even
+                                     # number of sentences is extracted.
+                                     last_turn = len(last_list)
+                                     if int(last_turn / 2) > 0:
+                                         temp = int(last_turn / 2)
+                                         id_ += 1
+                                         last_dialog["file_name"] = filepath
+                                         last_dialog["dialogue_id"] = dialogue_id
+                                         last_dialog["dialogue_url"] = dialogue_url
+                                         last_dialog["dialogue_turns"] = last_list[: temp * 2]
+                                         yield id_, last_dialog
+                                     break
+
+                                 if data_lang == "zh":
+                                     if line[:3] == "病人:" or line[:3] == "医生:":  # Hardcode alert!
+                                         user = line[:2]  # Hardcode alert!
+                                         line = f_in.readline()
+                                         conv_flag = True
+
+                                 # The elif block below is to ensure that multi-line sentences are captured.
+                                 # This has been observed only in English.
+                                 if data_lang == "en":
+                                     if line.strip() == "Patient:" or line.strip() == "Doctor:":  # Hardcode alert!
+                                         user = line.replace(":", "").rstrip()
+                                         line = f_in.readline()
+                                         conv_flag = True
+                                     elif line[:2] != "id":  # Hardcode alert!
+                                         conv_flag = True
+
+                                 # Continues till the next ID is parsed
+                                 if conv_flag:
+                                     sen = line.rstrip()
+                                     if sen == "":
+                                         continue
+
+                                     if user == last_user:
+                                         last_conv["utterance"] = last_conv["utterance"] + sen
+                                     else:
+                                         last_user = user
+                                         last_list.append(copy.deepcopy(last_conv))
+                                         last_conv["utterance"] = sen
+                                         last_conv["speaker"] = user