ZoneTwelve committed
Commit 732eb0d (0 parents)
Update patient doctor rawdata
.gitattributes
ADDED
@@ -0,0 +1,10 @@
blob_0007 filter=lfs diff=lfs merge=lfs -text
blob_0010 filter=lfs diff=lfs merge=lfs -text
blob_0002 filter=lfs diff=lfs merge=lfs -text
blob_0004 filter=lfs diff=lfs merge=lfs -text
blob_0005 filter=lfs diff=lfs merge=lfs -text
blob_0008 filter=lfs diff=lfs merge=lfs -text
blob_0009 filter=lfs diff=lfs merge=lfs -text
blob_0001 filter=lfs diff=lfs merge=lfs -text
blob_0003 filter=lfs diff=lfs merge=lfs -text
blob_0006 filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,127 @@
# Patient Doctor QA Raw Dataset (ZoneTwelve/patient_doctor_qa_rawdata)

⚠️ **IMPORTANT**: All files in this dataset use Big5 encoding, as they originate from the public Taiwan e-Hospital platform. Make sure your tooling is configured to handle Big5-encoded files.

## Overview
This dataset (`ZoneTwelve/patient_doctor_qa_rawdata`) contains patient-related question-answer pairs derived from Taiwan's e-Hospital system. The data is sourced from the Taiwan e-Hospital platform (https://sp1.hso.mohw.gov.tw/) and provides healthcare-related QA information.

## Dataset Structure
The dataset is packed into blob files with a `metadata.json` index for efficient storage and access. It includes:
- Question-answer pairs related to patient care
- Medical terminology and procedures
- Healthcare facility information
- Patient care guidelines

## Data Source
- **Origin**: Taiwan e-Hospital Platform
- **URL**: https://sp1.hso.mohw.gov.tw/
- **Format**: Processed and stored in blob format

## Tools and Utilities

### 1. Viewer Tool (`viewer.py`)
A Python script to preview the contents of blob files.

#### Features:
- List all files in the blob storage
- Preview specific file contents
- Support for both text and binary files
- Configurable preview size

#### Usage:
```bash
# List all files
python viewer.py /path/to/ZoneTwelve/patient_doctor_qa_rawdata --list

# Preview a specific file
python viewer.py /path/to/ZoneTwelve/patient_doctor_qa_rawdata --file 999

# Preview with a custom size limit
python viewer.py /path/to/ZoneTwelve/patient_doctor_qa_rawdata --file 999 --size 2048
```
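
Note that `viewer.py` attempts a UTF-8 decode and falls back to a hexdump, so the Big5-encoded pages may not preview as readable text. One option (a sketch, assuming a POSIX system with `iconv` available; paths are illustrative) is to export a file first and convert it:

```bash
# Convert an exported Big5 page to UTF-8 for easier reading
iconv -f BIG5 -t UTF-8 /path/to/save/999.html > /path/to/save/999.utf8.html
```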

### 2. Export Tool (`export.py`)
A Python script to extract files from the blob storage.

#### Features:
- Extract specific files from blobs
- Maintain the original file structure
- Custom output path support
- File listing capability

#### Usage:
```bash
# List available files
python export.py /path/to/ZoneTwelve/patient_doctor_qa_rawdata --list

# Extract a specific file
python export.py /path/to/ZoneTwelve/patient_doctor_qa_rawdata --file 999 --output /path/to/save/999.html

# Extract to the default location
python export.py /path/to/ZoneTwelve/patient_doctor_qa_rawdata --file 999
```

## Directory Structure
```
.
├── blobs/
│   ├── metadata.json
│   ├── blob_0
│   ├── blob_1
│   └── ...
├── viewer.py
└── export.py
```
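
`metadata.json` is the index both tools rely on: it maps each stored file name to the blob that holds it, along with the byte offset and size of that file within the blob. The field names below come from `viewer.py`/`export.py`; the entry itself is illustrative, not copied from the dataset:

```json
{
  "999": {
    "blob": "blob_0001",
    "offset": 1048576,
    "size": 48213
  }
}
```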

## Requirements
- Python 3.6 or higher
- Standard Python libraries:
  - json
  - argparse
  - pathlib
  - shutil

## Installation
1. Clone the repository (see the clone example below)
2. Ensure Python 3.6+ is installed
3. No additional dependencies required
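
Because the blob files are tracked with Git LFS (see `.gitattributes`), install `git-lfs` before cloning so the blobs are actually downloaded. A sketch of step 1, assuming the standard Hugging Face dataset URL layout:

```bash
# One-time Git LFS setup, then clone the dataset repository
git lfs install
git clone https://huggingface.co/datasets/ZoneTwelve/patient_doctor_qa_rawdata
```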

## Usage Examples

### Viewing Files
```bash
# List all available files
python viewer.py ./blobs --list

# Preview a specific QA file
python viewer.py ./blobs --file 999
```

### Extracting Files
```bash
# Extract a specific file to custom location
python export.py ./blobs --file qa/patient_guidelines.txt --output ./exported/guidelines.txt

# Extract maintaining original structure
python export.py ./blobs --file qa/medical_terms.txt
```
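
For programmatic access, the same `metadata.json` fields can be used directly instead of calling `export.py`. A minimal sketch (the file key "999" is illustrative; use `--list` to see the real keys):

```python
import json
from pathlib import Path

blob_dir = Path("./blobs")
metadata = json.loads((blob_dir / "metadata.json").read_text())

info = metadata["999"]  # illustrative key
with open(blob_dir / info["blob"], "rb") as src:
    src.seek(info["offset"])       # jump to this file's start inside the blob
    data = src.read(info["size"])  # read exactly this file's bytes

Path("999.html").write_bytes(data)
```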

## Data Format
The blob storage contains the following file types:
- Raw HTML pages (originally `.html`), stored without a file extension
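
Since the pages are Big5-encoded and carry no extension, decode them explicitly when reading an extracted file. A small sketch (the path is illustrative):

```python
# Read an extracted page and decode it from Big5
with open("999.html", "rb") as f:
    html = f.read().decode("big5", errors="replace")
print(html[:300])
```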

## License and Usage Rights
Please refer to Taiwan e-Hospital's terms of service and data usage guidelines for specific licensing information.

## Contributing
For contributions or issues, please follow these steps:
1. Fork the repository
2. Create a feature branch
3. Submit a pull request

## Contact
For questions or support regarding the dataset, please contact the maintainers or refer to the Taiwan e-Hospital platform.

## Acknowledgments
- Taiwan e-Hospital for providing the source data
- Contributors to the data collection and processing pipeline
blob_0001
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:20a299e84491e87e7e411b8ced9968dc04f9222a7a70dc596d1e5d5b75b010f8
size 99987420

blob_0002
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ed7becf5259381f4b8f9fcfc1492965548363955f821a1d45d94f94a3d23342a
size 99999376

blob_0003
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b0e39d1b604866d7d9209a186aac767d27a3f98d05abbc9c833c9dcd08b26515
size 99985846

blob_0004
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:89173351e986218e27e333eac124f3dcb124f12bc1235bcd4fc7d34f5c7a1e70
size 99997432

blob_0005
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6aa4daa6b5501f89d574be69e62c935e19ca5a2ff0e47d5a8be6d625461767a0
size 99983944

blob_0006
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9910b67ffdd0860f4ac22ecbe1dc8661daad43589a6db81c61429f3ce21b516f
size 99988029

blob_0007
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1481d269b6b529bc213078c3bbf5b41d8475a25a3ea0a6f4d1cdaef1f9507d8a
size 99983959

blob_0008
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4cd849374d9347ee731ee30a63419f13882c191379f08e60e4d366bf638eb93e
size 99996310

blob_0009
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2c9ec23aa38de627c3066f215140e8a00a0566687d62bed88d55e81b7efdaeb2
size 99997661

blob_0010
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5705fbe0d163223306165cc1a58eba088c0f96d46af8398bc7832d4310987be5
size 24462146
export.py
ADDED
@@ -0,0 +1,100 @@
#!/usr/bin/env python3
"""
Script to extract files from blobs created for Hugging Face datasets.
Usage:
    python export.py /path/to/blob/directory --file path/to/extract.txt --output /path/to/save
"""

import json
import argparse
from pathlib import Path
import shutil
import os

class BlobExtractor:
    def __init__(self, blob_dir: str):
        """Initialize extractor with directory containing blobs and metadata."""
        self.blob_dir = Path(blob_dir)
        self.metadata_path = self.blob_dir / 'metadata.json'
        self.metadata = self._load_metadata()

    def _load_metadata(self) -> dict:
        """Load and return the metadata JSON file."""
        if not self.metadata_path.exists():
            raise FileNotFoundError(f"Metadata file not found at {self.metadata_path}")

        with open(self.metadata_path, 'r') as f:
            return json.load(f)

    def list_files(self):
        """Print all files stored in the blobs."""
        print("\nFiles available for extraction:")
        print("-" * 50)
        for file_path in sorted(self.metadata.keys()):
            info = self.metadata[file_path]
            print(f"{file_path:<50} (size: {info['size']} bytes, blob: {info['blob']})")

    def extract_file(self, file_path: str, output_path: str = None) -> None:
        """
        Extract a specific file from the blob.

        Args:
            file_path: Path of the file within the blob
            output_path: Where to save the extracted file. If None, uses original filename
        """
        if file_path not in self.metadata:
            print(f"Error: File '{file_path}' not found in blobs")
            return

        info = self.metadata[file_path]
        blob_path = self.blob_dir / info['blob']

        # Determine output path
        if output_path is None:
            output_path = file_path

        # Create output directory if needed
        output_path = Path(output_path)
        output_path.parent.mkdir(parents=True, exist_ok=True)

        print(f"\nExtracting: {file_path}")
        print(f"From blob: {info['blob']}")
        print(f"Size: {info['size']} bytes")
        print(f"To: {output_path}")

        try:
            with open(blob_path, 'rb') as src, open(output_path, 'wb') as dst:
                # Seek to file's position in blob
                src.seek(info['offset'])
                # Copy exactly this file's bytes (copyfileobj's `length` argument is
                # only a buffer size, so it would keep copying to the end of the blob)
                dst.write(src.read(info['size']))

            print(f"Successfully extracted to: {output_path}")

        except Exception as e:
            print(f"Error extracting file: {e}")

def main():
    parser = argparse.ArgumentParser(description="Extract files from blobs")
    parser.add_argument("blob_dir", help="Directory containing blobs and metadata.json")
    parser.add_argument("--file", "-f", help="Specific file to extract")
    parser.add_argument("--output", "-o", help="Output path for extracted file")
    parser.add_argument("--list", "-l", action="store_true", help="List all files in blobs")

    args = parser.parse_args()

    try:
        extractor = BlobExtractor(args.blob_dir)

        if args.list:
            extractor.list_files()
        elif args.file:
            extractor.extract_file(args.file, args.output)
        else:
            extractor.list_files()  # Default to listing files if no specific action

    except Exception as e:
        print(f"Error: {e}")

if __name__ == "__main__":
    main()
metadata.json
ADDED
The diff for this file is too large to render.
See raw diff
viewer.py
ADDED
@@ -0,0 +1,102 @@
#!/usr/bin/env python3
"""
Simple script to preview contents of blob files created for Hugging Face datasets.
Usage:
    python viewer.py /path/to/blob/directory [--file specific/file.txt] [--list]
"""

import json
import argparse
from pathlib import Path
from typing import Dict, Tuple

class BlobPreview:
    def __init__(self, blob_dir: str):
        """Initialize preview with directory containing blobs and metadata."""
        self.blob_dir = Path(blob_dir)
        self.metadata_path = self.blob_dir / 'metadata.json'
        self.metadata = self._load_metadata()

    def _load_metadata(self) -> Dict:
        """Load and return the metadata JSON file."""
        if not self.metadata_path.exists():
            raise FileNotFoundError(f"Metadata file not found at {self.metadata_path}")

        with open(self.metadata_path, 'r') as f:
            return json.load(f)

    def list_files(self):
        """Print all files stored in the blobs."""
        print("\nFiles in blobs:")
        print("-" * 50)
        for file_path in sorted(self.metadata.keys()):
            info = self.metadata[file_path]
            print(f"{file_path:<50} (size: {info['size']} bytes, blob: {info['blob']})")

    def preview_file(self, file_path: str, max_size: int = 1024) -> None:
        """Preview content of a specific file."""
        if file_path not in self.metadata:
            print(f"Error: File '{file_path}' not found in blobs")
            return

        info = self.metadata[file_path]
        blob_path = self.blob_dir / info['blob']

        print(f"\nFile: {file_path}")
        print(f"Location: {info['blob']}")
        print(f"Size: {info['size']} bytes")
        print(f"Offset: {info['offset']} bytes")
        print("-" * 50)

        try:
            with open(blob_path, 'rb') as f:
                # Seek to file's position in blob
                f.seek(info['offset'])
                # Read the content (up to max_size)
                content = f.read(min(info['size'], max_size))

            # Try to decode as text
            try:
                text = content.decode('utf-8')
                print("Content preview (decoded as UTF-8):")
                print(text)
                if len(content) < info['size']:
                    print("\n... (truncated)")
            except UnicodeDecodeError:
                print("Content preview (hexdump):")
                for i in range(0, len(content), 16):
                    chunk = content[i:i+16]
                    hex_values = ' '.join(f'{b:02x}' for b in chunk)
                    ascii_values = ''.join(chr(b) if 32 <= b <= 126 else '.' for b in chunk)
                    print(f"{i:08x} {hex_values:<48} {ascii_values}")
                if len(content) < info['size']:
                    print("\n... (truncated)")

        except Exception as e:
            print(f"Error reading file: {e}")

def main():
    parser = argparse.ArgumentParser(description="Preview contents of blob files")
    parser.add_argument("blob_dir", help="Directory containing blobs and metadata.json")
    parser.add_argument("--file", "-f", help="Specific file to preview")
    parser.add_argument("--list", "-l", action="store_true", help="List all files in blobs")
    parser.add_argument("--size", "-s", type=int, default=1024,
                        help="Maximum preview size in bytes (default: 1024)")

    args = parser.parse_args()

    try:
        preview = BlobPreview(args.blob_dir)

        if args.list:
            preview.list_files()
        elif args.file:
            preview.preview_file(args.file, args.size)
        else:
            preview.list_files()  # Default to listing files if no specific action

    except Exception as e:
        print(f"Error: {e}")

if __name__ == "__main__":
    main()