---
license: apache-2.0
tags:
- multi-modal
- 3D medical segmentation
size_categories:
- 1K<n<10K
---

```
M3D_Seg/
    0000/
        1/
            image.npy
            mask_(1, 512, 512, 96).npz
        2/
        ......
        0000.json
    0001/
    ......
```

### Dataset Download

The total dataset size is approximately 224 GB.

#### Clone with HTTP
```bash
git clone https://huggingface.co/datasets/GoodBaiBai88/M3D-Seg
```

#### SDK Download
```python
from datasets import load_dataset

dataset = load_dataset("GoodBaiBai88/M3D-Seg")
```

#### Manual Download
Manually download all files from the dataset repository. Batch download tools are recommended for efficiency. Please note the following:

- **Downloading in parts and merging**: Because dataset 0024 is large, its original compressed file has been split into two parts, `0024_1` and `0024_2`. Download both files and unzip them in the same directory to ensure data integrity.
- **Masks stored as sparse matrices**: To save storage space, the foreground of each mask is stored in sparse matrix format and saved with the extension `.npz`. The name of each mask file typically encodes its shape, which is needed to identify and load it.
- **Data loading demo**: The script `data_load_demo.py` serves as a reference for correctly reading the sparse mask format and other related data. Refer to it for the specific loading procedure and required dependencies.

### Dataset Loading Method

#### 1. Direct Usage of Preprocessed Data
If you have already downloaded the preprocessed dataset, no additional processing steps are required; you can jump directly to step 2 to build and load the dataset. Note that the files provided by this dataset have been transformed and numbered by `data_process.py` and therefore differ from the original `nii.gz` files. For the specific preprocessing steps, refer to `data_process.py`.
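The sparse mask storage described in the download notes above can be illustrated with a self-contained round trip. This is a sketch of the general technique (a scipy CSR matrix over the flattened mask, with the shape recovered from the filename), not the exact code in `data_load_demo.py`:

```python
import ast
import os
import tempfile

import numpy as np
from scipy import sparse


def save_sparse_mask(mask: np.ndarray, path: str) -> None:
    # Flatten the (C, H, W, D) binary mask to 2D so it fits a scipy sparse matrix.
    flat = sparse.csr_matrix(mask.reshape(mask.shape[0], -1))
    sparse.save_npz(path, flat)


def load_sparse_mask(path: str) -> np.ndarray:
    # Recover the original shape from the "(C, H, W, D)" tuple in the filename.
    name = os.path.basename(path)
    shape = ast.literal_eval(name[name.index("("):name.index(")") + 1])
    return sparse.load_npz(path).toarray().reshape(shape)


# Round-trip a toy mask with a small foreground cube.
mask = np.zeros((1, 8, 8, 4), dtype=np.uint8)
mask[0, 2:5, 2:5, 1] = 1
path = os.path.join(tempfile.mkdtemp(), "mask_(1, 8, 8, 4).npz")
save_sparse_mask(mask, path)
restored = load_sparse_mask(path)
```

Only the nonzero foreground entries are written to disk, which is why the mostly-background segmentation masks compress so well in this format.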
If adding new datasets or modifying existing ones, refer to `data_process.py` for data preprocessing and uniform formatting.

#### 2. Build Dataset
To facilitate model training and evaluation with this dataset, we provide example code for the Dataset class. Wrap the dataset in your project according to the following example:

```python
```

### Data Splitting
Each sub-dataset folder is split into `train` and `test` parts through a JSON file, facilitating model training and testing.

### Dataset Sources

| ID | Dataset | Link |
| ------------- | ------------- | ------------- |
| 0000 | CHAOS | https://chaos.grand-challenge.org/ |
| 0001 | HaN-Seg | https://han-seg2023.grand-challenge.org/ |
| 0002 | AMOS22 | https://amos22.grand-challenge.org/ |
| 0003 | AbdomenCT-1K | https://github.com/JunMa11/AbdomenCT-1K |
| 0004 | KiTS23 | https://kits-challenge.org/kits23/ |
| 0005 | KiPA22 | https://kipa22.grand-challenge.org/ |
| 0006 | KiTS19 | https://kits19.grand-challenge.org/ |
| 0007 | BTCV | https://www.synapse.org/#!Synapse:syn3193805/wiki/217753 |
| 0008 | Pancreas-CT | https://wiki.cancerimagingarchive.net/display/public/pancreas-ct |
| 0009 | 3D-IRCADB | https://www.kaggle.com/datasets/nguyenhoainam27/3dircadb |
| 0010 | FLARE22 | https://flare22.grand-challenge.org/ |
| 0011 | TotalSegmentator | https://github.com/wasserth/TotalSegmentator |
| 0012 | CT-ORG | https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=61080890 |
| 0013 | WORD | https://paperswithcode.com/dataset/word |
| 0014 | VerSe19 | https://osf.io/nqjyw/ |
| 0015 | VerSe20 | https://osf.io/t98fz/ |
| 0016 | SLIVER07 | https://sliver07.grand-challenge.org/ |
| 0017 | QUBIQ | https://qubiq.grand-challenge.org/ |
| 0018 | MSD-Colon | http://medicaldecathlon.com/ |
| 0019 | MSD-HepaticVessel | http://medicaldecathlon.com/ |
| 0020 | MSD-Liver | http://medicaldecathlon.com/ |
| 0021 | MSD-Lung | http://medicaldecathlon.com/ |
| 0022 | MSD-Pancreas | http://medicaldecathlon.com/ |
| 0023 | MSD-Spleen | http://medicaldecathlon.com/ |
| 0024 | LUNA16 | https://luna16.grand-challenge.org/Data/ |

## Dataset Copyright Information

All datasets included here are publicly available. For detailed copyright information, please refer to the corresponding dataset links.

## Citation

If you use this dataset, please cite the following works:

```BibTeX
@misc{bai2024m3d,
      title={M3D: Advancing 3D Medical Image Analysis with Multi-Modal Large Language Models},
      author={Fan Bai and Yuxin Du and Tiejun Huang and Max Q. -H. Meng and Bo Zhao},
      year={2024},
      eprint={2404.00578},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@misc{du2024segvol,
      title={SegVol: Universal and Interactive Volumetric Medical Image Segmentation},
      author={Yuxin Du and Fan Bai and Tiejun Huang and Bo Zhao},
      year={2024},
      eprint={2311.13385},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
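The Dataset class example in the "Build Dataset" section above is left empty in this card. As a placeholder, here is a minimal sketch that walks the directory layout and per-dataset JSON split file described earlier. It is duck-typed like `torch.utils.data.Dataset` (implementing `__len__` and `__getitem__`) so it stays dependency-light; the class name and the JSON `"train"`/`"test"` keys are assumptions based on this card, not the project's actual code. The demo builds a tiny fake sub-dataset in a temp directory to exercise the loader.

```python
import ast
import json
import os
import tempfile

import numpy as np
from scipy import sparse


class M3DSegDataset:
    """Minimal loader for one numbered sub-dataset (e.g. M3D_Seg/0000).

    Hypothetical sketch; in a real project, subclass torch.utils.data.Dataset.
    """

    def __init__(self, root: str, split: str = "train"):
        ds_id = os.path.basename(root.rstrip("/"))
        # The per-dataset JSON (e.g. 0000.json) is assumed to map split names
        # to lists of case-folder names.
        with open(os.path.join(root, f"{ds_id}.json")) as f:
            self.cases = json.load(f)[split]
        self.root = root

    def __len__(self):
        return len(self.cases)

    def __getitem__(self, idx):
        case_dir = os.path.join(self.root, self.cases[idx])
        image = np.load(os.path.join(case_dir, "image.npy"))
        # The mask filename encodes its shape, e.g. "mask_(1, 512, 512, 96).npz".
        mask_name = next(n for n in os.listdir(case_dir) if n.endswith(".npz"))
        shape = ast.literal_eval(
            mask_name[mask_name.index("("):mask_name.index(")") + 1]
        )
        mask = sparse.load_npz(
            os.path.join(case_dir, mask_name)
        ).toarray().reshape(shape)
        return image, mask


# Build a tiny fake sub-dataset to exercise the loader.
root = os.path.join(tempfile.mkdtemp(), "0000")
case = os.path.join(root, "1")
os.makedirs(case)
np.save(os.path.join(case, "image.npy"), np.zeros((1, 8, 8, 4), dtype=np.float32))
m = np.zeros((1, 8, 8, 4), dtype=np.uint8)
m[0, 2:5, 2:5, 1] = 1
sparse.save_npz(
    os.path.join(case, "mask_(1, 8, 8, 4).npz"),
    sparse.csr_matrix(m.reshape(1, -1)),
)
with open(os.path.join(root, "0000.json"), "w") as f:
    json.dump({"train": ["1"], "test": []}, f)

ds = M3DSegDataset(root, split="train")
image, mask = ds[0]
```

Refer to `data_load_demo.py` and `data_process.py` in the repository for the authoritative field names and preprocessing conventions.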