---
tags:
- Cross Modal Segmentation
- Disentangled Representation Learning
- Compositionality
---
# Enhancing Cross-Modal Medical Image Segmentation through Compositionality and Disentanglement

This repository contains the checkpoints of several disentangled representation learning models for cross-modal medical image segmentation, used in the paper 'Enhancing Cross-Modal Medical Image Segmentation through Compositionality'.

In particular, it contains the checkpoints of our proposed method, where we intr

The checkpoints are trained for MYO, LV, and RV segmentation using the MMWHS dataset in both directions, i.e. with CT and MRI as the target domain. Moreover, they are trained for liver parenchyma segmentation using the CHAOS dataset, with both MRI T1 and MRI T2 as the target domain.

Please refer to the [original GitHub repository](https://github.com/Trustworthy-AI-UU-NKI/Cross-Modal-Segmentation) for the code.