Dataset schema (column name, type, and observed value range):

| Column | Type | Observed range |
|---|---|---|
| bibtex_url | null | — |
| proceedings | string | length 58 |
| bibtext | string | length 511–974 |
| abstract | string | length 92–2k |
| title | string | length 30–207 |
| authors | sequence | 1–22 items |
| id | string | 1 class |
| arxiv_id | string | length 0–10 |
| GitHub | sequence | 1 item |
| paper_page | string | 14 classes |
| n_linked_authors | int64 | -1 to 1 |
| upvotes | int64 | -1 to 1 |
| num_comments | int64 | -1 to 0 |
| n_authors | int64 | -1 to 10 |
| Models | sequence | 0–4 items |
| Datasets | sequence | 0–1 items |
| Spaces | sequence | 0 items |
| old_Models | sequence | 0–4 items |
| old_Datasets | sequence | 0–1 items |
| old_Spaces | sequence | 0 items |
| paper_page_exists_pre_conf | int64 | 0 to 1 |
| type | string | 2 classes |
| unique_id | int64 | 0–855 |

The example records below list these fields in the same order, `|`-delimited, one record per block.
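Before the records, a minimal sketch of how rows with this schema might be loaded and queried with the Hugging Face `datasets` library. The repository id `your-namespace/miccai-2024-papers` is a placeholder (not the actual Hub id), and loading via `load_dataset` assumes the dump is published as a Hub dataset or local data files; adjust the path accordingly.

```python
# Minimal sketch, assuming the dump is available as a `datasets`-loadable dataset.
# "your-namespace/miccai-2024-papers" is a placeholder repository id.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("your-namespace/miccai-2024-papers", split="train")

# Keep records that link a GitHub repository and carry an arXiv id.
with_code = ds.filter(
    lambda row: any(url.strip() for url in row["GitHub"]) and row["arxiv_id"] != ""
)

# Tally presentation type (the `type` column has two classes, e.g. Poster / Oral).
print(Counter(with_code["type"]))

# Look up a single record by its unique_id (200 is the first record below).
record = ds.filter(lambda row: row["unique_id"] == 200)[0]
print(record["title"], record["proceedings"])
```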
null | https://papers.miccai.org/miccai-2024/paper/3191_paper.pdf | @InProceedings{ Sae_SurvRNC_MICCAI2024,
author = { Saeed, Numan and Ridzuan, Muhammad and Maani, Fadillah Adamsyah and Alasmawi, Hussain and Nandakumar, Karthik and Yaqub, Mohammad },
title = { { SurvRNC: Learning Ordered Representations for Survival Prediction using Rank-N-Contrast } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | Predicting the likelihood of survival is of paramount importance for individuals diagnosed with cancer as it provides invaluable information regarding prognosis at an early stage. This knowledge enables the formulation of effective treatment plans that lead to improved patient outcomes. In the past few years, deep learning models have provided a feasible solution for assessing medical images, electronic health records, and genomic data to estimate cancer risk scores. However, these models often fall short of their potential because they struggle to learn regression-aware feature representations. In this study, we propose the Survival Rank-N-Contrast (SurvRNC) method, which introduces a loss function as a regularizer to obtain an ordered representation based on the survival times. This function can handle censored data and can be incorporated into any survival model to ensure that the learned representation is ordinal. The model was extensively evaluated on the HEad & NeCK TumOR (HECKTOR) segmentation and outcome-prediction task dataset. We demonstrate that using the SurvRNC method for training can achieve higher performance on different deep survival models. Additionally, it outperforms state-of-the-art methods by 3.6% on the concordance index. The code is publicly available at https://github.com/numanai/SurvRNC. | SurvRNC: Learning Ordered Representations for Survival Prediction using Rank-N-Contrast | [
"Saeed, Numan",
"Ridzuan, Muhammad",
"Maani, Fadillah Adamsyah",
"Alasmawi, Hussain",
"Nandakumar, Karthik",
"Yaqub, Mohammad"
] | Conference | 2403.10603 | [
"https://github.com/numanai/SurvRNC"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 200 |
|
null | https://papers.miccai.org/miccai-2024/paper/0192_paper.pdf | @InProceedings{ An_SubjectAdaptive_MICCAI2024,
author = { An, Sion and Kang, Myeongkyun and Kim, Soopil and Chikontwe, Philip and Shen, Li and Park, Sang Hyun },
title = { { Subject-Adaptive Transfer Learning Using Resting State EEG Signals for Cross-Subject EEG Motor Imagery Classification } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Electroencephalography (EEG) motor imagery (MI) classification is a fundamental, yet challenging task due to the variation of signals between individuals i.e., inter-subject variability. Previous approaches try to mitigate this using task-specific (TS) EEG signals from the target subject in training. However, recording TS EEG signals requires time and limits its applicability in various fields. In contrast, resting state (RS) EEG signals are a viable alternative due to ease of acquisition with rich subject information. In this paper, we propose a novel subject-adaptive transfer learning strategy that utilizes RS EEG signals to adapt models on unseen subject data. Specifically, we disentangle extracted features into task- and subject-dependent features and use them to calibrate RS EEG signals for obtaining task information while preserving subject characteristics. The calibrated signals are then used to adapt the model to the target subject, enabling the model to simulate processing TS EEG signals of the target subject. The proposed method achieves state-of-the-art accuracy on three public benchmarks, demonstrating the effectiveness of our method in cross-subject EEG MI classification. Our findings highlight the potential of leveraging RS EEG signals to advance practical brain-computer interface systems. The code is available at https://github.com/SionAn/MICCAI2024-ResTL. | Subject-Adaptive Transfer Learning Using Resting State EEG Signals for Cross-Subject EEG Motor Imagery Classification | [
"An, Sion",
"Kang, Myeongkyun",
"Kim, Soopil",
"Chikontwe, Philip",
"Shen, Li",
"Park, Sang Hyun"
] | Conference | 2405.19346 | [
"https://github.com/SionAn/MICCAI2024-ResTL"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 201 |
|
null | https://papers.miccai.org/miccai-2024/paper/1844_paper.pdf | @InProceedings{ Zen_Reciprocal_MICCAI2024,
author = { Zeng, Qingjie and Lu, Zilin and Xie, Yutong and Lu, Mengkang and Ma, Xinke and Xia, Yong },
title = { { Reciprocal Collaboration for Semi-supervised Medical Image Classification } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | To acquire information from unlabeled data, current semi-supervised methods are mainly developed based on the mean-teacher or co-training paradigm, with non-controversial optimization objectives so as to regularize the discrepancy in learning towards consistency.
However, these methods suffer from the consensus issue, where the learning process might devolve into vanilla self-training due to identical learning targets.
To address this issue, we propose a novel \textbf{Re}ciprocal \textbf{Co}llaboration model (ReCo) for semi-supervised medical image classification.
ReCo is composed of a main network and an auxiliary network, which are constrained by distinct while latently consistent objectives. On labeled data, the main network learns from the ground truth acquiescently, while simultaneously generating auxiliary labels utilized as the supervision for the auxiliary network. Specifically, given a labeled image, the auxiliary label is defined as the category with the second-highest classification score predicted by the main network, thus symbolizing the most likely mistaken classification. Hence, the auxiliary network is specifically designed to discern \emph{which category the image should \textbf{NOT} belong to}. On unlabeled data, cross pseudo supervision is applied using reversed predictions. Furthermore, feature embeddings are purposefully regularized under the guidance of contrary predictions, with the aim of differentiating between categories susceptible to misclassification.
We evaluate our approach on two public benchmarks. Our results demonstrate the superiority of ReCo, which consistently outperforms popular competitors and sets a new state of the art. | Reciprocal Collaboration for Semi-supervised Medical Image Classification | [
"Zeng, Qingjie",
"Lu, Zilin",
"Xie, Yutong",
"Lu, Mengkang",
"Ma, Xinke",
"Xia, Yong"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 202 |
||
null | https://papers.miccai.org/miccai-2024/paper/0187_paper.pdf | @InProceedings{ Li_Comprehensive_MICCAI2024,
author = { Li, Wei and Zhang, Jingyang and Heng, Pheng-Ann and Gu, Lixu },
title = { { Comprehensive Generative Replay for Task-Incremental Segmentation with Concurrent Appearance and Semantic Forgetting } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15008 },
month = {October},
pages = { pending },
} | Generalist segmentation models are increasingly favored for diverse tasks involving various objects from different image sources. Task-Incremental Learning (TIL) offers a privacy-preserving training paradigm using tasks arriving sequentially, instead of gathering them due to strict data sharing policies. However, the task evolution can span a wide scope that involves shifts in both image appearance and segmentation semantics with intricate correlation, causing concurrent appearance and semantic forgetting. To solve this issue, we propose a Comprehensive Generative Replay (CGR) framework that restores appearance and semantic knowledge by synthesizing image-mask pairs to mimic past task data, which focuses on two aspects: modeling image-mask correspondence and promoting scalability for diverse tasks. Specifically, we introduce a novel Bayesian Joint Diffusion (BJD) model for high-quality synthesis of image-mask pairs with their correspondence explicitly preserved by conditional denoising. Furthermore, we develop a Task-Oriented Adapter (TOA) that recalibrates prompt embeddings to modulate the diffusion model, making the data synthesis compatible with different tasks. Experiments on incremental tasks (cardiac, fundus and prostate segmentation) show its clear advantage for alleviating concurrent appearance and semantic forgetting. Code is available at https://github.com/jingyzhang/CGR. | Comprehensive Generative Replay for Task-Incremental Segmentation with Concurrent Appearance and Semantic Forgetting | [
"Li, Wei",
"Zhang, Jingyang",
"Heng, Pheng-Ann",
"Gu, Lixu"
] | Conference | 2406.19796 | [
"https://github.com/jingyzhang/CGR"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 203 |
|
null | https://papers.miccai.org/miccai-2024/paper/2336_paper.pdf | @InProceedings{ Aya_UnWaveNet_MICCAI2024,
author = { Ayad, Ishak and Tarpau, Cécilia and Cebeiro, Javier and Nguyen, Maï K. },
title = { { UnWave-Net: Unrolled Wavelet Network for Compton Tomography Image Reconstruction } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | Computed tomography (CT) is a widely used medical imaging technique to scan internal structures of a body, typically involving collimation and mechanical rotation. Compton scatter tomography (CST) presents an interesting alternative to conventional CT by leveraging Compton physics instead of collimation to gather information from multiple directions. While CST introduces new imaging opportunities with several advantages such as high sensitivity, compactness, and entirely fixed systems, image reconstruction remains an open problem due to the mathematical challenges of CST modeling. In contrast, deep unrolling networks have demonstrated potential in CT image reconstruction, despite their computationally intensive nature. In this study, we investigate the efficiency of unrolling networks for CST image reconstruction. To address the important computational cost required for training, we propose UnWave-Net, a novel unrolled wavelet-based reconstruction network. This architecture includes a non-local regularization term based on wavelets, which captures long-range dependencies within images and emphasizes the multi-scale components of the wavelet transform. We evaluate our approach using a CST of circular geometry which stays completely static during data acquisition, where UnWave-Net facilitates image reconstruction in the absence of a specific reconstruction formula. Our method outperforms existing approaches and achieves state-of-the-art performance in terms of SSIM and PSNR, and offers an improved computational efficiency compared to traditional unrolling networks. | UnWave-Net: Unrolled Wavelet Network for Compton Tomography Image Reconstruction | [
"Ayad, Ishak",
"Tarpau, Cécilia",
"Cebeiro, Javier",
"Nguyen, Maï K."
] | Conference | 2406.03413 | [
"https://github.com/Ishak96/UnWave-Net"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 204 |
|
null | https://papers.miccai.org/miccai-2024/paper/2479_paper.pdf | @InProceedings{ Shu_SlideGCD_MICCAI2024,
author = { Shu, Tong and Shi, Jun and Sun, Dongdong and Jiang, Zhiguo and Zheng, Yushan },
title = { { SlideGCD: Slide-based Graph Collaborative Training with Knowledge Distillation for Whole Slide Image Classification } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Existing WSI analysis methods rest on the consensus that histopathological characteristics of tumors are significant guidance for cancer diagnostics. Particularly, as the evolution of cancers is a continuous process, the correlations and differences across various stages, anatomical locations and patients should be taken into account. However, recent research mainly focuses on the inner-contextual information in a single WSI, ignoring the correlations between slides. To verify whether introducing the slide inter-correlations can bring improvements to WSI representation learning, we propose a generic WSI analysis pipeline, SlideGCD, that considers the existing multi-instance learning (MIL) methods as the backbone and casts the WSI classification task as a node classification problem. More specifically, SlideGCD declares a node buffer that stores previous slide embeddings for subsequent extensive slide-based graph construction and conducts graph learning to explore the inter-correlations implied in the slide-based graph. Moreover, we frame the MIL classifier and graph learning into two parallel workflows and deploy knowledge distillation to transfer the differentiable information to the graph neural network. A consistent performance boost brought by SlideGCD is observed for four previous state-of-the-art MIL methods on two TCGA benchmark datasets. | SlideGCD: Slide-based Graph Collaborative Training with Knowledge Distillation for Whole Slide Image Classification | [
"Shu, Tong",
"Shi, Jun",
"Sun, Dongdong",
"Jiang, Zhiguo",
"Zheng, Yushan"
] | Conference | 2407.08968 | [
"https://github.com/HFUT-miaLab/SlideGCD"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 205 |
|
null | https://papers.miccai.org/miccai-2024/paper/2986_paper.pdf | @InProceedings{ Bai_CrossPhase_MICCAI2024,
author = { Bai, Bizhe and Zhou, Yan-Jie and Hu, Yujian and Mok, Tony C. W. and Xiang, Yilang and Lu, Le and Zhang, Hongkun and Xu, Minfeng },
title = { { Cross-Phase Mutual Learning Framework for Pulmonary Embolism Identification on Non-Contrast CT Scans } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | Pulmonary embolism (PE) is a life-threatening condition where rapid and accurate diagnosis is imperative yet difficult due to predominantly atypical symptomatology. Computed tomography pulmonary angiography (CTPA) is acknowledged as the gold standard imaging tool in clinics, yet it can be contraindicated for emergency department (ED) patients and represents an onerous procedure, thus necessitating PE identification through non-contrast CT (NCT) scans. In this work, we explore the feasibility of applying a deep-learning approach to NCT scans for PE identification. We propose a novel Cross-Phase Mutual learNing framework (CPMN) that fosters knowledge transfer from CTPA to NCT, while concurrently conducting embolism segmentation and abnormality classification in a multi-task manner. The proposed CPMN leverages the Inter-Feature Alignment (IFA) strategy that enhances spatial contiguity and mutual learning between the dual-pathway network, while the Intra-Feature Discrepancy (IFD) strategy can facilitate precise segmentation of PE against complex backgrounds for single-pathway networks. For a comprehensive assessment of the proposed approach, a large-scale dual-phase dataset containing 334 PE patients and 1,105 normal subjects has been established. Experimental results demonstrate that CPMN achieves the leading identification performance, which is 95.4% and 99.6% in patient-level sensitivity and specificity on NCT scans, indicating the potential of our approach as an economical, accessible, and precise tool for PE identification in clinical practice. | Cross-Phase Mutual Learning Framework for Pulmonary Embolism Identification on Non-Contrast CT Scans | [
"Bai, Bizhe",
"Zhou, Yan-Jie",
"Hu, Yujian",
"Mok, Tony C. W.",
"Xiang, Yilang",
"Lu, Le",
"Zhang, Hongkun",
"Xu, Minfeng"
] | Conference | 2407.11529 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 206 |
|
null | https://papers.miccai.org/miccai-2024/paper/1643_paper.pdf | @InProceedings{ Jun_DeformationAware_MICCAI2024,
author = { Jung, Sunyoung and Choi, Yoonseok and Al-masni, Mohammed A. and Jung, Minyoung and Kim, Dong-Hyun },
title = { { Deformation-Aware Segmentation Network Robust to Motion Artifacts for Brain Tissue Segmentation using Disentanglement Learning } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | Motion artifacts caused by prolonged acquisition time are a significant challenge in Magnetic Resonance Imaging (MRI), hindering accurate tissue segmentation. These artifacts appear as blurred images that mimic tissue-like appearances, making segmentation difficult. This study proposes a novel deep learning framework that demonstrates superior performance in both motion correction and robust brain tissue segmentation in the presence of artifacts. The core concept lies in a complementary process: a disentanglement learning network progressively removes artifacts, leading to cleaner images and consequently, more accurate segmentation by a jointly trained motion estimation and segmentation network. This network generates three outputs: a motion-corrected image, a motion deformation map that identifies artifact-affected regions, and a brain tissue segmentation mask. This deformation serves as a guidance mechanism for the disentanglement process, aiding the model in recovering lost information or removing artificial structures introduced by the artifacts. Extensive in-vivo experiments on pediatric motion data demonstrate that our proposed framework outperforms state-of-the-art methods in segmenting motion-corrupted MRI scans. The code is available at https://github.com/SunYJ-hxppy/Multi-Net. | Deformation-Aware Segmentation Network Robust to Motion Artifacts for Brain Tissue Segmentation using Disentanglement Learning | [
"Jung, Sunyoung",
"Choi, Yoonseok",
"Al-masni, Mohammed A.",
"Jung, Minyoung",
"Kim, Dong-Hyun"
] | Conference | [
"https://github.com/SunYJ-hxppy/Multi-Net"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 207 |
||
null | https://papers.miccai.org/miccai-2024/paper/2333_paper.pdf | @InProceedings{ Gao_MEDBind_MICCAI2024,
author = { Gao, Yuan and Kim, Sangwook and Austin, David E and McIntosh, Chris },
title = { { MEDBind: Unifying Language and Multimodal Medical Data Embeddings } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | Medical vision-language pretraining models (VLPM) have achieved remarkable progress in fusing chest X-rays (CXR) with clinical texts, introducing image-text data binding approaches that enable zero-shot learning and downstream clinical tasks. However, the current landscape lacks the holistic integration of additional medical modalities, such as electrocardiograms (ECG). We present MEDBind (Medical Electronic patient recorD Bind), which learns joint embeddings across CXR, ECG, and text. Using text data as the central anchor, MEDBind features tri-modality binding, delivering competitive performance in top-K retrieval, zero-shot, and few-shot benchmarks against established VLPM, and the ability for CXR-to-ECG zero-shot classification and retrieval. This seamless integration is achieved by combining contrastive loss on modality-text pairs with our proposed contrastive loss function, Edge-Modality Contrastive Loss, fostering a cohesive embedding space for CXR, ECG, and text. Finally, we demonstrate that MEDBind can improve downstream tasks by directly integrating CXR and ECG embeddings into a large-language model for multimodal prompt tuning. | MEDBind: Unifying Language and Multimodal Medical Data Embeddings | [
"Gao, Yuan",
"Kim, Sangwook",
"Austin, David E",
"McIntosh, Chris"
] | Conference | 2403.12894 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 208 |
|
null | https://papers.miccai.org/miccai-2024/paper/3719_paper.pdf | @InProceedings{ Che_Disentangled_MICCAI2024,
author = { Cheng, Jiale and Wu, Zhengwang and Yuan, Xinrui and Wang, Li and Lin, Weili and Grewen, Karen and Li, Gang },
title = { { Disentangled Hybrid Transformer for Identification of Infants with Prenatal Drug Exposure } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | Prenatal drug exposure, which occurs during a time of extraordinary and critical brain development, is typically associated with cognitive, behavioral, and physiological deficits during infancy, childhood, and adolescence. Early identifying infants with prenatal drug exposures and associated biomarkers using neuroimages can help inform earlier, more effective, and personalized interventions to greatly improve later cognitive outcomes. To this end, we propose a novel deep learning model called disentangled hybrid volume-surface transformer for identifying individual infants with prenatal drug exposures. Specifically, we design two distinct branches, a volumetric network for learning non-cortical features in 3D image space, and a surface network for learning features on the highly convoluted cortical surface manifold. To better capture long-range dependency and generate highly discriminative representations, image and surface transformers are respectively employed for the volume and surface branches. Then, a disentanglement strategy is further proposed to separate the representations from two branches into complementary variables and common variables, thus removing redundant information and boosting expressive capability. After that, the disentangled representations are concatenated to a classifier to determine if there is an existence of prenatal drug exposures. We have validated our method on 210 infant MRI scans and demonstrated its superior performance, compared to ablated models and state-of-the-art methods. | Disentangled Hybrid Transformer for Identification of Infants with Prenatal Drug Exposure | [
"Cheng, Jiale",
"Wu, Zhengwang",
"Yuan, Xinrui",
"Wang, Li",
"Lin, Weili",
"Grewen, Karen",
"Li, Gang"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 209 |
||
null | https://papers.miccai.org/miccai-2024/paper/1746_paper.pdf | @InProceedings{ Chu_Anatomicconstrained_MICCAI2024,
author = { Chu, Yuetan and Yang, Changchun and Luo, Gongning and Qiu, Zhaowen and Gao, Xin },
title = { { Anatomic-constrained Medical Image Synthesis via Physiological Density Sampling } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Despite substantial progress in utilizing deep learning methods for clinical diagnosis, their efficacy depends on sufficient annotated data, which is often of limited availability owing to the extensive manual efforts required for labeling. Although prevalent data synthesis techniques can mitigate such data scarcity, they risk generating outputs with distorted anatomy that poorly represent real-world data. We address this challenge through a novel integration of anatomically constrained synthesis with registration uncertainty-based refinement, termed Anatomic-Constrained medical image Synthesis (ACIS). Specifically, we (1) generate the pseudo-mask via physiological density estimation and Voronoi tessellation to represent the spatial anatomical information as the image synthesis prior; (2) synthesize diverse yet realistic image-annotation pairs guided by the pseudo-masks; and (3) refine the outputs by registration uncertainty estimation to encourage the anatomical consistency between synthesized and real-world images. We validate ACIS for improving performance in both segmentation and image reconstruction tasks for few-shot learning. Experiments across diverse datasets demonstrate that ACIS outperforms state-of-the-art image synthesis techniques and enables models trained on only 10% or less of the total training data to achieve comparable or superior performance to that of models trained on complete datasets. The source code is publicly available at https://github.com/Arturia-Pendragon-Iris/VonoroiGeneration. | Anatomic-constrained Medical Image Synthesis via Physiological Density Sampling | [
"Chu, Yuetan",
"Yang, Changchun",
"Luo, Gongning",
"Qiu, Zhaowen",
"Gao, Xin"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 210 |
||
null | https://papers.miccai.org/miccai-2024/paper/0495_paper.pdf | @InProceedings{ Zha_IHCSurv_MICCAI2024,
author = { Zhang, Yejia and Chao, Hanqing and Qiu, Zhongwei and Liu, Wenbin and Shen, Yixuan and Sapkota, Nishchal and Gu, Pengfei and Chen, Danny Z. and Lu, Le and Yan, Ke and Jin, Dakai and Bian, Yun and Jiang, Hui },
title = { { IHCSurv: Effective Immunohistochemistry Priors for Cancer Survival Analysis in Gigapixel Multi-stain Whole Slide Images } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Recent cancer survival prediction approaches have made great
strides in analyzing H&E-stained gigapixel whole-slide images. However, methods targeting the immunohistochemistry (IHC) modality remain largely unexplored. We remedy this methodological gap and propose IHCSurv, a new framework that leverages IHC-specific priors to improve downstream survival prediction. We use these priors to guide our model to the most prognostic tissue regions and simultaneously enrich local features. To address drawbacks in recent approaches related to limited spatial context and cross-regional relation modeling, we propose a spatially-constrained spectral clustering algorithm that preserves spatial context alongside an efficient tissue region encoder that facilitates information transfer across tissue regions both within and between images. We evaluate our framework on a multi-stain IHC dataset of pancreatic cancer patients, where IHCSurv markedly outperforms existing state-of-the-art survival prediction methods. | IHCSurv: Effective Immunohistochemistry Priors for Cancer Survival Analysis in Gigapixel Multi-stain Whole Slide Images | [
"Zhang, Yejia",
"Chao, Hanqing",
"Qiu, Zhongwei",
"Liu, Wenbin",
"Shen, Yixuan",
"Sapkota, Nishchal",
"Gu, Pengfei",
"Chen, Danny Z.",
"Lu, Le",
"Yan, Ke",
"Jin, Dakai",
"Bian, Yun",
"Jiang, Hui"
] | Conference | [
"https://github.com/charzharr/miccai24-ihcsurv"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 211 |
||
null | https://papers.miccai.org/miccai-2024/paper/0074_paper.pdf | @InProceedings{ Liu_UinTSeg_MICCAI2024,
author = { Liu, Jiameng and Liu, Feihong and Sun, Kaicong and Sun, Yuhang and Huang, Jiawei and Jiang, Caiwen and Rekik, Islem and Shen, Dinggang },
title = { { UinTSeg: Unified Infant Brain Tissue Segmentation with Anatomy Delineation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | Accurate brain tissue segmentation is a vital prerequisite for charting infant brain development and for diagnosing early brain disorders. However, due to inherently ongoing myelination and maturation, the intensity distributions of gray matter (GM) and white matter (WM) on T1-weighted (T1w) data undergo substantial variations in intensity from neonatal to 24 months. Especially at the ages around 6 months, the intensity distributions of GM and WM are highly overlapped. These physiological phenomena pose great challenges for automatic infant brain tissue segmentation, even for expert radiologists. To address these issues, in this study, we present a unified infant brain tissue segmentation (UinTSeg) framework to accurately segment brain tissues of infants aged 0-24 months using a single model. UinTSeg comprises two stages: 1) boundary extraction and 2) tissue segmentation. In the first stage, to alleviate the difficulty of tissue segmentation caused by variations in intensity, we extract the intensity-invariant tissue boundaries from T1w data driven by edge maps extracted from the Sobel filter. In the second stage, the Sobel edge maps and extracted boundaries of GM, WM, and cerebrospinal fluid (CSF) are utilized as intensity-invariant anatomy information to ensure unified and accurate tissue segmentation in infants age period of 0-24 months. Both stages are built upon an attention-based surrounding-aware segmentation network (ASNet), which exploits the contextual information from multi-scale patches to improve the segmentation performance. Extensive experiments on the baby connectome project dataset demonstrate the superiority of our proposed framework over five state-of-the-art methods. | UinTSeg: Unified Infant Brain Tissue Segmentation with Anatomy Delineation | [
"Liu, Jiameng",
"Liu, Feihong",
"Sun, Kaicong",
"Sun, Yuhang",
"Huang, Jiawei",
"Jiang, Caiwen",
"Rekik, Islem",
"Shen, Dinggang"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 212 |
||
null | https://papers.miccai.org/miccai-2024/paper/0481_paper.pdf | @InProceedings{ Liu_Overlay_MICCAI2024,
author = { Liu, Jiacheng and Qian, Wenhua and Cao, Jinde and Liu, Peng },
title = { { Overlay Mantle-Free for Semi-Supervised Medical Image Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15010 },
month = {October},
pages = { pending },
} | Semi-supervised medical image segmentation, crucial for medical research, enhances model generalization using unlabeled data with minimal labeled data. Current methods face edge uncertainty and struggle to learn specific shapes from pixel classification alone. To address these issues, we propose a two-stage knowledge distillation approach that employs a teacher model to distill information from labeled data, enhancing the student model with unlabeled data. In the first stage, we use true labels to augment data and sharpen target edges to make teacher predictions more confident. In the second stage, we freeze the teacher model parameters to generate pseudo labels for unlabeled data and guide the student model to learn. By feeding the original background image to the teacher and the enhanced image to the student, the student model learns the information hidden under the mantle and the overall shape of the hidden information of the segmented target. Experimental results on the Left Atrium dataset surpass existing methods. Our overlay mantle-free training method enables segmentation based on learned shape information even in data loss scenarios, exhibiting improved edge segmentation accuracy. The code is available at https://github.com/vigilliu/OMF. | Overlay Mantle-Free for Semi-Supervised Medical Image Segmentation | [
"Liu, Jiacheng",
"Qian, Wenhua",
"Cao, Jinde",
"Liu, Peng"
] | Conference | [
"https://github.com/vigilliu/OMF"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 213 |
||
null | https://papers.miccai.org/miccai-2024/paper/0067_paper.pdf | @InProceedings{ Zen_Missing_MICCAI2024,
author = { Zeng, Zhilin and Peng, Zelin and Yang, Xiaokang and Shen, Wei },
title = { { Missing as Masking: Arbitrary Cross-modal Feature Reconstruction for Incomplete Multimodal Brain Tumor Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15008 },
month = {October},
pages = { pending },
} | Automatic brain tumor segmentation using multimodal MRI images is a critical task in medical imaging. A complete set of multimodal MRI images for a subject offers comprehensive views of brain tumors, thus providing ideal tumor segmentation performance. However, acquiring such modality-complete data for every subject is frequently impractical in clinical practice, which requires a segmentation model to be able to 1) flexibly leverage both modality-complete and modality-incomplete data for model training, and 2) prevent significant performance degradation in inference if certain modalities are missing. To meet these two demands, in this paper, we propose M$^3$FeCon (\textbf{M}issing as \textbf{M}asking: arbitrary cross-\textbf{M}odal \textbf{Fe}ature Re\textbf{Con}struction) for incomplete multimodal brain tumor segmentation, which can learn approximate modality-complete feature representations from modality-incomplete data. Specifically, we treat missing modalities also as masked modalities, and employ a strategy similar to Masked Autoencoder (MAE) to learn feature-to-feature reconstruction across arbitrary modality combinations. The reconstructed features for missing modalities act as supplements to form approximate modality-complete feature representations. Extensive evaluations on the BraTS18 dataset demonstrate that our method achieves state-of-the-art performance in brain tumor segmentation with incomplete modalities, especially on the enhancing tumor region, with a 4.61\% improvement in terms of Dice score. | Missing as Masking: Arbitrary Cross-modal Feature Reconstruction for Incomplete Multimodal Brain Tumor Segmentation | [
"Zeng, Zhilin",
"Peng, Zelin",
"Yang, Xiaokang",
"Shen, Wei"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 214 |
||
null | https://papers.miccai.org/miccai-2024/paper/1810_paper.pdf | @InProceedings{ Luo_Textual_MICCAI2024,
author = { Luo, Yuanjiang and Li, Hongxiang and Wu, Xuan and Cao, Meng and Huang, Xiaoshuang and Zhu, Zhihong and Liao, Peixi and Chen, Hu and Zhang, Yi },
title = { { Textual Inversion and Self-supervised Refinement for Radiology Report Generation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | Existing mainstream approaches follow the encoder-decoder paradigm for generating radiology reports. They focus on improving the network structure of encoders and decoders, which leads to two shortcomings: overlooking the modality gap and ignoring report content constraints. In this paper, we propose Textual Inversion and Self-supervised Refinement (TISR) to address the above two issues. Specifically, textual inversion can project text and image into the same space by representing images as pseudo words to eliminate the cross-modeling gap. Subsequently, self-supervised refinement refines these pseudo words through contrastive loss computation between images and texts, enhancing the fidelity of generated reports to images. Notably, TISR is orthogonal to most existing methods and is plug-and-play. We conduct experiments on two widely-used public datasets and achieve significant improvements on various baselines, which demonstrates the effectiveness and generalization of TISR. The code will be available soon. | Textual Inversion and Self-supervised Refinement for Radiology Report Generation | [
"Luo, Yuanjiang",
"Li, Hongxiang",
"Wu, Xuan",
"Cao, Meng",
"Huang, Xiaoshuang",
"Zhu, Zhihong",
"Liao, Peixi",
"Chen, Hu",
"Zhang, Yi"
] | Conference | 2405.20607 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 215 |
|
null | https://papers.miccai.org/miccai-2024/paper/1632_paper.pdf | @InProceedings{ Den_Enable_MICCAI2024,
author = { Deng, Zhipeng and Luo, Luyang and Chen, Hao },
title = { { Enable the Right to be Forgotten with Federated Client Unlearning in Medical Imaging } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15010 },
month = {October},
pages = { pending },
} | The right to be forgotten, as stated in most data regulations, poses an underexplored challenge in federated learning (FL), leading to the development of federated unlearning (FU). However, current FU approaches often face trade-offs between efficiency, model performance, forgetting efficacy, and privacy preservation. In this paper, we delve into the paradigm of Federated Client Unlearning to guarantee a client the right to erase its contribution or influence, introducing the first FU framework in medical imaging. In the unlearning process of a client, the proposed Model-Contrastive Unlearning marks a pioneering step towards feature-level unlearning, and Frequency-Guided Memory Preservation ensures smooth forgetting of local knowledge while maintaining the generalizability of the trained global model, thus avoiding performance compromises and guaranteeing rapid post-training. We evaluate our FCU framework on two public medical image datasets, including intracranial hemorrhage diagnosis and skin lesion diagnosis, demonstrating that our proposed framework outperforms other state-of-the-art FU frameworks, with an expected speed-up of 10-15 times compared with retraining from scratch. The code and organized datasets will be made public. | Enable the Right to be Forgotten with Federated Client Unlearning in Medical Imaging | [
"Deng, Zhipeng",
"Luo, Luyang",
"Chen, Hao"
] | Conference | 2407.02356 | [
"https://github.com/dzp2095/FCU"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 216 |
|
null | https://papers.miccai.org/miccai-2024/paper/0638_paper.pdf | @InProceedings{ Yan_DCrownFormer_MICCAI2024,
author = { Yang, Su and Han, Jiyong and Lim, Sang-Heon and Yoo, Ji-Yong and Kim, SuJeong and Song, Dahyun and Kim, Sunjung and Kim, Jun-Min and Yi, Won-Jin },
title = { { DCrownFormer: Morphology-aware Point-to-Mesh Generation Transformer for Dental Crown Prosthesis from 3D Scan Data of Antagonist and Preparation Teeth } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | Dental prosthesis is important in designing artificial replacements to restore the function and appearance of teeth. However, designing a patient-specific dental prosthesis is still labor-intensive and depends on dental professionals with knowledge of oral anatomy and their experience. Also, the initial tooth template for designing dental crowns is not personalized. In this paper, we propose a novel point-to-mesh generation transformer (DCrownFormer) to directly and efficiently generate dental crown meshes from point inputs of 3D scans of antagonist and preparation teeth. Specifically, to learn morphological relationships between a point input and generated points of a dental crown, we introduce a morphology-aware cross-attention module (MCAM) in a transformer decoder and a curvature-penalty loss (CPL). Furthermore, we adopt Differentiable Poisson surface reconstruction for mesh reconstruction from generated points and normals of a dental crown by directly optimizing an indicator function using a mesh reconstruction loss (MRL). Experimental results demonstrate the superiority of DCrownFormer compared with other methods, by improving morphological details of occlusal surfaces such as dental grooves and cusps. We further validate the effectiveness of MCAM and MRL, and the significant benefits of CPL, through ablation studies. The code is available at https://github.com/suyang93/DCrownFormer/. | DCrownFormer: Morphology-aware Point-to-Mesh Generation Transformer for Dental Crown Prosthesis from 3D Scan Data of Antagonist and Preparation Teeth | [
"Yang, Su",
"Han, Jiyong",
"Lim, Sang-Heon",
"Yoo, Ji-Yong",
"Kim, SuJeong",
"Song, Dahyun",
"Kim, Sunjung",
"Kim, Jun-Min",
"Yi, Won-Jin"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Oral | 217 |
||
null | https://papers.miccai.org/miccai-2024/paper/1664_paper.pdf | @InProceedings{ Dai_SaSaMIM_MICCAI2024,
author = { Dai, Pengyu and Ou, Yafei and Yang, Yuqiao and Liu, Dichao and Hashimoto, Masahiro and Jinzaki, Masahiro and Miyake, Mototaka and Suzuki, Kenji },
title = { { SaSaMIM: Synthetic Anatomical Semantics-Aware Masked Image Modeling for Colon Tumor Segmentation in Non-contrast Abdominal Computed Tomography } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Colorectal cancer (CRC) is a critical global concern; despite advancements in computer-aided techniques, the development of early-stage computer-aided segmentation holds substantial clinical potential and warrants further exploration. This can be attributed to the challenge of localizing tumor-related information within the colonic region of the abdomen during segmentation, and to the fact that cancerous tissue remains indistinguishable from surrounding tissue even with contrast enhancement. In this work, a task-oriented Synthetic anatomical Semantics-aware Masked Image Modeling (SaSaMIM) method is proposed that leverages both existing and synthesized semantics for efficient utilization of unlabeled data. We first introduce a novel fine-grain synthetic mask modeling strategy that effectively integrates coarse organ semantics and synthetic tumor semantics in a label-free manner. Thus, tumor location perception in the pretraining phase is achieved by means of integrating both semantics. Next, a frequency-aware decoding branch is designed to achieve further supervision and representation of the Gaussian noise-based tumor semantics. Since the CT intensity of tumors follows a Gaussian distribution, representation in the frequency domain solves the difficulty in distinguishing cancerous tissues from surrounding healthy tissues due to their homogeneity. To demonstrate the proposed method’s performance, a non-contrast CT (NCCT) colon cancer dataset was assembled, aiming at early tumor diagnosis in a broader clinical setting. We validate our approach with cross-validation on these 110 cases and outperform the current SOTA self-supervised method by a 5% Dice score improvement on average. Comprehensive experiments have confirmed the efficacy of our proposed method. To our knowledge, this is the first study to apply task-oriented self-supervised learning methods on NCCT to achieve end-to-end early-stage colon tumor segmentation. | SaSaMIM: Synthetic Anatomical Semantics-Aware Masked Image Modeling for Colon Tumor Segmentation in Non-contrast Abdominal Computed Tomography | [
"Dai, Pengyu",
"Ou, Yafei",
"Yang, Yuqiao",
"Liu, Dichao",
"Hashimoto, Masahiro",
"Jinzaki, Masahiro",
"Miyake, Mototaka",
"Suzuki, Kenji"
] | Conference | [
"https://github.com/Da1daidaidai/SaSaMIM"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 218 |
||
null | https://papers.miccai.org/miccai-2024/paper/3032_paper.pdf | @InProceedings{ Yu_Material_MICCAI2024,
author = { Yu, Xiaopeng and Wu, Qianyu and Qin, Wenhui and Zhong, Tao and Su, Mengqing and Ma, Jinglu and Zhang, Yikun and Ji, Xu and Quan, Guotao and Chen, Yang and Du, Yanfeng and Lai, Xiaochun },
title = { { Material Decomposition in Photon-Counting CT: A Deep Learning Approach Driven by Detector Physics and ASIC Modeling } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | Photon-counting computed tomography (PCCT) based on photon-counting detectors (PCDs) stands out as a cutting-edge CT technology, offering enhanced spatial resolution, reduced radiation dose, and advanced material decomposition capabilities. Despite its recognized advantages, challenges arise from real-world phenomena such as PCD charge-sharing effects, application-specific integrated circuit (ASIC) pile-up, and spectrum shift, introducing a disparity between actual physical effects and the assumptions made in ideal physics models. This misalignment can lead to substantial errors during image reconstruction processes, particularly in material decomposition. In this paper, we introduce a novel detector physics and ASIC model-guided deep learning system model tailored for PCCT. This model adeptly captures the comprehensive response of the PCCT system, encompassing both detector and ASIC responses. We present experimental results demonstrating the model’s exceptional accuracy and robustness. Key advancements include reduced calibration errors, enhanced quality in material decomposition imaging, and improved quantitative consistency. This model represents a significant stride in bridging the gap between theoretical assumptions and practical complexities of PCCT, paving the way for more precise and reliable medical imaging. | Material Decomposition in Photon-Counting CT: A Deep Learning Approach Driven by Detector Physics and ASIC Modeling | [
"Yu, Xiaopeng",
"Wu, Qianyu",
"Qin, Wenhui",
"Zhong, Tao",
"Su, Mengqing",
"Ma, Jinglu",
"Zhang, Yikun",
"Ji, Xu",
"Quan, Guotao",
"Chen, Yang",
"Du, Yanfeng",
"Lai, Xiaochun"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 219 |
||
null | https://papers.miccai.org/miccai-2024/paper/1656_paper.pdf | @InProceedings{ Li_EndoSelf_MICCAI2024,
author = { Li, Wenda and Hayashi, Yuichiro and Oda, Masahiro and Kitasaka, Takayuki and Misawa, Kazunari and Mori, Kensaku },
title = { { EndoSelf: Self-Supervised Monocular 3D Scene Reconstruction of Deformable Tissues with Neural Radiance Fields on Endoscopic Videos } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | Neural radiance field has recently emerged as a powerful representation to reconstruct deformable tissues from endoscopic videos. Previous methods mainly focus on depth-supervised approaches based on endoscopic datasets. As additional information, depth values were proven important in reconstructing deformable tissues by previous methods. However, collecting a large number of datasets with accurate depth values limits the applicability of these approaches for endoscopic scenes.
To address this issue, we propose a novel self-supervised monocular 3D scene reconstruction method based on neural radiance fields without prior depth as supervision. We consider monocular 3D reconstruction based on two approaches: ray-tracing-based neural radiance fields and structure-from-motion-based photogrammetry. We introduce a structure-from-motion framework and leverage color values as supervision to complete the self-supervised learning strategy. In addition, we predict the depth values from neural radiance fields and enforce a geometric constraint on depth values from adjacent views. Moreover, we propose a looped loss function to fully explore the temporal correlation between input images. The experimental results showed that the proposed method without prior depth outperformed the previous depth-supervised methods on two endoscopic datasets. Our code is available. | EndoSelf: Self-Supervised Monocular 3D Scene Reconstruction of Deformable Tissues with Neural Radiance Fields on Endoscopic Videos | [
"Li, Wenda",
"Hayashi, Yuichiro",
"Oda, Masahiro",
"Kitasaka, Takayuki",
"Misawa, Kazunari",
"Mori, Kensaku"
] | Conference | [
"https://github.com/MoriLabNU/EndoSelf"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Oral | 220 |
||
null | https://papers.miccai.org/miccai-2024/paper/1415_paper.pdf | @InProceedings{ Tan_HFResDiff_MICCAI2024,
author = { Tang, Zixin and Jiang, Caiwen and Cui, Zhiming and Shen, Dinggang },
title = { { HF-ResDiff: High-Frequency-guided Residual Diffusion for Multi-dose PET Reconstruction } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | Positron Emission Tomography (PET), an advanced nuclear imaging technology capable of visualizing human biological processes, plays an irreplaceable role in diagnosing various diseases. Nonetheless, PET imaging necessitates the administration of radionuclides into the human body, inevitably leading to radiation exposure. To mitigate the risk, many studies seek to reconstruct high-quality standard-dose PET from low-dose PET to reduce the required dosage of radionuclides. However, these efforts perform poorly in capturing high-frequency details in images. Meanwhile, they are limited to single-dose PET reconstruction, overlooking a clinical fact: due to inherent individual variations among patients, the actual dose level of PET images obtained can exhibit considerable discrepancies. In this paper, we propose a multi-dose PET reconstruction framework that aligns closely with clinical requirements and effectively preserves high-frequency information. Specifically, we design a High-Frequency-guided Residual Diffusion for Multi-dose PET Reconstruction (HF-ResDiff) that enhances traditional diffusion models by 1) employing a simple CNN to predict low-frequency content, allowing the diffusion model to focus more on high-frequency counterparts while significantly promoting the training efficiency, 2) incorporating a Frequency Domain Information Separator and a High-frequency-guided Cross-attention to further assist the diffusion model in accurately recovering high-frequency details, and 3) embedding a dose control module to enable the diffusion model to accommodate PET reconstruction at different dose levels. Through extensive experiments, our HF-ResDiff outperforms the state-of-the-art methods in PET reconstruction across multiple doses. | HF-ResDiff: High-Frequency-guided Residual Diffusion for Multi-dose PET Reconstruction | [
"Tang, Zixin",
"Jiang, Caiwen",
"Cui, Zhiming",
"Shen, Dinggang"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 221 |
||
null | https://papers.miccai.org/miccai-2024/paper/1466_paper.pdf | @InProceedings{ Zha_Whole_MICCAI2024,
author = { Zhang, Yundi and Chen, Chen and Shit, Suprosanna and Starck, Sophie and Rueckert, Daniel and Pan, Jiazhen },
title = { { Whole Heart 3D+T Representation Learning Through Sparse 2D Cardiac MR Images } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | Cardiac Magnetic Resonance (CMR) imaging serves as the gold-standard for evaluating cardiac morphology and function. Typically, a multi-view CMR stack, covering short-axis (SA) and 2/3/4-chamber long-axis (LA) views, is acquired for a thorough cardiac assessment. However, efficiently streamlining the complex, high-dimensional 3D+T CMR data and distilling compact, coherent representation remains a challenge. In this work, we introduce a whole-heart self-supervised learning framework that utilizes masked imaging modeling to automatically uncover the correlations between spatial and temporal patches throughout the cardiac stacks. This process facilitates the generation of meaningful and well-clustered heart representations without relying on the traditionally required, and often costly, labeled data. The learned heart representation can be directly used for various downstream tasks. Furthermore, our method demonstrates remarkable robustness, ensuring consistent representations even when certain CMR planes are missing/flawed. We train our model on 14,000 unlabeled CMR data from UK BioBank and evaluate it on 1,000 annotated data. The proposed method demonstrates superior performance to baselines in tasks that demand comprehensive 3D+T cardiac information, e.g. cardiac phenotype (ejection fraction and ventricle volume) prediction and multi-plane/multi-frame CMR segmentation, highlighting its effectiveness in extracting comprehensive cardiac features that are both anatomically and pathologically relevant. The code is available at https://github.com/Yundi-Zhang/WholeHeartRL.git. | Whole Heart 3D+T Representation Learning Through Sparse 2D Cardiac MR Images | [
"Zhang, Yundi",
"Chen, Chen",
"Shit, Suprosanna",
"Starck, Sophie",
"Rueckert, Daniel",
"Pan, Jiazhen"
] | Conference | 2406.00329 | [
"https://github.com/Yundi-Zhang/WholeHeartRL.git"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 222 |
|
null | https://papers.miccai.org/miccai-2024/paper/0511_paper.pdf | @InProceedings{ Pug_Enhancing_MICCAI2024,
author = { Puglisi, Lemuel and Alexander, Daniel C. and Ravì, Daniele },
title = { { Enhancing Spatiotemporal Disease Progression Models via Latent Diffusion and Prior Knowledge } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | In this work, we introduce Brain Latent Progression (BrLP), a novel spatiotemporal disease progression model based on latent diffusion. BrLP is designed to predict the evolution of diseases at the individual level on 3D brain MRIs. Existing deep generative models developed for this task are primarily data-driven and face challenges in learning disease progressions. BrLP addresses these challenges by incorporating prior knowledge from disease models to enhance the accuracy of predictions. To implement this, we propose to integrate an auxiliary model that infers volumetric changes in various brain regions. Additionally, we introduce Latent Average Stabilization (LAS), a novel technique to improve spatiotemporal consistency of the predicted progression. BrLP is trained and evaluated on a large dataset comprising 11,730 T1-weighted brain MRIs from 2,805 subjects, collected from three publicly available, longitudinal Alzheimer’s Disease (AD) studies. In our experiments, we compare the MRI scans generated by BrLP with the actual follow-up MRIs available from the subjects, in both cross-sectional and longitudinal settings. BrLP demonstrates significant improvements over existing methods, with an increase of 22% in volumetric accuracy across AD-related brain regions and 43% in image similarity to the ground-truth scans. The ability of BrLP to generate conditioned 3D scans at the subject level, along with the novelty of integrating prior knowledge to enhance accuracy, represents a significant advancement in disease progression modeling, opening new avenues for precision medicine. The code of BrLP is available at the following link: https://github.com/LemuelPuglisi/BrLP. | Enhancing Spatiotemporal Disease Progression Models via Latent Diffusion and Prior Knowledge | [
"Puglisi, Lemuel",
"Alexander, Daniel C.",
"Ravì, Daniele"
] | Conference | 2405.03328 | [
"https://github.com/LemuelPuglisi/BrLP"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Oral | 223 |
|
null | https://papers.miccai.org/miccai-2024/paper/1110_paper.pdf | @InProceedings{ Mor_Topological_MICCAI2024,
author = { Morlana, Javier and Tardós, Juan D. and Montiel, José M. M. },
title = { { Topological SLAM in colonoscopies leveraging deep features and topological priors } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | We introduce ColonSLAM, a system that combines classical multiple-map metric SLAM with deep features and topological priors to create topological maps of the whole colon. The SLAM pipeline by itself is able to create disconnected individual metric submaps representing locations from short video subsections of the colon, but is not able to merge covisible submaps due to deformations and the limited performance of the SIFT descriptor in the medical domain. ColonSLAM is guided by topological priors and combines a deep localization network trained to distinguish if two images come from the same place or not and the soft verification of a transformer-based matching network, being able to relate far-in-time submaps during an exploration, grouping them in nodes imaging the same colon place, building more complex maps than any other approach in the literature. We demonstrate our approach in the Endomapper dataset, showing its potential for producing maps of the whole colon in real human explorations. Code and models are available at: https://github.com/endomapper/ColonSLAM | Topological SLAM in colonoscopies leveraging deep features and topological priors | [
"Morlana, Javier",
"Tardós, Juan D.",
"Montiel, José M. M."
] | Conference | 2409.16806 | [
"https://github.com/endomapper/ColonSLAM"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Oral | 224 |
|
null | https://papers.miccai.org/miccai-2024/paper/2366_paper.pdf | @InProceedings{ Ceb_Vesselaware_MICCAI2024,
author = { Ceballos-Arroyo, Alberto M. and Nguyen, Hieu T. and Zhu, Fangrui and Yadav, Shrikanth M. and Kim, Jisoo and Qin, Lei and Young, Geoffrey and Jiang, Huaizu },
title = { { Vessel-aware aneurysm detection using multi-scale deformable 3D attention } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | Manual detection of intracranial aneurysms in computed tomography (CT) scans is a complex, time-consuming task even for expert clinicians, and automating the process is no less challenging. Critical difficulties associated with detecting aneurysms include their small (yet varied) size compared to scans and a high potential for false positive (FP) predictions. To address these issues, we propose a 3D, multi-scale neural architecture that detects aneurysms via a deformable attention mechanism that operates on vessel distance maps derived from vessel segmentations and 3D features extracted from the layers of a convolutional network. Likewise, we reformulate aneurysm segmentation as bounding cuboid prediction using binary cross entropy and three localization losses (location, size, IoU). Given three validation sets comprised of 152/138/38 CT scans and containing 126/101/58 aneurysms, we achieved a Sensitivity of 91.3%/97.0%/74.1% @ FP rates 0.53/0.56/0.87, with Sensitivity
around 80% on small aneurysms. Manual inspection of outputs by experts showed our model only tends to miss aneurysms located in unusual locations. Code and model weights are available online. | Vessel-aware aneurysm detection using multi-scale deformable 3D attention | [
"Ceballos-Arroyo, Alberto M.",
"Nguyen, Hieu T.",
"Zhu, Fangrui",
"Yadav, Shrikanth M.",
"Kim, Jisoo",
"Qin, Lei",
"Young, Geoffrey",
"Jiang, Huaizu"
] | Conference | [
"https://github.com/alceballosa/deform-aneurysm-detection"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 225 |
||
null | https://papers.miccai.org/miccai-2024/paper/2127_paper.pdf | @InProceedings{ Don_Prompt_MICCAI2024,
author = { Dong, Zijian and Wu, Yilei and Chen, Zijiao and Zhang, Yichi and Jin, Yueming and Zhou, Juan Helen },
title = { { Prompt Your Brain: Scaffold Prompt Tuning for Efficient Adaptation of fMRI Pre-trained Model } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | We introduce Scaffold Prompt Tuning (ScaPT), a novel prompt-based framework for adapting large-scale functional magnetic resonance imaging (fMRI) pre-trained models to downstream tasks, with high parameter efficiency and improved performance compared to fine-tuning and baselines for prompt tuning. The full fine-tuning updates all pre-trained parameters, which may distort the learned feature space and lead to overfitting with limited training data which is common in fMRI fields. In contrast, we design a hierarchical prompt structure that transfers the knowledge learned from high-resource tasks to low-resource ones. This structure, equipped with a Deeply-conditioned Input-Prompt (DIP) mapping module, allows for efficient adaptation by updating only 2% of the trainable parameters. The framework enhances semantic interpretability through attention mechanisms between inputs and prompts, and it clusters prompts in the latent space in alignment with prior knowledge. Experiments on public resting state fMRI datasets reveal ScaPT outperforms fine-tuning and multitask-based prompt tuning in neurodegenerative diseases diagnosis/prognosis and personality trait prediction, even with fewer than 20 participants. It highlights ScaPT’s efficiency in adapting pre-trained fMRI models to low-resource tasks. | Prompt Your Brain: Scaffold Prompt Tuning for Efficient Adaptation of fMRI Pre-trained Model | [
"Dong, Zijian",
"Wu, Yilei",
"Chen, Zijiao",
"Zhang, Yichi",
"Jin, Yueming",
"Zhou, Juan Helen"
] | Conference | 2408.10567 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 226 |
|
null | https://papers.miccai.org/miccai-2024/paper/3515_paper.pdf | @InProceedings{ Nih_Estimation_MICCAI2024,
author = { Nihalaani, Rachaell and Kataria, Tushar and Adams, Jadie and Elhabian, Shireen Y. },
title = { { Estimation and Analysis of Slice Propagation Uncertainty in 3D Anatomy Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15010 },
month = {October},
pages = { pending },
} | Supervised methods for 3D anatomy segmentation demonstrate superior performance but are often limited by the availability of annotated data. This limitation has led to a growing interest in self-supervised approaches in tandem with the abundance of available un-annotated data. Slice propagation has recently emerged as a self-supervised approach that leverages slice registration as a self-supervised task to achieve full anatomy segmentation with minimal supervision. This approach significantly reduces the need for domain expertise, time, and the cost associated with building fully annotated datasets required for training segmentation networks. However, this shift toward reduced supervision via deterministic networks raises concerns about the trustworthiness and reliability of predictions, especially when compared with more accurate supervised approaches. To address this concern, we propose the
integration of calibrated uncertainty quantification (UQ) into slice propagation methods, providing insights into the model’s predictive reliability and confidence levels. Incorporating uncertainty measures enhances user confidence in self-supervised approaches, thereby improving their practical applicability. We conducted experiments on three datasets for 3D abdominal segmentation using five different UQ methods. The results illustrate that incorporating UQ not only improves model trustworthiness, but also segmentation accuracy. Furthermore, our analysis reveals various failure modes of slice propagation methods that might not be immediately apparent to end-users. This opens up new research avenues to improve the accuracy and trustworthiness of slice propagation methods. | Estimation and Analysis of Slice Propagation Uncertainty in 3D Anatomy Segmentation | [
"Nihalaani, Rachaell",
"Kataria, Tushar",
"Adams, Jadie",
"Elhabian, Shireen Y."
] | Conference | 2403.12290 | [
"https://github.com/RachaellNihalaani/SlicePropUQ"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 227 |
|
null | https://papers.miccai.org/miccai-2024/paper/1627_paper.pdf | @InProceedings{ Liu_SwinUMamba_MICCAI2024,
author = { Liu, Jiarun and Yang, Hao and Zhou, Hong-Yu and Xi, Yan and Yu, Lequan and Li, Cheng and Liang, Yong and Shi, Guangming and Yu, Yizhou and Zhang, Shaoting and Zheng, Hairong and Wang, Shanshan },
title = { { Swin-UMamba: Mamba-based UNet with ImageNet-based pretraining } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | Accurate medical image segmentation demands the integration of multi-scale information, spanning from local features to global dependencies. However, it is challenging for existing methods to model long-range global information, where convolutional neural networks are constrained by their local receptive fields, and vision transformers suffer from the high quadratic complexity of their attention mechanism. Recently, Mamba-based models have gained great attention for their impressive ability in long sequence modeling. Several studies have demonstrated that these models can outperform popular vision models in various tasks, offering higher accuracy, lower memory consumption, and less computational burden. However, existing Mamba-based models are mostly trained from scratch and do not explore the power of pretraining, which has been proven to be quite effective for data-efficient medical image analysis. This paper introduces a novel Mamba-based model, Swin-UMamba, designed specifically for medical image segmentation tasks, leveraging the advantages of ImageNet-based pretraining. Our experimental results reveal the vital role of ImageNet-based training in enhancing the performance of Mamba-based models. Swin-UMamba demonstrates superior performance with a large margin compared to CNNs, ViTs, and the latest Mamba-based models. Notably, on the AbdomenMRI, Endoscopy, and Microscopy datasets, Swin-UMamba outperforms its closest counterpart U-Mamba by an average score of 2.72%. The code and models of Swin-UMamba are publicly available at: https://github.com/JiarunLiu/Swin-UMamba. | Swin-UMamba: Mamba-based UNet with ImageNet-based pretraining | [
"Liu, Jiarun",
"Yang, Hao",
"Zhou, Hong-Yu",
"Xi, Yan",
"Yu, Lequan",
"Li, Cheng",
"Liang, Yong",
"Shi, Guangming",
"Yu, Yizhou",
"Zhang, Shaoting",
"Zheng, Hairong",
"Wang, Shanshan"
] | Conference | 2402.03302 | [
"https://github.com/JiarunLiu/Swin-UMamba"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 228 |
|
null | https://papers.miccai.org/miccai-2024/paper/3995_paper.pdf | @InProceedings{ Tap_SuperField_MICCAI2024,
author = { Tapp, Austin and Zhao, Can and Roth, Holger R. and Tanedo, Jeffrey and Anwar, Syed Muhammad and Bourke, Niall J. and Hajnal, Joseph V. and Nankabirwa, Victoria and Deoni, Sean and Lepore, Natasha and Linguraru, Marius George },
title = { { Super-Field MRI Synthesis for Infant Brains Enhanced by Dual Channel Latent Diffusion } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | In resource-limited settings, portable ultra-low-field (uLF, i.e., 0.064T) magnetic resonance imaging (MRI) systems expand accessibility of radiological scanning, particularly for low-income areas as well as underserved populations like neonates and infants. However, compared to high-field (HF, e.g., ≥ 1.5T) systems, inferior image quality in uLF scanning poses challenges for research and clinical use. To address this, we introduce Super-Field Network (SFNet), a custom swinUNETRv2 with generative adversarial network components that uses uLF MRIs to generate super-field (SF) images comparable to HF MRIs. We acquired a cohort of infant data (n=30, aged 0-2 years) with paired uLF-HF MRI data from a resource-limited setting with an underrepresented population in research. To enhance the small dataset, we present a novel use of latent diffusion to create dual-channel (uLF-HF) paired MRIs. We compare SFNet with state-of-the-art synthesis methods by HF-SF image similarity perceptual scores and by automated HF and SF segmentations of white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). The best performance was achieved by SFNet trained on the latent diffusion enhanced dataset yielding state-of-the-art results in Fréchet inception distance at 9.08 ± 1.21, perceptual similarity at 0.11 ± 0.01, and PSNR at 22.64 ± 1.31. True HF and SF segmentations had a strong overlap with Dice similarity coefficients of 0.71 ± 0.1, 0.79 ± 0.2, and 0.73 ± 0.08 for WM, GM, and CSF, respectively, in the developing infant brain with incomplete myelination, and displayed 166%, 107%, and 106% improvement over respective uLF-based segmentation metrics. SF MRI supports health equity by enhancing the clinical use of uLF imaging systems and improving the diagnostic capabilities of low-cost portable MRI systems in resource-limited settings and for underserved populations. Our code is made openly available at https://github.com/AustinTapp/SFnet. | Super-Field MRI Synthesis for Infant Brains Enhanced by Dual Channel Latent Diffusion | [
"Tapp, Austin",
"Zhao, Can",
"Roth, Holger R.",
"Tanedo, Jeffrey",
"Anwar, Syed Muhammad",
"Bourke, Niall J.",
"Hajnal, Joseph V.",
"Nankabirwa, Victoria",
"Deoni, Sean",
"Lepore, Natasha",
"Linguraru, Marius George"
] | Conference | [
"https://github.com/AustinTapp/SFnet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 229 |
||
null | https://papers.miccai.org/miccai-2024/paper/1232_paper.pdf | @InProceedings{ Wan_EndoGSLAM_MICCAI2024,
author = { Wang, Kailing and Yang, Chen and Wang, Yuehao and Li, Sikuang and Wang, Yan and Dou, Qi and Yang, Xiaokang and Shen, Wei },
title = { { EndoGSLAM: Real-Time Dense Reconstruction and Tracking in Endoscopic Surgeries using Gaussian Splatting } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | Precise camera tracking, high-fidelity 3D tissue reconstruction, and real-time online visualization are critical for intrabody medical imaging devices such as endoscopes and capsule robots. However, existing SLAM (Simultaneous Localization and Mapping) methods often struggle to achieve both complete high-quality surgical field reconstruction and efficient computation, restricting their intraoperative applications among endoscopic surgeries. In this paper, we introduce EndoGSLAM, an efficient SLAM approach for endoscopic surgeries, which integrates streamlined Gaussian representation and differentiable rasterization to facilitate over 100 fps rendering speed during online camera tracking and tissue reconstructing. Extensive experiments show that EndoGSLAM achieves a better trade-off between intraoperative availability and reconstruction quality than traditional or neural SLAM approaches, showing tremendous potential for endoscopic surgeries. | EndoGSLAM: Real-Time Dense Reconstruction and Tracking in Endoscopic Surgeries using Gaussian Splatting | [
"Wang, Kailing",
"Yang, Chen",
"Wang, Yuehao",
"Li, Sikuang",
"Wang, Yan",
"Dou, Qi",
"Yang, Xiaokang",
"Shen, Wei"
] | Conference | 2403.15124 | [
"https://github.com/Loping151/EndoGSLAM"
] | https://huggingface.co/papers/2403.15124 | 0 | 1 | 0 | 8 | [] | [] | [] | [] | [] | [] | 1 | Poster | 230 |
null | https://papers.miccai.org/miccai-2024/paper/0448_paper.pdf | @InProceedings{ Liu_TaggedtoCine_MICCAI2024,
author = { Liu, Xiaofeng and Xing, Fangxu and Bian, Zhangxing and Arias-Vergara, Tomas and Pérez-Toro, Paula Andrea and Maier, Andreas and Stone, Maureen and Zhuo, Jiachen and Prince, Jerry L. and Woo, Jonghye },
title = { { Tagged-to-Cine MRI Sequence Synthesis via Light Spatial-Temporal Transformer } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | Tagged magnetic resonance imaging (MRI) has been successfully used to track the motion of internal tissue points within moving organs. Typically, to analyze motion using tagged MRI, cine MRI data in the same coordinate system are acquired, incurring additional time and costs. Consequently, tagged-to-cine MR synthesis holds the potential to reduce the extra acquisition time and costs associated with cine MRI, without disrupting downstream motion analysis tasks. Previous approaches have processed each frame independently, thereby overlooking the fact that complementary information from occluded regions of the tag patterns could be present in neighboring frames exhibiting motion. Furthermore, the inconsistent visual appearance, e.g., tag fading, across frames can reduce synthesis performance. To address this, we propose an efficient framework for tagged-to-cine MR sequence synthesis, leveraging both spatial and temporal information with relatively limited data. Specifically, we follow a split-and-integral protocol to balance spatial-temporal modeling efficiency and consistency. The light spatial-temporal transformer (LiST^2) is designed to exploit the local and global attention in the motion sequence with relatively lightweight training parameters. The directional product relative position-time bias is adapted to make the model aware of the spatial-temporal correlation, while the shifted window is used for motion alignment. Then, a recurrent sliding fine-tuning (ReST) scheme is applied to further enhance the temporal consistency. Our framework is evaluated on paired tagged and cine MRI sequences, demonstrating superior performance over comparison methods. | Tagged-to-Cine MRI Sequence Synthesis via Light Spatial-Temporal Transformer | [
"Liu, Xiaofeng",
"Xing, Fangxu",
"Bian, Zhangxing",
"Arias-Vergara, Tomas",
"Pérez-Toro, Paula Andrea",
"Maier, Andreas",
"Stone, Maureen",
"Zhuo, Jiachen",
"Prince, Jerry L.",
"Woo, Jonghye"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 231 |
||
null | https://papers.miccai.org/miccai-2024/paper/0960_paper.pdf | @InProceedings{ Li_Textmatch_MICCAI2024,
author = { Li, Aibing and Zeng, Xinyi and Zeng, Pinxian and Ding, Sixian and Wang, Peng and Wang, Chengdi and Wang, Yan },
title = { { Textmatch: Using Text Prompts to Improve Semi-supervised Medical Image Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15008 },
month = {October},
pages = { pending },
} | Semi-supervised learning, a paradigm involving training models with limited labeled data alongside abundant unlabeled images, has significantly advanced medical image segmentation. However, the absence of label supervision introduces noise during training, posing a challenge in achieving a well-clustered feature space essential for acquiring discriminative representations in segmentation tasks. In this context, the emergence of vision-language (VL) models in natural image processing has showcased promising capabilities in aiding object localization through the utilization of text prompts, demonstrating potential as an effective solution for addressing annotation scarcity. Building upon this insight, we present Textmatch, a novel framework that leverages text prompts to enhance segmentation performance in semi-supervised medical image segmentation. Specifically, our approach introduces a Bilateral Prompt Decoder (BPD) to address modal discrepancies between visual and linguistic features, facilitating the extraction of complementary information from multi-modal data. Then, we propose the Multi-views Consistency Regularization (MCR) strategy to ensure consistency among multiple views derived from perturbations in both image and text domains, reducing the impact of noise and generating more reliable pseudo-labels. Furthermore, we leverage these pseudo-labels and conduct Pseudo-Label Guided Contrastive Learning (PGCL) in the feature space to encourage intra-class aggregation and inter-class separation between features and prototypes, thus enhancing the generation of more discriminative representations for segmentation. Extensive experiments on two publicly available datasets demonstrate that our framework outperforms previous methods employing image-only and multi-modal approaches, establishing a new state-of-the-art performance. | Textmatch: Using Text Prompts to Improve Semi-supervised Medical Image Segmentation | [
"Li, Aibing",
"Zeng, Xinyi",
"Zeng, Pinxian",
"Ding, Sixian",
"Wang, Peng",
"Wang, Chengdi",
"Wang, Yan"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 232 |
||
null | https://papers.miccai.org/miccai-2024/paper/1608_paper.pdf | @InProceedings{ She_DCDiff_MICCAI2024,
author = { Shen, Ruochong and Li, Xiaoxu and Li, Yuan-Fang and Sui, Chao and Peng, Yu and Ke, Qiuhong },
title = { { DCDiff: Dual-Domain Conditional Diffusion for CT Metal Artifact Reduction } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | Metallic implants in X-ray Computed Tomography (CT) scans can lead to undesirable artifacts, adversely affecting the quality of images and, consequently, the effectiveness of clinical treatment. Metal Artifact Reduction (MAR) is essential for improving diagnostic accuracy, yet this task is challenging due to the uncertainty associated with the affected regions. In this paper, inspired by the capabilities of diffusion models in generating high-quality images, we present a novel MAR framework termed Dual-Domain Conditional Diffusion (DCDiff). Specifically, our DCDiff takes dual-domain information as the input conditions for generating clean images: 1) the image domain, incorporating the raw CT image and the filtered back projection (FBP) output of the metal trace, and 2) the sinogram domain, achieved with a new diffusion interpolation algorithm. Experimental results demonstrate that our DCDiff outperforms state-of-the-art methods, showcasing its effectiveness for MAR. | DCDiff: Dual-Domain Conditional Diffusion for CT Metal Artifact Reduction | [
"Shen, Ruochong",
"Li, Xiaoxu",
"Li, Yuan-Fang",
"Sui, Chao",
"Peng, Yu",
"Ke, Qiuhong"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 233 |
||
null | https://papers.miccai.org/miccai-2024/paper/1667_paper.pdf | @InProceedings{ Mat_Learning_MICCAI2024,
author = { Matsuo, Shinnosuke and Suehiro, Daiki and Uchida, Seiichi and Ito, Hiroaki and Terada, Kazuhiro and Yoshizawa, Akihiko and Bise, Ryoma },
title = { { Learning from Partial Label Proportions for Whole Slide Image Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | In this paper, we address the segmentation of tumor subtypes in whole slide images (WSI) by utilizing incomplete label proportions. Specifically, we utilize ‘partial’ label proportions, which give the proportions among tumor subtypes but do not give the proportion between tumor and non-tumor. Partial label proportions are recorded as the standard diagnostic information by pathologists, and we, therefore, want to use them for realizing the segmentation model that can classify each WSI patch into one of the tumor subtypes or non-tumor. We call this problem ‘learning from partial label proportions (LPLP)’ and formulate the problem as a weakly supervised learning problem. Then, we propose an efficient algorithm for this challenging problem by decomposing it into two weakly supervised learning subproblems: multiple instance learning (MIL) and learning from label proportions (LLP). These subproblems are optimized efficiently in an end-to-end manner. The effectiveness of our algorithm is demonstrated through experiments conducted on two WSI datasets. The code is available at https://github.com/matsuo-shinnosuke/LPLP. | Learning from Partial Label Proportions for Whole Slide Image Segmentation | [
"Matsuo, Shinnosuke",
"Suehiro, Daiki",
"Uchida, Seiichi",
"Ito, Hiroaki",
"Terada, Kazuhiro",
"Yoshizawa, Akihiko",
"Bise, Ryoma"
] | Conference | 2405.09041 | [
"https://github.com/matsuo-shinnosuke/LPLP"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 234 |
|
null | https://papers.miccai.org/miccai-2024/paper/2251_paper.pdf | @InProceedings{ Woo_Feature_MICCAI2024,
author = { Woodland, McKell and Castelo, Austin and Al Taie, Mais and Albuquerque Marques Silva, Jessica and Eltaher, Mohamed and Mohn, Frank and Shieh, Alexander and Kundu, Suprateek and Yung, Joshua P. and Patel, Ankit B. and Brock, Kristy K. },
title = { { Feature Extraction for Generative Medical Imaging Evaluation: New Evidence Against an Evolving Trend } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | Fréchet Inception Distance (FID) is a widely used metric for assessing synthetic image quality. It relies on an ImageNet-based feature extractor, making its applicability to medical imaging unclear. A recent trend is to adapt FID to medical imaging through feature extractors trained on medical images. Our study challenges this practice by demonstrating that ImageNet-based extractors are more consistent and aligned with human judgment than their RadImageNet counterparts. We evaluated sixteen StyleGAN2 networks across four medical imaging modalities and four data augmentation techniques with Fréchet distances (FDs) computed using eleven ImageNet or RadImageNet-trained feature extractors. Comparison with human judgment via visual Turing tests revealed that ImageNet-based extractors produced rankings consistent with human judgment, with the FD derived from the ImageNet-trained SwAV extractor significantly correlating with expert evaluations. In contrast, RadImageNet-based rankings were volatile and inconsistent with human judgment. Our findings challenge prevailing assumptions, providing novel evidence that medical image-trained feature extractors do not inherently improve FDs and can even compromise their reliability. Our code is available at https://github.com/mckellwoodland/fid-med-eval. | Feature Extraction for Generative Medical Imaging Evaluation: New Evidence Against an Evolving Trend | [
"Woodland, McKell",
"Castelo, Austin",
"Al Taie, Mais",
"Albuquerque Marques Silva, Jessica",
"Eltaher, Mohamed",
"Mohn, Frank",
"Shieh, Alexander",
"Kundu, Suprateek",
"Yung, Joshua P.",
"Patel, Ankit B.",
"Brock, Kristy K."
] | Conference | 2311.13717 | [
"https://github.com/mckellwoodland/fid-med-eval"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 235 |
|
null | https://papers.miccai.org/miccai-2024/paper/2483_paper.pdf | @InProceedings{ Lee_Referencefree_MICCAI2024,
author = { Lee, Kyungryun and Jeong, Won-Ki },
title = { { Reference-free Axial Super-resolution of 3D Microscopy Images using Implicit Neural Representation with a 2D Diffusion Prior } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | Analysis and visualization of 3D microscopy images pose
challenges due to anisotropic axial resolution, demanding volumetric super-resolution along the axial direction. While training a learning-based 3D super-resolution model seems to be a straightforward solution, it requires ground truth isotropic volumes and suffers from the curse of dimensionality. Therefore, existing methods utilize 2D neural networks to reconstruct each axial slice, eventually piecing together the entire volume. However, reconstructing each slice in the pixel domain fails to
give consistent reconstruction in all directions, leading to misalignment artifacts. In this work, we present a reconstruction framework based on implicit neural representation (INR), which allows 3D coherency even when optimized by independent axial slices in a batch-wise manner. Our method optimizes a continuous volumetric representation from low-resolution axial slices, using a 2D diffusion prior trained on high-resolution lateral slices without requiring isotropic volumes. Through experiments on real and synthetic anisotropic microscopy images, we demonstrate that our method surpasses other state-of-the-art reconstruction methods. | Reference-free Axial Super-resolution of 3D Microscopy Images using Implicit Neural Representation with a 2D Diffusion Prior | [
"Lee, Kyungryun",
"Jeong, Won-Ki"
] | Conference | 2408.08616 | [
"https://github.com/hvcl/INR-diffusion"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 236 |
|
null | https://papers.miccai.org/miccai-2024/paper/1577_paper.pdf | @InProceedings{ Wu_FACMIC_MICCAI2024,
author = { Wu, Yihang and Desrosiers, Christian and Chaddad, Ahmad },
title = { { FACMIC: Federated Adaptative CLIP Model for Medical Image Classification } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | Federated learning (FL) has emerged as a promising approach to medical image analysis that allows deep model training using decentralized data while ensuring data privacy. However, in the field of FL, communication cost plays a critical role in evaluating the performance of the model. Thus, transferring vision foundation models can be particularly challenging due to the significant resource costs involved. In this paper, we introduce a federated adaptive Contrastive Language Image Pretraining (CLIP) model designed for classification tasks. We employ a light-weight and efficient feature attention module for CLIP that selects suitable features for each client’s data. Additionally, we propose a domain adaptation technique to reduce differences in data distribution between clients.
Experimental results on four publicly available datasets demonstrate the superior performance of FACMIC in dealing with real-world and multisource medical imaging data. Our codes are available at https://github.com/AIPMLab/FACMIC. | FACMIC: Federated Adaptative CLIP Model for Medical Image Classification | [
"Wu, Yihang",
"Desrosiers, Christian",
"Chaddad, Ahmad"
] | Conference | 2410.14707 | [
"https://github.com/AIPMLab/FACMIC"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 237 |
|
null | https://papers.miccai.org/miccai-2024/paper/1889_paper.pdf | @InProceedings{ Ama_Goalconditioned_MICCAI2024,
author = { Amadou, Abdoul Aziz and Singh, Vivek and Ghesu, Florin C. and Kim, Young-Ho and Stanciulescu, Laura and Sai, Harshitha P. and Sharma, Puneet and Young, Alistair and Rajani, Ronak and Rhode, Kawal },
title = { { Goal-conditioned reinforcement learning for ultrasound navigation guidance } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Transesophageal echocardiography (TEE) plays a pivotal role in cardiology for diagnostic and interventional procedures. However, using it effectively requires extensive training due to the intricate nature of image acquisition and interpretation. To enhance the efficiency of novice sonographers and reduce variability in scan acquisitions, we propose a novel ultrasound (US) navigation assistance method based on contrastive learning as goal-conditioned reinforcement learning (GCRL). We augment the previous framework using a novel contrastive patient batching method (CPB) and a data-augmented contrastive loss, both of which we demonstrate are essential to ensure generalization to anatomical variations across patients. The proposed framework enables navigation to both standard diagnostic as well as intricate interventional views with a single model. Our method was developed with a large dataset of 789 patients and obtained an average error of 6.56 mm in position and 9.36 degrees in angle on a testing dataset of 140 patients, which is competitive or superior to models trained on individual views. Furthermore, we quantitatively validate our method’s ability to navigate to interventional views such as the Left Atrial Appendage (LAA) view used in LAA closure. Our approach holds promise in providing valuable guidance during transesophageal ultrasound examinations, contributing to the advancement of skill acquisition for cardiac ultrasound practitioners. | Goal-conditioned reinforcement learning for ultrasound navigation guidance | [
"Amadou, Abdoul Aziz",
"Singh, Vivek",
"Ghesu, Florin C.",
"Kim, Young-Ho",
"Stanciulescu, Laura",
"Sai, Harshitha P.",
"Sharma, Puneet",
"Young, Alistair",
"Rajani, Ronak",
"Rhode, Kawal"
] | Conference | 2405.01409 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 238 |
|
null | https://papers.miccai.org/miccai-2024/paper/1536_paper.pdf | @InProceedings{ Shi_MoRA_MICCAI2024,
author = { Shi, Zhiyi and Kim, Junsik and Li, Wanhua and Li, Yicong and Pfister, Hanspeter },
title = { { MoRA: LoRA Guided Multi-Modal Disease Diagnosis with Missing Modality } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | Multi-modal pre-trained models efficiently extract and fuse features from different modalities with low memory requirements for fine-tuning. Despite this efficiency, their application in disease diagnosis is under-explored. A significant challenge is the frequent occurrence of missing modalities, which impairs performance. Additionally, fine-tuning the entire pre-trained model demands substantial computational resources. To address these issues, we introduce Modality-aware Low-Rank Adaptation (MoRA), a computationally efficient method. MoRA projects each input to a low intrinsic dimension but uses different modality-aware up-projections for modality-specific adaptation in cases of missing modalities. Practically, MoRA integrates into the first block of the model, significantly improving performance when a modality is missing. It requires minimal computational resources, with less than 1.6% of the trainable parameters needed compared to training the entire model. Experimental results show that MoRA outperforms existing techniques in disease diagnosis, demonstrating superior performance, robustness, and training efficiency. The code link is: https://github.com/zhiyiscs/MoRA. | MoRA: LoRA Guided Multi-Modal Disease Diagnosis with Missing Modality | [
"Shi, Zhiyi",
"Kim, Junsik",
"Li, Wanhua",
"Li, Yicong",
"Pfister, Hanspeter"
] | Conference | 2408.09064 | [
"https://github.com/zhiyiscs/MoRA"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 239 |
|
null | https://papers.miccai.org/miccai-2024/paper/0483_paper.pdf | @InProceedings{ Tme_Deep_MICCAI2024,
author = { Tmenova, Oleksandra and Velikova, Yordanka and Saleh, Mahdi and Navab, Nassir },
title = { { Deep Spectral Methods for Unsupervised Ultrasound Image Interpretation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Ultrasound imaging is challenging to interpret due to non-uniform intensities, low contrast, and inherent artifacts, necessitating extensive training for non-specialists. Advanced representation with clear tissue structure separation could greatly assist clinicians in mapping underlying anatomy and distinguishing between tissue layers. Decomposing an image into semantically meaningful segments is mainly achieved using supervised segmentation algorithms. Unsupervised methods are beneficial, as acquiring large labeled datasets is difficult and costly, but despite their advantages, they still need to be explored in ultrasound. This paper proposes a novel unsupervised deep learning strategy tailored to ultrasound to obtain easily interpretable tissue separations. We integrate key concepts from unsupervised deep spectral methods, which combine spectral graph theory with deep learning methods. We utilize self-supervised transformer features for spectral clustering to generate meaningful segments based on ultrasound-specific metrics and shape and positional priors, ensuring semantic consistency across the dataset. We evaluate our unsupervised deep learning strategy on three ultrasound datasets, showcasing qualitative results across anatomical contexts without label requirements. We also conduct a comparative analysis against other clustering algorithms to demonstrate superior segmentation performance, boundary preservation, and label consistency. | Deep Spectral Methods for Unsupervised Ultrasound Image Interpretation | [
"Tmenova, Oleksandra",
"Velikova, Yordanka",
"Saleh, Mahdi",
"Navab, Nassir"
] | Conference | 2408.02043 | [
"https://github.com/alexaatm/UnsupervisedSegmentor4Ultrasound"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 240 |
|
null | https://papers.miccai.org/miccai-2024/paper/1262_paper.pdf | @InProceedings{ Wu_Noise_MICCAI2024,
author = { Wu, Chongwei and Zeng, Xiaoyu and Wang, Hao and Zhang, Xu and Fang, Wei and Li, Qiang and Wang, Zhiwei },
title = { { Noise Removed Inconsistency Activation Map for Unsupervised Registration of Brain Tumor MRI between Pre-operative and Follow-up Phases } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | Structure inconsistency is the key challenge in registration of brain MRI between pre-operative and follow-up phases, which misguides the objective of image similarity maximization, and thus degrades the performance significantly. The current solutions rely on bidirectional registration to find the mismatched deformation fields as the inconsistent areas, and use them to filter out the unreliable similarity measurements. However, this is sensitive to the accumulated registration errors, and thus yields inaccurate inconsistent areas. In this paper, we provide a more efficient and accurate way, by letting the registration model itself ‘speak out’ a Noise Removed Inconsistency Activation Map (NR-IAM) as the indicator of structure inconsistencies. We first obtain an IAM by use of the gradient-weighted feature maps but adopting an inverse direction. With this manner alone, the resulting inconsistency map often exhibits false highlights near common structures such as the venous sinus. Therefore, we further introduce a statistical approach to remove the common erroneous activations in IAM to obtain NR-IAM. The experimental results on both public and private datasets demonstrate that by use of our proposed NR-IAM to guide the optimization, the registration performance can be significantly boosted, and is superior to that relying on bidirectional registration, decreasing the mean registration error by 5% and 4% in near-tumor and far-from-tumor regions, respectively. Codes are available at https://github.com/chongweiwu/NR-IAM. | Noise Removed Inconsistency Activation Map for Unsupervised Registration of Brain Tumor MRI between Pre-operative and Follow-up Phases | [
"Wu, Chongwei",
"Zeng, Xiaoyu",
"Wang, Hao",
"Zhang, Xu",
"Fang, Wei",
"Li, Qiang",
"Wang, Zhiwei"
] | Conference | [
"https://github.com/chongweiwu/NR-IAM"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 241 |
||
null | https://papers.miccai.org/miccai-2024/paper/2168_paper.pdf | @InProceedings{ Xio_MoME_MICCAI2024,
author = { Xiong, Conghao and Chen, Hao and Zheng, Hao and Wei, Dong and Zheng, Yefeng and Sung, Joseph J. Y. and King, Irwin },
title = { { MoME: Mixture of Multimodal Experts for Cancer Survival Prediction } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Survival prediction requires integrating Whole Slide Images (WSIs) and genomics, a task complicated by significant heterogeneity and complex inter- and intra-modal interactions between modalities. Previous methods used co-attention, fusing features only once after separate encoding, which is insufficient to model such a complex task due to modality heterogeneity. To this end, we propose a Biased Progressive Encoding (BPE) paradigm, performing encoding and fusion simultaneously. This paradigm uses one modality as a reference when encoding the other, fostering deep fusion of the modalities through multiple iterations, progressively reducing the cross-modal disparities and facilitating complementary interactions. Besides, survival prediction involves biomarkers from WSIs, genomics, and their integrative analysis. Key biomarkers may exist in different modalities under individual variations, necessitating the model flexibility. Hence, we further propose a Mixture of Multimodal Experts layer to dynamically select tailored experts in each stage of the BPE paradigm. Experts incorporate reference information from another modality to varying degrees, enabling a balanced or biased focus on different modalities during the encoding process. The experimental results demonstrate the superior performance of our method on various datasets, including TCGA-BLCA, TCGA-UCEC and TCGA-LUAD. Codes are available at https://github.com/BearCleverProud/MoME. | MoME: Mixture of Multimodal Experts for Cancer Survival Prediction | [
"Xiong, Conghao",
"Chen, Hao",
"Zheng, Hao",
"Wei, Dong",
"Zheng, Yefeng",
"Sung, Joseph J. Y.",
"King, Irwin"
] | Conference | 2406.09696 | [
"https://github.com/BearCleverProud/MoME"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 242 |
|
null | https://papers.miccai.org/miccai-2024/paper/4012_paper.pdf | @InProceedings{ Qia_Medical_MICCAI2024,
author = { Qiao, Qiang and Wang, Wenyu and Qu, Meixia and Su, Kun and Jiang, Bin and Guo, Qiang },
title = { { Medical Image Segmentation via Single-Source Domain Generalization with Random Amplitude Spectrum Synthesis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | The field of medical image segmentation is challenged by domain generalization (DG) due to domain shifts in clinical datasets. The DG challenge is exacerbated by the scarcity of medical data and privacy concerns. Traditional single-source domain generalization (SSDG) methods primarily rely on stacking data augmentation techniques to minimize domain discrepancies. In this paper, we propose Random Amplitude Spectrum Synthesis (RASS) as a training augmentation for medical images. RASS enhances model generalization by simulating distribution changes from a frequency perspective. This strategy introduces variability by applying amplitude-dependent perturbations to ensure broad coverage of potential domain variations. Furthermore, we propose random mask shuffle and reconstruction components, which can enhance the ability of the backbone to process structural information and increase resilience to intra- and cross-domain changes. The proposed Random Amplitude Spectrum Synthesis for Single-Source Domain Generalization (RAS^4DG) is validated on 3D fetal brain images and 2D fundus photography, and achieves an improved DG segmentation performance compared to other SSDG models. The source code is available at: https://github.com/qintianjian-lab/RAS4DG. | Medical Image Segmentation via Single-Source Domain Generalization with Random Amplitude Spectrum Synthesis | [
"Qiao, Qiang",
"Wang, Wenyu",
"Qu, Meixia",
"Su, Kun",
"Jiang, Bin",
"Guo, Qiang"
] | Conference | 2409.04768 | [
"https://github.com/qintianjian-lab/ras4dg"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 243 |
|
null | https://papers.miccai.org/miccai-2024/paper/3757_paper.pdf | @InProceedings{ Gau_Immuneguided_MICCAI2024,
author = { Gautam, Tanishq and Gonzalez, Karina P. and Salvatierra, Maria E. and Serrano, Alejandra and Chen, Pingjun and Pan, Xiaoxi and Shokrollahi, Yasin and Ranjbar, Sara and Rodriguez, Leticia and Team, Patient Mosaic and Solis-Soto, Luisa and Yuan, Yinyin and Castillo, Simon P. },
title = { { Immune-guided AI for Reproducible Regions of Interest Selection in Multiplex Immunofluorescence Pathology Imaging } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Selecting regions of interest (ROIs) in whole-slide histology images (WSIs) is a crucial step for spatial molecular profiling. As a general practice, pathologists manually select ROIs within each WSI based on morphological tumor markers to guide spatial profiling, which can be inconsistent and subjective. To enhance reproducibility and avoid inter-pathologist variability, we introduce a novel immune-guided end-to-end pipeline to automate the ROI selection in multiplex immunofluorescence (mIF) WSIs stained with three cell markers (Syto13, CD45, PanCK). First, we estimate immune infiltration (CD45+ expression) scores at the grid level in each WSI. Then, we incorporate the Pathology Language and Image Pre-Training (PLIP) foundational model to extract features from each grid and further select a subset of grids representative of the whole slide that comparatively matches pathologists’ assessment. Further, we implement state-of-the-art detection models for ROI detection in each grid, incorporating learning from pathologists’ ROI selection. Our study shows a significant correlation between our automated method and pathologists’ ROI selection across five different types of carcinomas, as evidenced by a significant Spearman’s correlation coefficient (> 0.785, p < 0.001), substantial inter-rater agreement (Cohen’s kappa > 0.671), and the ability to replicate the ROI selection made by independent pathologists with excellent average performance (0.968 precision and 0.991 mean average precision at a 0.5 intersection-over-union). By minimizing manual intervention, our solution provides a flexible framework that potentially adapts to various markers, thus enhancing the efficiency and accuracy of digital pathology analyses. | Immune-guided AI for Reproducible Regions of Interest Selection in Multiplex Immunofluorescence Pathology Imaging | [
"Gautam, Tanishq",
"Gonzalez, Karina P.",
"Salvatierra, Maria E.",
"Serrano, Alejandra",
"Chen, Pingjun",
"Pan, Xiaoxi",
"Shokrollahi, Yasin",
"Ranjbar, Sara",
"Rodriguez, Leticia",
"Team, Patient Mosaic",
"Solis-Soto, Luisa",
"Yuan, Yinyin",
"Castillo, Simon P."
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 244 |
||
null | https://papers.miccai.org/miccai-2024/paper/0821_paper.pdf | @InProceedings{ Son_SDCL_MICCAI2024,
author = { Song, Bentao and Wang, Qingfeng },
title = { { SDCL: Students Discrepancy-Informed Correction Learning for Semi-supervised Medical Image Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15008 },
month = {October},
pages = { pending },
} | Semi-supervised medical image segmentation (SSMIS) has
demonstrated the potential to mitigate the issue of limited labeled medical data. However, confirmation and cognitive biases may affect the prevalent teacher-student based SSMIS methods due to erroneous pseudo-labels. To tackle this challenge, we improve the mean teacher approach and propose the Students Discrepancy-Informed Correction Learning (SDCL) framework that includes two students and one non-trainable teacher, which utilizes the segmentation difference between the
two students to guide the self-correcting learning. The essence of SDCL is to identify the areas of segmentation discrepancy as the potential bias areas, and then encourage the model to review the correct cognition and rectify their own biases in these areas. To facilitate the bias correction learning with continuous review and rectification, two correction loss functions are employed to minimize the correct segmentation voxel
distance and maximize the erroneous segmentation voxel entropy. We conducted experiments on three public medical image datasets: two 3D datasets (CT and MRI) and one 2D dataset (MRI). The results show that our SDCL surpasses the current State-of-the-Art (SOTA) methods by 2.57%, 3.04%, and 2.34% in the Dice score on the Pancreas, LA, and ACDC datasets, respectively. In addition, the accuracy of our method is
very close to the fully supervised method on the ACDC dataset, and even exceeds the fully supervised method on the Pancreas and LA dataset.(Code available at https://github.com/pascalcpp/SDCL). | SDCL: Students Discrepancy-Informed Correction Learning for Semi-supervised Medical Image Segmentation | [
"Song, Bentao",
"Wang, Qingfeng"
] | Conference | 2409.16728 | [
"https://github.com/pascalcpp/SDCL"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 245 |
|
null | https://papers.miccai.org/miccai-2024/paper/0065_paper.pdf | @InProceedings{ Cho_Embracing_MICCAI2024,
author = { Chou, Yu-Cheng and Zhou, Zongwei and Yuille, Alan },
title = { { Embracing Massive Medical Data } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | As massive medical data become available with an increasing number of scans, expanding classes, and varying sources, prevalent training paradigms–where AI is trained with multiple passes over fixed, finite datasets–face significant challenges. First, training AI all at once on such massive data is impractical as new scans/sources/classes continuously arrive. Second, training AI continuously on new scans/sources/classes can lead to catastrophic forgetting, where AI forgets old data as it learns new data, and vice versa. To address these two challenges, we propose an online learning method that enables training AI from massive medical data. Instead of repeatedly training AI on randomly selected data samples, our method identifies the most significant samples for the current AI model based on their data uniqueness and prediction uncertainty, then trains the AI on these selective data samples. Compared with prevalent training paradigms, our method not only improves data efficiency by enabling training on continual data streams, but also mitigates catastrophic forgetting by selectively training AI on significant data samples that might otherwise be forgotten, outperforming by 15% in Dice score for multi-organ and tumor segmentation. | Embracing Massive Medical Data | [
"Chou, Yu-Cheng",
"Zhou, Zongwei",
"Yuille, Alan"
] | Conference | 2407.04687 | [
"https://github.com/MrGiovanni/OnlineLearning"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 246 |
|
null | https://papers.miccai.org/miccai-2024/paper/1854_paper.pdf | @InProceedings{ Zhu_Lifelong_MICCAI2024,
author = { Zhu, Xinyu and Jiang, Zhiguo and Wu, Kun and Shi, Jun and Zheng, Yushan },
title = { { Lifelong Histopathology Whole Slide Image Retrieval via Distance Consistency Rehearsal } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Content-based histopathological image retrieval (CBHIR) has gained attention in recent years, offering the capability to return histopathology images that are content-wise similar to the query one from an established database. However, in clinical practice, the continuously expanding size of WSI databases limits the practical application of the current CBHIR methods. In this paper, we propose a Lifelong Whole Slide Retrieval (LWSR) framework to address the challenges of catastrophic forgetting by progressive model updating on a continuously growing retrieval database. Our framework aims to achieve the balance between stability and plasticity during continuous learning. To preserve system plasticity, we utilize a local memory bank with a reservoir sampling method to save instances, which can comprehensively encompass the feature spaces of both old and new tasks. Furthermore, a distance consistency rehearsal (DCR) module is designed to ensure the retrieval queue’s consistency for previous tasks, which is regarded as stability within a lifelong CBHIR system. We evaluated the proposed method on four public WSI datasets from TCGA projects. The experimental results have demonstrated that the proposed method is effective and superior to the state-of-the-art methods. | Lifelong Histopathology Whole Slide Image Retrieval via Distance Consistency Rehearsal | [
"Zhu, Xinyu",
"Jiang, Zhiguo",
"Wu, Kun",
"Shi, Jun",
"Zheng, Yushan"
] | Conference | 2407.08153 | [
"https://github.com/OliverZXY/LWSR"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 247 |
|
null | https://papers.miccai.org/miccai-2024/paper/1767_paper.pdf | @InProceedings{ Zha_Feature_MICCAI2024,
author = { Zhao, Yimin and Gu, Jin },
title = { { Feature Fusion Based on Mutual-Cross-Attention Mechanism for EEG Emotion Recognition } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | An objective and accurate emotion diagnostic reference is vital to psychologists, especially when dealing with patients who are difficult to communicate with for pathological reasons. Nevertheless, current systems based on Electroencephalography (EEG) data utilized for sentiment discrimination have some problems, including excessive model complexity, mediocre accuracy, and limited interpretability. Consequently, we propose a novel and effective feature fusion mechanism named Mutual-Cross-Attention (MCA). Combined with a specially customized 3D Convolutional Neural Network (3D-CNN), this purely mathematical mechanism adeptly discovers the complementary relationship between time-domain and frequency-domain features in EEG data. Furthermore, the newly designed Channel-PSD-DE 3D feature also contributes to the high performance. The proposed method eventually achieves 99.49% (valence) and 99.30% (arousal) accuracy on the DEAP dataset. Our code and data are open-sourced at https://github.com/ztony0712/MCA. | Feature Fusion Based on Mutual-Cross-Attention Mechanism for EEG Emotion Recognition | [
"Zhao, Yimin",
"Gu, Jin"
] | Conference | 2406.14014 | [
"https://github.com/ztony0712/MCA"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 248 |
|
null | https://papers.miccai.org/miccai-2024/paper/2017_paper.pdf | @InProceedings{ He_Algebraic_MICCAI2024,
author = { He, Jin and Liu, Weizhou and Zhao, Shifeng and Tian, Yun and Wang, Shuo },
title = { { Algebraic Sphere Surface Fitting for Accurate and Efficient Mesh Reconstruction from Cine CMR Images } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | Accurate 3D modeling of the ventricles through cine cardiovascular magnetic resonance (CMR) imaging benefits precise clinical assessment of cardiac morphology and motion. However, the existing short-axis stacks exhibit low spatial resolution in the inter-slice orientation compared to the intra-slice direction, resulting in a sparse representation of the realistic heart. The anisotropic short-axis images pose challenges in directly reconstructing meshes from them. In this work, we propose a surface fitting approach based on the algebraic sphere, which serves as a previous step for various mesh-based applications, to reconstruct a natural ventricular shape from the segmented wireframe-type point cloud. Considering the sparse and layered nature of the point clouds, we first estimate the normals of the point cloud based on dynamic programming and neighborhood selection, followed by fitting a point set surface using a non-compact kernel adapted by layers. Finally, an implicit scalar field representing the signed distance between the query point and the projection point is obtained, and the manifold mesh is extracted by meshing zero iso-surface. Experimental results on two publicly available datasets demonstrate that the proposed framework can accurately and effectively reconstruct ventricular mesh from a single image with better cross-domain generalizability. | Algebraic Sphere Surface Fitting for Accurate and Efficient Mesh Reconstruction from Cine CMR Images | [
"He, Jin",
"Liu, Weizhou",
"Zhao, Shifeng",
"Tian, Yun",
"Wang, Shuo"
] | Conference | [
"https://github.com/hejin9/algebraic-sphere-surface-fitting"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 249 |
||
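The algebraic-sphere entry above builds on fitting spheres to sparse, layered cardiac point clouds. For orientation, the snippet below shows only the basic closed-form least-squares algebraic sphere fit; the paper's method adds normal estimation and a layer-adaptive, non-compact kernel on top, none of which is reproduced here, and the toy data are invented.

```python
import numpy as np


def fit_algebraic_sphere(points):
    """Least-squares algebraic sphere fit to an (n, 3) point cloud.

    Solves x^2 + y^2 + z^2 + a*x + b*y + c*z + d = 0 for (a, b, c, d),
    then converts to center/radius form.
    """
    sq = np.sum(points ** 2, axis=1)
    design = np.hstack([points, np.ones((len(points), 1))])
    coeffs, *_ = np.linalg.lstsq(design, -sq, rcond=None)
    center = -coeffs[:3] / 2.0
    radius = np.sqrt(np.dot(center, center) - coeffs[3])
    return center, radius


# Toy check: noisy samples from a sphere of radius 2 centered at (1, -1, 0.5).
rng = np.random.default_rng(0)
directions = rng.normal(size=(500, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
points = np.array([1.0, -1.0, 0.5]) + 2.0 * directions + 0.01 * rng.normal(size=(500, 3))
center, radius = fit_algebraic_sphere(points)
print(np.round(center, 3), round(float(radius), 3))
```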
null | https://papers.miccai.org/miccai-2024/paper/0936_paper.pdf | @InProceedings{ Han_NonAdversarial_MICCAI2024,
author = { Han, Luyi and Tan, Tao and Zhang, Tianyu and Wang, Xin and Gao, Yuan and Lu, Chunyao and Liang, Xinglong and Dou, Haoran and Huang, Yunzhi and Mann, Ritse },
title = { { Non-Adversarial Learning: Vector-Quantized Common Latent Space for Multi-Sequence MRI } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Adversarial learning helps generative models translate MRI from source to target sequence when lacking paired samples. However, implementing MRI synthesis with adversarial learning in clinical settings is challenging due to training instability and mode collapse. To address this issue, we leverage intermediate sequences to estimate the common latent space among multi-sequence MRI, enabling the reconstruction of distinct sequences from the common latent space. We propose a generative model that compresses discrete representations of each sequence to estimate the Gaussian distribution of vector-quantized common (VQC) latent space between multiple sequences. Moreover, we improve the latent space consistency with contrastive learning and increase model stability by domain augmentation. Experiments using BraTS2021 dataset show that our non-adversarial model outperforms other GAN-based methods, and VQC latent space aids our model to achieve (1) anti-interference ability, which can eliminate the effects of noise, bias fields, and artifacts, and (2) solid semantic representation ability, with the potential of one-shot segmentation. Our code is publicly available. | Non-Adversarial Learning: Vector-Quantized Common Latent Space for Multi-Sequence MRI | [
"Han, Luyi",
"Tan, Tao",
"Zhang, Tianyu",
"Wang, Xin",
"Gao, Yuan",
"Lu, Chunyao",
"Liang, Xinglong",
"Dou, Haoran",
"Huang, Yunzhi",
"Mann, Ritse"
] | Conference | 2407.02911 | [
"https://github.com/fiy2W/mri\\_seq2seq"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 250 |
|
null | https://papers.miccai.org/miccai-2024/paper/1788_paper.pdf | @InProceedings{ Dor_PatientSpecific_MICCAI2024,
author = { Dorent, Reuben and Torio, Erickson and Haouchine, Nazim and Galvin, Colin and Frisken, Sarah and Golby, Alexandra and Kapur, Tina and Wells III, William M. },
title = { { Patient-Specific Real-Time Segmentation in Trackerless Brain Ultrasound } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | Intraoperative ultrasound (iUS) imaging has the potential to improve surgical outcomes in brain surgery. However, its interpretation is challenging, even for expert neurosurgeons. In this work, we designed the first patient-specific framework that performs brain tumor segmentation in trackerless iUS. To disambiguate ultrasound imaging and adapt to the neurosurgeon’s surgical objective, a patient-specific real-time network is trained using synthetic ultrasound data generated by simulating virtual iUS sweep acquisitions in pre-operative MR data. Extensive experiments performed in real ultrasound data demonstrate the effectiveness of the proposed approach, allowing for adapting to the surgeon’s definition of surgical targets and outperforming non-patient-specific models, neurosurgeon experts, and high-end tracking systems. Our code is available at: \url{https://github.com/ReubenDo/MHVAE-Seg}. | Patient-Specific Real-Time Segmentation in Trackerless Brain Ultrasound | [
"Dorent, Reuben",
"Torio, Erickson",
"Haouchine, Nazim",
"Galvin, Colin",
"Frisken, Sarah",
"Golby, Alexandra",
"Kapur, Tina",
"Wells III, William M."
] | Conference | 2405.09959 | [
"https://github.com/ReubenDo/MHVAE-Seg"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 251 |
|
null | https://papers.miccai.org/miccai-2024/paper/1511_paper.pdf | @InProceedings{ Hua_Noise_MICCAI2024,
author = { Huang, Shoujin and Luo, Guanxiong and Wang, Xi and Chen, Ziran and Wang, Yuwan and Yang, Huaishui and Heng, Pheng-Ann and Zhang, Lingyan and Lyu, Mengye },
title = { { Noise Level Adaptive Diffusion Model for Robust Reconstruction of Accelerated MRI } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | In general, diffusion model-based MRI reconstruction methods incrementally remove artificially added noise while imposing data consistency to reconstruct the underlying images. However, real-world MRI acquisitions already contain inherent noise due to thermal fluctuations. This phenomenon is particularly notable when using ultra-fast, high-resolution imaging sequences for advanced research, or using low-field systems favored by low- and middle-income countries. These common scenarios can lead to sub-optimal performance or complete failure of existing diffusion model-based reconstruction techniques. Specifically, as the artificially added noise is gradually removed, the inherent MRI noise becomes increasingly pronounced, making the actual noise level inconsistent with the predefined denoising schedule and consequently inaccurate image reconstruction. To tackle this problem, we propose a posterior sampling strategy with a novel NoIse Level Adaptive Data Consistency (Nila-DC) operation. Extensive experiments are conducted on two public datasets and an in-house clinical dataset with field strength ranging from 0.3T to 3T, showing that our method surpasses the state-of-the-art MRI reconstruction methods, and is highly robust against various noise levels. The code for Nila is available at \url{https://github.com/Solor-pikachu/Nila}. | Noise Level Adaptive Diffusion Model for Robust Reconstruction of Accelerated MRI | [
"Huang, Shoujin",
"Luo, Guanxiong",
"Wang, Xi",
"Chen, Ziran",
"Wang, Yuwan",
"Yang, Huaishui",
"Heng, Pheng-Ann",
"Zhang, Lingyan",
"Lyu, Mengye"
] | Conference | 2403.05245 | [
"https://github.com/Solor-pikachu/Nila"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 252 |
|
null | https://papers.miccai.org/miccai-2024/paper/1468_paper.pdf | @InProceedings{ He_Embryo_MICCAI2024,
author = { He, Chloe and Karpavičiūtė, Neringa and Hariharan, Rishabh and Jacques, Céline and Chambost, Jérôme and Malmsten, Jonas and Zaninovic, Nikica and Wouters, Koen and Fréour, Thomas and Hickman, Cristina and Vasconcelos, Francisco },
title = { { Embryo Graphs: Predicting Human Embryo Viability from 3D Morphology } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Embryo selection is a critical step in the process of in-vitro fertilisation in which embryologists choose the most viable embryos for transfer into the uterus. In recent years, numerous works have used computer vision to perform embryo selection. However, many of these works have neglected the fact that the embryo is a 3D structure, instead opting to analyse embryo images captured at a single focal plane.
In this paper we present a method for the 3D reconstruction of cleavage-stage human embryos. Through a user study, we validate that our reconstructions align with expert assessments. Furthermore, we demonstrate the utility of our approach by generating graph representations that capture biologically relevant features of the embryos. In pilot experiments, we train a graph neural network on these representations and show that it outperforms existing methods in predicting live birth from euploid embryo transfers. Our findings suggest that incorporating 3D reconstruction and graph-based analysis can improve automated embryo selection. | Embryo Graphs: Predicting Human Embryo Viability from 3D Morphology | [
"He, Chloe",
"Karpavičiūtė, Neringa",
"Hariharan, Rishabh",
"Jacques, Céline",
"Chambost, Jérôme",
"Malmsten, Jonas",
"Zaninovic, Nikica",
"Wouters, Koen",
"Fréour, Thomas",
"Hickman, Cristina",
"Vasconcelos, Francisco"
] | Conference | [
"https://github.com/chlohe/embryo-graphs"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 253 |
||
null | https://papers.miccai.org/miccai-2024/paper/1540_paper.pdf | @InProceedings{ Hua_One_MICCAI2024,
author = { Huang, Shiqi and Xu, Tingfa and Shen, Ziyi and Saeed, Shaheer Ullah and Yan, Wen and Barratt, Dean C. and Hu, Yipeng },
title = { { One registration is worth two segmentations } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | The goal of image registration is to establish spatial correspondence between two or more images, traditionally through dense displacement fields (DDFs) or parametric transformations (e.g., rigid, affine, and splines). Rethinking the existing paradigms of achieving alignment via spatial transformations, we uncover an alternative but more intuitive correspondence representation: a set of corresponding regions-of-interest (ROI) pairs, which we demonstrate to have sufficient representational capability as other correspondence representation methods. Further, it is neither necessary nor sufficient for these ROIs to hold specific anatomical or semantic significance. In turn, we formulate image registration as searching for the same set of corresponding ROIs from both moving and fixed images - in other words, two multi-class segmentation tasks on a pair of images. For a general-purpose and practical implementation, we integrate the segment anything model (SAM) into our proposed algorithms, resulting in a SAM-enabled registration (SAMReg) that does not require any training data, gradient-based fine-tuning or engineered prompts. We experimentally show that the proposed SAMReg is capable of segmenting and matching multiple ROI pairs, which establish sufficiently accurate correspondences, in three clinical applications of registering prostate MR, cardiac MR and abdominal CT images. Based on metrics including Dice and target registration errors on anatomical structures, the proposed registration outperforms both intensity-based iterative algorithms and DDF-predicting learning-based networks, even yielding competitive performance with weakly-supervised registration which requires fully-segmented training data. | One registration is worth two segmentations | [
"Huang, Shiqi",
"Xu, Tingfa",
"Shen, Ziyi",
"Saeed, Shaheer Ullah",
"Yan, Wen",
"Barratt, Dean C.",
"Hu, Yipeng"
] | Conference | 2405.10879 | [
"https://github.com/sqhuang0103/SAMReg.git"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 254 |
|
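The SAMReg entry above recasts registration as finding corresponding ROI pairs rather than predicting a displacement field directly. The sketch below illustrates just one simple way paired ROI masks can induce a spatial transform, namely a least-squares affine fit on ROI centroids; this particular solver and the toy spherical ROIs are assumptions of the sketch, not the paper's procedure.

```python
import numpy as np


def roi_centroids(masks):
    """Centroid (in voxel coordinates) of each binary ROI mask."""
    return np.stack([np.argwhere(m).mean(axis=0) for m in masks])


def fit_affine_from_rois(moving_masks, fixed_masks):
    """Least-squares affine transform mapping moving ROI centroids onto fixed ones."""
    src = roi_centroids(moving_masks)                     # (n, 3)
    dst = roi_centroids(fixed_masks)                      # (n, 3)
    src_h = np.hstack([src, np.ones((len(src), 1))])      # homogeneous coordinates
    params, *_ = np.linalg.lstsq(src_h, dst, rcond=None)  # (4, 3)
    return params.T                                       # (3, 4) affine matrix


# Toy example: four spherical ROIs, with the fixed image shifted by 2 voxels
# along the first axis.
grid = np.stack(np.meshgrid(*[np.arange(32)] * 3, indexing="ij"))


def ball(center, radius=4):
    offset = grid - np.asarray(center)[:, None, None, None]
    return (offset ** 2).sum(axis=0) < radius ** 2


centers = [(8, 8, 8), (20, 8, 8), (8, 20, 8), (8, 8, 20)]
moving = [ball(c) for c in centers]
fixed = [ball((c[0] + 2, c[1], c[2])) for c in centers]
print(np.round(fit_affine_from_rois(moving, fixed), 2))
# Expected: identity rotation with a translation of ~2 along the first axis.
```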
null | https://papers.miccai.org/miccai-2024/paper/2837_paper.pdf | @InProceedings{ Li_DualModality_MICCAI2024,
author = { Li, Rui and Ruan, Jingliang and Lu, Yao },
title = { { Dual-Modality Watershed Fusion Network for Thyroid Nodule Classification of Dual-View CEUS Video } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | Contrast-enhanced ultrasound (CEUS) allows real-time visualization of the vascular distribution within thyroid nodules, garnering significant attention in their intelligent diagnosis. Existing methods either focus on modifying models while neglecting the unique aspects of CEUS, or rely on only single-modality data while overlooking the complementary information contained in the dual-view CEUS data. To overcome these limitations, inspired by the CEUS thyroid imaging reporting and data system (TI-RADS), this paper proposes a new dual-modality watershed fusion network (DWFN) for diagnosing thyroid nodules using dual-view CEUS videos. Specifically, the method introduces the watershed analysis from the remote sensing field and combines it with the optical flow method to extract the enhancement direction feature mentioned in the CEUS TI-RADS. On this basis, the interpretable watershed 3D network (W3DN) is constructed by C3D to further extract the dynamic blood flow features contained in CEUS videos. Furthermore, to make more comprehensive use of clinical information, a dual-modality 2D and 3D combined network, DWFN, is constructed, which fuses the morphological features extracted from US images by InceptionResNetV2 and the dynamic blood flow features extracted from CEUS videos by W3DN, to classify thyroid nodules as benign or malignant. The effectiveness of the proposed DWFN method was evaluated using extensive experimental results on a collected dataset of dual-view CEUS videos for thyroid nodules, achieving an area under the receiver operating characteristic curve of 0.920, with accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and F1 score of 0.858, 0.845, 0.872, 0.879, 0.837, and 0.861, respectively, outperforming other state-of-the-art methods. | Dual-Modality Watershed Fusion Network for Thyroid Nodule Classification of Dual-View CEUS Video | [
"Li, Rui",
"Ruan, Jingliang",
"Lu, Yao"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 255 |
||
null | https://papers.miccai.org/miccai-2024/paper/1038_paper.pdf | @InProceedings{ Xu_TeethDreamer_MICCAI2024,
author = { Xu, Chenfan and Liu, Zhentao and Liu, Yuan and Dou, Yulong and Wu, Jiamin and Wang, Jiepeng and Wang, Minjiao and Shen, Dinggang and Cui, Zhiming },
title = { { TeethDreamer: 3D Teeth Reconstruction from Five Intra-oral Photographs } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | Orthodontic treatment usually requires regular face-to-face examinations to monitor dental conditions of the patients. When in-person diagnosis is not feasible, an alternative is to utilize five intra-oral photographs for remote dental monitoring. However, it lacks 3D information, and how to reconstruct 3D dental models from such sparse view photographs is a challenging problem. In this study, we propose a 3D teeth reconstruction framework, named TeethDreamer, aiming to restore the shape and position of the upper and lower teeth. Given five intra-oral photographs, our approach first leverages a large diffusion model’s prior knowledge to generate novel multi-view images with known poses to address sparse inputs and then reconstructs high-quality 3D teeth models by neural surface reconstruction. To ensure the 3D consistency across generated views, we integrate a 3D-aware feature attention mechanism in the reverse diffusion process. Moreover, a geometry-aware normal loss is incorporated into the teeth reconstruction process to enhance geometry accuracy. Extensive experiments demonstrate the superiority of our method over current state-of-the-art methods, giving the potential to monitor orthodontic treatment remotely. Our code is available at https://github.com/ShanghaiTech-IMPACT/TeethDreamer. | TeethDreamer: 3D Teeth Reconstruction from Five Intra-oral Photographs | [
"Xu, Chenfan",
"Liu, Zhentao",
"Liu, Yuan",
"Dou, Yulong",
"Wu, Jiamin",
"Wang, Jiepeng",
"Wang, Minjiao",
"Shen, Dinggang",
"Cui, Zhiming"
] | Conference | 2407.11419 | [
"https://github.com/ShanghaiTech-IMPACT/TeethDreamer"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 256 |
|
null | https://papers.miccai.org/miccai-2024/paper/1471_paper.pdf | @InProceedings{ Lon_MuGI_MICCAI2024,
author = { Long, Lifan and Cui, Jiaqi and Zeng, Pinxian and Li, Yilun and Liu, Yuanjun and Wang, Yan },
title = { { MuGI: Multi-Granularity Interactions of Heterogeneous Biomedical Data for Survival Prediction } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | Multimodal learning significantly benefits survival analysis for cancer, particularly through the integration of pathological images and genomic data. However, this presents new challenges on how to effectively integrate multi-modal biomedical data. Existing multi-modal survival prediction methods focus on mining the consistency or modality-specific information, failing to capture cross-modal interactions. To address this limitation, attention-based methods are proposed to enhance both the consistency and interactions. However, these methods inevitably introduce redundancy due to the overlapped information of multimodal data. In this paper, we propose a Multi-Granularity Interactions of heterogeneous biomedical data framework (MuGI) for precise survival prediction. MuGI consists of: a) unimodal extractor for exploring preliminary modality-specific information, b) multi-modal optimal features capture (MOFC) for extracting ideal multi-modal representations, eliminating redundancy through decomposed multi-granularity information, as well as capturing consistency in a common space and enhancing modality-specific features in a private space, and c) multimodal hierarchical interaction for sufficient acquisition of cross-modal correlations and interactions through the cooperation of two Bilateral Cross Attention (BCA) modules. We conduct extensive experiments on three cancer cohorts from the Cancer Genome Atlas (TCGA) database. The experimental results demonstrate that our MuGI achieves the state-of-the-art performance, outperforming both unimodal and multi-modal survival prediction methods. | MuGI: Multi-Granularity Interactions of Heterogeneous Biomedical Data for Survival Prediction | [
"Long, Lifan",
"Cui, Jiaqi",
"Zeng, Pinxian",
"Li, Yilun",
"Liu, Yuanjun",
"Wang, Yan"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 257 |
||
null | https://papers.miccai.org/miccai-2024/paper/0615_paper.pdf | @InProceedings{ He_F2TNet_MICCAI2024,
author = { He, Zhibin and Li, Wuyang and Jiang, Yu and Peng, Zhihao and Wang, Pengyu and Li, Xiang and Liu, Tianming and Han, Junwei and Zhang, Tuo and Yuan, Yixuan },
title = { { F2TNet: FMRI to T1w MRI Knowledge Transfer Network for Brain Multi-phenotype Prediction } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Using brain imaging data to predict the non-neuroimaging phenotypes at the individual level is a fundamental goal of system neuroscience. Despite its significance, the high acquisition cost of functional Magnetic Resonance Imaging (fMRI) hampers its clinical translation in phenotype prediction, while the analysis based solely on cost-efficient T1-weighted (T1w) MRI yields inferior performance than fMRI. The reasons lie in that existing works ignore two significant challenges. 1) they neglect the knowledge transfer from fMRI to T1w MRI, failing to achieve effective prediction using cost-efficient T1w MRI. 2) They are limited to predicting a single phenotype and cannot capture the intrinsic dependence among various phenotypes, such as strength and endurance, preventing comprehensive and accurate clinical analysis. To tackle these issues, we propose an FMRI to T1w MRI knowledge transfer Network (F2TNet) to achieve cost-efficient and effective analysis on brain multi-phenotype, representing the first attempt in this field, which consists of a Phenotypes-guided Knowledge Transfer (PgKT) module and a modality-aware Multi-phenotype Prediction (MpP) module. Specifically, PgKT aligns brain nodes across modalities by solving a bipartite graph-matching problem, thereby achieving adaptive knowledge transfer from fMRI to T1w MRI through the guidance of multi-phenotype. Then, MpP enriches the phenotype codes with cross-modal complementary information and decomposes these codes to enable accurate multi-phenotype prediction. Experimental results demonstrate that the F2TNet significantly improves the prediction of brain multi-phenotype and outperforms state-of-the-art methods. The code is available at https://github.com/CUHK-AIM-Group/F2TNet. | F2TNet: FMRI to T1w MRI Knowledge Transfer Network for Brain Multi-phenotype Prediction | [
"He, Zhibin",
"Li, Wuyang",
"Jiang, Yu",
"Peng, Zhihao",
"Wang, Pengyu",
"Li, Xiang",
"Liu, Tianming",
"Han, Junwei",
"Zhang, Tuo",
"Yuan, Yixuan"
] | Conference | [
"https://github.com/CUHK-AIM-Group/F2TNet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 258 |
||
null | https://papers.miccai.org/miccai-2024/paper/1668_paper.pdf | @InProceedings{ Lee_ADeep_MICCAI2024,
author = { Lee, Sangyoon and Branzoli, Francesca and Nguyen, Thanh and Andronesi, Ovidiu and Lin, Alexander and Liserre, Roberto and Melkus, Gerd and Chen, Clark and Marjańska, Małgorzata and Bolan, Patrick J. },
title = { { A Deep Learning Approach for Placing Magnetic Resonance Spectroscopy Voxels in Brain Tumors } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | Magnetic resonance spectroscopy (MRS) of brain tumors provides useful metabolic information for diagnosis, treatment response, and prognosis. Single-voxel MRS requires precise planning of the acquisition volume to produce a high-quality signal localized in the pathology of interest. Appropriate placement of the voxel in a brain tumor is determined by the size and morphology of the tumor, and is guided by MR imaging. Consistent placement of a voxel precisely within a tumor requires substantial expertise in neuroimaging interpretation and MRS methodology. The need for such expertise at the time of scan has contributed to low usage of MRS in clinical practice. In this study, we propose a deep learning method to perform voxel placements in brain tumors. The network is trained in a supervised fashion using a database of voxel placements performed by MRS experts. Our proposed method accurately replicates the voxel placements of experts in tumors with comparable tumor coverage, voxel volume, and voxel position to that of experts. This novel deep learning method can be easily applied without an extensive external validation as it only requires a segmented tumor mask as input. | A Deep Learning Approach for Placing Magnetic Resonance Spectroscopy Voxels in Brain Tumors | [
"Lee, Sangyoon",
"Branzoli, Francesca",
"Nguyen, Thanh",
"Andronesi, Ovidiu",
"Lin, Alexander",
"Liserre, Roberto",
"Melkus, Gerd",
"Chen, Clark",
"Marjańska, Małgorzata",
"Bolan, Patrick J."
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 259 |
||
null | https://papers.miccai.org/miccai-2024/paper/2146_paper.pdf | @InProceedings{ Pfa_NoNewDenoiser_MICCAI2024,
author = { Pfaff, Laura and Wagner, Fabian and Vysotskaya, Nastassia and Thies, Mareike and Maul, Noah and Mei, Siyuan and Wuerfl, Tobias and Maier, Andreas },
title = { { No-New-Denoiser: A Critical Analysis of Diffusion Models for Medical Image Denoising } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15010 },
month = {October},
pages = { pending },
} | Diffusion models, originally introduced for image generation, have recently gained attention as a promising image denoising approach. In this work, we perform comprehensive experiments to investigate the challenges posed by diffusion models when applied to medical image denoising. In medical imaging, retaining the original image content, and refraining from adding or removing potentially pathologic details is of utmost importance. Through empirical analysis and discussions, we highlight the trade-off between image perception and distortion in the context of diffusion-based denoising.
In particular, we demonstrate that standard diffusion model sampling schemes yield a reduction in PSNR by up to 14 % compared to one-step denoising. Additionally, we provide visual evidence indicating that diffusion models, in combination with stochastic sampling, have a tendency to generate synthetic structures during the denoising process, consequently compromising the clinical validity of the denoised images. Our thorough investigation raises questions about the suitability of diffusion models for medical image denoising, underscoring potential limitations that warrant careful consideration for future applications. | No-New-Denoiser: A Critical Analysis of Diffusion Models for Medical Image Denoising | [
"Pfaff, Laura",
"Wagner, Fabian",
"Vysotskaya, Nastassia",
"Thies, Mareike",
"Maul, Noah",
"Mei, Siyuan",
"Wuerfl, Tobias",
"Maier, Andreas"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 260 |
||
null | https://papers.miccai.org/miccai-2024/paper/2899_paper.pdf | @InProceedings{ Kir_In_MICCAI2024,
author = { Kirkegaard, Julius B. and Kutuzov, Nikolay P. and Netterstrøm, Rasmus and Darkner, Sune and Lauritzen, Martin and Lauze, François },
title = { { In vivo deep learning estimation of diffusion coefficients of nanoparticles } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | Understanding the transport of molecules in the brain \emph{in vivo} is the key to learning how the brain regulates its metabolism, how brain pathologies develop, and how most of the developed brain-targeted drugs fail. Two-photon microscopy – the main tool for \emph{in vivo} brain imaging – achieves sub-micrometer resolution and high image contrast when imaging cells, blood vessels, and other microscopic structures. However, images of small and fast-moving objects, e.g. nanoparticles, are ill-suited for analysis of transport with standard methods, e.g. super-localization, because of (i) low photon budgets resulting in noisy images; (ii) severe motion blur due to slow pixel-by-pixel image acquisition by two-photon microscopy; and (iii) high density of tracked objects, preventing their individual localization.
Here, we developed a deep learning-based estimator of diffusion coefficients of nanoparticles directly from movies recorded with two-photon microscopy \emph{in vivo}.
We’ve benchmarked the method with synthetic data, model experimental data (nanoparticles in water), and \emph{in vivo} data (nanoparticles in the brain).
Deep Learning robustly estimates the diffusion coefficient of nanoparticles from movies with severe motion blur and movies with high nanoparticle densities, where, in contrast to the classic algorithms, the deep learning estimator’s accuracy improves with increasing density.
As a result, the deep learning estimator facilitates the estimation of diffusion coefficients of nanoparticles in the brain \emph{in vivo}, where the existing estimators fail. | In vivo deep learning estimation of diffusion coefficients of nanoparticles | [
"Kirkegaard, Julius B.",
"Kutuzov, Nikolay P.",
"Netterstrøm, Rasmus",
"Darkner, Sune",
"Lauritzen, Martin",
"Lauze, François"
] | Conference | [
"https://github.com/kirkegaardlab/2photodiffusion"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 261 |
||
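The nanoparticle-tracking entry above benchmarks a deep learning estimator against classic approaches. For orientation, the snippet below sketches the textbook mean-squared-displacement (MSD) estimator on simulated 2D Brownian tracks, i.e. the kind of baseline such a method is compared with; it is not the paper's network, and all parameter values are invented.

```python
import numpy as np


def simulate_brownian(n_steps, d_coeff, dt, n_particles, seed=0):
    """2D Brownian trajectories with diffusion coefficient d_coeff (um^2/s)."""
    rng = np.random.default_rng(seed)
    # Per-axis step standard deviation for Brownian motion: sqrt(2 * D * dt).
    steps = rng.normal(0.0, np.sqrt(2 * d_coeff * dt),
                       size=(n_particles, n_steps, 2))
    return np.cumsum(steps, axis=1)


def estimate_d_from_msd(tracks, dt, max_lag=10):
    """Classic estimator: fit MSD(tau) = 4 * D * tau over the first lags."""
    lags = np.arange(1, max_lag + 1)
    msd = np.array([
        np.mean(np.sum((tracks[:, lag:, :] - tracks[:, :-lag, :]) ** 2, axis=-1))
        for lag in lags
    ])
    # Least-squares slope through the origin, then D = slope / 4 in 2D.
    slope = np.sum(msd * lags * dt) / np.sum((lags * dt) ** 2)
    return slope / 4.0


tracks = simulate_brownian(n_steps=500, d_coeff=1.5, dt=0.01, n_particles=200)
print(f"estimated D = {estimate_d_from_msd(tracks, dt=0.01):.3f} um^2/s")
```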
null | https://papers.miccai.org/miccai-2024/paper/3582_paper.pdf | @InProceedings{ Asg_Can_MICCAI2024,
author = { Asgari-Targhi, Ameneh and Ungi, Tamas and Jin, Mike and Harrison, Nicholas and Duggan, Nicole and Duhaime, Erik P. and Goldsmith, Andrew and Kapur, Tina },
title = { { Can Crowdsourced Annotations Improve AI-based Congestion Scoring For Bedside Lung Ultrasound? } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Lung ultrasound (LUS) has become an indispensable tool at the bedside in emergency and acute care settings, offering a fast and non-invasive way to assess pulmonary congestion. Its portability and cost-effectiveness make it particularly valuable in resource-limited environments where quick decision-making is critical. Despite its advantages, the interpretation of B-line artifacts, which are key diagnostic indicators for conditions related to pulmonary congestion, can vary significantly among clinicians and even for the same clinician over time. This variability, coupled with the time pressure in acute settings, poses a challenge. To address this, our study introduces a new B-line segmentation method to calculate congestion scores from LUS images, aiming to standardize interpretations. We utilized a large dataset of 31,000 B-line annotations synthesized from over 550,000 crowdsourced opinions on LUS images of 299 patients to improve model training and accuracy. This approach has yielded a model with 94% accuracy in B-line counting (within a margin of 1) on a test set of 100 patients, demonstrating the potential of combining extensive data and crowdsourcing to refine lung ultrasound analysis for pulmonary congestion. | Can Crowdsourced Annotations Improve AI-based Congestion Scoring For Bedside Lung Ultrasound? | [
"Asgari-Targhi, Ameneh",
"Ungi, Tamas",
"Jin, Mike",
"Harrison, Nicholas",
"Duggan, Nicole",
"Duhaime, Erik P.",
"Goldsmith, Andrew",
"Kapur, Tina"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 262 |
||
null | https://papers.miccai.org/miccai-2024/paper/2829_paper.pdf | @InProceedings{ Wan_DPMNet_MICCAI2024,
author = { Wang, Shudong and Zhao, Xue and Zhang, Yulin and Zhao, Yawu and Zhao, Zhiyuan and Ding, Hengtao and Chen, Tianxing and Qiao, Sibo },
title = { { DPMNet: Dual-Path MLP-based Network for Aneurysm Image Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | MLP-based networks, while being lighter than traditional convolution- and transformer-based networks commonly used in medical image segmentation, often struggle with capturing local structures due to the limitations of fully-connected (FC) layers, making them less ideal for such tasks. To address this issue, we design a Dual-Path MLP-based network (DPMNet) that includes a global and a local branch to understand the input images at different scales. In the two branches, we design an Axial Residual Connection MLP module (ARC-MLP) to combine it with CNNs to capture the input image’s global long-range dependencies and local visual structures simultaneously. Additionally, we propose a Shifted Channel-Mixer MLP block (SCM-MLP) across width and height as a key component of ARC-MLP to mix information from different spatial locations and channels. Extensive experiments demonstrate that the DPMNet significantly outperforms seven state-of-the-art convolution-, transformer-, and MLP-based methods in both Dice and IoU scores, where the Dice and IoU scores for the IAS-L dataset are 88.98% and 80.31% respectively. Code is available at https://github.com/zx123868/DPMNet. | DPMNet: Dual-Path MLP-based Network for Aneurysm Image Segmentation | [
"Wang, Shudong",
"Zhao, Xue",
"Zhang, Yulin",
"Zhao, Yawu",
"Zhao, Zhiyuan",
"Ding, Hengtao",
"Chen, Tianxing",
"Qiao, Sibo"
] | Conference | [
"https://github.com/zx123868/DPMNet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 263 |
||
null | https://papers.miccai.org/miccai-2024/paper/0442_paper.pdf | @InProceedings{ Mus_Analyzing_MICCAI2024,
author = { Musa, Aminu and Ibrahim Adamu, Mariya and Kakudi, Habeebah Adamu and Hernandez, Monica and Lawal, Yusuf },
title = { { Analyzing Cross-Population Domain Shift in Chest X-Ray Image Classification and Mitigating the Gap with Deep Supervised Domain Adaptation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | Medical image analysis powered by artificial intelligence (AI) is pivotal in healthcare diagnostics. However, the efficacy of machine learning models relies on their adaptability to diverse patient populations, presenting domain shift challenges. This study investigates domain shift in chest X-ray classification, focusing on cross-population variations, specifically in an African dataset. Disparities between source and target populations were measured by evaluating model performance. We propose supervised domain adaptation to mitigate this issue, leveraging labeled data in both domains for fine-tuning. Our experiments show significant improvements in model accuracy for chest X-ray classification in the African dataset. This research underscores the importance of domain-aware model development in AI-driven healthcare, contributing to addressing domain-shift challenges in medical imaging. | Analyzing Cross-Population Domain Shift in Chest X-Ray Image Classification and Mitigating the Gap with Deep Supervised Domain Adaptation | [
"Musa, Aminu",
"Ibrahim Adamu, Mariya",
"Kakudi, Habeebah Adamu",
"Hernandez, Monica",
"Lawal, Yusuf"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 264 |
||
null | https://papers.miccai.org/miccai-2024/paper/4230_paper.pdf | @InProceedings{ Gan_MedContext_MICCAI2024,
author = { Gani, Hanan and Naseer, Muzammal and Khan, Fahad and Khan, Salman },
title = { { MedContext: Learning Contextual Cues for Efficient Volumetric Medical Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | Deep neural networks have significantly improved volumetric medical segmentation, but they generally require large-scale annotated data to achieve better performance, which can be expensive and prohibitive to obtain. To address this limitation, existing works typically perform transfer learning or design dedicated pretraining-finetuning stages to learn representative features. However, the mismatch between the source and target domain can make it challenging to learn optimal representation for volumetric data, while the multi-stage training demands higher compute as well as careful selection of stage-specific design choices. In contrast, we propose a universal training framework called MedContext that is architecture-agnostic and can be incorporated into any existing training framework for 3D medical segmentation. Our approach effectively learns self supervised contextual cues jointly with the supervised voxel segmentation task without requiring large-scale annotated volumetric medical data or dedicated pretraining-finetuning stages. The proposed approach induces contextual knowledge in the network by learning to reconstruct the missing organ or parts of an organ in the output segmentation space. The effectiveness of MedContext is validated across multiple 3D medical datasets and four state-of-the-art model architectures. Our approach demonstrates consistent gains in segmentation performance across datasets and architectures even in few-shot scenarios. | MedContext: Learning Contextual Cues for Efficient Volumetric Medical Segmentation | [
"Gani, Hanan",
"Naseer, Muzammal",
"Khan, Fahad",
"Khan, Salman"
] | Conference | 2402.17725 | [
"https://github.com/hananshafi/MedContext"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 265 |
|
null | https://papers.miccai.org/miccai-2024/paper/1178_paper.pdf | @InProceedings{ Li_GMMCoRegNet_MICCAI2024,
author = { Li, Zhenyu and Yu, Fan and Lu, Jie and Qian, Zhen },
title = { { GMM-CoRegNet: A Multimodal Groupwise Registration Framework Based on Gaussian Mixture Model } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | Within-subject multimodal groupwise registration aims to align a group of multimodal images into a common structural space. Existing groupwise registration methods often rely on intensity-based similarity measures, but can be computationally expensive for large sets of images. Some methods build statistical relationships between image intensities and anatomical structures, which may be misleading when the assumption of consistent intensity-class correspondences does not hold. Additionally, these methods can be unstable in batch group registration when the number of anatomical structures varies across different image groups. To tackle these issues, we propose GMM-CoRegNet, a weakly supervised deep learning framework for multimodal groupwise image registration. A prior Gaussian Mixture Model (GMM) consolidating the image intensities and anatomical structures is constructed using the label of the reference image; then we derive a novel similarity measure for groupwise registration based on the GMM and iteratively optimize the GMM throughout the training process. Notably, GMM-CoRegNet can register an arbitrary number of images simultaneously to a reference image, needing only the label of the reference image. We compared GMM-CoRegNet with state-of-the-art groupwise registration methods on two carotid datasets and the public BrainWeb dataset, demonstrating its superior registration performance even for the registration scenario of inconsistent intensity-class mappings. | GMM-CoRegNet: A Multimodal Groupwise Registration Framework Based on Gaussian Mixture Model | [
"Li, Zhenyu",
"Yu, Fan",
"Lu, Jie",
"Qian, Zhen"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 266 |
||
null | https://papers.miccai.org/miccai-2024/paper/0456_paper.pdf | @InProceedings{ Pha_Structural_MICCAI2024,
author = { Phan, Vu Minh Hieu and Xie, Yutong and Zhang, Bowen and Qi, Yuankai and Liao, Zhibin and Perperidis, Antonios and Phung, Son Lam and Verjans, Johan W. and To, Minh-Son },
title = { { Structural Attention: Rethinking Transformer for Unpaired Medical Image Synthesis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | Unpaired medical image synthesis aims to provide complementary information for an accurate clinical diagnostics, and address challenges in obtaining aligned multi-modal medical scans. Transformer-based models excel in imaging translation tasks thanks to their ability to capture long-range dependencies. Although effective in supervised training, their performance falters in unpaired image synthesis, particularly in synthesizing structural details. This paper empirically demonstrates that, lacking strong inductive biases, Transformer can converge to non-optimal solutions in the absence of paired data. To address this, we introduce UNet Structured Transformer (UNest) — a novel architecture incorporating structural inductive biases for unpaired medical image synthesis. We leverage the foundational Segment-Anything Model to precisely extract the foreground structure and perform structural attention within the main anatomy. This guides the model to learn key anatomical regions, thus improving structural synthesis under the lack of supervision in unpaired training. Evaluated on two public datasets, spanning three modalities, i.e., MR, CT, and PET, UNest improves recent methods by up to 19.30% across six medical image synthesis tasks. Our code is released at https://github.com/HieuPhan33/MICCAI2024-UNest. | Structural Attention: Rethinking Transformer for Unpaired Medical Image Synthesis | [
"Phan, Vu Minh Hieu",
"Xie, Yutong",
"Zhang, Bowen",
"Qi, Yuankai",
"Liao, Zhibin",
"Perperidis, Antonios",
"Phung, Son Lam",
"Verjans, Johan W.",
"To, Minh-Son"
] | Conference | 2406.18967 | [
"https://github.com/HieuPhan33/MICCAI2024-UNest"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 267 |
|
null | https://papers.miccai.org/miccai-2024/paper/3451_paper.pdf | @InProceedings{ Hus_PromptSmooth_MICCAI2024,
author = { Hussein, Noor and Shamshad, Fahad and Naseer, Muzammal and Nandakumar, Karthik },
title = { { PromptSmooth: Certifying Robustness of Medical Vision-Language Models via Prompt Learning } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | Medical vision-language models (Med-VLMs) trained on large datasets of medical image-text pairs and later fine-tuned for specific tasks have emerged as a mainstream paradigm in medical image analysis. However, recent studies have highlighted the susceptibility of these Med-VLMs to adversarial attacks, raising concerns about their safety and robustness. Randomized smoothing is a well-known technique for turning any classifier into a model that is certifiably robust to adversarial perturbations. However, this approach requires retraining the Med-VLM-based classifier so that it classifies well under Gaussian noise, which is often infeasible in practice. In this paper, we propose a novel framework called PromptSmooth to achieve efficient certified robustness of Med-VLMs by leveraging the concept of prompt learning. Given any pre-trained Med-VLM, PromptSmooth adapts it to handle Gaussian noise by learning textual prompts in a zero-shot or few-shot manner, achieving a delicate balance between accuracy and robustness, while minimizing the computational overhead. Moreover, PromptSmooth requires only a single model to handle multiple noise levels, which substantially reduces the computational cost compared to traditional methods that rely on training a separate model for each noise level. Comprehensive experiments based on three Med-VLMs and across six downstream datasets of various imaging modalities demonstrate the efficacy of PromptSmooth. | PromptSmooth: Certifying Robustness of Medical Vision-Language Models via Prompt Learning | [
"Hussein, Noor",
"Shamshad, Fahad",
"Naseer, Muzammal",
"Nandakumar, Karthik"
] | Conference | 2408.16769 | [
"https://github.com/nhussein/promptsmooth"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 268 |
|
null | https://papers.miccai.org/miccai-2024/paper/3061_paper.pdf | @InProceedings{ Liu_VolumeNeRF_MICCAI2024,
author = { Liu, Jiachen and Bai, Xiangzhi },
title = { { VolumeNeRF: CT Volume Reconstruction from a Single Projection View } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | Computed tomography (CT) plays a significant role in clinical practice by providing detailed three-dimensional information, aiding in accurate assessment of various diseases. However, CT imaging requires a large number of X-ray projections from different angles and exposes patients to high doses of radiation. Here we propose VolumeNeRF, based on neural radiance fields (NeRF), for reconstructing CT volumes from a single-view X-ray. During training, our network learns to generate a continuous representation of the CT scan conditioned on the input X-ray image and render an X-ray image similar to the input from the same viewpoint as the input. Considering the ill-posedness and the complexity of the single-perspective generation task, we introduce likelihood images and the average CT images to incorporate prior anatomical knowledge. A novel projection attention module is designed to help the model learn the spatial correspondence between voxels in CT images and pixels in X-ray images during the imaging process. Extensive experiments conducted on a publicly available chest CT dataset show that our VolumeNeRF achieves better performance than other state-of-the-art methods. Our code is available at https://www.github.com/Aurora132/VolumeNeRF. | VolumeNeRF: CT Volume Reconstruction from a Single Projection View | [
"Liu, Jiachen",
"Bai, Xiangzhi"
] | Conference | [
"https://www.github.com/Aurora132/VolumeNeRF"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 269 |
||
null | https://papers.miccai.org/miccai-2024/paper/3539_paper.pdf | @InProceedings{ Gon_Anatomical_MICCAI2024,
author = { Goncharov, Mikhail and Samokhin, Valentin and Soboleva, Eugenia and Sokolov, Roman and Shirokikh, Boris and Belyaev, Mikhail and Kurmukov, Anvar and Oseledets, Ivan },
title = { { Anatomical Positional Embeddings } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15010 },
month = {October},
pages = { pending },
} | We propose a self-supervised model producing 3D anatomical positional embeddings (APE) of individual medical image voxels. APE encodes voxels’ anatomical closeness, i.e., voxels of the same organ or nearby organs always have closer positional embeddings than the voxels of more distant body parts. In contrast to the existing models of anatomical positional embeddings, our method is able to efficiently produce a map of voxel-wise embeddings for a whole volumetric input image, which makes it an optimal choice for different downstream applications. We train our APE model on 8400 publicly available CT images of abdomen and chest regions. We demonstrate its superior performance compared with the existing models on anatomical landmark retrieval and weakly-supervised few-shot localization of 13 abdominal organs. As a practical application, we show how to cheaply train APE to crop raw CT images to different anatomical regions of interest with 0.99 recall, while reducing the image volume by 10-100 times. The code and the pre-trained APE model are available at https://github.com/mishgon/ape. | Anatomical Positional Embeddings | [
"Goncharov, Mikhail",
"Samokhin, Valentin",
"Soboleva, Eugenia",
"Sokolov, Roman",
"Shirokikh, Boris",
"Belyaev, Mikhail",
"Kurmukov, Anvar",
"Oseledets, Ivan"
] | Conference | 2409.10291 | [
"https://github.com/mishgon/ape"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 270 |
|
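The APE entry above uses voxel-wise anatomical embeddings for landmark retrieval. The sketch below shows the retrieval step as plain nearest-neighbour search over an embedding map; random arrays stand in for the embeddings a trained APE model would produce, and the function name and tensor shapes are assumptions of this sketch.

```python
import numpy as np


def retrieve_landmark(query_embedding, embedding_map):
    """Voxel in `embedding_map` whose embedding is closest to the query.

    embedding_map: (C, D, H, W) per-voxel embeddings of a target volume.
    query_embedding: (C,) embedding of the annotated voxel in a reference scan.
    Returns the (z, y, x) index of the best match.
    """
    c, d, h, w = embedding_map.shape
    flat = embedding_map.reshape(c, -1)                        # (C, D*H*W)
    dists = np.linalg.norm(flat - query_embedding[:, None], axis=0)
    return np.unravel_index(np.argmin(dists), (d, h, w))


# Stand-in embedding maps (a real APE model would produce these from CT volumes).
rng = np.random.default_rng(0)
target_map = rng.normal(size=(8, 16, 32, 32))
query = target_map[:, 5, 10, 20] + 0.01 * rng.normal(size=8)   # perturbed copy
print(retrieve_landmark(query, target_map))                    # -> (5, 10, 20)
```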
null | https://papers.miccai.org/miccai-2024/paper/3156_paper.pdf | @InProceedings{ Su_SelfPaced_MICCAI2024,
author = { Su, Junming and Shen, Zhiqiang and Cao, Peng and Yang, Jinzhu and Zaiane, Osmar R. },
title = { { Self-Paced Sample Selection for Barely-Supervised Medical Image Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | The existing barely-supervised medical image segmentation (BSS) methods, adopting a registration-segmentation paradigm, aim to learn from data with very few annotations to mitigate the extreme label scarcity problem.
However, this paradigm poses a challenge: pseudo-labels generated by image registration come with significant noise.
To address this issue, we propose a self-paced sample selection framework (SPSS) for BSS.
Specifically, SPSS comprises two main components: 1) self-paced uncertainty sample selection (SU) for explicitly improving the quality of pseudo labels in the image space, and 2) self-paced bidirectional feature contrastive learning (SC) for implicitly improving the quality of pseudo labels through enhancing the separability between class semantics in the feature space.
Both SU and SC are trained collaboratively in a self-paced learning manner, ensuring that SPSS can learn from high-quality pseudo labels for BSS.
Extensive experiments on two public medical image segmentation datasets demonstrate the effectiveness and superiority of SPSS over the state-of-the-art. | Self-Paced Sample Selection for Barely-Supervised Medical Image Segmentation | [
"Su, Junming",
"Shen, Zhiqiang",
"Cao, Peng",
"Yang, Jinzhu",
"Zaiane, Osmar R."
] | Conference | 2407.05248 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 271 |
|
null | https://papers.miccai.org/miccai-2024/paper/1648_paper.pdf | @InProceedings{ Lu_PathoTune_MICCAI2024,
author = { Lu, Jiaxuan and Yan, Fang and Zhang, Xiaofan and Gao, Yue and Zhang, Shaoting },
title = { { PathoTune: Adapting Visual Foundation Model to Pathological Specialists } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | As natural image understanding moves towards the pretrain-finetune era, research in pathology imaging is concurrently evolving. Despite the predominant focus on pretraining pathological foundation models, how to adapt foundation models to downstream tasks is little explored. For downstream adaptation, we propose the existence of two domain gaps, i.e., the Foundation-Task Gap and the Task-Instance Gap. To mitigate these gaps, we introduce PathoTune, a framework designed to efficiently adapt pathological or even visual foundation models to pathology-specific tasks via multi-modal prompt tuning. The proposed framework leverages Task-specific Visual Prompts and Task-specific Textual Prompts to identify task-relevant features, along with Instance-specific Visual Prompts for encoding single pathological image features. Results across multiple datasets at both patch-level and WSI-level demonstrate its superior performance over single-modality prompt tuning approaches. Significantly, PathoTune facilitates the direct adaptation of natural visual foundation models to pathological tasks, drastically outperforming pathological foundation models with simple linear probing. The code is available at https://github.com/openmedlab/PathoDuet. | PathoTune: Adapting Visual Foundation Model to Pathological Specialists | [
"Lu, Jiaxuan",
"Yan, Fang",
"Zhang, Xiaofan",
"Gao, Yue",
"Zhang, Shaoting"
] | Conference | 2403.16497 | [
"https://github.com/openmedlab/PathoDuet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 272 |
|
null | https://papers.miccai.org/miccai-2024/paper/2059_paper.pdf | @InProceedings{ Li_SelfSupervisedContrastive_MICCAI2024,
author = { Li, Junchi and Wan, Guojia and Liao, Minghui and Liao, Fei and Du, Bo },
title = { { Self-Supervised Contrastive Graph Views for Learning Neuron-level Circuit Network } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Learning Neuron-level Circuit Network can be used on automatic neuron classification and connection prediction, both of which are fundamental tasks for connectome reconstruction and deciphering brain functions. Traditional approaches to this learning process have relied on extensive neuron typing and labor-intensive proofread. In this paper, we introduce FlyGCL, a self-supervised learning approach designed to automatically learn neuron-level circuit networks, enabling the capture of the connectome’s topological feature. Specifically, we leverage graph augmentation methods to generate various contrastive graph views. The proposed method differentiates between positive and negative samples in these views, allowing it to encode the structural representation of neurons as adaptable latent features that can be used for downstream tasks such as neuron classification and connection prediction. To evaluate our method, we construct two new Neuron-level Circuit Network datasets, named HemiBrain-C and Manc-C, derived from the FlyEM project. Experimental results show that FlyGCL attains neuron classification accuracies of 73.8% and 57.4%, respectively, with >0.95 AUC in connection prediction tasks. Our code and data are available at GitHub https://github.com/mxz12119/FlyGCL. | Self-Supervised Contrastive Graph Views for Learning Neuron-level Circuit Network | [
"Li, Junchi",
"Wan, Guojia",
"Liao, Minghui",
"Liao, Fei",
"Du, Bo"
] | Conference | [
"https://github.com/mxz12119/FlyGCL"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 273 |
||
null | https://papers.miccai.org/miccai-2024/paper/3901_paper.pdf | @InProceedings{ Xu_Simultaneous_MICCAI2024,
author = { Xu, Yushen and Li, Xiaosong and Jie, Yuchan and Tan, Haishu },
title = { { Simultaneous Tri-Modal Medical Image Fusion and Super-Resolution using Conditional Diffusion Model } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | In clinical practice, tri-modal medical image fusion, compared to the existing dual-modal technique, can provide a more comprehensive view of the lesions, aiding physicians in evaluating the disease’s shape, location, and biological activity. However, due to the limitations of imaging equipment and considerations for patient safety, the quality of medical images is usually limited, leading to sub-optimal fusion performance, and affecting the depth of image analysis by the physician. Thus, there is an urgent need for a technology that can both enhance image resolution and integrate multi-modal information. Although current image processing methods can effectively address image fusion and super-resolution individually, solving both problems synchronously remains extremely challenging. In this paper, we propose TFS-Diff, a model that simultaneously realizes tri-modal medical image fusion and super-resolution. Specifically, TFS-Diff is based on diffusion model generation via a random iterative denoising process. We also develop a simple objective function; the proposed fusion super-resolution loss effectively evaluates the uncertainty in the fusion and ensures the stability of the optimization process. A channel attention module is also proposed to effectively integrate key information from different modalities for clinical diagnosis, avoiding information loss caused by multiple image processing. Extensive experiments on public Harvard datasets show that TFS-Diff significantly surpasses the existing state-of-the-art methods in both quantitative and visual evaluations. Code is available at https://github.com/XylonXu01/TFS-Diff. | Simultaneous Tri-Modal Medical Image Fusion and Super-Resolution using Conditional Diffusion Model | [
"Xu, Yushen",
"Li, Xiaosong",
"Jie, Yuchan",
"Tan, Haishu"
] | Conference | 2404.17357 | [
"https://github.com/XylonXu01/TFS-Diff"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 274 |
|
null | https://papers.miccai.org/miccai-2024/paper/0424_paper.pdf | @InProceedings{ Wol_Binary_MICCAI2024,
author = { Wolleb, Julia and Bieder, Florentin and Friedrich, Paul and Zhang, Peter and Durrer, Alicia and Cattin, Philippe C. },
title = { { Binary Noise for Binary Tasks: Masked Bernoulli Diffusion for Unsupervised Anomaly Detection } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | The high performance of denoising diffusion models for image generation has also paved the way for their application in unsupervised medical anomaly detection.
As diffusion-based methods require a lot of GPU memory and have long sampling times, we present a novel and fast unsupervised anomaly detection approach based on latent Bernoulli diffusion models. We first apply an autoencoder to compress the input images into a binary latent representation. Next, a diffusion model that follows a Bernoulli noise schedule is employed to this latent space and trained to restore binary latent representations from perturbed ones. The binary nature of this diffusion model allows us to identify entries in the latent space that have a high probability of flipping their binary code during the denoising process, which indicates out-of-distribution data. We propose a masking algorithm based on these probabilities, which improves the anomaly detection scores. We achieve state-of-the-art performance compared to other diffusion-based unsupervised anomaly detection algorithms while significantly reducing sampling time and memory consumption. The code is available at https://github.com/JuliaWolleb/Anomaly_berdiff. | Binary Noise for Binary Tasks: Masked Bernoulli Diffusion for Unsupervised Anomaly Detection | [
"Wolleb, Julia",
"Bieder, Florentin",
"Friedrich, Paul",
"Zhang, Peter",
"Durrer, Alicia",
"Cattin, Philippe C."
] | Conference | 2403.11667 | [
"https://github.com/JuliaWolleb/Anomaly_berdiff"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 275 |
|
null | https://papers.miccai.org/miccai-2024/paper/0208_paper.pdf | @InProceedings{ Min_Biomechanicsinformed_MICCAI2024,
author = { Min, Zhe and Baum, Zachary M. C. and Saeed, Shaheer Ullah and Emberton, Mark and Barratt, Dean C. and Taylor, Zeike A. and Hu, Yipeng },
title = { { Biomechanics-informed Non-rigid Medical Image Registration and its Inverse Material Property Estimation with Linear and Nonlinear Elasticity } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | This paper investigates both biomechanical-constrained non-rigid medical image registrations and accurate identifications of material properties for soft tissues, using physics-informed neural networks (PINNs). The complex nonlinear elasticity theory is leveraged to formally establish the partial differential equations (PDEs) representing physics laws of biomechanical constraints that need to be satisfied, with which registration and identification tasks are treated as forward (i.e., data-driven solutions of PDEs) and inverse (i.e., parameter estimation) problems under PINNs respectively. Two net configurations (i.e., Cfg1 and Cfg2) have also been compared for both linear and nonlinear physics model. Two sets of experiments have been conducted, using pairs of undeformed and deformed MR images from clinical cases of prostate cancer biopsy. | Biomechanics-informed Non-rigid Medical Image Registration and its Inverse Material Property Estimation with Linear and Nonlinear Elasticity | [
"Min, Zhe",
"Baum, Zachary M. C.",
"Saeed, Shaheer Ullah",
"Emberton, Mark",
"Barratt, Dean C.",
"Taylor, Zeike A.",
"Hu, Yipeng"
] | Conference | 2407.03292 | [
"https://github.com/zhemin-1992/registration_pinns"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 276 |
|
null | https://papers.miccai.org/miccai-2024/paper/2022_paper.pdf | @InProceedings{ Hu_DCoRP_MICCAI2024,
author = { Hu, Haoyu and Zhang, Hongrun and Li, Chao },
title = { { D-CoRP: Differentiable Connectivity Refinement for Functional Brain Networks } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | Brain network is an important tool for understanding the brain, offering insights for scientific research and clinical diagnosis. Existing models for brain networks typically primarily focus on brain regions or overlook the complexity of brain connectivities. MRI-derived brain network data is commonly susceptible to connectivity noise, underscoring the necessity of incorporating connectivities into the modeling of brain networks. To address this gap, we introduce a differentiable module for refining brain connectivity. We develop the multivariate optimization based on information bottleneck theory to address the complexity of the brain network and filter noisy or redundant connections. Also, our method functions as a flexible plugin that is adaptable to most graph neural networks. Our extensive experimental results show that the proposed method can significantly improve the performance of various baseline models and outperform other state-of-the-art methods, indicating the effectiveness and generalizability of the proposed method in refining brain network connectivity. The code is available at https://github.com/Fighting-HHY/D-CoRP. | D-CoRP: Differentiable Connectivity Refinement for Functional Brain Networks | [
"Hu, Haoyu",
"Zhang, Hongrun",
"Li, Chao"
] | Conference | 2405.18658 | [
"https://github.com/Fighting-HHY/D-CoRP"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 277 |
|
null | https://papers.miccai.org/miccai-2024/paper/0885_paper.pdf | @InProceedings{ Zha_Implicit_MICCAI2024,
author = { Zhang, Minghui and Zhang, Hanxiao and You, Xin and Yang, Guang-Zhong and Gu, Yun },
title = { { Implicit Representation Embraces Challenging Attributes of Pulmonary Airway Tree Structures } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | High-fidelity modeling of the pulmonary airway tree from CT scans is critical to preoperative planning. However, the granularity of CT scan resolutions and the intricate topologies limit the accuracy of manual or deep-learning-based delineation of airway structures, resulting in coarse representation accompanied by spike-like noises and disconnectivity issues. To address these challenges, we introduce a Deep Geometric Correspondence Implicit (DGCI) network that implicitly models airway tree structures in the continuous space rather than discrete voxel grids. DGCI first explores the intrinsic topological features shared within different airway cases on top of implicit neural representation(INR). Specifically, we establish a reversible correspondence flow to constrain the feature space of training shapes. Moreover, implicit geometric regularization is utilized to promote a smooth and high-fidelity representation of fine-scaled airway structures. By transcending voxel-based representation, DGCI acquires topological insights and integrates geometric regularization into INR, generating airway tree structures with state-of-the-art topological fidelity. Detailed evaluation results on the public dataset demonstrated the superiority of the DGCI in the scalable delineation of airways and downstream applications. Source codes can be found at: https://github.com/EndoluminalSurgicalVision-IMR/DGCI. | Implicit Representation Embraces Challenging Attributes of Pulmonary Airway Tree Structures | [
"Zhang, Minghui",
"Zhang, Hanxiao",
"You, Xin",
"Yang, Guang-Zhong",
"Gu, Yun"
] | Conference | [
"https://github.com/EndoluminalSurgicalVision-IMR/DGCI"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 278 |
||
null | https://papers.miccai.org/miccai-2024/paper/0770_paper.pdf | @InProceedings{ Cec_URCDM_MICCAI2024,
author = { Cechnicka, Sarah and Ball, James and Baugh, Matthew and Reynaud, Hadrien and Simmonds, Naomi and Smith, Andrew P.T. and Horsfield, Catherine and Roufosse, Candice and Kainz, Bernhard },
title = { { URCDM: Ultra-Resolution Image Synthesis in Histopathology } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Diagnosing medical conditions from histopathology data requires a thorough analysis across the various resolutions of Whole Slide Images (WSI). However, existing generative methods fail to consistently represent the hierarchical structure of WSIs due to a focus on high-fidelity patches. To tackle this, we propose Ultra-Resolution Cascaded Diffusion Models (URCDMs) which are capable of synthesising entire histopathology images at high resolutions whilst authentically capturing the details of both the underlying anatomy and pathology at all magnification levels. We evaluate our method on three separate datasets, consisting of brain, breast and kidney tissue, and surpass existing state-of-the-art multi-resolution models. Furthermore, an expert evaluation study was conducted, demonstrating that URCDMs consistently generate outputs across various resolutions that trained evaluators cannot distinguish from real images. All code and additional examples can be found on GitHub. | URCDM: Ultra-Resolution Image Synthesis in Histopathology | [
"Cechnicka, Sarah",
"Ball, James",
"Baugh, Matthew",
"Reynaud, Hadrien",
"Simmonds, Naomi",
"Smith, Andrew P.T.",
"Horsfield, Catherine",
"Roufosse, Candice",
"Kainz, Bernhard"
] | Conference | 2407.13277 | [
"https://github.com/scechnicka/URCDM"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 279 |
|
null | https://papers.miccai.org/miccai-2024/paper/2051_paper.pdf | @InProceedings{ Liu_Generating_MICCAI2024,
author = { Liu, Zeyu and Zhang, Tianyi and He, Yufang and Zhang, Guanglei },
title = { { Generating Progressive Images from Pathological Transitions via Diffusion Model } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Pathological image analysis is a crucial field in deep learning applications. However, training effective models demands large-scale annotated data, which faces challenges due to sampling and annotation scarcity. Rapidly developing generative models have shown the potential to generate more training samples in recent studies. However, they also struggle with generalization diversity when limited training data is available, making them incapable of generating effective samples. Inspired by pathological transitions between different stages, we propose an adaptive depth-controlled diffusion (ADD) network for effective data augmentation. This novel approach is rooted in domain migration, where a hybrid attention strategy blends local and global attention priorities. With feature measuring, the adaptive depth-controlled strategy guides the bidirectional diffusion. It simulates pathological feature transition and maintains locational similarity. Based on a tiny training set (samples ≤ 500), ADD yields cross-domain progressive images with corresponding soft labels. Experiments on two datasets suggest significant improvements in generation diversity, and the effectiveness of the generated progressive samples is highlighted in downstream classification tasks. | Generating Progressive Images from Pathological Transitions via Diffusion Model | [
"Liu, Zeyu",
"Zhang, Tianyi",
"He, Yufang",
"Zhang, Guanglei"
] | Conference | 2311.12316 | [
"https://github.com/Rowerliu/ADD"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 280 |
|
null | https://papers.miccai.org/miccai-2024/paper/1347_paper.pdf | @InProceedings{ Zha_Fundus2Video_MICCAI2024,
author = { Zhang, Weiyi and Huang, Siyu and Yang, Jiancheng and Chen, Ruoyu and Ge, Zongyuan and Zheng, Yingfeng and Shi, Danli and He, Mingguang },
title = { { Fundus2Video: Cross-Modal Angiography Video Generation from Static Fundus Photography with Clinical Knowledge Guidance } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | Fundus Fluorescein Angiography (FFA) is a critical tool for assessing retinal vascular dynamics and aiding in the diagnosis of eye diseases. However, its invasive nature and less accessibility compared to Color Fundus (CF) images pose significant challenges. Current CF to FFA translation methods are limited to static generation. In this work, we pioneer dynamic FFA video generation from static CF images. We introduce an autoregressive GAN for smooth, memory-saving frame-by-frame FFA synthesis. To enhance the focus on dynamic lesion changes in FFA regions, we design a knowledge mask based on clinical experience. Leveraging this mask, our approach integrates innovative knowledge mask-guided techniques, including knowledge-boosted attention, knowledge-aware discriminators, and mask-enhanced patchNCE loss, aimed at refining generation in critical areas and addressing the pixel misalignment challenge. Our method achieves the best FVD of 1503.21 and PSNR of 11.81 compared to other common video generation approaches. Human assessment by an ophthalmologist confirms its high generation quality. Notably, our knowledge mask surpasses supervised lesion segmentation masks, offering a promising non-invasive alternative to traditional FFA for research and clinical applications. The code is available at https://github.com/Michi-3000/Fundus2Video. | Fundus2Video: Cross-Modal Angiography Video Generation from Static Fundus Photography with Clinical Knowledge Guidance | [
"Zhang, Weiyi",
"Huang, Siyu",
"Yang, Jiancheng",
"Chen, Ruoyu",
"Ge, Zongyuan",
"Zheng, Yingfeng",
"Shi, Danli",
"He, Mingguang"
] | Conference | 2408.15217 | [
"https://github.com/Michi-3000/Fundus2Video"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 281 |
|
null | https://papers.miccai.org/miccai-2024/paper/2020_paper.pdf | @InProceedings{ Cai_BPaCo_MICCAI2024,
author = { Cai, Zhiyuan and Wei, Tianyunxi and Lin, Li and Chen, Hao and Tang, Xiaoying },
title = { { BPaCo: Balanced Parametric Contrastive Learning for Long-tailed Medical Image Classification } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | Medical image classification is an essential medical image analysis task. However, due to data scarcity of rare diseases in clinical scenarios, the acquired medical image datasets may exhibit long-tailed distributions. Previous works employ class re-balancing to address this issue, yet the representation is usually not discriminative enough. Inspired by contrastive learning’s power in representation learning, in this paper, we propose and validate a contrastive learning based framework, named Balanced Parametric Contrastive learning (BPaCo), to tackle long-tailed medical image classification. There are three key components in BPaCo: across-batch class-averaging to balance the gradient contribution from negative classes; hybrid class-complement to have all classes appear in every mini-batch for discriminative prototypes; cross-entropy logit compensation to formulate an end-to-end classification framework with even stronger feature representations. Our BPaCo shows outstanding classification performance and high computational efficiency on three highly-imbalanced medical image classification datasets. | BPaCo: Balanced Parametric Contrastive Learning for Long-tailed Medical Image Classification | [
"Cai, Zhiyuan",
"Wei, Tianyunxi",
"Lin, Li",
"Chen, Hao",
"Tang, Xiaoying"
] | Conference | [
"https://github.com/Davidczy/BPaCo"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 282 |
||
null | https://papers.miccai.org/miccai-2024/paper/0727_paper.pdf | @InProceedings{ Zho_ccRCC_MICCAI2024,
author = { Zhou, Huijian and Tian, Zhiqiang and Han, Xiangmin and Du, Shaoyi and Gao, Yue },
title = { { ccRCC Metastasis Prediction via Exploring High-Order Correlations on Multiple WSIs } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | Metastasis prediction based on gigapixel histopathology whole-slide images (WSIs) is crucial for early diagnosis and clinical decision-making of clear cell renal cell carcinoma (ccRCC). However, most existing methods focus on extracting task-related features from a single WSI, while ignoring the correlations among WSIs, which is important for metastasis prediction when a single patient has multiple pathological slides. In this case, we propose a multi-slice-based hypergraph computation (MSHGC) method for metastasis prediction, which considers the intra-correlations within a single WSI and cross-correlations among multiple WSIs of a single patient simultaneously. Specifically, intra-correlations are captured within both topology and semantic feature spaces, while cross-correlations are modeled between the patches from different WSIs. Finally, the attention mechanism is used to suppress the contribution of task-irrelevant patches and enhance the contribution of task-relevant patches. MSHGC achieves the C-index of 0.8441 and 0.8390 on two carcinoma datasets(namely H1 and H2), outperforming state-of-the-art methods, which demonstrates the effectiveness of the proposed MSHGC. | ccRCC Metastasis Prediction via Exploring High-Order Correlations on Multiple WSIs | [
"Zhou, Huijian",
"Tian, Zhiqiang",
"Han, Xiangmin",
"Du, Shaoyi",
"Gao, Yue"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 283 |
||
null | https://papers.miccai.org/miccai-2024/paper/0244_paper.pdf | @InProceedings{ He_OpenSet_MICCAI2024,
author = { He, Along and Li, Tao and Zhao, Yitian and Zhao, Junyong and Fu, Huazhu },
title = { { Open-Set Semi-Supervised Medical Image Classification with Learnable Prototypes and Outlier Filter } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Semi-supervised learning (SSL) offers a pragmatic approach to harnessing unlabeled data, particularly in contexts where annotation costs are prohibitively high. However, in practical clinical settings, unlabeled datasets inevitably encompass outliers that do not align with labeled classes, constituting what is known as open-set Semi-supervised learning (OSSL). While existing methods have shown promising results in domains such as natural image processing, they often overlook the nuanced characteristics intrinsic to medical images, rendering them less applicable in this domain.
In this work, we introduce a novel framework tailored for the nuanced challenges of open-set semi-supervised classification (OpenSSC) in medical imaging. OpenSSC comprises three integral components. Firstly, we propose the utilization of learnable prototypes to distill a compact representation of the fine-grained characteristics inherent in identified classes. Subsequently, a multi-binary discriminator is introduced to consolidate closed-set predictions and effectively delineate whether the sample belongs to its ground truth or not. Building upon these components, we present a joint outlier filter mechanism designed to classify known classes while discerning and identifying unknown classes within unlabeled datasets. Our proposed method demonstrates efficacy in handling open-set data.
Extensive experimentation validates the effectiveness of our approach, showcasing superior performance compared to existing state-of-the-art methods in two distinct medical image classification tasks. | Open-Set Semi-Supervised Medical Image Classification with Learnable Prototypes and Outlier Filter | [
"He, Along",
"Li, Tao",
"Zhao, Yitian",
"Zhao, Junyong",
"Fu, Huazhu"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 284 |
||
null | https://papers.miccai.org/miccai-2024/paper/0697_paper.pdf | @InProceedings{ Xu_PolypMamba_MICCAI2024,
author = { Xu, Zhongxing and Tang, Feilong and Chen, Zhe and Zhou, Zheng and Wu, Weishan and Yang, Yuyao and Liang, Yu and Jiang, Jiyu and Cai, Xuyue and Su, Jionglong },
title = { { Polyp-Mamba: Polyp Segmentation with Visual Mamba } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15008 },
month = {October},
pages = { pending },
} | Accurate segmentation of polyps is crucial for efficient colorectal cancer detection during the colonoscopy screenings. State Space Models, exemplified by Mamba, have recently emerged as a promising approach, excelling in long-range interaction modeling with linear computational complexity. However, previous methods do not consider the cross-scale dependencies of different pixels and the consistency in feature representations and semantic embedding, which are crucial for polyp segmentation. Therefore, we introduce Polyp-Mamba, a novel unified framework aimed at overcoming the above limitations by integrating multi-scale feature learning with semantic structure analysis. Specifically, our framework includes a Scale-Aware Semantic module that enables the embedding of multi-scale features from the encoder to achieve semantic information modeling across both intra- and inter-scales, rather than the single-scale approach employed in prior studies. Furthermore, the Global Semantic Injection module is deployed to inject scale-aware semantics into the corresponding decoder features, aiming to fuse global and local information and enhance pyramid feature representation. Experimental results across five challenging datasets and six metrics demonstrate that our proposed method not only surpasses state-of-the-art methods but also sets a new benchmark in the field, underscoring the Polyp-Mamba framework’s exceptional proficiency in the polyp segmentation tasks. | Polyp-Mamba: Polyp Segmentation with Visual Mamba | [
"Xu, Zhongxing",
"Tang, Feilong",
"Chen, Zhe",
"Zhou, Zheng",
"Wu, Weishan",
"Yang, Yuyao",
"Liang, Yu",
"Jiang, Jiyu",
"Cai, Xuyue",
"Su, Jionglong"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 285 |
||
null | https://papers.miccai.org/miccai-2024/paper/1458_paper.pdf | @InProceedings{ Li_CacheDriven_MICCAI2024,
author = { Li, Xiang and Fang, Huihui and Wang, Changmiao and Liu, Mingsi and Duan, Lixin and Xu, Yanwu },
title = { { Cache-Driven Spatial Test-Time Adaptation for Cross-Modality Medical Image Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Test-Time Adaptation (TTA) shows promise for addressing the domain gap between source and target modalities in medical image segmentation methods. Furthermore, TTA enables the model to quickly fine-tune itself during testing, enabling it to adapt to the continuously evolving data distribution in the medical clinical environment. Consequently, we introduce Spatial Test-Time Adaptation (STTA), for the first time considering the integration of inter-slice spatial information from 3D volumes with TTA. The continuously changing distribution of slice data in the target domain can lead to error accumulation and catastrophic forgetting. To tackle these challenges, we first propose reducing error accumulation by using an ensemble of multi-head predictions based on data augmentation. Secondly, for pixels with unreliable pseudo-labels, regularization is applied through entropy minimization on the ensemble of predictions from multiple heads. Finally, to prevent catastrophic forgetting, we suggest using a cache mechanism during testing to restore neuron weights from the source pre-trained model, thus effectively preserving source knowledge. The proposed STTA has been bidirectionally validated across modalities in abdominal multi-organ and brain tumor datasets, achieving a relative increase of approximately 13\% in the Dice value in the best-case scenario compared to SOTA methods. The code is available at: https://github.com/lixiang007666/STTA. | Cache-Driven Spatial Test-Time Adaptation for Cross-Modality Medical Image Segmentation | [
"Li, Xiang",
"Fang, Huihui",
"Wang, Changmiao",
"Liu, Mingsi",
"Duan, Lixin",
"Xu, Yanwu"
] | Conference | [
"https://github.com/lixiang007666/STTA"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 286 |
||
null | https://papers.miccai.org/miccai-2024/paper/1793_paper.pdf | @InProceedings{ Zha_DTCA_MICCAI2024,
author = { Zhang, Xiaoshan and Shi, Enze and Yu, Sigang and Zhang, Shu },
title = { { DTCA: Dual-Branch Transformer with Cross-Attention for EEG and Eye Movement Data Fusion } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | The integration of EEG and eye movements (EM) provides a comprehensive understanding of brain dynamics, yet effectively capturing key information from EEG and EM presents challenges. To overcome these, we propose DTCA, a novel multimodal fusion framework. It encodes EEG and EM data into a latent space, leveraging a multimodal fusion module to learn the facilitative information and dynamic relationships between EEG and EM data. Utilizing cross-attention with pooling computation, DTCA captures the complementary features and aggregates promoted information. Extensive experiments on multiple open datasets show that DTCA outperforms previous state-of-the-art methods: 99.15% on SEED, 99.65% on SEED-IV, and 86.05% on SEED-V datasets. We also visualize confusion matrices and features to demonstrate how DTCA works. Our findings demonstrate that (1) EEG and EM effectively distinguish changes in brain states during tasks such as watching videos. (2) Encoding EEG and EM into a latent space for fusion facilitates learning promoted information and dynamic relationships associated with brain states. (3) DTCA efficiently fuses EEG and EM data to leverage their synergistic effects in understanding the brain’s dynamic processes and classifying brain states. | DTCA: Dual-Branch Transformer with Cross-Attention for EEG and Eye Movement Data Fusion | [
"Zhang, Xiaoshan",
"Shi, Enze",
"Yu, Sigang",
"Zhang, Shu"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 287 |
||
null | https://papers.miccai.org/miccai-2024/paper/0906_paper.pdf | @InProceedings{ Liu_MOST_MICCAI2024,
author = { Liu, Xinyu and Chen, Zhen and Yuan, Yixuan },
title = { { MOST: Multi-Formation Soft Masking for Semi-Supervised Medical Image Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | In semi-supervised medical image segmentation (SSMIS), existing methods typically impose consistency or contrastive regularizations under basic data and network perturbations, and individually segment each voxel/pixel in the image. In fact, a dominating issue in medical scans is the intrinsic ambiguous regions due to unclear boundary and expert variability, whose segmentation requires the information in spatially nearby regions. Thus, these existing works are limited in data variety and tend to overlook the ability of inferring ambiguous regions with contextual information. To this end, we present Multi-Formation Soft Masking (MOST), a simple framework that effectively boosts SSMIS by learning spatial context relations with data regularity conditions. It first applies multi-formation function to enhance the data variety and perturbation space via partitioning and upsampling. Afterwards, each unlabeled data is soft-masked and is constrained to give invariant predictions as the original data. Therefore, the model is encouraged to infer ambiguous regions via varied granularities of contextual information conditions. Despite its simplicity, MOST achieves state-of-the-art performance on four common SSMIS benchmarks. Code and models will be released. | MOST: Multi-Formation Soft Masking for Semi-Supervised Medical Image Segmentation | [
"Liu, Xinyu",
"Chen, Zhen",
"Yuan, Yixuan"
] | Conference | [
"https://github.com/CUHK-AIM-Group/MOST-SSL4MIS"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 288 |
||
null | https://papers.miccai.org/miccai-2024/paper/1517_paper.pdf | @InProceedings{ Ma_Weakly_MICCAI2024,
author = { Ma, Qiang and Li, Liu and Robinson, Emma C. and Kainz, Bernhard and Rueckert, Daniel },
title = { { Weakly Supervised Learning of Cortical Surface Reconstruction from Segmentations } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Existing learning-based cortical surface reconstruction approaches heavily rely on the supervision of pseudo ground truth (pGT) cortical surfaces for training. Such pGT surfaces are generated by traditional neuroimage processing pipelines, which are time consuming and difficult to generalize well to low-resolution brain MRI, e.g., from fetuses and neonates. In this work, we present CoSeg, a learning-based cortical surface reconstruction framework weakly supervised by brain segmentations without the need for pGT surfaces. CoSeg introduces temporal attention networks to learn time-varying velocity fields from brain MRI for diffeomorphic surface deformations, which fit an initial surface to target cortical surfaces within only 0.11 seconds for each brain hemisphere. A weakly supervised loss is designed to reconstruct pial surfaces by inflating the white surface along the normal direction towards the boundary of the cortical gray matter segmentation. This alleviates partial volume effects and encourages the pial surface to deform into deep and challenging cortical sulci. We evaluate CoSeg on 1,113 adult brain MRI at 1mm and 2mm resolution. CoSeg achieves superior geometric and morphological accuracy compared to existing learning-based approaches. We also verify that CoSeg can extract high-quality cortical surfaces from fetal brain MRI on which traditional pipelines fail to produce acceptable results. | Weakly Supervised Learning of Cortical Surface Reconstruction from Segmentations | [
"Ma, Qiang",
"Li, Liu",
"Robinson, Emma C.",
"Kainz, Bernhard",
"Rueckert, Daniel"
] | Conference | 2406.12650 | [
"https://github.com/m-qiang/CoSeg"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 289 |
|
null | https://papers.miccai.org/miccai-2024/paper/2877_paper.pdf | @InProceedings{ Zha_Lost_MICCAI2024,
author = { Zhao, Yidong and Zhang, Yi and Simonetti, Orlando and Han, Yuchi and Tao, Qian },
title = { { Lost in Tracking: Uncertainty-guided Cardiac Cine MRI Segmentation at Right Ventricle Base } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | Accurate biventricular segmentation of cardiac magnetic resonance (CMR) cine images is essential for the clinical evaluation of heart function. Deep-learning-based methods have achieved highly accurate segmentation performance, however, compared to left ventricle (LV), right ventricle (RV) segmentation is still more challenging and less reproducible. The degenerated performance frequently occurs at the RV base, where the in-plane anatomical structures are complex (with atria, valve, and aorta), and varying due to the strong inter-planar motion. In this work, we propose to tackle the currently unsolved issues in CMR segmentation, specifically at the RV base, with two strategies: first, we complemented the public resource by re-annotating the RV base in the ACDC dataset, with refined delineation of the right ventricle outflow tract (RVOT), under the guidance of an expert cardiologist. Second, we proposed a novel Dual-Encoder U-Net architecture that leverages temporal incoherence to inform the segmentation when inter-planar motions occur. The inter-planar motion is characterized by loss-of-tracking, via Bayesian uncertainty of a motion-tracking model. Our experiments showed that our method significantly improved the RV base segmentation by taking temporal incoherence into account. Additionally, we investigated the reproducibility of deep-learning-based segmentation and showed that the combination of consistent annotation and loss-of-tracking could enhance RV segmentation reproducibility, potentially facilitating a large number of clinical studies focusing on RV. | Lost in Tracking: Uncertainty-guided Cardiac Cine MRI Segmentation at Right Ventricle Base | [
"Zhao, Yidong",
"Zhang, Yi",
"Simonetti, Orlando",
"Han, Yuchi",
"Tao, Qian"
] | Conference | 2410.03320 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 290 |
|
null | https://papers.miccai.org/miccai-2024/paper/2733_paper.pdf | @InProceedings{ Jin_Debiased_MICCAI2024,
author = { Jin, Ruinan and Deng, Wenlong and Chen, Minghui and Li, Xiaoxiao },
title = { { Debiased Noise Editing on Foundation Models for Fair Medical Image Classification } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15010 },
month = {October},
pages = { pending },
} | In the era of Foundation Models’ (FMs) rising prominence in AI, our study addresses the challenge of biases in medical images while the model operates in black-box (e.g., using FM API), particularly spurious correlations between pixels and sensitive attributes. Traditional methods for bias mitigation face limitations due to the restricted access to web-hosted FMs and difficulties in addressing the underlying bias encoded within the FM API. We propose a D(ebiased) N(oise) E(diting) strategy, termed DNE, which generates DNE to mask such spurious correlation. DNE is capable of mitigating bias both within the FM API embedding and the images themselves. Furthermore, DNE is suitable for both white-box and black-box FM APIs, where we introduced G(reedy) (Z)eroth-order) Optimization (GeZO) for it when the gradient is inaccessible in black-box APIs. Our whole pipeline enables fairness-aware image editing that can be applied across various medical contexts without requiring direct model manipulation or significant computational resources. Our empirical results demonstrate the method’s effectiveness in maintaining fairness and utility across different patient groups and diseases. In the era of AI-driven medicine, this work contributes to making healthcare diagnostics more equitable, showcasing a practical solution for bias mitigation in pre-trained image FMs. | Debiased Noise Editing on Foundation Models for Fair Medical Image Classification | [
"Jin, Ruinan",
"Deng, Wenlong",
"Chen, Minghui",
"Li, Xiaoxiao"
] | Conference | 2403.06104 | [
"https://github.com/ubc-tea/DNE-foundation-model-fairness"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 291 |
|
null | https://papers.miccai.org/miccai-2024/paper/3324_paper.pdf | @InProceedings{ Lia_Overcoming_MICCAI2024,
author = { Liang, Qinghao and Adkinson, Brendan D. and Jiang, Rongtao and Scheinost, Dustin },
title = { { Overcoming Atlas Heterogeneity in Federated Learning for Cross-site Connectome-based Predictive Modeling } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15010 },
month = {October},
pages = { pending },
} | Data-sharing in neuroimaging research alleviates the cost and time constraints of collecting large sample sizes at a single location, aiding the development of foundational models with deep learning. Yet, challenges to data sharing, such as data privacy, ownership, and regulatory compliance, exist. Federated learning enables collaborative training across sites while addressing many of these concerns. Connectomes are a promising data type for data sharing and creating foundational models. Yet, the field lacks a single, standardized atlas for constructing connectomes. Connectomes are incomparable between these atlases, limiting the utility of connectomes in federated learning. Further, fully reprocessing raw data in a single pipeline is not a solution when sample sizes range in the 10–100’s of thousands. Dedicated frameworks are needed to efficiently harmonize previously processed connectomes from various atlases for federated learning. We present Federated Learning for Existing Connectomes from Heterogeneous Atlases (FLECHA) to address these challenges. FLECHA learns a mapping between atlas spaces on an independent dataset, enabling the transformation of connectomes to a common target space before federated learning. We assess FLECHA using functional and structural connectomes processed with five atlases from the Human Connectome Project. Our results show improved prediction performance for FLECHA. They also demonstrate the potential of FLECHA to generalize connectome-based models across diverse silos, potentially enhancing the application of deep learning in neuroimaging. | Overcoming Atlas Heterogeneity in Federated Learning for Cross-site Connectome-based Predictive Modeling | [
"Liang, Qinghao",
"Adkinson, Brendan D.",
"Jiang, Rongtao",
"Scheinost, Dustin"
] | Conference | [
"https://github.com/qinghaoliang/Federated-learning_across_atlases"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 292 |
||
null | https://papers.miccai.org/miccai-2024/paper/1495_paper.pdf | @InProceedings{ Lee_COVID19_MICCAI2024,
author = { Lee, Jong Bub and Kim, Jung Soo and Lee, Hyun Gyu },
title = { { COVID19 to Pneumonia: Multi Region Lung Severity Classification using CNN Transformer Position-Aware Feature Encoding Network } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | This study investigates utilizing chest X-ray (CXR) data from COVID-19 patients for classifying pneumonia severity, aiming to enhance prediction accuracy in COVID-19 datasets and achieve robust classification across diverse pneumonia cases. A novel CNN-Transformer hybrid network has been developed, leveraging position-aware features and Region Shared MLPs for integrating lung region information. This improves adaptability to different spatial resolutions and scores, addressing the subjectivity of severity assessment due to unclear clinical measurements. The model shows significant improvement in pneumonia severity classification for both COVID-19 and heterogeneous pneumonia datasets. Its adaptable structure allows seamless integration with various backbone models, leading to continuous performance improvement and potential clinical applications, particularly in intensive care units. | COVID19 to Pneumonia: Multi Region Lung Severity Classification using CNN Transformer Position-Aware Feature Encoding Network | [
"Lee, Jong Bub",
"Kim, Jung Soo",
"Lee, Hyun Gyu"
] | Conference | [
"https://github.com/blind4635/Multi-Region-Lung-Severity-PAFE"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 293 |
||
null | https://papers.miccai.org/miccai-2024/paper/3866_paper.pdf | @InProceedings{ Thr_TESSL_MICCAI2024,
author = { Thrasher, Jacob and Devkota, Alina and Tafti, Ahmad P. and Bhattarai, Binod and Gyawali, Prashnna and the Alzheimer’s Disease Neuroimaging Initiative },
title = { { TE-SSL: Time and Event-aware Self Supervised Learning for Alzheimer’s Disease Progression Analysis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | Alzheimer’s Disease (AD) represents one of the most pressing challenges in the field of neurodegenerative disorders, with its progression analysis being crucial for understanding disease dynamics and developing targeted interventions. Recent advancements in deep learning and various representation learning strategies, including self-supervised learning (SSL), have shown significant promise in enhancing medical image analysis, providing innovative ways to extract meaningful patterns from complex data. Notably, the computer vision literature has demonstrated that incorporating supervisory signals into SSL can further augment model performance by guiding the learning process with additional relevant information. However, the application of such supervisory signals in the context of disease progression analysis remains largely unexplored. This gap is particularly pronounced given the inherent challenges of incorporating both event and time-to-event information into the learning paradigm. Addressing this, we propose a novel framework, Time and Event-aware SSL (TE-SSL), which integrates time-to-event and event data as supervisory signals to refine the learning process. Our comparative analysis with existing SSL-based methods in the downstream task of survival analysis shows superior performance across standard metrics. | TE-SSL: Time and Event-aware Self Supervised Learning for Alzheimer’s Disease Progression Analysis | [
"Thrasher, Jacob",
"Devkota, Alina",
"Tafti, Ahmad P.",
"Bhattarai, Binod",
"Gyawali, Prashnna",
"the Alzheimer’s Disease Neuroimaging Initiative"
] | Conference | [
"https://github.com/jacob-thrasher/TE-SSL"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 294 |
||
null | https://papers.miccai.org/miccai-2024/paper/1426_paper.pdf | @InProceedings{ Kim_SSYNTH_MICCAI2024,
author = { Kim, Andrea and Saharkhiz, Niloufar and Sizikova, Elena and Lago, Miguel and Sahiner, Berkman and Delfino, Jana and Badano, Aldo },
title = { { S-SYNTH: Knowledge-Based, Synthetic Generation of Skin Images } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | Development of artificial intelligence (AI) techniques in medical imaging requires access to large-scale and diverse datasets for training and evaluation. In dermatology, obtaining such datasets remains challenging due to significant variations in patient populations, illumination conditions, and acquisition system characteristics. In this work, we propose S-SYNTH, the first knowledge-based, adaptable open-source skin simulation framework to rapidly generate synthetic skin, 3D models and digitally rendered images, using an anatomically inspired multi-layer, multi-component skin and growing lesion model. The skin model allows for controlled variation in skin appearance, such as skin color, presence of hair, lesion shape, and blood fraction among other parameters. We use this framework to study the effect of possible variations on the development and evaluation of AI models for skin lesion segmentation, and show that results obtained using synthetic data follow similar comparative trends as real dermatologic images, while mitigating biases and limitations from existing datasets including small dataset size, lack of diversity, and underrepresentation. | S-SYNTH: Knowledge-Based, Synthetic Generation of Skin Images | [
"Kim, Andrea",
"Saharkhiz, Niloufar",
"Sizikova, Elena",
"Lago, Miguel",
"Sahiner, Berkman",
"Delfino, Jana",
"Badano, Aldo"
] | Conference | 2408.00191 | [
"https://github.com/DIDSR/ssynth-release"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Oral | 295 |
|
null | https://papers.miccai.org/miccai-2024/paper/2219_paper.pdf | @InProceedings{ Yun_RegionSpecific_MICCAI2024,
author = { Yung, Ka-Wai and Sivaraj, Jayaram and Stoyanov, Danail and Loukogeorgakis, Stavros and Mazomenos, Evangelos B. },
title = { { Region-Specific Retrieval Augmentation for Longitudinal Visual Question Answering: A Mix-and-Match Paradigm } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | Visual Question Answering (VQA) has advanced in recent years, inspiring adaptations to radiology for medical diagnosis. Longitudinal VQA, which requires an understanding of changes in images over time, can further support patient monitoring and treatment decision making. This work introduces RegioMix, a retrieval augmented paradigm for longitudinal VQA, formulating a novel approach that generates retrieval objects through a mix-and-match technique, utilizing different regions from various retrieved images. Furthermore, this process generates a pseudo-difference description based on the retrieved pair, by leveraging available reports from each retrieved region. To align such statements to both the posted question and input image pair, we introduce a Dual Alignment module. Experiments on the MIMIC-Diff-VQA X-ray dataset demonstrate our method’s superiority, outperforming the state-of-the-art by 77.7 in CIDEr score and 8.3% in BLEU-4, while relying solely on the training dataset for retrieval, showcasing the effectiveness of our approach. Code is available at https://github.com/KawaiYung/RegioMix | Region-Specific Retrieval Augmentation for Longitudinal Visual Question Answering: A Mix-and-Match Paradigm | [
"Yung, Ka-Wai",
"Sivaraj, Jayaram",
"Stoyanov, Danail",
"Loukogeorgakis, Stavros",
"Mazomenos, Evangelos B."
] | Conference | [
"https://github.com/KawaiYung/RegioMix"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 296 |
||
null | https://papers.miccai.org/miccai-2024/paper/0050_paper.pdf | @InProceedings{ Xu_FMABS_MICCAI2024,
author = { Xu, Zhe and Chen, Cheng and Lu, Donghuan and Sun, Jinghan and Wei, Dong and Zheng, Yefeng and Li, Quanzheng and Tong, Raymond Kai-yu },
title = { { FM-ABS: Promptable Foundation Model Drives Active Barely Supervised Learning for 3D Medical Image Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15008 },
month = {October},
pages = { pending },
} | Semi-supervised learning (SSL) has significantly advanced 3D medical image segmentation by effectively reducing the need for laborious dense labeling from radiologists. Traditionally focused on model-centric advancements, we anticipate that the SSL landscape will shift due to the emergence of open-source generalist foundation models, e.g., Segment Anything Model (SAM). These generalists have shown remarkable zero-shot segmentation capabilities with manual prompts, allowing a promising data-centric perspective for future SSL, particularly in pseudo and expert labeling strategies for enhancing the data pool. To this end, we propose the Foundation Model-driven Active Barely Supervised (FM-ABS) learning paradigm for developing customized 3D specialist segmentation models with shoestring annotation budgets, i.e., merely labeling three slices per scan. Specifically, building upon the basic mean-teacher framework, FM-ABS accounts for the intrinsic characteristics of 3D imaging and modernizes the SSL paradigm with two key data-centric designs: (i) specialist-generalist collaboration where the in-training specialist model delivers class-specific prompts to interact with the frozen class-agnostic generalist model across multiple views to acquire noisy-yet-effective pseudo labels, and (ii) expert-model collaboration that advocates active cross-labeling with notably low annotation efforts to progressively provide the specialist model with informative and efficient supervision in a human-in-the-loop manner, which benefits the automatic object-specific prompt generation in turn. Extensive experiments on two benchmark datasets show the promising results of our approach over recent SSL methods under extremely limited (barely) labeling budgets. | FM-ABS: Promptable Foundation Model Drives Active Barely Supervised Learning for 3D Medical Image Segmentation | [
"Xu, Zhe",
"Chen, Cheng",
"Lu, Donghuan",
"Sun, Jinghan",
"Wei, Dong",
"Zheng, Yefeng",
"Li, Quanzheng",
"Tong, Raymond Kai-yu"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 297 |
||
null | https://papers.miccai.org/miccai-2024/paper/0320_paper.pdf | @InProceedings{ Mia_FMOSD_MICCAI2024,
author = { Miao, Juzheng and Chen, Cheng and Zhang, Keli and Chuai, Jie and Li, Quanzheng and Heng, Pheng-Ann },
title = { { FM-OSD: Foundation Model-Enabled One-Shot Detection of Anatomical Landmarks } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | One-shot detection of anatomical landmarks is gaining significant attention for its efficiency in using minimal labeled data to produce promising results. However, the success of current methods heavily relies on the employment of extensive unlabeled data to pre-train an effective feature extractor, which limits their applicability in scenarios where a substantial amount of unlabeled data is unavailable. In this paper, we propose the first foundation model-enabled one-shot landmark detection (FM-OSD) framework for accurate landmark detection in medical images by utilizing solely a single template image without any additional unlabeled data. Specifically, we use the frozen image encoder of visual foundation models as the feature extractor, and introduce dual-branch global and local feature decoders to increase the resolution of extracted features in a coarse to fine manner. The introduced feature decoders are efficiently trained with a distance-aware similarity learning loss to incorporate domain knowledge from the single template image. Moreover, a novel bidirectional matching strategy is developed to improve both robustness and accuracy of landmark detection in the case of scattered similarity map obtained by foundation models. We validate our method on two public anatomical landmark detection datasets. By using solely a single template image, our method demonstrates significant superiority over strong state-of-the-art one-shot landmark detection methods. | FM-OSD: Foundation Model-Enabled One-Shot Detection of Anatomical Landmarks | [
"Miao, Juzheng",
"Chen, Cheng",
"Zhang, Keli",
"Chuai, Jie",
"Li, Quanzheng",
"Heng, Pheng-Ann"
] | Conference | 2407.05412 | [
"https://github.com/JuzhengMiao/FM-OSD"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 298 |
|
null | https://papers.miccai.org/miccai-2024/paper/3688_paper.pdf | @InProceedings{ Wan_Adaptive_MICCAI2024,
author = { Wang, Xinkai and Shi, Yonggang },
title = { { Adaptive Subtype and Stage Inference for Alzheimer’s Disease } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | Subtype and Stage Inference (SuStaIn) is a useful Event-based Model for capturing both the temporal and the phenotypical patterns for any progressive disorders, which is essential for understanding the heterogeneous nature of such diseases. However, this model cannot capture subtypes with different progression rates with respect to predefined biomarkers with fixed events prior to inference. Therefore, we propose an adaptive algorithm for learning subtype-specific events while making subtype and stage inference. We use simulation to demonstrate the improvement with respect to various performance metrics. Finally, we provide snapshots of different levels of biomarker abnormality within different subtypes on Alzheimer’s Disease (AD) data to demonstrate the effectiveness of our algorithm. | Adaptive Subtype and Stage Inference for Alzheimer’s Disease | [
"Wang, Xinkai",
"Shi, Yonggang"
] | Conference | [
"https://github.com/x5wang/Adaptive-Subtype-and-Stage-Inference"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 299 |