---
license: apache-2.0
datasets:
- PULSE-ECG/ECGInstruct
- PULSE-ECG/ECGBench
language:
- en
pipeline_tag: image-text-to-text
tags:
- medical
---

# PULSE-7B

Model for the paper "Teach Multimodal LLMs to Comprehend Electrocardiographic Images".

- 🌐 Project Page: [https://aimedlab.github.io/PULSE/](https://aimedlab.github.io/PULSE/)
- 📄 Paper: [https://arxiv.org/abs/2410.19008](https://arxiv.org/abs/2410.19008)
- 🧑‍💻 Code: [https://github.com/AIMedLab/PULSE](https://github.com/AIMedLab/PULSE)
- 👩‍⚕️ ECGInstruct (Training): [https://huggingface.co/datasets/PULSE-ECG/ECGInstruct](https://huggingface.co/datasets/PULSE-ECG/ECGInstruct)
- ⚖️ ECGBench (Testing): [https://huggingface.co/datasets/PULSE-ECG/ECGBench](https://huggingface.co/datasets/PULSE-ECG/ECGBench)

## Introduction

We introduce **PULSE-7B**, a multimodal large language model (MLLM) specifically designed for ECG image interpretation. Leveraging the comprehensive **ECGInstruct** dataset, which contains over one million instruction-tuning samples, PULSE-7B is tailored to handle a wide range of ECG-related tasks drawn from diverse data sources. While traditional ECG interpretation methods are often constrained by their reliance on raw physiological signals and limited to specific cardiac conditions, PULSE-7B addresses these limitations by enabling robust interpretation of both printed and digital ECG images, making it especially valuable in resource-limited settings where access to raw signals may be restricted. Together with **ECGBench**, a benchmark covering four key tasks across nine datasets, our experiments demonstrate that PULSE-7B establishes new state-of-the-art performance, surpassing general MLLMs with an average accuracy improvement of 15% to 30%. This model showcases the potential to significantly advance ECG image interpretation, providing a more versatile and accurate tool for clinical practice.

Overall performance of PULSE-7B on ECGBench:

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/640701cb4dc5f2846c91d4eb/_WI6DO6sjY1SsHF8vn0ZF.jpeg)

## Model Performance

### In-domain

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/640701cb4dc5f2846c91d4eb/KmFO7LZpj2K-ASszAdlMF.jpeg)

### Out-of-domain

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/640701cb4dc5f2846c91d4eb/DHXAJt-mrNNtrPOCVWZBC.jpeg)

## Case Study

*(ECG image case study examples)*

## Citation

If you find this work helpful, please cite our paper:

```
@article{liu2024teach,
  title={Teach Multimodal LLMs to Comprehend Electrocardiographic Images},
  author={Ruoqi Liu and Yuelin Bai and Xiang Yue and Ping Zhang},
  journal={arXiv preprint arXiv:2410.19008},
  year={2024}
}
```
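
## Example Usage

The snippet below is a minimal inference sketch, assuming the checkpoint is published as `PULSE-ECG/PULSE-7B` and loads through the standard LLaVA integration in Hugging Face `transformers` (`LlavaForConditionalGeneration`); the prompt template and the image path `ecg_example.png` are illustrative placeholders. If the checkpoint instead requires the original LLaVA-style codebase, use the inference scripts in the GitHub repository linked above.

```python
# Minimal inference sketch (assumption: the checkpoint is LLaVA-compatible in
# transformers; otherwise, follow the inference scripts in the linked repo).
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "PULSE-ECG/PULSE-7B"  # assumed Hugging Face model id

processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Placeholder path to a printed or digital ECG image.
image = Image.open("ecg_example.png").convert("RGB")

# Illustrative LLaVA-style prompt; adapt the template to the one used in training.
prompt = "USER: <image>\nPlease interpret this ECG image and report any abnormalities. ASSISTANT:"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(
    model.device, torch.float16
)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```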