---
license: cc-by-4.0
language:
- en
- es
- fr
- it
tags:
- casimedicos
- explainability
- medical exams
- medical question answering
- multilinguality
- LLMs
- LLM
pretty_name: MedExpQA
configs:
- config_name: en
  data_files:
  - split: train
    path:
    - data/en/train.en.casimedicos.rag.jsonl
  - split: validation
    path:
    - data/en/dev.en.casimedicos.rag.jsonl
  - split: test
    path:
    - data/en/test.en.casimedicos.rag.jsonl
- config_name: es
  data_files:
  - split: train
    path:
    - data/es/train.es.casimedicos.rag.jsonl
  - split: validation
    path:
    - data/es/dev.es.casimedicos.rag.jsonl
  - split: test
    path:
    - data/es/test.es.casimedicos.rag.jsonl
- config_name: fr
  data_files:
  - split: train
    path:
    - data/fr/train.fr.casimedicos.rag.jsonl
  - split: validation
    path:
    - data/fr/dev.fr.casimedicos.rag.jsonl
  - split: test
    path:
    - data/fr/test.fr.casimedicos.rag.jsonl
- config_name: it
  data_files:
  - split: train
    path:
    - data/it/train.it.casimedicos.rag.jsonl
  - split: validation
    path:
    - data/it/dev.it.casimedicos.rag.jsonl
  - split: test
    path:
    - data/it/test.it.casimedicos.rag.jsonl
task_categories:
- text-generation
- question-answering
size_categories:
- 1K<n<10K
---

# MedExpQA: Multilingual Benchmarking of Medical QA with reference gold explanations and Retrieval Augmented Generation (RAG)

We present MedExpQA, a new multilingual parallel medical benchmark for the evaluation of LLMs on Medical Question Answering. The benchmark can be used for various NLP tasks, including **Medical Question Answering** and **Explanation Generation**. Although the design of MedExpQA is independent of any specific dataset, this first version of the benchmark leverages the commented MIR exams from the [Antidote CasiMedicos dataset, which includes gold reference explanations](https://huggingface.co/datasets/HiTZ/casimedicos-exp) and is currently available in 4 languages: **English, French, Italian and Spanish**.
Antidote CasiMedicos splits:

| Split      | Documents |
|------------|-----------|
| train      | 434       |
| validation | 63        |
| test       | 125       |
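The benchmark can be loaded with the Hugging Face `datasets` library; a minimal sketch, assuming only the config and split names declared in the YAML header above:

```python
from datasets import load_dataset

# Pick one of the four language configs: "en", "es", "fr" or "it".
medexpqa_en = load_dataset("HiTZ/MedExpQA", "en")

# Splits follow the table above: train (434), validation (63), test (125).
print(medexpqa_en)
print(medexpqa_en["train"][0])  # one commented exam document
```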
- 📖 Paper: [MedExpQA: Multilingual Benchmarking of Large Language Models for Medical Question Answering](https://doi.org/10.1016/j.artmed.2024.102938)
- 💻 GitHub Repo (Data and Code): [https://github.com/hitz-zentroa/MedExpQA](https://github.com/hitz-zentroa/MedExpQA)
- 🌐 Project Website: [https://univ-cotedazur.eu/antidote](https://univ-cotedazur.eu/antidote)
- Funding: CHIST-ERA XAI 2019 call. Antidote (PCI2020-120717-2) funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR

## Example of Document in Antidote CasiMedicos Dataset
*(Figure: an example CasiMedicos document, with the Clinical Case (C), Question (Q), Possible Answers (P) and Explanation (E) segments marked.)*
In this repository you can find the following data:

- **casimedicos-raw**: the textual content, including Clinical Case (C), Question (Q), Possible Answers (P), and Explanation (E), as shown in the example above.
- **casimedicos-exp**: the manual annotations linking the explanations to the correct and incorrect possible answers.
- **MedExpQA**: the benchmark for Medical QA based on the gold reference explanations from casimedicos-exp and on knowledge automatically extracted using RAG methods.

## Data Explanation

The following attributes compose **casimedicos-raw**:

- **id**: unique document identifier.
- **year**: year in which the exam was published by the Spanish Ministry of Health.
- **question_id_specific**: id of the question in the original exam published by the Spanish Ministry of Health.
- **full_question**: Clinical Case (C) and Question (Q), as illustrated in the example document above.
- **full_answer**: full commented explanation (E), as illustrated in the example document above.
- **type**: medical speciality.
- **options**: Possible Answers (P), as illustrated in the example document above.
- **correct_option**: solution to the exam question.

Additionally, the following jsonl attribute was added to create **casimedicos-exp**:

- **explanations**: for each possible answer above, a manual annotation states whether: 1. the explanation for that possible answer exists in the full comment (E), and 2. if present, the character and token offsets plus the text corresponding to the explanation for that possible answer.

For **MedExpQA** benchmarking we have added the following elements to the data (see the sketches after this list):

- **rag**:
  1. **clinical_case_options/MedCorp/RRF-2**: 32 snippets extracted from the MedCorp corpus using the combination of _clinical case_ and _options_ as the query during retrieval. These 32 snippets are the Reciprocal Rank Fusion (RRF) combination of two separately retrieved rankings of 32 snippets each, one obtained with BM25 and one with MedCPT.
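A minimal sketch of how one record and these fields can be inspected; the file path is one of the jsonl files from the YAML header, and the field names follow the description above:

```python
import json

# Read the first record of the English training file (path from the YAML header).
with open("data/en/train.en.casimedicos.rag.jsonl", encoding="utf-8") as f:
    doc = json.loads(f.readline())

print(doc["full_question"])    # Clinical Case (C) + Question (Q)
print(doc["options"])          # Possible Answers (P)
print(doc["correct_option"])   # gold answer
print(doc["explanations"])     # casimedicos-exp annotations, one per option

# The 32 snippets retrieved from MedCorp and fused with RRF:
snippets = doc["rag"]["clinical_case_options/MedCorp/RRF-2"]
print(len(snippets))           # 32
```

For reference, Reciprocal Rank Fusion merges ranked lists by scoring each item with the sum of 1/(k + rank) over the lists being fused; a generic sketch (k = 60 is a common default, not necessarily the value used in the paper):

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Merge several ranked lists of ids into a single RRF-ranked list."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Toy example fusing a BM25 ranking with a MedCPT ranking:
bm25_ranking = ["s3", "s1", "s7", "s2"]
medcpt_ranking = ["s1", "s7", "s2", "s5"]
print(reciprocal_rank_fusion([bm25_ranking, medcpt_ranking])[:32])
```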
## MedExpQA Benchmark Overview

*(Figure: overview of the MedExpQA benchmark design, grounding LLMs either on the gold reference explanations from casimedicos-exp or on knowledge retrieved with RAG.)*
## Prompt Example for LLMs
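The card originally illustrates the prompt with an image; the following is a minimal, illustrative sketch of how such a prompt could be assembled from the fields above. The template wording is an assumption, not the exact one used in the paper:

```python
def build_prompt(doc, snippets=None):
    """Assemble a QA prompt from one MedExpQA record.

    Illustrative template only; assumes `options` maps option numbers to
    answer texts and that retrieved snippets are (or can be cast to) text.
    """
    options = "\n".join(
        f"{num}. {text}" for num, text in doc["options"].items() if text
    )
    context = ""
    if snippets:  # optionally ground the question on retrieved knowledge
        context = "Context:\n" + "\n".join(str(s) for s in snippets) + "\n\n"
    return (
        f"{context}{doc['full_question']}\n\n"
        f"Options:\n{options}\n\n"
        "Answer with the number of the correct option."
    )

# Zero-grounding prompt:
# prompt = build_prompt(doc)
# RAG-grounded prompt using the snippets shipped with the dataset:
# prompt = build_prompt(doc, doc["rag"]["clinical_case_options/MedCorp/RRF-2"][:10])
```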

## Benchmark Results (averaged per type of external knowledge for grounding)

LLMs evaluated: [LLaMA](https://huggingface.co/meta-llama/Llama-2-13b), [PMC-LLaMA](https://huggingface.co/axiong/PMC_LLaMA_13B), [Mistral](https://huggingface.co/mistralai/Mistral-7B-v0.1) and [BioMistral](https://huggingface.co/BioMistral/BioMistral-7B-DARE). The best results reach around 75 accuracy points for English, with accuracy dropping around 10 points for the other languages; see the paper for the full per-model results.

## Citation

If you use MedExpQA then please **cite the following paper**:

```bibtex
@article{ALONSO2024102938,
  title = {MedExpQA: Multilingual benchmarking of Large Language Models for Medical Question Answering},
  journal = {Artificial Intelligence in Medicine},
  pages = {102938},
  year = {2024},
  issn = {0933-3657},
  doi = {https://doi.org/10.1016/j.artmed.2024.102938},
  url = {https://www.sciencedirect.com/science/article/pii/S0933365724001805},
  author = {Iñigo Alonso and Maite Oronoz and Rodrigo Agerri},
  keywords = {Large Language Models, Medical Question Answering, Multilinguality, Retrieval Augmented Generation, Natural Language Processing},
  abstract = {Large Language Models (LLMs) have the potential of facilitating the development of Artificial Intelligence technology to assist medical experts for interactive decision support. This potential has been illustrated by the state-of-the-art performance obtained by LLMs in Medical Question Answering, with striking results such as passing marks in licensing medical exams. However, while impressive, the required quality bar for medical applications remains far from being achieved. Currently, LLMs remain challenged by outdated knowledge and by their tendency to generate hallucinated content. Furthermore, most benchmarks to assess medical knowledge lack reference gold explanations which means that it is not possible to evaluate the reasoning of LLMs predictions. Finally, the situation is particularly grim if we consider benchmarking LLMs for languages other than English which remains, as far as we know, a totally neglected topic. In order to address these shortcomings, in this paper we present MedExpQA, the first multilingual benchmark based on medical exams to evaluate LLMs in Medical Question Answering. To the best of our knowledge, MedExpQA includes for the first time reference gold explanations, written by medical doctors, of the correct and incorrect options in the exams. Comprehensive multilingual experimentation using both the gold reference explanations and Retrieval Augmented Generation (RAG) approaches show that performance of LLMs, with best results around 75 accuracy for English, still has large room for improvement, especially for languages other than English, for which accuracy drops 10 points. Therefore, despite using state-of-the-art RAG methods, our results also demonstrate the difficulty of obtaining and integrating readily available medical knowledge that may positively impact results on downstream evaluations for Medical Question Answering. Data, code, and fine-tuned models are made publicly available at https://huggingface.co/datasets/HiTZ/MedExpQA.}
}
```

**Contact**: [Iñigo Alonso](https://hitz.ehu.eus/en/node/282) and [Rodrigo Agerri](https://ragerri.github.io/)

HiTZ Center - Ixa, University of the Basque Country UPV/EHU