---
license: cc-by-4.0
language:
- en
- es
- fr
- it
tags:
- casimedicos
- explainability
- medical exams
- medical question answering
- multilinguality
- LLMs
- LLM
pretty_name: MedExpQA
configs:
- config_name: en
data_files:
- split: train
path:
- data/en/train.en.casimedicos.rag.jsonl
- split: validation
path:
- data/en/dev.en.casimedicos.rag.jsonl
- split: test
path:
- data/en/test.en.casimedicos.rag.jsonl
- config_name: es
data_files:
- split: train
path:
- data/es/train.es.casimedicos.rag.jsonl
- split: validation
path:
- data/es/dev.es.casimedicos.rag.jsonl
- split: test
path:
- data/es/test.es.casimedicos.rag.jsonl
- config_name: fr
data_files:
- split: train
path:
- data/fr/train.fr.casimedicos.rag.jsonl
- split: validation
path:
- data/fr/dev.fr.casimedicos.rag.jsonl
- split: test
path:
- data/fr/test.fr.casimedicos.rag.jsonl
- config_name: it
data_files:
- split: train
path:
- data/it/train.it.casimedicos.rag.jsonl
- split: validation
path:
- data/it/dev.it.casimedicos.rag.jsonl
- split: test
path:
- data/it/test.it.casimedicos.rag.jsonl
task_categories:
- text-generation
- question-answering
size_categories:
- 1K<n<10K
---
<p align="center">
<br>
<img src="http://www.ixa.eus/sites/default/files/anitdote.png" style="height: 200px;">
<br>
</p>

# MedExpQA: Multilingual Benchmarking of Medical QA with reference gold explanations and Retrieval Augmented Generation (RAG)
We present MedExpQA, a new multilingual parallel medical benchmark for the evaluation of LLMs on Medical Question Answering.
This dataset can be used for various NLP tasks, including **Medical Question Answering**, **Explanatory Argument Extraction**, and **Explanation Generation**.
Although the design of MedExpQA is independent of any specific dataset, the first version of the benchmark leverages the commented MIR exams
from the [Antidote CasiMedicos dataset which includes gold reference explanations](https://huggingface.co/datasets/HiTZ/casimedicos-exp), which is currently
available in 4 languages: **English, French, Italian and Spanish**.
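The benchmark can be loaded per language configuration with the 🤗 `datasets` library. A minimal sketch, assuming the repository id `HiTZ/MedExpQA` (inferred from this dataset card; adjust it if your copy lives elsewhere):

```python
# Minimal sketch: load one language configuration of MedExpQA with the
# Hugging Face `datasets` library. The repository id below is an assumption
# based on this card; replace it with the actual dataset id if it differs.
from datasets import load_dataset

medexpqa_en = load_dataset("HiTZ/MedExpQA", "en")  # configs: "en", "es", "fr", "it"

print(medexpqa_en)                       # DatasetDict with train/validation/test splits
print(medexpqa_en["train"][0].keys())    # attributes of a single document
```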
<table style="width:33%">
  <tr>
    <th colspan="2">Antidote CasiMedicos splits</th>
  </tr>
  <tr>
    <td>train</td>
    <td>434</td>
  </tr>
  <tr>
    <td>validation</td>
    <td>63</td>
  </tr>
  <tr>
    <td>test</td>
    <td>125</td>
  </tr>
</table>
- 📖 Paper: [MedExpQA: Multilingual Benchmarking of Large Language Models for Medical Question Answering](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4780937)
- 💻 Github Repo (Data and Code): [https://github.com/hitz-zentroa/MedExpQA](https://github.com/hitz-zentroa/MedExpQA)
- 🌐 Project Website: [https://univ-cotedazur.eu/antidote](https://univ-cotedazur.eu/antidote)
- Funding: CHIST-ERA XAI 2019 call. Antidote (PCI2020-120717-2) funded by MCIN/AEI /10.13039/501100011033 and by European Union NextGenerationEU/PRTR
## Example
<p align="center">
<img src="https://github.com/ixa-ehu/antidote-casimedicos/blob/main/casimedicos-exp.png?raw=true" style="height: 650px;">
</p>
In this repository you can find the following data:
- **casimedicos-raw**: The textual content including Clinical Case (C), Question (Q), Possible Answers (P), and Explanation (E) as shown in the example above.
- **casimedicos-exp**: The manual annotations linking the explanations of the correct and incorrect possible answers.
- **MedExpQA**: benchmark for Medical QA based on gold reference explanations from casimedicos-exp and knowledge automatically extracted using RAG methods.
## Data Explanation
The following attributes compose **casimedicos-raw**:
- **id**: unique doc identifier.
- **year**: year in which the exam was published by the Spanish Ministry of Health.
- **question_id_specific**: id given to the original exam published by the Spanish Ministry of Health.
- **full_question**: Clinical Case (C) and Question (Q) as illustrated in the example document above.
- **full answer**: Full commented explanation (E) as illustrated in the example document above.
- **type**: medical speciality.
- **options**: Possible Answers (P) as illustrated in the example document above.
- **correct option**: solution to the exam question.
Additionally, the following jsonl attribute was added to create **casimedicos-exp**:
- **explanations**: for each possible answer above, the manual annotation states whether:
1. the explanation for that possible answer exists in the full comment (E), and
2. if present, the character and token offsets plus the text corresponding to the explanation for that possible answer are provided.
For **MedExpQA** benchmarking we have added the following elements to the data (see the JSONL inspection sketch after this list):
- **rag**
1. **clinical_case_options**: etc.
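As a quick sanity check, the JSONL files can also be read directly. A minimal sketch, assuming the file layout declared in the configs above; the exact key spellings (e.g. `full_answer` vs `full answer`) may differ slightly from the descriptions in this section, so inspect `doc.keys()` first:

```python
# Minimal sketch: read the first English training document straight from the
# JSONL file and inspect the attributes documented above. Field names are
# taken from this card; verify the exact key spellings against doc.keys().
import json

with open("data/en/train.en.casimedicos.rag.jsonl", encoding="utf-8") as f:
    doc = json.loads(f.readline())

print(sorted(doc.keys()))      # attributes actually present in the release
print(doc["full_question"])    # Clinical Case (C) and Question (Q)
print(doc["options"])          # Possible Answers (P)
print(doc["explanations"])     # per-option explanation annotations (casimedicos-exp)
```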
## Citation
If you use the Antidote CasiMedicos dataset, please **cite the following papers**:
```bibtex
@inproceedings{Agerri2023HiTZAntidoteAE,
title={HiTZ@Antidote: Argumentation-driven Explainable Artificial Intelligence for Digital Medicine},
author={Rodrigo Agerri and I{\~n}igo Alonso and Aitziber Atutxa and Ander Berrondo and Ainara Estarrona and Iker Garc{\'i}a-Ferrero and Iakes Goenaga and Koldo Gojenola and Maite Oronoz and Igor Perez-Tejedor and German Rigau and Anar Yeginbergenova},
booktitle={SEPLN 2023: 39th International Conference of the Spanish Society for Natural Language Processing.},
year={2023}
}
@misc{goenaga2023explanatory,
title={Explanatory Argument Extraction of Correct Answers in Resident Medical Exams},
author={Iakes Goenaga and Aitziber Atutxa and Koldo Gojenola and Maite Oronoz and Rodrigo Agerri},
year={2023},
eprint={2312.00567},
archivePrefix={arXiv}
}
```
**Contact**: [Iñigo Alonso](https://hitz.ehu.eus/en/node/282) and [Rodrigo Agerri](https://ragerri.github.io/)
HiTZ Center - Ixa, University of the Basque Country UPV/EHU