Antidote Project Collection
Data and models generated within the Antidote Project (https://univ-cotedazur.eu/antidote)
We provide a Mistral-7B model fine-tuned on MedExpQA, the first multilingual benchmark for Medical Question Answering that includes reference gold explanations.
The model was fine-tuned on the clinical case and question together with retrieval-augmented (RAG) context obtained automatically from the MedCorp corpus using the MedRAG method with 32 snippets. Given this input, the model generates a prediction of the correct answer to the multiple-choice exam. It has been evaluated in four languages: English, French, Italian and Spanish.
For details about fine-tuning and evaluation, please check the paper; see the repository for usage instructions.
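As a rough sketch of how the model could be loaded and queried with the transformers library (the model identifier and prompt template below are placeholders, not the exact format used for fine-tuning; please refer to the repository for the actual usage):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical model ID: replace with the actual Hugging Face repository name.
model_id = "your-org/mistral7b-medexpqa"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Input follows the setup described above: clinical case + question +
# retrieved snippets (RAG context). The exact prompt template is an assumption.
prompt = (
    "Clinical case: <clinical case text>\n"
    "Question: <multiple-choice question>\n"
    "Options: A) ... B) ... C) ... D) ...\n"
    "Context: <retrieved snippets>\n"
    "Answer:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)

# Decode only the newly generated tokens (the predicted answer).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```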
If you use MedExpQA data, please cite the following paper:
@misc{alonso2024medexpqa,
  title={MedExpQA: Multilingual Benchmarking of Large Language Models for Medical Question Answering},
  author={Iñigo Alonso and Maite Oronoz and Rodrigo Agerri},
  year={2024},
  eprint={2404.05590},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}