Update README.md
README.md
CHANGED
@@ -165,24 +165,20 @@ LLMs evaluated: [LLaMA](https://huggingface.co/meta-llama/Llama-2-13b), [PMC-LL
## Citation

-If you use MedExpQA
+If you use MedExpQA then please **cite the following paper**:

```bibtex
-@
-author={Iakes Goenaga and Aitziber Atutxa and Koldo Gojenola and Maite Oronoz and Rodrigo Agerri},
-year={2023},
-eprint={2312.00567},
-archivePrefix={arXiv}
+@article{ALONSO2024102938,
+    title = {MedExpQA: Multilingual benchmarking of Large Language Models for Medical Question Answering},
+    journal = {Artificial Intelligence in Medicine},
+    pages = {102938},
+    year = {2024},
+    issn = {0933-3657},
+    doi = {https://doi.org/10.1016/j.artmed.2024.102938},
+    url = {https://www.sciencedirect.com/science/article/pii/S0933365724001805},
+    author = {Iñigo Alonso and Maite Oronoz and Rodrigo Agerri},
+    keywords = {Large Language Models, Medical Question Answering, Multilinguality, Retrieval Augmented Generation, Natural Language Processing},
+    abstract = {Large Language Models (LLMs) have the potential of facilitating the development of Artificial Intelligence technology to assist medical experts for interactive decision support. This potential has been illustrated by the state-of-the-art performance obtained by LLMs in Medical Question Answering, with striking results such as passing marks in licensing medical exams. However, while impressive, the required quality bar for medical applications remains far from being achieved. Currently, LLMs remain challenged by outdated knowledge and by their tendency to generate hallucinated content. Furthermore, most benchmarks to assess medical knowledge lack reference gold explanations which means that it is not possible to evaluate the reasoning of LLMs predictions. Finally, the situation is particularly grim if we consider benchmarking LLMs for languages other than English which remains, as far as we know, a totally neglected topic. In order to address these shortcomings, in this paper we present MedExpQA, the first multilingual benchmark based on medical exams to evaluate LLMs in Medical Question Answering. To the best of our knowledge, MedExpQA includes for the first time reference gold explanations, written by medical doctors, of the correct and incorrect options in the exams. Comprehensive multilingual experimentation using both the gold reference explanations and Retrieval Augmented Generation (RAG) approaches show that performance of LLMs, with best results around 75 accuracy for English, still has large room for improvement, especially for languages other than English, for which accuracy drops 10 points. Therefore, despite using state-of-the-art RAG methods, our results also demonstrate the difficulty of obtaining and integrating readily available medical knowledge that may positively impact results on downstream evaluations for Medical Question Answering. Data, code, and fine-tuned models will be made publicly available: https://huggingface.co/datasets/HiTZ/MedExpQA.}
}
```