Files changed (1)
  1. README.md +22 -2
README.md CHANGED
@@ -49,7 +49,7 @@ We highly recommend against the use of this model in a live environment without
  ### Model Sources
 
  - **Trainer:** [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)
- - **Paper:** Awaiting acceptance (*[Impact of High-Quality, Mixed-Domain Data on the Performance of Medical Language Models](#)*)
+ - **Paper:** [Impact of High-Quality, Mixed-Domain Data on the Performance of Medical Language Models](https://doi.org/10.1093/jamia/ocae120)
 
  ## Uses
 
@@ -140,4 +140,24 @@ We include benchmarks on MedQA (4 options), MedMCQA and PubMedQA of our model an
 
  \*: PMC LLaMA 7b performed poorly on the benchmark, likely due to a formatting mismatch and a lack of instruction tuning; we include in parentheses the results reported by the authors when available.
 
  \*\*: Meditron 7b's results on MMLU are reported for transparency but are inconsistent with the average of 54.2 reported in their paper; please share the per-category results so we can update the table.
+
+ ## Citation
+
+ **BibTeX:**
+ If you use Internist.ai 7b, please cite us:
+ ```
+ @article{10.1093/jamia/ocae120,
+ author = {Griot, Maxime and Hemptinne, Coralie and Vanderdonckt, Jean and Yuksel, Demet},
+ title = "{Impact of high-quality, mixed-domain data on the performance of medical language models}",
+ journal = {Journal of the American Medical Informatics Association},
+ pages = {ocae120},
+ year = {2024},
+ month = {05},
+ abstract = "{To optimize the training strategy of large language models for medical applications, focusing on creating clinically relevant systems that efficiently integrate into healthcare settings, while ensuring high standards of accuracy and reliability. We curated a comprehensive collection of high-quality, domain-specific data and used it to train several models, each with different subsets of this data. These models were rigorously evaluated against standard medical benchmarks, such as the USMLE, to measure their performance. Furthermore, for a thorough effectiveness assessment, they were compared with other state-of-the-art medical models of comparable size. The models trained with a mix of high-quality, domain-specific, and general data showed superior performance over those trained on larger, less clinically relevant datasets (P \\< .001). Our 7-billion-parameter model Med5 scores 60.5\\% on MedQA, outperforming the previous best of 49.3\\% from comparable models, and becomes the first of its size to achieve a passing score on the USMLE. Additionally, this model retained its proficiency in general domain tasks, comparable to state-of-the-art general domain models of similar size. Our findings underscore the importance of integrating high-quality, domain-specific data in training large language models for medical purposes. The balanced approach between specialized and general data significantly enhances the model’s clinical relevance and performance. This study sets a new standard in medical language models, proving that a strategically trained, smaller model can outperform larger ones in clinical relevance and general proficiency, highlighting the importance of data quality and expert curation in generative artificial intelligence for healthcare applications.}",
+ issn = {1527-974X},
+ doi = {10.1093/jamia/ocae120},
+ url = {https://doi.org/10.1093/jamia/ocae120},
+ eprint = {https://academic.oup.com/jamia/advance-article-pdf/doi/10.1093/jamia/ocae120/57845903/ocae120.pdf},
+ }
+ ```