ABR-Finetuned-BioGPT
Many LLMs have strong general capabilities, but domain-specific knowledge is often more important. To fill this gap, we took the BioGPT model, pretrained on a large corpus of biomedical abstracts, and fine-tuned it on antibiotic resistance (ABR) abstracts. To move closer to a ChatGPT-like assistant, several question-answering datasets were also used during fine-tuning.
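A minimal usage sketch with Hugging Face Transformers, assuming the model keeps the standard BioGPT causal-LM layout; the repository ID below is a hypothetical placeholder and should be replaced with this model's actual Hub path.

```python
# Minimal sketch: load the fine-tuned model for text generation.
# The repo ID is a placeholder, not this model's confirmed Hub path.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "your-username/ABR-Finetuned-BioGPT"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
prompt = "What mechanisms allow bacteria to resist beta-lactam antibiotics?"
print(generator(prompt, max_new_tokens=100)[0]["generated_text"])
```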
Data Trained On
Medical Domain Data
- qiaojin/PubMedQA
- Amirkid/MedQuad-dataset
- PubMed ABR abstracts
- medalpaca/medical_meadow_medical_flashcards
- medalpaca/medical_meadow_wikidoc
General Question Answering
- tatsu-lab/alpaca
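These QA datasets come in different schemas, so they are typically flattened into plain text before causal-LM fine-tuning. The sketch below shows one way an Alpaca-style record could be turned into an instruction prompt; the prompt template is an assumption for illustration, not the exact preprocessing used for this model.

```python
# Illustrative preprocessing sketch (assumed template, not the confirmed pipeline).
# tatsu-lab/alpaca records carry "instruction", "input", and "output" fields.
from datasets import load_dataset

def to_prompt(example):
    # Flatten a QA record into a single text string for causal-LM fine-tuning.
    if example["input"]:
        text = (f"Question: {example['instruction']}\n"
                f"Context: {example['input']}\n"
                f"Answer: {example['output']}")
    else:
        text = f"Question: {example['instruction']}\nAnswer: {example['output']}"
    return {"text": text}

alpaca = load_dataset("tatsu-lab/alpaca", split="train")
alpaca = alpaca.map(to_prompt)
print(alpaca[0]["text"])
```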