InMD-X: Large Language Models for Internal Medicine Doctors

We introduce InMD-X, a collection of multiple large language models specifically designed to cater to the unique characteristics and demands of Internal Medicine Doctors (IMD). InMD-X represents a groundbreaking development in natural language processing, offering a suite of language models fine-tuned for various aspects of the internal medicine field. These models encompass a wide range of medical sub-specialties, enabling IMDs to perform more efficient and accurate research, diagnosis, and documentation. InMD-X’s versatility and adaptability make it a valuable tool for improving the healthcare industry, enhancing communication between healthcare professionals, and advancing medical research. Each model within InMD-X is meticulously tailored to address specific challenges faced by IMDs, ensuring the highest level of precision and comprehensiveness in clinical text analysis and decision support.

(This model card is for the UROLOGY & NEPHROLOGY subspecialty.)

Uses

import torch
import transformers
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

peft_model_id = "InMedData/InMD-X-URO"
config = PeftConfig.from_pretrained(peft_model_id)

# Load the base model in 8-bit along with its tokenizer
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    return_dict=True,
    load_in_8bit=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Apply the LoRA adapter to the base model
model = PeftModel.from_pretrained(model, peft_model_id)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    device_map="auto",  # if you have a GPU
)

def inference(pipeline, question):
    sequences = pipeline(
        "Answer the next question in one sentence.\n" + question,
        do_sample=True,
        top_k=10,
        top_p=0.9,
        temperature=0.2,
        num_return_sequences=1,
        eos_token_id=tokenizer.eos_token_id,
        max_length=500,  # increase for longer outputs
    )
    # Strip the prompt so only the generated answer remains
    answers = []
    for seq in sequences:
        answer = seq["generated_text"].split(question)[-1].replace("\n", "")
        answers.append(answer)
    return answers

question = "What is the association between long-term beta-blocker use after myocardial infarction (MI) and the risk of reinfarction and death?"
answers = inference(pipeline, question)
print(answers)

LoRA configuration

Both the pre-training (PT) and supervised fine-tuning (SFT) stages use Parameter-Efficient Fine-Tuning (PEFT).

Parameter       PT                          SFT
r               8                           8
lora_alpha      32                          32
lora_dropout    0.05                        0.05
target modules  q, k, v, o, up, down, gate  q, k, v, o, up, down, gate
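
For reference, these values map onto a peft LoraConfig roughly as sketched below. The target_modules names assume a Llama-style base architecture (an assumption, not confirmed by this card); check them against the actual projection-layer names of the base model.

from peft import LoraConfig

# Minimal sketch of the adapter configuration from the table above.
# Module names assume a Llama-style base model; adjust to the real
# architecture if the base checkpoint differs.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "up_proj", "down_proj", "gate_proj"],
    task_type="CAUSAL_LM",
)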

Training arguments

Based on Transformer Reinforcement Learning (TRL).

Parameter                    PT        SFT
num_train_epochs             3         1
per_device_train_batch_size  1         1
optim                        adamw_hf  adamw_hf
evaluation_strategy          no        no
learning_rate                1e-4      1e-4
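
As a rough sketch, the SFT stage could be reproduced with TRL's SFTTrainer using the arguments above. BASE_MODEL and train_dataset below are placeholders; the actual base checkpoint and training corpus are not specified in this card.

from transformers import AutoModelForCausalLM, TrainingArguments
from trl import SFTTrainer

# BASE_MODEL and train_dataset are placeholders for the (unspecified)
# base checkpoint and training corpus.
base_model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

training_args = TrainingArguments(
    output_dir="outputs",
    num_train_epochs=1,            # SFT column of the table above
    per_device_train_batch_size=1,
    optim="adamw_hf",
    evaluation_strategy="no",
    learning_rate=1e-4,
)

trainer = SFTTrainer(
    model=base_model,
    args=training_args,
    train_dataset=train_dataset,
    dataset_text_field="text",     # assumes a single text column
    peft_config=lora_config,       # LoraConfig from the sketch above
)
trainer.train()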

Experimental setup

  • Ubuntu 22.04.3 LTS
  • GPU: NVIDIA A100 (40 GB)
  • Python: 3.10.12
  • PyTorch: 2.1.1+cu118
  • Transformers: 4.37.0.dev0
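
A quick way to confirm a comparable environment (a convenience snippet, not part of the original card):

import sys
import torch
import transformers

print(sys.version.split()[0])    # expect 3.10.12
print(torch.__version__)         # expect 2.1.1+cu118
print(transformers.__version__)  # expect 4.37.0.dev0
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # expect an NVIDIA A100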

Limitations

InMD-X currently consists of a collection of subspecialty-specific models. These models have not yet been integrated into a single system, so each operates as a separate, fragmented component. In the absence of suitable benchmarks, the individual models have also not been adequately evaluated. Future work will focus on developing new benchmarks and integrating the models to enable objective evaluation.

Non-commercial use

These models are available exclusively for research purposes and are not intended for commercial use.

INMED DATA

INMED DATA is developing large language models (LLMs) specifically tailored for medical applications. For more information, please visit our website [TBD].

