
LLaMA-2-7B Chat - AI Medical Chatbot

Model Overview

This model is a fine-tuned version of NousResearch/Llama-2-7b-chat-hf on the AI Medical Chatbot dataset, which consists of medical question-answer pairs. It is designed to generate accurate, respectful responses that offer helpful guidance on common medical questions.

Fine-tuning used LoRA (Low-Rank Adaptation) via the PEFT library, combined with quantization, so the model stays efficient while producing high-quality, contextually relevant outputs. This parameter-efficient approach keeps the adapter lightweight and accessible for a broad range of applications.
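
As a rough illustration of the LoRA + PEFT setup, the snippet below shows how adapters could be attached to the base model before fine-tuning. The rank, alpha, dropout, and target modules are assumptions; the card does not list the exact values used.

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Attach LoRA adapters to the base Llama-2 chat model with PEFT.
# All adapter hyperparameters below are illustrative assumptions.
base = AutoModelForCausalLM.from_pretrained("NousResearch/Llama-2-7b-chat-hf")
lora_config = LoraConfig(
    r=16,                                 # assumed adapter rank
    lora_alpha=32,                        # assumed scaling factor
    lora_dropout=0.05,                    # assumed dropout
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the low-rank adapters are trainable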

Model Details

  • Model Type: LLaMA-2 7B (Chat)
  • Base Model: NousResearch/Llama-2-7b-chat-hf
  • Fine-tuned on: AI Medical Chatbot
  • Parameter Count: 7 Billion
  • Quantization: Applied for efficient inference
  • Training Methodology: LoRA + PEFT
  • Task: Medical Question Answering (QA)

Model Purpose

This model was fine-tuned to assist users in answering medical questions, particularly common healthcare concerns such as symptoms, treatments, and remedies. It is not intended to replace professional medical consultation but can be used to offer informative, general responses that can help guide users toward appropriate care.

Intended Use

  • Primary Use Case: Medical question-answering chatbot
  • Intended Users: Developers or healthcare professionals seeking a chatbot interface for initial user engagement or educational purposes.
  • Environment: Can be deployed in environments where lightweight models are necessary, thanks to quantization and PEFT optimization (see the loading sketch after this list).
  • Primary Applications: Healthcare chatbots, medical assistants, online medical question-answering systems.
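
The card notes that quantization is applied for efficient inference but does not name the scheme. The sketch below shows one common option, loading the base model in 4-bit with bitsandbytes before attaching the adapter; it is an assumption, not the exact configuration used for this model.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Illustrative 4-bit loading for memory-constrained deployment.
# The quantization settings here are assumptions, not the card's exact setup.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Llama-2-7b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "MassMin/llama2_ai_medical_chatbot")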

Example Inputs

  • User: "What are the common treatments for a sore throat?"

  • Model: "For a sore throat, common treatments include rest, hydration, throat lozenges, warm saltwater gargles, and over-the-counter pain relief medications. If the sore throat persists, you should consult a doctor as it may indicate a bacterial infection."

  • User: "Can you suggest a homeopathic remedy for hair loss?"

  • Model: "Homeopathic remedies like Arnica, Phosphorus, and Silicea are often recommended for hair loss. However, it's important to consult a homeopathic practitioner for a tailored treatment."

Training Dataset

  • Dataset: AI Medical Chatbot
    • This dataset contains a wide variety of medical queries and their corresponding answers, covering symptoms, diagnoses, treatments, and remedies (a loading sketch follows).
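
The card does not give the dataset's Hub identifier, so the identifier and column names below are placeholders. This is only a minimal sketch of how such a question-answer dataset could be loaded and formatted into Llama-2 chat prompts for supervised fine-tuning.

from datasets import load_dataset

# "your-namespace/ai-medical-chatbot" is a placeholder -- substitute the real
# dataset identifier. The "question"/"answer" column names are also assumptions.
dataset = load_dataset("your-namespace/ai-medical-chatbot", split="train")

def to_prompt(example):
    # Wrap each Q/A pair in the Llama-2 chat template used at inference time.
    return {"text": f"<s>[INST] {example['question']} [/INST] {example['answer']} </s>"}

dataset = dataset.map(to_prompt)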

Training Process

The model was trained using the following setup; an illustrative configuration sketch follows the list.

  • Optimizer: AdamW
  • Batch Size: 2
  • Gradient Accumulation: 4 steps
  • Learning Rate: 2e-4
  • Max Steps: 5000
  • Epochs: 500 (with early stopping)
  • Quantization: Applied for memory efficiency
  • LoRA: Used for parameter-efficient fine-tuning
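
As a minimal sketch, the hyperparameters listed above map onto transformers TrainingArguments roughly as follows. The output directory, logging interval, and choice of trainer are assumptions not stated in the card.

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./llama2-ai-medical-chatbot",  # assumed output path
    per_device_train_batch_size=2,             # Batch Size: 2
    gradient_accumulation_steps=4,             # Gradient Accumulation: 4 steps
    learning_rate=2e-4,                        # Learning Rate: 2e-4
    max_steps=5000,                            # Max Steps: 5000
    num_train_epochs=500,                      # Epochs: 500 (max_steps takes precedence)
    optim="adamw_torch",                       # Optimizer: AdamW
    logging_steps=50,                          # assumed logging interval
)
# These arguments would then be passed to a supervised fine-tuning trainer
# (for example trl's SFTTrainer, together with the LoRA config shown earlier).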

Limitations

  • Not a Substitute for Medical Advice: This model is trained to assist with general medical questions but should not be used to make clinical decisions or substitute professional medical advice.
  • Biases: The model's responses may reflect the biases inherent in the dataset it was trained on.
  • Data Limitation: The model may not have been exposed to niche or highly specialized medical knowledge and could provide incomplete or incorrect information in such cases.

Ethical Considerations

This model is designed to assist with medical-related queries and provide useful responses. However, users are strongly encouraged to consult licensed healthcare providers for serious medical conditions, diagnoses, or treatment plans. Misuse of the model for self-diagnosis or treatment is discouraged.

Warning

The outputs of this model should not be relied upon for critical or life-threatening situations. It is essential to consult a healthcare professional before taking any medical action based on this model's suggestions.

How to Use

You can load and use this model for medical chatbot applications with the Hugging Face transformers and peft libraries:

from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Load the adapter config and the base Llama-2 chat model it was trained from.
model_id = "NousResearch/Llama-2-7b-chat-hf"
config = PeftConfig.from_pretrained("MassMin/llama2_ai_medical_chatbot")
model = AutoModelForCausalLM.from_pretrained(model_id)

# Attach the fine-tuned medical-chatbot LoRA adapter to the base model.
model = PeftModel.from_pretrained(model, "MassMin/llama2_ai_medical_chatbot")

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_length=256,
)

# Wrap the question in the Llama-2 chat template before generation.
prompt = "Input your question here."
result = pipe(f"<s>[INST] {prompt} [/INST]")
print(result[0]["generated_text"])
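
The [INST] ... [/INST] wrapper matches the Llama-2 chat template that the base model expects. If you want to prepend a system instruction, the standard Llama-2 format adds <<SYS>> tags as shown below; the system message text is only an illustration.

# Optional: include a system prompt using the Llama-2 chat format.
# The system message below is an illustrative example, not part of the model card.
system = "You are a helpful medical assistant. Always advise consulting a doctor."
question = "What are the common treatments for a sore throat?"
prompt = f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{question} [/INST]"
result = pipe(prompt)
print(result[0]["generated_text"])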