
This Hugging Face model card describes a fine-tuned version of the LLAMAv2 7B model specifically for processing medical transcript instructions.

Model Details

  • Model Name: Ananya8154/llama-v2-7b-medical-instructions
  • Model Type: Decoder-only Transformer (causal language model)
  • Language(s): English
  • License: MIT License
  • Fine-tuned from: LLAMAv2 7B (Meta)
  • Uploaded: 21/10/2024

Intended Uses

This model is designed to understand and process medical transcript instructions. Potential use cases include:

  • Identifying the medical specialty from a transcript
  • Extracting key information from medical transcripts (e.g., medications, procedures)
  • Summarizing medical instructions in a concise, easy-to-understand format
  • Generating an appropriate sample name for a medical transcription
  • Assessing the complexity of the text
  • Determining whether a transcription is shorter or longer than average
  • Suggesting follow-up questions
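
Each of the use cases above can be phrased as an instruction fed to the prompt template shown further down. The instruction strings and the `build_prompt` helper below are illustrative only; the exact phrasings the model was fine-tuned on may differ:

```python
# Hypothetical example instructions for the use cases listed above.
# The exact phrasings used during fine-tuning may differ.
TASK_INSTRUCTIONS = {
    "specialty": "Identify the medical speciality from the transcript",
    "key_info": "Extract key medications and procedures from the transcript",
    "summary": "Summarize the medical instructions in simple terms",
    "sample_name": "Generate an appropriate sample name for this transcription",
    "complexity": "Assess the complexity of this text",
    "length": "Determine if this transcription is shorter or longer than average",
    "follow_up": "Suggest follow-up questions",
}

def build_prompt(task: str, context: str) -> str:
    """Fill the card's prompt template with a task instruction and a transcript."""
    instruction = TASK_INSTRUCTIONS[task]
    return (
        "You are a helpful assistant, use context provided and answer the "
        f"given instruction: \nInstruction: {instruction} \nContext: {context}"
    )

print(build_prompt("summary", "Patient to take medication X every 12 hours."))
```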

How to Use

Installation:

pip install transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Ananya8154/llama-v2-7b-medical-instructions"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# LLAMAv2 is a decoder-only model, so it is loaded as a causal LM
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage: extract key medical terms from a transcription
instruction = "Get key medical terms from the prescription"
context = ""  # insert medical transcription here

prompt = (
    "You are a helpful assistant, use context provided and answer the given instruction: "
    f"\nInstruction: {instruction} \nContext: {context}"
)

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=128)
decoded_text = tokenizer.batch_decode(output, skip_special_tokens=True)[0]
print(decoded_text)
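
Because the model is decoder-only, the text returned by generate() normally begins with an echo of the prompt itself. A minimal post-processing sketch (the helper name `strip_prompt` is our own, assuming an exact prefix echo) to keep only the newly generated completion:

```python
def strip_prompt(prompt: str, generated: str) -> str:
    """Causal LMs return the prompt followed by the completion; keep only
    the newly generated text. Assumes the decoded output echoes the prompt
    verbatim as a prefix."""
    if generated.startswith(prompt):
        return generated[len(prompt):].lstrip()
    return generated

prompt = "Instruction: summarize \nContext: take medication X every 12 hours"
full_output = prompt + " Administer medication X every 12 hours."
print(strip_prompt(prompt, full_output))  # -> Administer medication X every 12 hours.
```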

Limitations & Biases

Limitations:

This model is fine-tuned on a specific dataset of medical transcript instructions. It may not perform well on other types of medical text or general language tasks. The model may not be able to handle complex or nuanced medical instructions.

Biases:

The performance of this model may be biased by the data it was trained on. It's crucial to be aware of this limitation and ensure the training data is diverse and representative of the intended use case.

Further information will be added as it becomes available.

Model size: 3.61B params (Safetensors; tensor types F32, U8)