ujjman committed
Commit 743e29d
1 Parent(s): 276b14f

Update README.md

Files changed (1):
  1. README.md +88 -14

README.md CHANGED
@@ -1,22 +1,96 @@
  ---
- base_model: unsloth/llama-3.2-3b-bnb-4bit
- language:
- - en
- license: apache-2.0
  tags:
- - text-generation-inference
- - transformers
  - unsloth
- - llama
- - trl
  ---

- # Uploaded model

- - **Developed by:** ujjman
- - **License:** apache-2.0
- - **Finetuned from model :** unsloth/llama-3.2-3b-bnb-4bit

- This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
  ---
+ library_name: transformers
  tags:
  - unsloth
+ - finetuned
+ license: apache-2.0
+ datasets:
+ - keivalya/MedQuad-MedicalQnADataset
+ language:
+ - en
+ base_model:
+ - meta-llama/Llama-3.2-3B
+ pipeline_tag: question-answering
  ---
 
+ # Model Card for MedQA LLM
+
+ This model is fine-tuned on the keivalya/MedQuad-MedicalQnADataset to answer medical questions across a variety of question types, including symptoms, diagnosis, prevention, and treatment.
+
+ ## Model Details
+
+ ### Model Description
+
+ This model, built on Llama 3.2 3B, has been fine-tuned specifically for question answering in the medical domain. It aims to assist healthcare providers, researchers, and the general public by offering detailed responses to queries about medical conditions and treatments.
+
+ - **Developed by:** Ujjwal Mishra
+ - **Model type:** Question answering on medical data
+ - **Source model:** meta-llama/Llama-3.2-3B
+
+ ## Uses
+
+ This model is intended as a first-line source of information for medical queries. It can support digital health applications, help desks, and educational platforms.
+
+ ### Direct Use
+
+ The model can answer users' medical questions directly, without any further fine-tuning; see the example under "How to Get Started with the Model" below.
+
+ ### Downstream Use
+
+ This model can be further fine-tuned on more specific medical sub-domains or integrated into medical decision-support systems to enhance its utility, as sketched below.
+
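+ A minimal sketch of such downstream fine-tuning, assuming the checkpoint loads as a standard causal LM and using LoRA adapters from the PEFT library (the library choice and every hyperparameter here are illustrative assumptions, not the recipe used to train this model):
+
+ ```python
+ from transformers import AutoModelForCausalLM
+ from peft import LoraConfig, get_peft_model  # assumed dependency: pip install peft
+
+ # Load the published checkpoint as an ordinary causal language model
+ model = AutoModelForCausalLM.from_pretrained("ujjman/llama-3.2-3B-Medical-QnA-unsloth")
+
+ # Illustrative LoRA configuration; tune rank, alpha, and dropout for your sub-domain
+ lora_config = LoraConfig(
+     r=16,
+     lora_alpha=16,
+     lora_dropout=0.05,
+     target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
+     task_type="CAUSAL_LM",
+ )
+
+ model = get_peft_model(model, lora_config)
+ model.print_trainable_parameters()
+ # From here, run any standard training loop (e.g. TRL's SFTTrainer) on sub-domain data.
+ ```
+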
+ ### Out-of-Scope Use
+
+ The model is not designed to replace professional medical advice or diagnostic activities by certified healthcare providers.
+
+ ## Bias, Risks, and Limitations
+
+ Due to the nature of its training data, the model might exhibit biases towards more commonly represented diseases or conditions. It may not perform equally well on rare conditions or non-English queries.
+
+ ### Recommendations
+
+ Users should verify the information provided by the model with up-to-date and peer-reviewed medical sources or professionals. The model should be continuously monitored and updated to mitigate biases and adapt to new medical knowledge.
+
+ ## How to Get Started with the Model
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
+ from accelerate import Accelerator
+
+ # Initialize the Accelerator for device placement (and mixed precision, if supported by your hardware)
+ accelerator = Accelerator()
+
+ # Load the fine-tuned model and tokenizer
+ model_name = "ujjman/llama-3.2-3B-Medical-QnA-unsloth"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(model_name)
+
+ # Place the model on the appropriate device (the tokenizer holds no weights and needs no placement)
+ model = accelerator.prepare(model)
+
+ # Create a text-generation pipeline
+ generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
+
+ # Ask a medical question using the prompt format the model was fine-tuned with
+ def ask_question(question_type, question):
+     prompt = f"""Below is a Question Type that describes the type of question, paired with a question that asks a question based on medical science. Give an answer that correctly answers the question.
+
+ ### Question Type:
+ {question_type}
+
+ ### Question:
+ {question}
+
+ ### Answer:
+ """
+     # Generate an answer; max_new_tokens bounds the answer itself rather than prompt plus answer
+     response = generator(prompt, max_new_tokens=512, num_return_sequences=1)
+     answer = response[0]['generated_text'][len(prompt):]  # Remove the echoed prompt from the generated text
+     return answer.strip()
+
+ # Example usage
+ question_type = "prevention"
+ question = "How can I protect myself from poisoning caused by marine toxins?"
+ print(ask_question(question_type, question))
+ ```
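+
+ Note that `max_new_tokens` (rather than `max_length`) limits only the generated answer, so the 512-token budget is not consumed by the prompt; sampling parameters such as `temperature` can be passed to the pipeline call in the same way.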