---
language: en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- ruslanmv
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
datasets:
- ruslanmv/ai-medical-chatbot
---

# Medical-Llama3-8B-16bit: Fine-Tuned Llama3 for Medical Q&A

This repository provides a fine-tuned version of the powerful Llama3 8B model, specifically designed to answer medical questions in an informative way. It leverages the rich knowledge contained in the AI Medical Chatbot dataset ([ruslanmv/ai-medical-chatbot](https://huggingface.co/datasets/ruslanmv/ai-medical-chatbot)).
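
If you want to inspect the training data yourself, it can be loaded with the `datasets` library. This is a minimal sketch; the `train` split name is an assumption based on typical Hub datasets:

```python
from datasets import load_dataset

# Load the fine-tuning dataset from the Hugging Face Hub
dataset = load_dataset("ruslanmv/ai-medical-chatbot", split="train")  # split name assumed
print(dataset[0])  # inspect a single question/answer record
```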

**Model & Development**

- **Developed by:** ruslanmv
- **License:** Apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

**Key Features**

- **Medical Focus:** Optimized to address health-related inquiries.
- **Knowledge Base:** Trained on a comprehensive medical chatbot dataset.
- **Text Generation:** Generates informative and potentially helpful responses.

**Installation**

This model is accessible through the Hugging Face Transformers library. Install it, along with PyTorch (which the usage example below requires), using pip:

```bash
pip install transformers torch
```

**Usage Example**

Here's a Python code snippet demonstrating how to interact with the `Medical-Llama3-8B-16bit` model and generate answers to your medical questions:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("ruslanmv/Medical-Llama3-8B-16bit")
model = AutoModelForCausalLM.from_pretrained("ruslanmv/Medical-Llama3-8B-16bit").to("cuda")  # if using a GPU

# Format the question with the prompt template and generate a response
def askme(question):
    medical_prompt = """You are an AI Medical Assistant trained on a vast dataset of health information. Below is a medical question:

Question: {}

Please provide an informative and comprehensive answer:

Answer: """.format(question)

    inputs = tokenizer(medical_prompt, return_tensors="pt").to("cuda")  # if using a GPU
    outputs = model.generate(**inputs, max_new_tokens=64, use_cache=True)  # raise max_new_tokens for longer responses
    # Decode only the newly generated tokens, skipping the prompt and any special tokens
    new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()

# Example usage
question = "What should I do to reduce my weight gained due to genetic hypothyroidism?"
print(askme(question))
```
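
If the full 16-bit checkpoint does not fit in your GPU memory, Transformers can also load it quantized. This is a minimal sketch of a generic library option, not the repository's prescribed loading path; it assumes the `bitsandbytes` and `accelerate` packages are installed:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Quantize the weights to 4-bit on load to cut the memory footprint roughly 4x
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained(
    "ruslanmv/Medical-Llama3-8B-16bit",
    quantization_config=bnb_config,
    device_map="auto",  # requires the accelerate package
)
```

The quantized model works with the same `askme` helper shown above.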

**Important Note**

This model is intended for informational purposes only and should not be used as a substitute for professional medical advice. Always consult a qualified healthcare provider for any medical concerns.

**License**

This model is distributed under the Apache License 2.0 (see the LICENSE file for details).

**Contributing**

We welcome contributions to this repository! If you have improvements or suggestions, feel free to open a pull request.

**Disclaimer**

While we strive to provide informative responses, the accuracy of the model's outputs cannot be guaranteed. It is crucial to consult a doctor or other healthcare professional for definitive medical advice.