---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Medicine_chat

## Model Overview

- **Model Name:** Medicine_chat
- **Base Model:** unsloth/meta-llama-3.1-8b-bnb-4bit
- **Developed by:** varma007ut
- **License:** Apache 2.0
|
## Description

This model is a fine-tuned version of `unsloth/meta-llama-3.1-8b-bnb-4bit`, designed specifically for text-generation tasks in the medical domain. It leverages a substantial dataset of medical texts to improve its performance and relevance when generating medical content.
|
## Fine-tuning Details

- **Fine-tuning Data:** The model was fine-tuned on medical-domain data, enhancing its ability to understand and generate contextually appropriate medical text.
- **Objective:** The fine-tuning aims to make the model proficient in medical terminology, clinical guidelines, and general knowledge relevant to healthcare professionals. A sketch of a typical training setup is shown below.
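This card does not publish the exact training recipe, so the following is a minimal sketch of a typical Unsloth + TRL SFT setup for this base model. The dataset file (`medical_sft.jsonl`), its `text` field, and all hyperparameters are illustrative placeholders, not the actual configuration used.

```python
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the 4-bit base model through Unsloth
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/meta-llama-3.1-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# "medical_sft.jsonl" is a hypothetical dataset file with a "text" column
dataset = load_dataset("json", data_files="medical_sft.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        max_steps=100,
        output_dir="outputs",
    ),
)
trainer.train()
```

Training LoRA adapters on a 4-bit base is what keeps the memory footprint small enough to fine-tune an 8B model on a single consumer GPU.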
|
## Installation

To use this model, ensure you have the necessary libraries installed. Because the base model is stored in 4-bit bitsandbytes format, you will likely need `bitsandbytes` and `accelerate` alongside `transformers`:

```bash
pip install transformers accelerate bitsandbytes
```
|
## Usage

Here’s an example of how to load and use the model for text generation:
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "your_model_name"  # Replace with this model's Hugging Face Hub ID

# Load model and tokenizer; device_map="auto" places the model on a GPU if one is available
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Generate text
input_text = "What are the symptoms of diabetes?"
input_ids = tokenizer.encode(input_text, return_tensors="pt").to(model.device)

output = model.generate(input_ids, max_new_tokens=150)
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)

print(generated_text)
```
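This card does not say which quantization settings to use when reloading the weights. If you want to control the 4-bit loading explicitly, a minimal sketch with `BitsAndBytesConfig` follows; the NF4 quantization type and bfloat16 compute dtype are common defaults, not settings confirmed by this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "your_model_name"  # Replace with this model's Hugging Face Hub ID

# Illustrative 4-bit settings: NF4 quantization with bfloat16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
```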
|