---
tags:
- autotrain
- text-generation
- health
- medical
widget:
- text: 'I love AutoTrain because '
license: mit
language:
- en
library_name: peft
---
### Base Model Description
The Pythia 70M model is a transformer-based language model developed by EleutherAI.
It is part of the Pythia suite, a family of models released by EleutherAI to support research on language model behavior across scales.
With 70 million parameters, it is designed to handle a wide range of NLP applications, offering a balance between computational efficiency and model capability.
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** Pravin Maurya
- **Model type:** LoRA fine-tuned transformer model
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** EleutherAI/pythia-70m
### Model Sources
- **Colab Link:** [Click me 🔗](https://colab.research.google.com/drive/1tyogv7jtc8a4h23pEIlJW2vBgWTTzy3e#scrollTo=b6fQzRl2faSn)
## Uses
The model can be fine-tuned further for downstream applications such as medical AI assistants, legal document generation, and other domain-specific NLP tasks.
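As a rough illustration of such further fine-tuning, the sketch below attaches a fresh LoRA adapter with the `peft` library. The hyperparameters (`r`, `lora_alpha`, dropout) are illustrative assumptions rather than the values used for this checkpoint; `query_key_value` is the attention projection name in Pythia's GPT-NeoX architecture.
```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the model to adapt further (the base model or this fine-tuned checkpoint).
base = AutoModelForCausalLM.from_pretrained("Pravincoder/pythia-legal-llm-v4")

# Illustrative LoRA settings; tune r / lora_alpha / dropout for your task.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # GPT-NeoX attention projection
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```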
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and the base model's tokenizer.
model = AutoModelForCausalLM.from_pretrained("Pravincoder/pythia-legal-llm-v4")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")

def inference(text, model, tokenizer, max_input_tokens=1000, max_output_tokens=200):
    # Tokenize the prompt, truncating it to the input budget.
    input_ids = tokenizer.encode(text, return_tensors="pt", truncation=True, max_length=max_input_tokens)
    device = model.device
    # max_length covers the prompt plus the generated continuation.
    generated_tokens_with_prompt = model.generate(input_ids=input_ids.to(device), max_length=max_output_tokens)
    generated_text_with_prompt = tokenizer.batch_decode(generated_tokens_with_prompt, skip_special_tokens=True)
    # Strip the prompt so only the newly generated answer is returned.
    generated_text_answer = generated_text_with_prompt[0][len(text):]
    return generated_text_answer

system_message = "Welcome to the medical AI assistant."
user_message = "What are the symptoms of influenza?"
prompt = f"{system_message}\n{user_message}"
generated_response = inference(prompt, model, tokenizer)
print("Generated Response:", generated_response)
```
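Because the card lists `peft` as the library, the repository may host LoRA adapter weights rather than a fully merged model. If the plain `AutoModelForCausalLM` load above does not resolve the weights, the adapter can be applied explicitly on top of the base model; this is a minimal sketch under that assumption.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the LoRA adapter from this repository.
base_model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-70m")
model = PeftModel.from_pretrained(base_model, "Pravincoder/pythia-legal-llm-v4")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")
```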
## Training Data
The model was fine-tuned on medical question-answering chat data. For more information, see the [MedQuad-MedicalQnADataset 🔗](https://huggingface.co/datasets/keivalya/MedQuad-MedicalQnADataset).
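The dataset can be pulled with the 🤗 `datasets` library. The split name and the `Question`/`Answer` column names below are assumptions about the dataset schema; check the dataset card to confirm them.
```python
from datasets import load_dataset

# Load the medical Q&A data used for fine-tuning.
dataset = load_dataset("keivalya/MedQuad-MedicalQnADataset", split="train")
print(dataset)     # inspect size and column names
print(dataset[0])  # expected to look like {"Question": ..., "Answer": ...}
```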
### Training Procedure
Data preprocessing involved tokenization and formatting suitable for the transformer model.
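The exact prompt template is not documented in this card. Below is a minimal preprocessing sketch, reusing the `tokenizer` and `dataset` objects from the snippets above and assuming a simple question/answer concatenation; the template, maximum length, and padding strategy are all assumptions.
```python
# Pythia's tokenizer defines no pad token by default, so reuse EOS for padding.
tokenizer.pad_token = tokenizer.eos_token

def format_and_tokenize(example, tokenizer, max_length=512):
    # Assumed template: question and answer concatenated into one sequence.
    text = f"Question: {example['Question']}\nAnswer: {example['Answer']}"
    tokens = tokenizer(text, truncation=True, max_length=max_length, padding="max_length")
    tokens["labels"] = tokens["input_ids"].copy()  # causal LM: labels mirror input_ids
    return tokens

tokenized_dataset = dataset.map(lambda example: format_and_tokenize(example, tokenizer))
```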
#### Training Hyperparameters
- **Training regime:** Mixed precision (fp16)
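Only fp16 mixed precision is documented here; the remaining values in the sketch below (batch size, epochs, learning rate, output directory) are assumptions for illustration, and `model`/`tokenized_dataset` are reused from the sketches above.
```python
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="pythia-70m-medical-lora",  # hypothetical output directory
    per_device_train_batch_size=8,
    num_train_epochs=3,
    learning_rate=2e-4,
    fp16=True,  # mixed-precision training, as stated above
    logging_steps=50,
    save_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset,
)
trainer.train()
```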
## Hardware
- **Hardware Type:** T4 Google Colab GPU
- **Hours used:** Approximately 1.5 to 2 hours
## Model Card Contact
Email: [email protected]
# Model Trained Using AutoTrain