# amanzargari/llama-qlora-iso9001-2015
This model is a fine-tuned version of Llama 2, trained with the QLoRA (Quantized Low-Rank Adaptation) technique.
## Model Details
- Base Model: Llama 2
- Training Technique: QLoRA
- Model Type: Causal Language Model
- Language: English
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("amanzargari/llama-qlora-iso9001-2015")
tokenizer = AutoTokenizer.from_pretrained("amanzargari/llama-qlora-iso9001-2015")

# Example usage
text = "Your prompt here"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## Training Details
[Add your training details here]
## Limitations and Bias
[Add model limitations and potential biases here]
## Citation
[Add citation information if applicable]