---
language:
- en
license: other
library_name: transformers
tags:
- orpo
- llama 3
- rlhf
- sft
datasets:
- mlabonne/orpo-dpo-mix-40k
---

# UpshotLlama-3-8B

This is an ORPO fine-tune of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on a 2k-sample subset of dpo_math_data from [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k).

It's a successful fine-tune that follows the ChatML template!
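
For context, an ORPO fine-tune like this one can be reproduced with TRL's `ORPOTrainer`. The snippet below is only a minimal sketch, not the actual training script: the `source` filter, split name, subsampling, and every hyperparameter are assumptions, and depending on your `trl` version the tokenizer argument may be named `processing_class` and the chosen/rejected messages may need explicit ChatML formatting first.

```python
# Minimal ORPO fine-tuning sketch (illustrative only; column names, split,
# and hyperparameters are assumptions, not the settings used for this model).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "meta-llama/Meta-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Keep only the math preference pairs and subsample roughly 2k examples
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")
dataset = dataset.filter(lambda x: x["source"] == "dpo_math_data")
dataset = dataset.shuffle(seed=42).select(range(2000))

config = ORPOConfig(
    output_dir="UpshotLlama-3-8B",
    beta=0.1,                          # weight of the odds-ratio (preference) term
    max_length=2048,
    max_prompt_length=1024,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    learning_rate=8e-6,
    num_train_epochs=1,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,  # newer trl versions call this processing_class
)
trainer.train()
```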

## 🔎 Application

This model uses a context window of 8k tokens and was trained with the ChatML chat template.
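
Because the model expects ChatML-formatted prompts, `apply_chat_template` should produce something along these lines (the exact string depends on the `chat_template` stored with the tokenizer):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Aditya685/UpshotLlama-3-8B")
messages = [{"role": "user", "content": "Given the equation 4x + 7 = 55. Find the value of x"}]

# Roughly expected ChatML layout (actual output depends on the stored chat_template):
# <|im_start|>user
# Given the equation 4x + 7 = 55. Find the value of x<|im_end|>
# <|im_start|>assistant
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```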

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Aditya685/UpshotLlama-3-8B"
messages = [{"role": "user", "content": "Given the equation 4x + 7 = 55. Find the value of x"}]

# Build the ChatML prompt with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model in half precision and spread it across available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a response
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```