---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- orpo
base_model:
- meta-llama/Meta-Llama-3-8B
datasets:
- mlabonne/orpo-dpo-mix-40k
---

# OrpoLlama3-8B

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64fc6d81d75293f417fee1d1/oa8hfBhbPfN6MPWVMJoLq.jpeg)

This is an ORPO fine-tune of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B), trained for 15k steps on [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k).

## 💻 Usage

```python
# Install dependencies first: pip install -qU transformers accelerate
import torch
import transformers
from transformers import AutoTokenizer

model = "Muhammad2003/OrpoLlama3-8B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
print(outputs[0]["generated_text"])
```

## 📈 Training curves

Wandb report:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64fc6d81d75293f417fee1d1/eFL8QhHbSjY45Ai2JQFj9.png)

## 🏆 Evaluation

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64fc6d81d75293f417fee1d1/E5XZI4Hiaw3C3gThvoKrH.png)
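
## 🛠️ Training sketch

The exact training script is not included in this card. Below is a minimal sketch of how a comparable ORPO fine-tune could be launched with TRL's `ORPOTrainer`. The hyperparameters (`beta`, learning rate, batch size, sequence lengths) are illustrative placeholders, not the values used for this model.

```python
# Minimal ORPO fine-tuning sketch with TRL (illustrative, not the exact recipe
# used for OrpoLlama3-8B).
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "meta-llama/Meta-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(base)
# The base tokenizer has no pad token; reuse EOS for padding.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
# Note: the base model ships without a chat template, so one may need to be
# assigned before formatting the conversational "chosen"/"rejected" columns.

model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Preference pairs: each row holds a "chosen" and a "rejected" completion.
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

config = ORPOConfig(
    output_dir="orpo-llama3-8b",
    beta=0.1,                       # weight of the odds-ratio penalty (illustrative)
    learning_rate=8e-6,             # illustrative
    per_device_train_batch_size=2,  # illustrative
    max_length=1024,
    max_prompt_length=512,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,  # called `processing_class` in recent TRL versions
)
trainer.train()
```

Unlike DPO, ORPO needs no separate reference model: it adds an odds-ratio penalty on rejected completions directly to the supervised fine-tuning loss, so alignment happens in a single training stage.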