---
license: apache-2.0
datasets:
- celsowm/auryn_dpo_orpo_english
language:
- en
base_model:
- meta-llama/Llama-3.2-1B
tags:
- orpo
---

# auryn_dpo_orpo_english

This is an ORPO fine-tune of meta-llama/Llama-3.2-1B trained for three epochs on https://huggingface.co/datasets/celsowm/auryn_dpo_orpo_english.

Auryn is a fictional place intended to serve as a proof of concept for injecting knowledge into a large language model using ORPO.

Tutorial here: https://medium.com/@celsoaf/injecting-new-knowledge-into-an-llm-via-fine-tuning-with-orpo-017d3bfdb11b
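
A minimal sketch of the kind of ORPO fine-tuning described in the tutorial, using TRL's `ORPOTrainer`. The hyperparameters, pad-token handling, and the assumption that the dataset exposes `prompt`/`chosen`/`rejected` columns are illustrative, not the exact settings used for this model:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "meta-llama/Llama-3.2-1B"

# Load the base model and tokenizer; Llama tokenizers need an explicit pad token.
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token

# Preference dataset; ORPOTrainer expects prompt/chosen/rejected columns (assumed here).
dataset = load_dataset("celsowm/auryn_dpo_orpo_english", split="train")

# Illustrative hyperparameters; three epochs matches the model card description.
args = ORPOConfig(
    output_dir="auryn_dpo_orpo_english",
    num_train_epochs=3,
    per_device_train_batch_size=2,
    logging_steps=10,
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,  # older TRL versions use tokenizer= instead
)
trainer.train()
```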