329X DPO model, fine-tuned from meta-llama/Llama-3.2-1B via Direct Preference Optimization (DPO). Trained for 1 epoch with a learning rate of 2e-4 and a batch size of 1.
Base model: meta-llama/Llama-3.2-1B
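As a rough sketch of how the stated hyperparameters could be expressed as a training configuration, assuming TRL's `DPOConfig` was the training harness (the card does not say which library was used; `output_dir` is a placeholder):

```python
from trl import DPOConfig

# Hypothetical config mirroring the hyperparameters listed on this card.
config = DPOConfig(
    output_dir="329x-dpo",          # placeholder, not from the card
    num_train_epochs=1,             # 1 epoch, per the card
    learning_rate=2e-4,             # learning rate, per the card
    per_device_train_batch_size=1,  # batch size, per the card
)
```

The base checkpoint `meta-llama/Llama-3.2-1B` would then be loaded and passed to a `DPOTrainer` along with a preference dataset, which this card does not specify.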