---
library_name: transformers
tags:
  - trl
  - orpo
license: apache-2.0
datasets:
  - nbeerbower/gutenberg2-dpo
  - nbeerbower/gutenberg-moderne-dpo
base_model:
  - nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated
---


> 🧪 **Just Another Model Experiment**
>
> This is one of many experimental iterations I'm sharing publicly while I mess around with training parameters and ideas. It's not a "real" release, just me being transparent about my learning process. Feel free to look under the hood, but don't expect anything production-ready!

# Mistral-Nemo-Moderne-12B-FFT-experimental

[Mahou-1.5-mistral-nemo-12B-lorablated](https://huggingface.co/nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated) finetuned on [gutenberg2-dpo](https://huggingface.co/datasets/nbeerbower/gutenberg2-dpo) and [gutenberg-moderne-dpo](https://huggingface.co/datasets/nbeerbower/gutenberg-moderne-dpo).

**This model has erratic behavior and poor performance.**

## Method

ORPO-tuned on 8x A100s for 1.5 epochs.

This was a full finetune. I think the issues with the model can be chalked up to conflicts between the Mistral Instruct and ChatML prompt formats.
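
For reference, here is a minimal sketch of what an ORPO full finetune like this looks like with TRL. The card only states the base model, the datasets, the epoch count, and the hardware; every hyperparameter below (beta, learning rate, batch size, sequence lengths) is an assumption, not the actual training config.

```python
import torch
from datasets import concatenate_datasets, load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base)

# Assumes both datasets share the same DPO-style
# prompt/chosen/rejected schema so they can be concatenated.
train = concatenate_datasets([
    load_dataset("nbeerbower/gutenberg2-dpo", split="train"),
    load_dataset("nbeerbower/gutenberg-moderne-dpo", split="train"),
])

args = ORPOConfig(
    output_dir="Mistral-Nemo-Moderne-12B-FFT-experimental",
    num_train_epochs=1.5,            # stated above: 1.5 epochs
    per_device_train_batch_size=1,   # assumed; sharded across 8x A100
    gradient_accumulation_steps=8,   # assumed
    learning_rate=5e-6,              # assumed
    beta=0.1,                        # ORPO lambda; assumed default
    max_length=2048,                 # assumed
    max_prompt_length=1024,          # assumed
    bf16=True,
    logging_steps=10,
)

# Note: recent TRL versions take `processing_class=tokenizer`
# instead of the `tokenizer=` keyword used here.
trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=train,
    tokenizer=tokenizer,
)
trainer.train()
```

A full finetune of a 12B model at this scale would typically be launched with `accelerate` (e.g. under DeepSpeed ZeRO or FSDP) to shard optimizer state across the 8 GPUs.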