---
library_name: transformers
license: apache-2.0
base_model:
- nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated
datasets:
- nbeerbower/Arkhaios-DPO
- nbeerbower/Purpura-DPO
---
> 🧪 **Just Another Model Experiment**
>
> This is one of many experimental iterations I'm sharing publicly while I mess around with training parameters and ideas. It's not a "real" release - just me being transparent about my learning process. Feel free to look under the hood, but don't expect anything production-ready!
# Mistral-Nemo-Prism-12B-v3
[Mahou-1.5-mistral-nemo-12B-lorablated](https://huggingface.co/nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated) finetuned on [Arkhaios-DPO](https://huggingface.co/datasets/nbeerbower/Arkhaios-DPO) and [Purpura-DPO](https://huggingface.co/datasets/nbeerbower/Purpura-DPO).
The goal was to reduce archaic language and purple prose in a completely uncensored model.
## Method
ORPO tuned with 2x A100. For this version, the data was improved and training was doubled to 10 epochs.
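For context, ORPO (Odds Ratio Preference Optimization) folds the preference signal directly into the SFT loss instead of using a separate reference model, as DPO does. A minimal per-example sketch of the objective in plain Python — the `lam` weight and the average-log-probability inputs here are illustrative placeholders, not the actual hyperparameters used for this model:

```python
import math

def log_odds(logp):
    """Log odds of a response, given its average token log-probability logp.

    odds(p) = p / (1 - p), so log odds = log p - log(1 - p).
    Assumes logp < 0, i.e. probability strictly between 0 and 1.
    """
    p = math.exp(logp)
    return math.log(p) - math.log(1.0 - p)

def orpo_loss(logp_chosen, logp_rejected, lam=0.1):
    """ORPO objective for one preference pair: the usual NLL on the
    chosen response, plus a log-sigmoid odds-ratio penalty (weighted
    by lam) that pushes the chosen response's odds above the rejected
    response's odds."""
    log_odds_ratio = log_odds(logp_chosen) - log_odds(logp_rejected)
    # -log(sigmoid(x)) term: small when chosen is much more likely than rejected
    l_or = -math.log(1.0 / (1.0 + math.exp(-log_odds_ratio)))
    l_sft = -logp_chosen  # standard supervised fine-tuning loss
    return l_sft + lam * l_or
```

When the model already prefers the chosen response, both terms are small; when it prefers the rejected one, the odds-ratio penalty grows on top of the SFT loss, steering generations away from the rejected style (here, archaic language and purple prose).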