---
license: apache-2.0
datasets:
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
---
This is WizardLM trained on a subset of the dataset: responses that contained alignment or moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built in, so that alignment of any sort can be added separately, for example with an RLHF LoRA.
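For context, here is a minimal sketch of the kind of filtering involved, assuming the source data has `instruction`/`output` fields. The phrase list, field names, and simple keyword matching are illustrative assumptions, not the exact procedure used to build the dataset.

```python
from datasets import Dataset

# Hypothetical phrase list: examples of the moralizing / refusal language
# being removed. The real filter criteria may differ.
ALIGNMENT_PHRASES = [
    "as an ai language model",
    "i cannot",
    "it is not appropriate",
    "it's important to note",
]

def is_unaligned(example):
    """Keep an example only if its response contains none of the flagged phrases."""
    response = example["output"].lower()
    return not any(phrase in response for phrase in ALIGNMENT_PHRASES)

# Small in-memory example standing in for the full instruction dataset.
examples = Dataset.from_list([
    {"instruction": "Write a poem.", "output": "As an AI language model, I cannot ..."},
    {"instruction": "Write a poem.", "output": "Sure, here is a poem about the sea ..."},
])

filtered = examples.filter(is_unaligned)  # keeps only the second example
```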
Shout-out to the open source AI/ML community, and everyone who helped me out, including Rohan, TheBloke, and Caseus.