# Model Card for neovalle/H4rmoniousBreezeDPO

## Model Details

### Model Description

This model is a version of HuggingFaceH4/zephyr-7b-beta fine-tuned via DPO on the H4rmony_dpo dataset, which aims to better align the model with ecological values through the use of ecolinguistics principles.
- Developed by: Jorge Vallego
- Funded by: Neovalle Ltd.
- Shared by: [email protected]
- Model type: mistral
- Language(s) (NLP): Primarily English
- License: MIT
- Finetuned from model: HuggingFaceH4/zephyr-7b-beta
## Uses

Intended as a proof of concept (PoC) to show the effects of the H4rmony_dpo dataset when used for DPO fine-tuning.

### Direct Use

For testing purposes, to gain insight that helps with the continuous improvement of the H4rmony_dpo dataset.

### Downstream Use

Direct use in applications is not recommended, as this model is under testing for a specific task only (ecological alignment).

### Out-of-Scope Use

Not meant to be used for anything other than testing and evaluation of the H4rmony_dpo dataset and ecological alignment.
## Bias, Risks, and Limitations

This model may reproduce biases already present in the base model, as well as others unintentionally introduced during fine-tuning.
## How to Get Started with the Model

The model can be loaded and run in a Colab instance with a High-RAM runtime.
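A minimal loading sketch with the Hugging Face `transformers` library is shown below. This snippet is not part of the card itself: the prompt template assumes the fine-tune keeps the base Zephyr chat format, and the generation settings are illustrative only.

```python
# Hypothetical loading sketch (not an official snippet from this card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "neovalle/H4rmoniousBreezeDPO"


def build_prompt(user_message: str) -> str:
    # Assumption: the fine-tune keeps the Zephyr chat template of the
    # base model (system/user/assistant turns with special tags).
    return f"<|system|>\n</s>\n<|user|>\n{user_message}</s>\n<|assistant|>\n"


def generate(user_message: str, max_new_tokens: int = 256) -> str:
    # Loads the full 7B weights; requires a High-RAM (ideally GPU) runtime.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16, device_map="auto"
    )
    inputs = tokenizer(
        build_prompt(user_message), return_tensors="pt"
    ).to(model.device)
    output = model.generate(
        **inputs, max_new_tokens=max_new_tokens, do_sample=True, temperature=0.7
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Usage: `print(generate("How can I reduce my household water use?"))`.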
## Training Details

Trained using DPO.

### Training Data

[H4rmony_dpo dataset](https://huggingface.co/datasets/neovalle/H4rmony_dpo)