---
base_model: nbeerbower/bophades-mistral-truthy-DPO-7B
datasets:
- jondurbin/truthy-dpo-v0.1
inference: false
library_name: transformers
license: apache-2.0
merged_models:
- nbeerbower/bophades-v2-mistral-7B
pipeline_tag: text-generation
quantized_by: Suparious
tags:
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
- finetuned
- mistral
---

# nbeerbower/bophades-mistral-truthy-DPO-7B AWQ

- Model creator: [nbeerbower](https://huggingface.co/nbeerbower)
- Original model: [bophades-mistral-truthy-DPO-7B](https://huggingface.co/nbeerbower/bophades-mistral-truthy-DPO-7B)

![image/png](https://huggingface.co/nbeerbower/bophades-mistral-7B/resolve/main/bophades.png)

## Model Summary

[bophades-v2-mistral-7B](https://huggingface.co/nbeerbower/bophades-v2-mistral-7B) finetuned on [jondurbin/truthy-dpo-v0.1](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1).

Finetuned using an A100 on Google Colab. 🙏

[Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne)
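
To illustrate the Direct Preference Optimization objective used for this finetune, here is a minimal, self-contained sketch of the per-pair DPO loss. This is not the training code used for this model; the `beta` value and the example log-probabilities are hypothetical, chosen only to show the shape of the computation.

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair.

    Inputs are summed token log-probabilities of the chosen and
    rejected completions under the policy (pi_*) and under the
    frozen reference model (ref_*). beta scales the implicit reward.
    """
    # Implicit reward margin: how much more the policy favors the
    # chosen completion over the rejected one, relative to the reference.
    margin = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    # DPO loss: -log(sigmoid(beta * margin))
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Hypothetical log-probabilities, for illustration only.
loss = dpo_loss(pi_chosen=-12.0, pi_rejected=-15.0,
                ref_chosen=-13.0, ref_rejected=-14.0)
```

Minimizing this loss pushes the policy to assign relatively higher likelihood to the chosen answers in the preference dataset; when the margin is zero the loss is `log(2)`, and it decreases toward zero as the margin grows.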