---
license: other
datasets:
- mlabonne/orpo-dpo-mix-40k
tags:
- dpo
---
**ExLlamaV2** quant (**exl2** / **5.0 bpw**) made with ExLlamaV2 v0.1.1
Other EXL2 quants:
| **Quant (bpw)** | **Model Size** | **lm_head (bits)** |
| ----- | ---------- | ------- |
|**[2.2](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-2_2bpw_exl2)** | 3250 MB | 6 |
|**[2.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-2_5bpw_exl2)** | 3479 MB | 6 |
|**[3.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_0bpw_exl2)** | 3895 MB | 6 |
|**[3.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_5bpw_exl2)** | 4310 MB | 6 |
|**[3.75](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_75bpw_exl2)** | 4519 MB | 6 |
|**[4.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-4_0bpw_exl2)** | 4727 MB | 6 |
|**[4.25](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-4_25bpw_exl2)** | 4931 MB | 6 |
|**[5.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-5_0bpw_exl2)** | 5559 MB | 6 |
|**[6.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-6_0bpw_exl2)** | 6495 MB | 8 |
|**[6.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-6_5bpw_exl2)** | 6903 MB | 8 |
|**[8.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-8_0bpw_exl2)** | 8157 MB | 8 |
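As a sanity check on the sizes above, an ~8B-parameter model at 5.0 bpw works out to roughly 8e9 × 5 / 8 ≈ 5 GB of weights, consistent with the 5559 MB figure (the lm_head column shows the higher precision kept for the output layer). A minimal loading-and-generation sketch with the exllamav2 Python API (v0.1.x) might look like the following; the local path and sampling settings are illustrative, not part of this card:

```python
# Minimal sketch: load a local EXL2 quant with exllamav2 (v0.1.x API).
# First fetch a quant, e.g.:
#   huggingface-cli download Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-5_0bpw_exl2 \
#       --local-dir NeuralDaredevil-8B-5.0bpw
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "NeuralDaredevil-8B-5.0bpw"  # illustrative local path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate the cache as layers load
model.load_autosplit(cache)               # split weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8                # illustrative sampling settings
settings.top_p = 0.9

print(generator.generate_simple("The capital of France is", settings, 64))
```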
# NeuralDaredevil-8B-abliterated
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/gFEhcIDSKa3AWpkNfH91q.jpeg)
This is a DPO fine-tune of [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) trained on one epoch of [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k).
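For reference, a DPO run of this shape can be set up with TRL's `DPOTrainer`. The sketch below is illustrative only: the card specifies just the base model, the dataset, and one epoch, so every other hyperparameter here is an assumption, not the recipe actually used:

```python
# Illustrative sketch of a comparable DPO run with TRL; only the model,
# dataset, and epoch count come from this card -- the rest is assumed.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("mlabonne/Daredevil-8B-abliterated")
tokenizer = AutoTokenizer.from_pretrained("mlabonne/Daredevil-8B-abliterated")

# Preference pairs (prompt / chosen / rejected) used for the fine-tune.
train_dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

args = DPOConfig(
    output_dir="NeuralDaredevil-8B-abliterated",
    num_train_epochs=1,              # one epoch, as stated above
    per_device_train_batch_size=2,   # assumed
    gradient_accumulation_steps=8,   # assumed
    learning_rate=5e-6,              # assumed
    beta=0.1,                        # common DPO default
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```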
## 🏆 Evaluation
### Open LLM Leaderboard
TBD.
### Nous
Evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval). See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [**mlabonne/NeuralDaredevil-8B-abliterated**](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) [📄](https://gist.github.com/mlabonne/ae0bf16936cef900b72964b33c99edbc) | **55.87** | **43.73** | **73.6** | **59.36** | **46.8** |
| [mlabonne/Daredevil-8B](https://huggingface.co/mlabonne/Daredevil-8B) [📄](https://gist.github.com/mlabonne/080f9c5f153ea57a7ab7d932cf896f21) | 55.87 | 44.13 | 73.52 | 59.05 | 46.77 |
| [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) [📄](https://gist.github.com/mlabonne/32cdd8460804662c856bcb2a20acd49e) | 55.06 | 43.29 | 73.33 | 57.47 | 46.17 |
| [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/5df2a3051dd6eb3368a77b684635dc05) | 54.28 | 43.9 | 72.62 | 56.36 | 44.23 |
| [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522) [📄](https://gist.github.com/mlabonne/95eef8e8d26b7b17910dcb78e1c95f4a) | 53.49 | 44.03 | 73.67 | 49.78 | 46.48 |
| [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [📄](https://gist.github.com/mlabonne/8329284d86035e6019edb11eb0933628) | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 |
| [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/616b6245137a9cfc4ea80e4c6e55d847) | 45.42 | 31.1 | 69.95 | 43.91 | 36.7 |
## 🌳 Model family tree
![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/ekwRGgnjzEOyprT8sEBFt.png)