NeuralDaredevil-8B-abliterated
This is a DPO fine-tune of mlabonne/Daredevil-8B-abliterated, trained for one epoch on mlabonne/orpo-dpo-mix-40k. The DPO fine-tuning successfully recovers the performance drop caused by the abliteration process, making it an excellent uncensored model.
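For context, a DPO stage like this can be run with Hugging Face TRL. The sketch below is illustrative, not the actual training script: the hyperparameters (beta, learning rate, batch size) are assumptions, and in practice you would typically add LoRA adapters or multi-GPU settings to fit an 8B model in memory.

```python
# Illustrative sketch of the DPO stage (not the actual training script).
# Assumes a recent version of TRL; hyperparameters below are placeholders.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "mlabonne/Daredevil-8B-abliterated"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base)

# Preference pairs: each row has "chosen" and "rejected" conversations.
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

config = DPOConfig(
    output_dir="NeuralDaredevil-8B-abliterated",
    num_train_epochs=1,              # one epoch, as stated above
    beta=0.1,                        # assumed DPO temperature
    learning_rate=5e-6,              # assumed
    per_device_train_batch_size=1,   # assumed
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```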
🔎 Applications
In my tests, NeuralDaredevil-8B-abliterated performs better than the official Meta-Llama-3-8B-Instruct model.
You can use it for any application that doesn't require alignment, like role-playing. It was tested in LM Studio using the "Llama 3" and "Llama 3 v2" presets.
⚡ Quantization
Thanks to QuantFactory, ZeroWw, Zoyd, solidrust, and tarruda for providing these quants.
- GGUF: https://huggingface.co/QuantFactory/NeuralDaredevil-8B-abliterated-GGUF
- GGUF (FP16): https://huggingface.co/ZeroWw/NeuralDaredevil-8B-abliterated-GGUF
- EXL2: https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-4_0bpw_exl2
- AWQ: https://huggingface.co/solidrust/NeuralDaredevil-8B-abliterated-AWQ
- ollama:
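As a usage example for the quants listed above, a GGUF file can be run locally with llama-cpp-python. This is a minimal sketch; the Q4_K_M quant level in the filename pattern is an assumption, so check the repo's file list for what is actually available:

```python
# Sketch: run a GGUF quant locally with llama-cpp-python.
# The Q4_K_M quant level is an assumption; pick any file that exists in the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/NeuralDaredevil-8B-abliterated-GGUF",
    filename="*Q4_K_M.gguf",  # glob pattern matched against the repo's files
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a large language model?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```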
🏆 Evaluation
Open LLM Leaderboard
NeuralDaredevil-8B-abliterated is the best-performing uncensored 8B model on the Open LLM Leaderboard (in terms of MMLU score). Detailed scores:

| Benchmark | Metric | Split | Score |
| --- | --- | --- | --- |
| AI2 Reasoning Challenge (25-shot) | normalized accuracy | test | 69.28 |
| HellaSwag (10-shot) | normalized accuracy | validation | 85.05 |
| MMLU (5-shot) | accuracy | test | 69.10 |
| TruthfulQA (0-shot) | mc2 | validation | 60.00 |
| Winogrande (5-shot) | accuracy | validation | 78.69 |
| GSM8K (5-shot) | accuracy | test | 71.80 |
Nous
Evaluation performed using LLM AutoEval. See the entire leaderboard here.
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
| --- | --- | --- | --- | --- | --- |
| mlabonne/NeuralDaredevil-8B-abliterated | 55.87 | 43.73 | 73.6 | 59.36 | 46.8 |
| mlabonne/Daredevil-8B | 55.87 | 44.13 | 73.52 | 59.05 | 46.77 |
| mlabonne/Daredevil-8B-abliterated | 55.06 | 43.29 | 73.33 | 57.47 | 46.17 |
| NousResearch/Hermes-2-Theta-Llama-3-8B | 54.28 | 43.9 | 72.62 | 56.36 | 44.23 |
| openchat/openchat-3.6-8b-20240522 | 53.49 | 44.03 | 73.67 | 49.78 | 46.48 |
| meta-llama/Meta-Llama-3-8B-Instruct | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 |
| meta-llama/Meta-Llama-3-8B | 45.42 | 31.1 | 69.95 | 43.91 | 36.7 |
🌳 Model family tree
💻 Usage
```python
# Install dependencies (run in a notebook)
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/NeuralDaredevil-8B-abliterated"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build a prompt with the Llama 3 chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
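For reference, `apply_chat_template` wraps the messages in the Llama 3 chat format, which you can inspect directly. The commented output below is a sketch of the expected shape:

```python
# Inspect the prompt string produced by the Llama 3 chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mlabonne/NeuralDaredevil-8B-abliterated")
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What is a large language model?"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)
# Expected shape (Llama 3 format):
# <|begin_of_text|><|start_header_id|>user<|end_header_id|>
#
# What is a large language model?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
#
```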