---
library_name: transformers
tags:
- mergekit
- merge
---
# 🪽 Hermes-3-Llama-3.1-8B-lorablated
![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/4Hbw5n68jKUSBQeTqQIeT.png)
<center>70B version: <a href="https://huggingface.co/mlabonne/Hermes-3-Llama-3.1-70B-lorablated/"><i>mlabonne/Hermes-3-Llama-3.1-70B-lorablated</i></a></center>
This is an uncensored version of [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) created with lorablation (LoRA + abliteration).
The following example shows how Hermes 3 refuses to answer a legitimate question while the lorablated model complies:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/2-ZRBvlZxvIr_Ag_ynNkk.png)
The recipe is based on @grimjim's [grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter](https://huggingface.co/grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter) (special thanks):
1. **Extraction**: We extract a LoRA adapter by comparing two models: a censored Llama 3.1 ([meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)) and an abliterated Llama 3.1 ([mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated](https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated)).
2. **Merge**: We merge this new LoRA adapter into the censored [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) using [task arithmetic](https://arxiv.org/abs/2212.04089) to abliterate it.
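The two steps above can be sketched numerically. This is a toy illustration on random matrices, not mergekit's actual implementation (`mergekit-extract-lora` and the `task_arithmetic` method operate on real checkpoints layer by layer); the dimensions and rank are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
d, rank = 64, 8

# Toy stand-ins for one weight matrix of each model.
base = rng.standard_normal((d, d))  # "censored" instruct model
# The abliterated model differs from the base by a low-rank edit.
abliterated = base + 0.1 * (rng.standard_normal((d, rank)) @ rng.standard_normal((rank, d)))

# 1. Extraction: approximate the weight delta with a rank-r LoRA (A, B) via SVD.
delta = abliterated - base
U, S, Vt = np.linalg.svd(delta, full_matrices=False)
A = U[:, :rank] * S[:rank]   # shape (d, rank)
B = Vt[:rank, :]             # shape (rank, d)

# 2. Merge: apply the extracted task vector A @ B to a *different* base model
# (here a stand-in for Hermes 3) via task arithmetic: merged = target + weight * delta.
hermes = rng.standard_normal((d, d))
weight = 1.0
merged = hermes + weight * (A @ B)
```

Because the toy delta is exactly rank 8, the rank-8 SVD recovers it exactly; on real checkpoints the extraction at `--rank=64` is only an approximation of the full abliteration delta.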
![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/JdYyK-HLHbyBiHvg-Nvsn.png)
See [this article](https://huggingface.co/blog/mlabonne/abliteration) to learn more about abliteration.
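In brief, abliteration estimates a "refusal direction" in the model's residual stream from contrastive prompts and projects it out of the weights. A minimal sketch of that orthogonal projection, with a made-up direction vector rather than one estimated from real activations:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 64

W = rng.standard_normal((d, d))  # a weight matrix writing to the residual stream
r = rng.standard_normal(d)       # hypothetical refusal direction (normally estimated
r = r / np.linalg.norm(r)        # from activation differences on harmful vs. harmless prompts)

# Ablate: remove the component of every output along r, i.e. W' = (I - r r^T) W.
W_abliterated = W - np.outer(r, r) @ W
```

After this projection, nothing the layer writes has any component along `r`, which is what suppresses the refusal behavior.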
## ⚡ Quantization
* **GGUF**: https://huggingface.co/mlabonne/Hermes-3-Llama-3.1-8B-lorablated-GGUF
## 🧩 Configuration
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) + Llama-3.1-8B-Instruct-abliterated-LORA as a base.
The following YAML configuration was used to produce this model:
```yaml
base_model: NousResearch/Hermes-3-Llama-3.1-8B+Llama-3.1-8B-Instruct-abliterated-LORA
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  normalize: false
slices:
- sources:
  - layer_range: [0, 32]
    model: NousResearch/Hermes-3-Llama-3.1-8B+Llama-3.1-8B-Instruct-abliterated-LORA
    parameters:
      weight: 1.0
```
You can reproduce this model using the following commands:
```bash
# Setup
git clone https://github.com/arcee-ai/mergekit.git
cd mergekit && pip install -e .
pip install bitsandbytes

# Extraction
mergekit-extract-lora mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated meta-llama/Meta-Llama-3.1-8B-Instruct Llama-3.1-8B-Instruct-abliterated-LORA --rank=64

# Merge using previous config
mergekit-yaml config.yaml Hermes-3-Llama-3.1-8B-lorablated --allow-crimes --lora-merge-cache=./cache
```