mlabonne committed on
Commit f28ae7b
1 Parent(s): 5018f01

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+llama-3.1-70b-instruct-lorablated.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,69 @@
---
base_model: meta-llama/Meta-Llama-3.1-70B-Instruct
library_name: transformers
license: llama3.1
tags:
- abliterated
- uncensored
- mergekit
- autoquant
- gguf
---

# 🦙 Llama-3.1-70B-Instruct-lorablated

![](https://i.imgur.com/5Y0Riis.png)

<center>🦙 <a href="https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated"><i>Llama 3.1 8B Instruct abliterated</i></a></center>

This is an uncensored version of [Llama 3.1 70B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct) created with abliteration (see [this article](https://huggingface.co/blog/mlabonne/abliteration) to learn more about it) using [@grimjim](https://huggingface.co/grimjim)'s recipe.

More precisely, this is a **LoRA-abliterated** (lorablated) model:

1. **Extraction**: We extract a LoRA adapter by comparing two models: a censored Llama 3 and an abliterated Llama 3.
2. **Merge**: We merge this new LoRA adapter using [task arithmetic](https://arxiv.org/abs/2212.04089) into a censored Llama 3.1 to abliterate it.

I adapted this recipe to Llama 3.1 70B using [failspy/Meta-Llama-3-70B-Instruct-abliterated-v3.5](https://huggingface.co/failspy/Meta-Llama-3-70B-Instruct-abliterated-v3.5) and optimized the LoRA rank.

The model is fully uncensored in my tests and maintains a high level of quality. A more rigorous evaluation is still needed to measure the impact of this process on benchmarks.

Special thanks to [@grimjim](https://huggingface.co/grimjim) for this technique (see his [8B model](https://huggingface.co/grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter)) and [@FailSpy](https://huggingface.co/failspy) for his [70B abliterated model](https://huggingface.co/failspy/Meta-Llama-3-70B-Instruct-abliterated-v3.5). Please follow them if you're interested in abliterated models.

In addition, thanks to [brev.dev](https://brev.dev/) for providing me with compute!

## ⚡️ Quantization

This model was merged with the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, using ./meta-llama/Meta-Llama-3.1-70B-Instruct + Llama-3-70B-Instruct-abliterated-LORA as a base.

## 🧩 Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: meta-llama/Meta-Llama-3.1-70B-Instruct+Llama-3-70B-Instruct-abliterated-LORA
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  normalize: false
slices:
- sources:
  - layer_range: [0, 80]
    model: meta-llama/Meta-Llama-3.1-70B-Instruct+Llama-3-70B-Instruct-abliterated-LORA
    parameters:
      weight: 1.0
```
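
As a minimal sketch of what `normalize: false` means in this config (toy numbers, assuming the usual task-arithmetic convention where normalizing divides the combined task vectors by the sum of their weights):

```python
import numpy as np

base = np.array([1.0, 2.0, 3.0])
task_vector = np.array([0.5, -0.5, 0.0])  # finetuned minus base, per tensor
weights = [1.0]                            # `weight: 1.0` from the config

combined = sum(w * tv for w, tv in zip(weights, [task_vector]))

# normalize: false -> apply the weighted sum as-is
merged = base + combined

# normalize: true would instead divide by the sum of weights
merged_normalized = base + combined / sum(weights)

# With a single task vector at weight 1.0, both forms coincide,
# so normalization has no effect on this particular merge.
print(np.allclose(merged, merged_normalized))
```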

You can reproduce this model using the following commands:

```bash
# Setup
git clone https://github.com/arcee-ai/mergekit.git
cd mergekit && pip install -e .
pip install bitsandbytes

# Extraction
mergekit-extract-lora failspy/Meta-Llama-3-70B-Instruct-abliterated-v3.5 meta-llama/Meta-Llama-3-70B-Instruct Llama-3-70B-Instruct-abliterated-LORA --rank=64

# Merge using previous config
mergekit-yaml config.yaml Llama-3.1-70B-Instruct-lorablated --allow-crimes --lora-merge-cache=./cache
```
llama-3.1-70b-instruct-lorablated.Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bf30cada02efab5438ea14e1fc055a460cf5c18bfea46b88d03e156f92a31559
+size 26375113216