Update README.md
README.md
CHANGED
@@ -7,4 +7,4 @@ This was an experiment.
I got the delta between [mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated](https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated) and [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) and applied it to the common layers of [ICTNLP/Llama-3.1-8B-Omni](https://huggingface.co/ICTNLP/Llama-3.1-8B-Omni).
The intention was to see whether the Omni model could inherit the abliterated behavior.
-The result (this model) is coherent, but it's not 100% uncensored. The reason most probably has to do with the way the Omni model was trained (the llama layers received completely different tokens than a standard LLM).
+The result (this model) is coherent, but it's not 100% uncensored. The reason most probably has to do with the way the Omni model was trained (during the finetuning process for Omni, the Llama layers received completely different tokens than a standard LLM).
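For readers who want the mechanics, below is a minimal sketch of the delta-transfer step in PyTorch. The actual merge script isn't part of this commit, so treat this as an illustration of the arithmetic under two assumptions: the two Instruct checkpoints share identical parameter names, and the Omni checkpoint's Llama weights can be dumped to a plain state dict (`omni_weights.pt` below is a hypothetical local file).

```python
import torch
from transformers import AutoModelForCausalLM

def state_dict_of(repo_id):
    # Load a checkpoint and return its weights; bf16 keeps the 8B models manageable.
    model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)
    return model.state_dict()

base = state_dict_of("meta-llama/Llama-3.1-8B-Instruct")
abliterated = state_dict_of("mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated")

# The abliteration delta: per-tensor difference between the two Instruct models.
delta = {name: abliterated[name] - base[name] for name in base}

# Assumption: the Omni model's weights were previously dumped to a plain state
# dict under the same parameter names (Llama-3.1-8B-Omni ships extra speech
# modules that need the repo's own loading code; this path is hypothetical).
omni_sd = torch.load("omni_weights.pt")

# Add the delta only where both models share a tensor of the same name and
# shape, i.e. the common Llama layers; Omni-specific weights stay untouched.
for name, d in delta.items():
    if name in omni_sd and omni_sd[name].shape == d.shape:
        omni_sd[name] = omni_sd[name].to(torch.bfloat16) + d

torch.save(omni_sd, "omni_abliterated.pt")
```

Matching on both name and shape is what restricts the update to the shared layers: any Omni-specific tensor simply fails the lookup and is carried over unchanged.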