Mihaiii committed
Commit e85bfc3
1 parent: 3d90f19

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED

```diff
@@ -4,7 +4,7 @@ tags: []
 ---
 
 This was an experiment.
-I got the delta between [mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated](https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated) and [meta-llama/Llama-3.1-8B-Instruct)](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) and applied that on the common layers from [ICTNLP/Llama-3.1-8B-Omni](https://huggingface.co/ICTNLP/Llama-3.1-8B-Omni).
+I got the delta between [mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated](https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated) and [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) and applied that on the common layers from [ICTNLP/Llama-3.1-8B-Omni](https://huggingface.co/ICTNLP/Llama-3.1-8B-Omni).
 
 The intention was to see if the Omni model can gain abliterated functions.
-The result (this model) is coherent, but it's not 100% uncensored. The reason most probably has to do with the way the Omni model was trained (during the finetuning process for omni, the llama layers received completely different tokens than a standard LLM).
+The result (this model) is coherent, but it's not 100% uncensored. The reason most probably has to do with the way the Omni model was trained.
```
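The delta-and-apply procedure the README describes can be sketched as follows. This is a hypothetical illustration, not the author's actual script: it computes the parameter-wise difference between the abliterated model and its base, then adds that difference onto the layers the Omni model shares with them. Real use would load the three Hugging Face checkpoints as tensor state dicts; tiny float "state dicts" stand in here so the sketch stays self-contained.

```python
# Hypothetical helper sketching the experiment: add (abliterated - base)
# onto every parameter the target model shares with the other two.
def apply_delta(abliterated: dict, base: dict, target: dict) -> dict:
    """Return a copy of target with the delta applied to shared keys."""
    merged = dict(target)  # target-only layers (e.g. speech adapters) pass through
    for key in set(abliterated) & set(base) & set(target):
        merged[key] = target[key] + (abliterated[key] - base[key])
    return merged

# Toy stand-ins for the real 8B checkpoints:
base = {"layers.0.weight": 1.0}
abliterated = {"layers.0.weight": 3.0}  # delta of +2.0 vs. base
omni = {"layers.0.weight": 0.5, "speech.weight": 7.0}

merged = apply_delta(abliterated, base, omni)
# Shared layer receives the delta; the Omni-only layer is left untouched.
```

With real checkpoints the same logic would run per tensor (e.g. via `safetensors` or `torch` state dicts), skipping any keys whose shapes differ between the three models.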