---
library_name: transformers
tags: []
---
This was an experiment.
I computed the delta between [mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated](https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated) and [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) and applied it to the layers that [ICTNLP/Llama-3.1-8B-Omni](https://huggingface.co/ICTNLP/Llama-3.1-8B-Omni) shares with the base model.

The intention was to see whether the Omni model would inherit the abliterated behavior.

The result (this model) is coherent, but it is not fully uncensored. The most likely reason has to do with the way the Omni model was trained.
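
Below is a minimal sketch of the delta-transfer procedure described above. It assumes all three checkpoints load through `transformers` and that the shared layers keep identical parameter names and shapes; the Omni checkpoint may use different key prefixes or require its own loading code (`trust_remote_code` here is an assumption), and the output path is hypothetical. This is not the exact script used for this model.

```python
# Sketch: apply (abliterated - base) deltas onto the shared layers of the Omni model.
# Assumptions: matching parameter names/shapes across checkpoints; Omni loads via
# AutoModelForCausalLM with trust_remote_code (may not hold for the real repo).
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct", torch_dtype=torch.bfloat16
)
abliterated = AutoModelForCausalLM.from_pretrained(
    "mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated", torch_dtype=torch.bfloat16
)
omni = AutoModelForCausalLM.from_pretrained(
    "ICTNLP/Llama-3.1-8B-Omni", torch_dtype=torch.bfloat16, trust_remote_code=True
)

base_sd = base.state_dict()
abl_sd = abliterated.state_dict()
omni_sd = omni.state_dict()

patched = 0
with torch.no_grad():  # in-place edits on parameters require no grad tracking
    for name, abl_w in abl_sd.items():
        # Only touch parameters all three models share with matching shapes.
        if name in base_sd and name in omni_sd and omni_sd[name].shape == abl_w.shape:
            omni_sd[name] += abl_w - base_sd[name]  # delta = abliterated - base
            patched += 1

print(f"patched {patched} shared tensors")
omni.save_pretrained("Llama-3.1-8B-Omni-abliterated")  # hypothetical output path
```

Working at the state-dict level sidesteps architecture differences: any Omni-specific modules (e.g. speech components) have no counterpart in the two Llama checkpoints, so they are skipped and left untouched.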