Update README.md
README.md CHANGED
@@ -57,7 +57,7 @@ Both experts are used in tandem when generating a token.
 
 *The model has 131072 context length, and is on Llama-3.1 architecture.*
 
-*I did not abliterate the base model at all, so it will refuse zero-shot unethical questions
+*I did not abliterate the base model at all, so it will refuse zero-shot unethical questions. I recommend avoiding keywords like 'assistant, helpful, kind'*
 
 # Recipe (I'm sorry...):
 ```yaml