Created by: [Qwen](https://huggingface.co/Qwen)

## Quantization notes

Made with Exllamav2 0.1.5 and the default dataset.
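A quant like this is typically produced with Exllamav2's `convert.py` script. The sketch below is hypothetical: the paths and bits-per-weight value are placeholders, not the exact settings used for this repo.

```shell
# Hypothetical sketch: quantizing a model to EXL2 with exllamav2's convert.py.
# Paths and the bits-per-weight value are placeholders.
#   -i  input FP16 model directory
#   -o  working directory for intermediate files
#   -cf output directory for the finished quant
#   -b  target bits per weight
python convert.py -i /path/to/Qwen2-7B-Instruct-abliterated -o /tmp/exl2_work -cf /path/to/output-exl2 -b 4.0
```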
## How to run

This quantization runs on the GPU and requires the Exllamav2 loader; the model files must be fully loaded into VRAM to work.
It should work well either with Nvidia RTX cards on Windows/Linux or with AMD cards on Linux. For other hardware it is better to use GGUF models instead.
This model can be loaded in the following applications:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI](https://github.com/henk717/KoboldAI)
- [ExUI](https://github.com/turboderp/exui), etc.
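The quant can also be driven from the Exllamav2 Python API directly. This is a minimal sketch assuming the `exllamav2` package (0.1.x) is installed and a GPU with enough VRAM is available; the model path is a placeholder.

```python
# Hypothetical sketch of loading an EXL2 quant with the exllamav2 Python API.
# Requires a CUDA/ROCm GPU with enough VRAM for the full model; the path is a placeholder.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

config = ExLlamaV2Config("/path/to/Qwen2-7B-Instruct-abliterated-exl2")
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # allocate the KV cache as layers load
model.load_autosplit(cache)                # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Hello, how are you?", max_new_tokens=64))
```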

# Original model card

# Qwen2-7B-Instruct-abliterated