Model creator: [ajibawa-2023](https://huggingface.co/ajibawa-2023)
Quantization was made with Exllamav2 0.0.8, using Pile as the calibration dataset.
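For reference, a quant like this one can be reproduced with exllamav2's `convert.py` script. This is a hedged sketch, not the exact command used here: the paths and the bits-per-weight target are assumptions, since the README does not state them.

```shell
# Hypothetical re-creation of the quant; all paths and the 4.0 bpw target
# are placeholders. Flags follow exllamav2's convert.py: -i source model,
# -o working directory, -cf output directory, -b target bits per weight,
# -c calibration dataset in .parquet form (here, Pile).
python convert.py \
    -i ./SlimOrca-13B \
    -o ./work \
    -cf ./SlimOrca-13B-exl2 \
    -b 4.0 \
    -c ./pile.parquet
```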

## How to run

This quantization method runs on GPU and requires an Exllamav2 loader, which is available in the following applications:

[Text Generation Webui](https://github.com/oobabooga/text-generation-webui)

[KoboldAI](https://github.com/henk717/KoboldAI)

[ExUI](https://github.com/turboderp/exui)
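The quant can also be loaded directly from Python with the exllamav2 library itself. This is a minimal sketch following exllamav2's bundled inference example, not a tested recipe: the model directory path, sampling settings, and prompt are placeholders, it requires a CUDA GPU with the weights already downloaded, and the API may differ slightly between exllamav2 versions.

```python
# Minimal sketch of loading an exl2 quant with the exllamav2 library.
# The model directory path is a placeholder; API per exllamav2's
# bundled inference example and may vary across library versions.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./SlimOrca-13B-exl2"  # placeholder path to the quantized weights
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across available GPU memory

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7  # placeholder sampling settings

output = generator.generate_simple("Hello, my name is", settings, num_tokens=64)
print(output)
```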

## Original model card:

**SlimOrca-13B: A General Purpose Intelligent Model**