Update README.md
README.md CHANGED
@@ -40,3 +40,7 @@ Astra-v1-12B can be used directly for a wide range of NLP tasks, including:
Astra-v1-12B is not intended for real-time decision-making in critical applications or generating harmful or biased content.
## How to Get Started with the Quantized Model
To run the quantized version of the model, you can use [KoboldCPP](https://github.com/LostRuins/koboldcpp), which allows you to run quantized GGUF models locally.
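Once KoboldCPP is running with the GGUF file loaded, you can talk to it over its local HTTP API. The snippet below is a minimal sketch, assuming a default KoboldCPP setup listening on `http://localhost:5001` and exposing the KoboldAI-style `/api/v1/generate` endpoint; verify the address, endpoint, and parameter names against the KoboldCPP documentation for your version before relying on them.

```python
# Minimal sketch: query a locally running KoboldCPP instance over its HTTP API.
# Assumes KoboldCPP was started with the Astra-v1-12B GGUF loaded and is serving
# on its default address (http://localhost:5001). Endpoint and parameter names
# follow the KoboldAI-style API that KoboldCPP exposes; double-check them in the
# KoboldCPP docs for your version.
import json
import urllib.request

API_URL = "http://localhost:5001/api/v1/generate"  # default port; adjust if you changed it

payload = {
    "prompt": "Write a short introduction to quantized language models.",
    "max_length": 200,    # number of tokens to generate
    "temperature": 0.7,   # sampling temperature
}

request = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    result = json.loads(response.read().decode("utf-8"))

# The KoboldAI-style API returns the generated text under results[0]["text"].
print(result["results"][0]["text"])
```

Launch KoboldCPP with the downloaded GGUF file first (see the KoboldCPP README for platform-specific launch instructions), then run the snippet against the local server.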
I encourage you to provide feedback on the model's performance. If you'd like to create your own quantizations, feel free to do so and let me know how they work for you!