Update README.md
README.md (CHANGED)

```diff
@@ -4,6 +4,30 @@ tags:
 - vicuna
 - ggml
 pipeline_tag: conversational
+language:
+- en
+- bg
+- ca
+- cs
+- da
+- de
+- es
+- fr
+- hr
+- hu
+- it
+- nl
+- pl
+- pt
+- ro
+- ru
+- sl
+- sr
+- sv
+- uk
+library_name: adapter-transformers
 ---
 
-
+Note: If you previously used the q4_0 model before April 26th, 2023, you are using an outdated model. I suggest redownloading for a better experience. Check https://github.com/ggerganov/llama.cpp#quantization for details on the different quantization types.
+
+This is a ggml version of vicuna 7b and 13b. This is the censored model, a similar uncensored 7b model can be found at https://huggingface.co/eachadea/ggml-vicuna-13b-1.1.
```