---
license: gpl-3.0
tags:
- vicuna
- ggml
pipeline_tag: conversational
language:
- en
- bg
- ca
- cs
- da
- de
- es
- fr
- hr
- hu
- it
- nl
- pl
- pt
- ro
- ru
- sl
- sr
- sv
- uk
library_name: adapter-transformers
---
|
|
|
Note: if you downloaded the q4_0 model before April 26th, 2023, you are using an outdated model and I suggest redownloading it for a better experience. See https://github.com/ggerganov/llama.cpp#quantization for details on the different quantization types.
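If you are unsure whether a local `.bin` file predates that change, one quick check is the file header. Below is a minimal sketch that reads the magic and, for versioned containers, the version field; the magic constants are taken from llama.cpp's `llama.h` of around that period and the filename is only a placeholder, so double-check both against the llama.cpp version you actually build.

```python
import struct

# Magic constants as defined in llama.cpp's llama.h (circa spring 2023);
# verify against the headers in the llama.cpp checkout you are using.
MAGICS = {
    0x67676D6C: "ggml (old, unversioned)",
    0x67676D66: "ggmf (versioned)",
    0x67676A74: "ggjt (versioned, mmap-able)",
}

def inspect_header(path: str) -> None:
    """Print the container magic and, if present, the file format version."""
    with open(path, "rb") as f:
        magic = struct.unpack("<I", f.read(4))[0]
        name = MAGICS.get(magic, "unknown")
        print(f"magic:   0x{magic:08x} ({name})")
        if name.startswith(("ggmf", "ggjt")):
            version = struct.unpack("<I", f.read(4))[0]
            print(f"version: {version}")

inspect_header("ggml-vic13b-q4_0.bin")  # placeholder filename
```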
|
|
|
This is a GGML version of Vicuna 7B and 13B. It is the censored model; a similar uncensored 1.0 13B model can be found at https://huggingface.co/eachadea/ggml-vicuna-13b-1.1.
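For reference, here is a minimal sketch of loading one of these files through the llama-cpp-python bindings. It assumes a release of llama-cpp-python from before the GGUF transition (newer releases no longer read GGML files); the filename, prompt, and generation parameters are placeholders, not part of this repo.

```python
# Assumes a pre-GGUF release of llama-cpp-python that can still load GGML files.
from llama_cpp import Llama

# Placeholder filename: point this at whichever quantized .bin you downloaded.
llm = Llama(model_path="ggml-vic13b-q4_0.bin", n_ctx=2048)

# Vicuna v1.1 checkpoints expect a USER:/ASSISTANT: prompt; v0 checkpoints use
# "### Human:" / "### Assistant:" instead, so match the format to your weights.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions.\n"
    "USER: Name three uses for a locally quantized language model.\nASSISTANT:"
)

out = llm(prompt, max_tokens=128, stop=["USER:"])
print(out["choices"][0]["text"].strip())
```

llama.cpp's own `./main` binary works just as well: pass the file with `-m` and a prompt in the same format.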