---
license: other
license_name: qwen-research
license_link: https://huggingface.co/Hack337/WavGPT-1.5-GGUF/blob/main/LICENSE
language:
- en
- ru
pipeline_tag: text-generation
base_model:
- Hack337/WavGPT-1.5
- Qwen/Qwen2.5-3B-Instruct
tags:
- chat
---
# WavGPT-1.5-GGUF
## Quickstart
Check out our [llama.cpp documentation](https://qwen.readthedocs.io/en/latest/run_locally/llama.cpp.html) for a detailed usage guide.
We advise you to clone [`llama.cpp`](https://github.com/ggerganov/llama.cpp) and install it following the official guide. We track the latest version of `llama.cpp`.
In the following demonstration, we assume that you are running commands from the root of the `llama.cpp` repository.
Since cloning the entire model repository may be inefficient, you can manually download only the GGUF file you need, or use `huggingface-cli`:
1. Install `huggingface_hub`:
```shell
pip install -U huggingface_hub
```
2. Download:
```shell
huggingface-cli download Hack337/WavGPT-1.5-GGUF WavGPT-1.5.gguf --local-dir . --local-dir-use-symlinks False
```
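If you prefer doing this from Python instead of the CLI, the `huggingface_hub` library exposes the same functionality via `hf_hub_download`. The sketch below is a minimal example, assuming `huggingface_hub` is installed; calling `download_model()` fetches the file.

```python
# Minimal sketch: download a single GGUF file with the huggingface_hub
# Python API instead of the huggingface-cli command.
from huggingface_hub import hf_hub_download

REPO_ID = "Hack337/WavGPT-1.5-GGUF"
FILENAME = "WavGPT-1.5.gguf"


def download_model(local_dir: str = ".") -> str:
    """Fetch the GGUF file into local_dir and return its local path."""
    return hf_hub_download(
        repo_id=REPO_ID,
        filename=FILENAME,
        local_dir=local_dir,
    )
```

This downloads only the single GGUF file rather than the whole repository, mirroring the `huggingface-cli download` command above.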
For a chatbot-like experience, it is recommended to start in conversation mode (the system prompt below is Russian for "You are a very helpful assistant."):
```shell
./llama-cli -m <gguf-file-path> \
-co -cnv -p "Вы очень полезный помощник." \
-fa -ngl 80 -n 512
```