|
---
license: other
license_name: qwen-research
license_link: https://huggingface.co/Hack337/WavGPT-1.5-GGUF/blob/main/LICENSE
language:
- en
- ru
pipeline_tag: text-generation
base_model:
- Hack337/WavGPT-1.5
- Qwen/Qwen2.5-3B-Instruct
tags:
- chat
---
|
|
|
# WavGPT-1.5-GGUF |
|
|
|
## Quickstart |
|
|
|
Check out our [llama.cpp documentation](https://qwen.readthedocs.io/en/latest/run_locally/llama.cpp.html) for a more detailed usage guide.
|
|
|
We advise you to clone [`llama.cpp`](https://github.com/ggerganov/llama.cpp) and build it following the official guide; we track the latest version of llama.cpp.
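
A minimal sketch of that setup (a plain CPU CMake build; see the official guide for GPU backends and platform-specific options):

```shell
# Clone llama.cpp and build it with CMake
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
# Note: with CMake builds, the binaries land under build/bin/
```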
|
In the following demonstration, we assume you are running commands from the root of the `llama.cpp` repository.
|
|
|
Since cloning the entire model repository can be inefficient, you can manually download just the GGUF file you need, or use `huggingface-cli`:
|
1. Install `huggingface_hub`:
|
```shell
pip install -U huggingface_hub
```
|
2. Download: |
|
```shell
huggingface-cli download Hack337/WavGPT-1.5-GGUF WavGPT-1.5.gguf --local-dir . --local-dir-use-symlinks False
```
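
Optionally, large downloads can be sped up with the `hf_transfer` backend of `huggingface_hub`, enabled through an environment variable (this is an optional extra, not required for the steps above):

```shell
# Install the optional accelerated-transfer backend and enable it for this command
pip install -U hf_transfer
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download Hack337/WavGPT-1.5-GGUF WavGPT-1.5.gguf --local-dir .
```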
|
|
|
To achieve a chatbot-like experience, it is recommended to start in conversation mode (the Russian system prompt below means "You are a very helpful assistant."):
|
|
|
```shell
./llama-cli -m <gguf-file-path> \
    -co -cnv -p "Вы очень полезный помощник." \
    -fa -ngl 80 -n 512
```
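
Alternatively, you can serve the model over an OpenAI-compatible HTTP API with `llama-server`. A minimal sketch, assuming the GGUF file name from the download step above (adjust the path and flags to your build):

```shell
# Start an OpenAI-compatible server (binary location may differ by build)
./llama-server -m WavGPT-1.5.gguf --port 8080 -fa -ngl 80

# In another terminal: query the chat completions endpoint
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "messages": [
    {"role": "system", "content": "Вы очень полезный помощник."},
    {"role": "user", "content": "Hello!"}
  ],
  "max_tokens": 512
}'
```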