Hack337 committed
Commit 3a81b87
1 Parent(s): 43e6499

Update README.md

Files changed (1)
  1. README.md +41 -3
README.md CHANGED
@@ -1,3 +1,41 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: other
+ license_name: qwen-research
+ license_link: https://huggingface.co/Hack337/WavGPT-1.5-GGUF/blob/main/LICENSE
+ language:
+ - en
+ - ru
+ pipeline_tag: text-generation
+ base_model:
+ - Hack337/WavGPT-1.5
+ - Qwen/Qwen2.5-3B-Instruct
+ tags:
+ - chat
+ ---
+
+ # WavGPT-1.5-GGUF
+
+ ## Quickstart
+
+ Check out our [llama.cpp documentation](https://qwen.readthedocs.io/en/latest/run_locally/llama.cpp.html) for a more detailed usage guide.
+
+ We advise you to clone [`llama.cpp`](https://github.com/ggerganov/llama.cpp) and install it following the official guide; we track the latest version of llama.cpp.
+ In the following demonstration, we assume that you are running commands from the root of the `llama.cpp` repository.
+
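+ As a minimal build sketch (assuming git and CMake are available; with this layout the binaries land in `build/bin/`, so adjust the paths below if yours differ):
+
+ ```shell
+ # clone and build the llama.cpp CLI tools; see the official guide for GPU-specific options
+ git clone https://github.com/ggerganov/llama.cpp
+ cd llama.cpp
+ cmake -B build
+ cmake --build build --config Release
+ ```
+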
+ Since cloning the entire model repo may be inefficient, you can manually download the GGUF file that you need or use `huggingface-cli`:
+ 1. Install `huggingface_hub`:
+ ```shell
+ pip install -U huggingface_hub
+ ```
+ 2. Download the GGUF file:
+ ```shell
+ huggingface-cli download Hack337/WavGPT-1.5-GGUF WavGPT-1.5.gguf --local-dir . --local-dir-use-symlinks False
+ ```
+
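+ If you prefer a direct download instead of `huggingface-cli`, a minimal sketch (the URL follows the standard Hugging Face `resolve` pattern for this repo):
+ ```shell
+ # fetch the same GGUF file over HTTPS
+ wget https://huggingface.co/Hack337/WavGPT-1.5-GGUF/resolve/main/WavGPT-1.5.gguf
+ ```
+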
+ For a chatbot-like experience, it is recommended to start in conversation mode:
+
+ ```shell
+ ./llama-cli -m <gguf-file-path> \
+     -co -cnv -p "Вы очень полезный помощник." \
+     -fa -ngl 80 -n 512
+ ```
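+
+ Here `-co` colorizes the output, `-cnv` enables the interactive conversation mode, `-p` sets the system prompt (the example prompt is Russian for "You are a very helpful assistant."), `-fa` turns on flash attention, `-ngl 80` offloads up to 80 layers to the GPU, and `-n 512` caps the number of generated tokens.
+
+ If you would rather expose the model over an OpenAI-compatible HTTP API, llama.cpp also ships `llama-server`; a minimal sketch (the port and context size are illustrative choices, not values from this repo):
+
+ ```shell
+ # start the server, then query its /v1/chat/completions endpoint
+ ./llama-server -m <gguf-file-path> -fa -ngl 80 -c 4096 --port 8080
+
+ curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
+   "messages": [
+     {"role": "system", "content": "Вы очень полезный помощник."},
+     {"role": "user", "content": "Hello!"}
+   ]
+ }'
+ ```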