willhe-xverse committed
Commit 5997e64 • Parent: e1fe150
Update README.md

README.md CHANGED
@@ -9,7 +9,7 @@ inference: false
 
 ## Update Information
 
-- **[2024/03/25]** Released the XVERSE-13B-Chat-GGUF model, supporting llama.cpp inference on MacOS/Linux/Windows systems
 - **[2023/11/06]** Released new versions of the **XVERSE-13B-2** base model and the **XVERSE-13B-Chat-2** chat model. Compared with the original versions, the new models are trained far more thoroughly (from 1.4T to 3.2T tokens), with substantial improvements across all capabilities and newly added tool-calling support.
 - **[2023/09/26]** Released the 7B [XVERSE-7B](https://github.com/xverse-ai/XVERSE-7B) base model and [XVERSE-7B-Chat](https://github.com/xverse-ai/XVERSE-7B) chat model, which can be deployed and run on a single consumer-grade GPU while remaining high-performance, fully open source, and free for commercial use.
 - **[2023/08/22]** Released the instruction-finetuned XVERSE-13B-Chat chat model.
@@ -17,7 +17,7 @@ inference: false
 
 ## Update Information
 
-- **[2024/03/25]** Released the XVERSE-13B-Chat
 - **[2023/11/06]** The new **XVERSE-13B-2** base model and **XVERSE-13B-Chat-2** chat model have been released. Compared to the original versions, the new models have undergone more extensive training (from 1.4T to 3.2T tokens), resulting in significant improvements in all capabilities, along with the addition of Function Call abilities.
 - **[2023/09/26]** Released the 7B [XVERSE-7B](https://github.com/xverse-ai/XVERSE-7B) base model and [XVERSE-7B-Chat](https://github.com/xverse-ai/XVERSE-7B) instruction-finetuned model, which support deployment on a single consumer-grade graphics card while maintaining high performance, full open source, and free commercial use.
 - **[2023/08/22]** Released the aligned, instruction-finetuned XVERSE-13B-Chat model.
@@ -61,7 +61,7 @@ We advise you to clone [`llama.cpp`](https://github.com/ggerganov/llama.cpp) and
 
 huggingface-cli download xverse/XVERSE-13B-Chat-GGUF xverse-13b-chat-q4_k_m.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
-We demonstrate how to use `llama.cpp` to run xverse-13b
 
 ```bash
 ./main -m xverse-13b-chat-q4_k_m.gguf -n 512 --color -i --temp 0.85 --top_k 30 --top_p 0.85 --repeat_penalty 1.1 -ins # -ngl 99 for GPU
@@ -75,7 +75,7 @@ Cloning the repo may be inefficient, and thus you can manually download the GGUF
 
 huggingface-cli download xverse/XVERSE-13B-Chat-GGUF xverse-13b-chat-q4_k_m.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
-We demonstrate how to use `llama.cpp` to run xverse-13b
 
 ```shell
 ./main -m xverse-13b-chat-q4_k_m.gguf -n 512 --color -i --temp 0.85 --top_k 30 --top_p 0.85 --repeat_penalty 1.1 -ins # -ngl 99 for GPU
 
 ## Update Information
 
+- **[2024/03/25]** Released the XVERSE-13B-Chat-GGUF model, supporting inference of the XVERSE-13B-Chat model with llama.cpp on MacOS/Linux/Windows systems.
 - **[2023/11/06]** Released new versions of the **XVERSE-13B-2** base model and the **XVERSE-13B-Chat-2** chat model. Compared with the original versions, the new models are trained far more thoroughly (from 1.4T to 3.2T tokens), with substantial improvements across all capabilities and newly added tool-calling support.
 - **[2023/09/26]** Released the 7B [XVERSE-7B](https://github.com/xverse-ai/XVERSE-7B) base model and [XVERSE-7B-Chat](https://github.com/xverse-ai/XVERSE-7B) chat model, which can be deployed and run on a single consumer-grade GPU while remaining high-performance, fully open source, and free for commercial use.
 - **[2023/08/22]** Released the instruction-finetuned XVERSE-13B-Chat chat model.
 
 ## Update Information
 
+- **[2024/03/25]** Released the XVERSE-13B-Chat-GGUF models, supporting llama.cpp inference of the XVERSE-13B-Chat model on MacOS/Linux/Windows systems.
 - **[2023/11/06]** The new **XVERSE-13B-2** base model and **XVERSE-13B-Chat-2** chat model have been released. Compared to the original versions, the new models have undergone more extensive training (from 1.4T to 3.2T tokens), resulting in significant improvements in all capabilities, along with the addition of Function Call abilities.
 - **[2023/09/26]** Released the 7B [XVERSE-7B](https://github.com/xverse-ai/XVERSE-7B) base model and [XVERSE-7B-Chat](https://github.com/xverse-ai/XVERSE-7B) instruction-finetuned model, which support deployment on a single consumer-grade graphics card while maintaining high performance, full open source, and free commercial use.
 - **[2023/08/22]** Released the aligned, instruction-finetuned XVERSE-13B-Chat model.
 
 huggingface-cli download xverse/XVERSE-13B-Chat-GGUF xverse-13b-chat-q4_k_m.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
+We demonstrate how to use `llama.cpp` to run the xverse-13b-chat-q4_k_m.gguf model:
 
 ```bash
 ./main -m xverse-13b-chat-q4_k_m.gguf -n 512 --color -i --temp 0.85 --top_k 30 --top_p 0.85 --repeat_penalty 1.1 -ins # -ngl 99 for GPU
 
 huggingface-cli download xverse/XVERSE-13B-Chat-GGUF xverse-13b-chat-q4_k_m.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
+We demonstrate how to use `llama.cpp` to run the xverse-13b-chat-q4_k_m.gguf model:
 
 ```shell
 ./main -m xverse-13b-chat-q4_k_m.gguf -n 512 --color -i --temp 0.85 --top_k 30 --top_p 0.85 --repeat_penalty 1.1 -ins # -ngl 99 for GPU
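The sampling flags in the `./main` invocation (`--top_k 30 --top_p 0.85 --repeat_penalty 1.1`) can be clarified with a short sketch. This is an illustrative Python reimplementation of the general top-k/top-p (nucleus) filtering and repetition-penalty techniques, not llama.cpp's actual code; the logit values below are made up for the example.

```python
import math

def filter_candidates(logits, top_k=30, top_p=0.85):
    """Sketch of --top_k/--top_p: restrict which tokens may be sampled."""
    # Rank token ids by logit, descending, and keep only the top_k best.
    ranked = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:top_k]
    # Softmax over the surviving candidates.
    m = max(logits[i] for i in ranked)
    exps = [math.exp(logits[i] - m) for i in ranked]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Keep the smallest prefix whose cumulative probability reaches top_p.
    kept, cum = [], 0.0
    for tok, p in zip(ranked, probs):
        kept.append(tok)
        cum += p
        if cum >= top_p:
            break
    return kept

def apply_repeat_penalty(logits, prev_tokens, penalty=1.1):
    """Sketch of --repeat_penalty: damp logits of already-generated tokens."""
    out = list(logits)
    for t in set(prev_tokens):
        out[t] = out[t] / penalty if out[t] > 0 else out[t] * penalty
    return out

# Token 3 dominates, so nucleus filtering keeps a single candidate.
print(filter_candidates([1.0, 2.0, 0.5, 6.0, 3.0], top_k=3, top_p=0.85))  # → [3]
```

Lowering `--top_p` or `--top_k` narrows the candidate set and makes output more deterministic, while a `--repeat_penalty` above 1.0 discourages the model from re-emitting tokens it has already produced.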