mzwing committed
Commit 3bff96b • 0 Parent(s)
GGUF model commit (made with llama.cpp commit 0a7c980)
Browse files
- .gitattributes +49 -0
- AquilaChat2-7B-16K.F16.gguf +3 -0
- AquilaChat2-7B-16K.F32.gguf +3 -0
- AquilaChat2-7B-16K.Q2_K.gguf +3 -0
- AquilaChat2-7B-16K.Q3_K_L.gguf +3 -0
- AquilaChat2-7B-16K.Q3_K_M.gguf +3 -0
- AquilaChat2-7B-16K.Q3_K_S.gguf +3 -0
- AquilaChat2-7B-16K.Q4_0.gguf +3 -0
- AquilaChat2-7B-16K.Q4_K_M.gguf +3 -0
- AquilaChat2-7B-16K.Q4_K_S.gguf +3 -0
- AquilaChat2-7B-16K.Q5_0.gguf +3 -0
- AquilaChat2-7B-16K.Q5_K_M.gguf +3 -0
- AquilaChat2-7B-16K.Q5_K_S.gguf +3 -0
- AquilaChat2-7B-16K.Q6_K.gguf +3 -0
- AquilaChat2-7B-16K.Q8_0.gguf +3 -0
- README.md +309 -0
.gitattributes
ADDED
@@ -0,0 +1,49 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
AquilaChat2-7B-16K.F16.gguf filter=lfs diff=lfs merge=lfs -text
AquilaChat2-7B-16K.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
AquilaChat2-7B-16K.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
AquilaChat2-7B-16K.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
AquilaChat2-7B-16K.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
AquilaChat2-7B-16K.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
AquilaChat2-7B-16K.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
AquilaChat2-7B-16K.Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
AquilaChat2-7B-16K.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
AquilaChat2-7B-16K.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
AquilaChat2-7B-16K.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
AquilaChat2-7B-16K.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
AquilaChat2-7B-16K.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
AquilaChat2-7B-16K.F32.gguf filter=lfs diff=lfs merge=lfs -text
AquilaChat2-7B-16K.F16.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6cc9af03b92ceb697aee8aa843d9108bbfd05c4b13eed28758bbf43db9103a42
size 14596449856
AquilaChat2-7B-16K.F32.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c4de9d9ab0d75208768106fc8cbb4a313a1e2f20d029056c46be39577498ea6b
size 29186991680
AquilaChat2-7B-16K.Q2_K.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7036d19d86d55b3e75274a0906925b5abd9263ac255d891f1e5a4ebaf985ea00
size 2856875392
AquilaChat2-7B-16K.Q3_K_L.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9cd4d0199a6b66f17583cbf3ecc60ede36287e8271faf0c7a2d3731e51f8f461
size 3949414016
AquilaChat2-7B-16K.Q3_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:37b04b9a620729f1a2da1d525e4a3a41fcd94cdf991e3df41def99dbb2f54533
size 3650307712
AquilaChat2-7B-16K.Q3_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:48f9caeeb7bde29a2b8a5522a6f8642fe0b2923f1d21c64114b8ab81bc83099c
size 3300607616
AquilaChat2-7B-16K.Q4_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5ef8749c47c5ea2354196ad1d4b8d861bb4ccc2779064b8f211860f8b635cff7
size 4215106432
AquilaChat2-7B-16K.Q4_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:589c4c6101fa7098d922468ef1b559fdf7f7a4574e21c05b512d7641462a575f
size 4470303616
AquilaChat2-7B-16K.Q4_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a1e9118541afa9cd409b95766515576db13828166bc1708edd87fd0933b2f48d
size 4246039424
AquilaChat2-7B-16K.Q5_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:81f58a8b6e0f295b4d290725c74ea36dd75e2b91e3b9d51c7a12538c2c317955
size 5075811200
AquilaChat2-7B-16K.Q5_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:efbc53a834214bb54752447b67623b44de5372622f3d018998d609f88afa9a2f
size 5207276416
AquilaChat2-7B-16K.Q5_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:49c7823abea6b31c57f7ff7b916a81dc0322e62363cd7beeceedfd894a8fd2c7
size 5075811200
AquilaChat2-7B-16K.Q6_K.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d036e986b610e8802d50fb60f6fbd6512336eef1dc22db35a8cf6bd35294f5a5
size 5990310016
AquilaChat2-7B-16K.Q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:43f00cfcdb3542ed4368b3fc46716311ca0a8c69a17a24a0fb7350af1dc93b1f
size 7757133440
README.md
ADDED
@@ -0,0 +1,309 @@
---
base_model: BAAI/AquilaChat2-7B-16K
inference: false
license: other
model_creator: Beijing Academy of Artificial Intelligence
model_name: Aquilachat2 7B 16K
model_type: aquila
prompt_template: >
  System: A chat between a curious human and an artificial intelligence
  assistant. The assistant gives helpful, detailed, and polite answers to the
  human's questions.

  Human: {prompt}

  Assistant:
quantized_by: mzwing
---

# AquilaChat2 7B 16K - GGUF
- Model creator: [Beijing Academy of Artificial Intelligence](https://huggingface.co/BAAI)
- Original model: [AquilaChat2 7B 16K](https://huggingface.co/BAAI/AquilaChat2-7B-16K)

<!-- description start -->
## Description

This repo contains GGUF format model files for [Beijing Academy of Artificial Intelligence's AquilaChat2 7B 16K](https://huggingface.co/BAAI/AquilaChat2-7B-16K).

These files were quantised using hardware kindly provided by [Google Colab](https://colab.research.google.com/) (a free CPU machine).

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mzwing/AI-related/blob/master/notebooks/AquilaChat2_7B_16K_GGUF.ipynb)

The quantisation notebook is also available in [my GitHub repo](https://github.com/mzwing/AI-related/blob/master/notebooks/AquilaChat2_7B_16K_GGUF.ipynb).

<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [Nitro](https://nitro.jan.ai/), a fast, lightweight 3 MB inference server to supercharge apps with local AI, with an OpenAI-compatible API server.

<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available

* [2, 3, 4, 5, 6, 8, 16 and 32-bit GGUF models for CPU+GPU inference](https://huggingface.co/mzwing/AquilaChat2-7B-16K-GGUF)
* [Beijing Academy of Artificial Intelligence's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/BAAI/AquilaChat2-7B-16K)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: AquilaChat

```
System: A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.
Human: {prompt}
Assistant:
```

<!-- prompt-template end -->
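
For scripted use, the template is plain string substitution. The helper below is only an illustrative sketch (the function name and example question are not part of the original README):

```python
def build_aquilachat_prompt(user_message: str) -> str:
    """Fill the AquilaChat prompt template with a single user turn."""
    return (
        "System: A chat between a curious human and an artificial intelligence assistant. "
        "The assistant gives helpful, detailed, and polite answers to the human's questions.\n"
        f"Human: {user_message}\n"
        "Assistant:"
    )

# Example:
print(build_aquilachat_prompt("Give me ten reasons to visit Beijing."))
```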

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th 2023 onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221).

They are also compatible with many third-party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods

<details>
<summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.

Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
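
As a sanity check on those bpw figures, here is a back-of-the-envelope calculation for GGML_TYPE_Q4_K. It assumes one fp16 scale and one fp16 min per 256-weight super-block (a detail of the llama.cpp k-quant layout not spelled out in the list above):

```python
# Rough bits-per-weight check for GGML_TYPE_Q4_K.
weights_per_superblock = 8 * 32          # 8 blocks x 32 weights
quant_bits = weights_per_superblock * 4  # 4-bit quantised weights
scale_min_bits = (8 + 8) * 6             # per-block scales and mins, 6 bits each
superblock_fp16_bits = 2 * 16            # super-block scale and min in fp16 (assumption)

bpw = (quant_bits + scale_min_bits + superblock_fp16_bits) / weights_per_superblock
print(bpw)  # 4.5
```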

<!-- README_GGUF.md-provided-files start -->
## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [AquilaChat2-7B-16K.Q2_K.gguf](https://huggingface.co/mzwing/AquilaChat2-7B-16K-GGUF/blob/main/AquilaChat2-7B-16K.Q2_K.gguf) | Q2_K | 2 | 2.86 GB | not yet tested | smallest, significant quality loss - not recommended for most purposes |
| [AquilaChat2-7B-16K.Q3_K_S.gguf](https://huggingface.co/mzwing/AquilaChat2-7B-16K-GGUF/blob/main/AquilaChat2-7B-16K.Q3_K_S.gguf) | Q3_K_S | 3 | 3.30 GB | not yet tested | very small, high quality loss |
| [AquilaChat2-7B-16K.Q3_K_M.gguf](https://huggingface.co/mzwing/AquilaChat2-7B-16K-GGUF/blob/main/AquilaChat2-7B-16K.Q3_K_M.gguf) | Q3_K_M | 3 | 3.65 GB | not yet tested | very small, high quality loss |
| [AquilaChat2-7B-16K.Q3_K_L.gguf](https://huggingface.co/mzwing/AquilaChat2-7B-16K-GGUF/blob/main/AquilaChat2-7B-16K.Q3_K_L.gguf) | Q3_K_L | 3 | 3.95 GB | not yet tested | small, substantial quality loss |
| [AquilaChat2-7B-16K.Q4_0.gguf](https://huggingface.co/mzwing/AquilaChat2-7B-16K-GGUF/blob/main/AquilaChat2-7B-16K.Q4_0.gguf) | Q4_0 | 4 | 4.22 GB | not yet tested | legacy; small, very high quality loss - prefer using Q3_K_M |
| [AquilaChat2-7B-16K.Q4_K_S.gguf](https://huggingface.co/mzwing/AquilaChat2-7B-16K-GGUF/blob/main/AquilaChat2-7B-16K.Q4_K_S.gguf) | Q4_K_S | 4 | 4.25 GB | not yet tested | small, greater quality loss |
| [AquilaChat2-7B-16K.Q4_K_M.gguf](https://huggingface.co/mzwing/AquilaChat2-7B-16K-GGUF/blob/main/AquilaChat2-7B-16K.Q4_K_M.gguf) | Q4_K_M | 4 | 4.47 GB | not yet tested | medium, balanced quality - recommended |
| [AquilaChat2-7B-16K.Q5_0.gguf](https://huggingface.co/mzwing/AquilaChat2-7B-16K-GGUF/blob/main/AquilaChat2-7B-16K.Q5_0.gguf) | Q5_0 | 5 | 5.08 GB | not yet tested | legacy; medium, balanced quality - prefer using Q4_K_M |
| [AquilaChat2-7B-16K.Q5_K_S.gguf](https://huggingface.co/mzwing/AquilaChat2-7B-16K-GGUF/blob/main/AquilaChat2-7B-16K.Q5_K_S.gguf) | Q5_K_S | 5 | 5.08 GB | not yet tested | large, low quality loss - recommended |
| [AquilaChat2-7B-16K.Q5_K_M.gguf](https://huggingface.co/mzwing/AquilaChat2-7B-16K-GGUF/blob/main/AquilaChat2-7B-16K.Q5_K_M.gguf) | Q5_K_M | 5 | 5.21 GB | not yet tested | large, very low quality loss - recommended |
| [AquilaChat2-7B-16K.Q6_K.gguf](https://huggingface.co/mzwing/AquilaChat2-7B-16K-GGUF/blob/main/AquilaChat2-7B-16K.Q6_K.gguf) | Q6_K | 6 | 5.99 GB | not yet tested | very large, extremely low quality loss |
| [AquilaChat2-7B-16K.Q8_0.gguf](https://huggingface.co/mzwing/AquilaChat2-7B-16K-GGUF/blob/main/AquilaChat2-7B-16K.Q8_0.gguf) | Q8_0 | 8 | 7.76 GB | not yet tested | very large, extremely low quality loss - not recommended |
| [AquilaChat2-7B-16K.F16.gguf](https://huggingface.co/mzwing/AquilaChat2-7B-16K-GGUF/blob/main/AquilaChat2-7B-16K.F16.gguf) | F16 | 16 | 14.6 GB | not yet tested | extremely large, virtually no quality loss - not recommended |
| [AquilaChat2-7B-16K.F32.gguf](https://huggingface.co/mzwing/AquilaChat2-7B-16K-GGUF/blob/main/AquilaChat2-7B-16K.F32.gguf) | F32 | 32 | 29.2 GB | not yet tested | extremely large, virtually no quality loss - not recommended |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

<!-- README_GGUF.md-provided-files end -->
+
|
121 |
+
<!-- README_GGUF.md-how-to-download start -->
|
122 |
+
## How to download GGUF files
|
123 |
+
|
124 |
+
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
|
125 |
+
|
126 |
+
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
|
127 |
+
|
128 |
+
* LM Studio
|
129 |
+
* LoLLMS Web UI
|
130 |
+
* Faraday.dev
|
131 |
+
|
132 |
+
### In `text-generation-webui`
|
133 |
+
|
134 |
+
Under Download Model, you can enter the model repo: `mzwing/AquilaChat2-7B-16K-GGUF`, and below it, a specific filename to download, such as: `AquilaChat2-7B-16K.Q4_K_M.gguf`.
|
135 |
+
|
136 |
+
Then click Download.
|
137 |
+
|
138 |
+
### On the command line, including multiple files at once
|
139 |
+
|
140 |
+
I recommend using the `huggingface-hub` Python library:
|
141 |
+
|
142 |
+
```shell
|
143 |
+
pip3 install huggingface-hub
|
144 |
+
```
|

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download mzwing/AquilaChat2-7B-16K-GGUF AquilaChat2-7B-16K.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

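If you would rather download from Python than from the shell, the same library exposes `hf_hub_download`; a minimal sketch (the chosen filename is just an example):

```python
from huggingface_hub import hf_hub_download

# Downloads one quant file into the current directory and returns its local path.
path = hf_hub_download(
    repo_id="mzwing/AquilaChat2-7B-16K-GGUF",
    filename="AquilaChat2-7B-16K.Q4_K_M.gguf",
    local_dir=".",
)
print(path)
```
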
<details>
<summary>More advanced huggingface-cli download usage</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download mzwing/AquilaChat2-7B-16K-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download mzwing/AquilaChat2-7B-16K-GGUF AquilaChat2-7B-16K.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 32 -m AquilaChat2-7B-16K.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "System: A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.\nHuman: {prompt}\nAssistant:"
```

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 2048` to the desired sequence length. For extended sequence models - e.g. 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.

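Since only a ctransformers snippet is shown below, here is also a minimal llama-cpp-python sketch. It has not been tested against this model and simply reuses the prompt template from above with an example question:

```python
from llama_cpp import Llama

# n_gpu_layers: layers to offload to the GPU; set to 0 for CPU-only inference.
llm = Llama(
    model_path="AquilaChat2-7B-16K.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=32,
)

prompt = (
    "System: A chat between a curious human and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the human's questions.\n"
    "Human: Give me ten reasons to visit Beijing.\n"
    "Assistant:"
)

output = llm(prompt, max_tokens=200, temperature=0.7, stop=["Human:"])
print(output["choices"][0]["text"])
```
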
### How to load this model in Python code, using ctransformers

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```

#### Simple ctransformers example code

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("mzwing/AquilaChat2-7B-16K-GGUF", model_file="AquilaChat2-7B-16K.Q4_K_M.gguf", model_type="aquila", gpu_layers=50)

print(llm("AI is going to"))
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

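For orientation, a minimal LangChain + llama-cpp-python sketch; it is untested with this model, and depending on your LangChain version the import path may be `langchain.llms` or `langchain_community.llms`:

```python
from langchain_community.llms import LlamaCpp

# Wraps the local GGUF file as a LangChain LLM.
llm = LlamaCpp(
    model_path="AquilaChat2-7B-16K.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=32,
    temperature=0.7,
)

prompt = (
    "System: A chat between a curious human and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the human's questions.\n"
    "Human: Give me ten reasons to visit Beijing.\n"
    "Assistant:"
)
print(llm.invoke(prompt))
```
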

<!-- footer start -->
<!-- 200823 -->
## Thanks, and how to contribute

Thanks to [Google Colab](https://colab.research.google.com/)! All the quantised models in this repo were produced on that platform, free of charge. Thanks a lot!

Thanks to [llama.cpp](https://github.com/ggerganov/llama.cpp)! It inspired me to explore this fascinating field of AI.

Thanks to [TheBloke](https://huggingface.co/TheBloke)! The structure and documentation of this repo closely follow his work.

You are welcome to open a **Pull Request** - contributions to the **RAM usage** figures in the table above are especially welcome!

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: Beijing Academy of Artificial Intelligence's AquilaChat2 7B 16K


![Aquila_logo](https://huggingface.co/BAAI/AquilaChat2-7B-16K/resolve/main/log.jpeg?download=true)


<h4 align="center">
    <p>
        <b>English</b> |
        <a href="https://huggingface.co/BAAI/AquilaChat2-7B-16K/blob/main/README_zh.md">简体中文</a>
    </p>
</h4>


We open-source our **Aquila2** series, which now includes the base language models **Aquila2-7B** and **Aquila2-34B**, the chat models **AquilaChat2-7B** and **AquilaChat2-34B**, and the long-context chat models **AquilaChat2-7B-16K** and **AquilaChat2-34B-16K**.

Additional details of the Aquila models will be presented in the official technical report. Please stay tuned for updates on official channels.

## Quick Start: AquilaChat2-7B-16K (chat model)

### 1. Inference

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import BitsAndBytesConfig

device = torch.device("cuda:0")
model_info = "BAAI/AquilaChat2-7B-16K"
tokenizer = AutoTokenizer.from_pretrained(model_info, trust_remote_code=True)
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(model_info, trust_remote_code=True, torch_dtype=torch.float16,
                                             # quantization_config=quantization_config, # Uncomment this line for 4-bit quantization
                                             )
model.eval()
model.to(device)
text = "请给出10个要到北京旅游的理由。"  # "Please give ten reasons to visit Beijing."
# predict.py is provided alongside the original model in the BAAI repository.
from predict import predict
out = predict(model, text, tokenizer=tokenizer, max_gen_len=200, top_p=0.95,
              seed=1234, topk=100, temperature=0.9, sft=True, device=device,
              model_name="AquilaChat2-7B-16K")
print(out)
```

## License

The Aquila2 series of open-source models is released under the [BAAI Aquila Model Licence Agreement](https://huggingface.co/BAAI/AquilaChat2-7B-16K/blob/main/BAAI-Aquila-Model-License%20-Agreement.pdf).

<!-- original-model-card end -->