AquilaChat2 7B 16K - GGUF
- Model creator: Beijing Academy of Artificial Intelligence
- Original model: AquilaChat2 7B 16K
Description
This repo contains GGUF format model files for Beijing Academy of Artificial Intelligence's AquilaChat2 7B 16K.
These files were quantised using hardware kindly provided by Google Colab (free CPU machine).
You can also check them out easily in my GitHub repo.
About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
- llama.cpp. The source project for GGUF. Offers a CLI and a server option.
- text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
- KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
- LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
- LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.
- Faraday.dev, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
- ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
- llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
- candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.
- Nitro, a fast, lightweight 3 MB inference server to supercharge apps with local AI, with an OpenAI-compatible API server.
Repositories available
- 2, 3, 4, 5, 6, 8, 16 and 32-bit GGUF models for CPU+GPU inference
- Beijing Academy of Artificial Intelligence's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions
Prompt template: AquilaChat
System: A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.
Human: {prompt}
Assistant:
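For illustration, the template can be filled in from Python like this (a minimal sketch; the example question is only a placeholder):

# Build an AquilaChat-style prompt from a user message (sketch; the question is a placeholder)
system = ("A chat between a curious human and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the human's questions.")
question = "Give me three reasons to visit Beijing."
prompt = f"System: {system}\nHuman: {question}\nAssistant:"
print(prompt)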
Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th, 2023 onwards, as of commit d0cee0d.
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
Explanation of quantisation methods
The new methods available are:
- GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
- GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
- GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw (see the worked example after this list).
- GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
- GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
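As a rough sanity check of the Q4_K figure, the bits per weight can be derived from the structure described above. The arithmetic below is a back-of-the-envelope sketch; it additionally assumes each super-block stores one fp16 overall scale and one fp16 overall min (as llama.cpp's k-quants do), which is not stated in the list above:

# Back-of-the-envelope bpw for GGML_TYPE_Q4_K
weights_per_superblock = 8 * 32            # 8 blocks x 32 weights = 256 weights
quant_bits = weights_per_superblock * 4    # 4-bit quantized values
scale_min_bits = 8 * 6 + 8 * 6             # per-block scales and mins, 6 bits each
fp16_bits = 2 * 16                         # super-block fp16 scale and min (assumption)
bpw = (quant_bits + scale_min_bits + fp16_bits) / weights_per_superblock
print(bpw)  # 4.5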
Refer to the Provided Files table below to see what files use which methods, and how.
Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
|---|---|---|---|---|---|
| AquilaChat2-7B-16K.Q2_K.gguf | Q2_K | 2 | 2.86 GB | not yet tested | smallest, significant quality loss - not recommended for most purposes |
| AquilaChat2-7B-16K.Q3_K_S.gguf | Q3_K_S | 3 | 3.3 GB | not yet tested | very small, high quality loss |
| AquilaChat2-7B-16K.Q3_K_M.gguf | Q3_K_M | 3 | 3.65 GB | not yet tested | very small, high quality loss |
| AquilaChat2-7B-16K.Q3_K_L.gguf | Q3_K_L | 3 | 3.95 GB | not yet tested | small, substantial quality loss |
| AquilaChat2-7B-16K.Q4_0.gguf | Q4_0 | 4 | 4.22 GB | not yet tested | legacy; small, very high quality loss - prefer using Q3_K_M |
| AquilaChat2-7B-16K.Q4_K_S.gguf | Q4_K_S | 4 | 4.25 GB | not yet tested | small, greater quality loss |
| AquilaChat2-7B-16K.Q4_K_M.gguf | Q4_K_M | 4 | 4.47 GB | not yet tested | medium, balanced quality - recommended |
| AquilaChat2-7B-16K.Q5_0.gguf | Q5_0 | 5 | 5.08 GB | not yet tested | legacy; medium, balanced quality - prefer using Q4_K_M |
| AquilaChat2-7B-16K.Q5_K_S.gguf | Q5_K_S | 5 | 5.08 GB | not yet tested | large, low quality loss - recommended |
| AquilaChat2-7B-16K.Q5_K_M.gguf | Q5_K_M | 5 | 5.21 GB | not yet tested | large, very low quality loss - recommended |
| AquilaChat2-7B-16K.Q6_K.gguf | Q6_K | 6 | 5.99 GB | not yet tested | very large, extremely low quality loss |
| AquilaChat2-7B-16K.Q8_0.gguf | Q8_0 | 8 | 7.76 GB | not yet tested | very large, extremely low quality loss - not recommended |
| AquilaChat2-7B-16K.F16.gguf | F16 | 16 | 14.6 GB | not yet tested | extremely large, extremely low quality loss - not recommended |
| AquilaChat2-7B-16K.F32.gguf | F32 | 32 | 29.2 GB | not yet tested | extremely large, extremely low quality loss - not recommended |
Note: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
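If you want to help fill in the missing RAM figures (see the contribution note at the end of this README), one rough approach is to load a quant with llama-cpp-python and record the process's peak resident memory. This is only a sketch, not an official measurement procedure; note that ru_maxrss is reported in kilobytes on Linux and in bytes on macOS:

# Rough peak-RAM measurement for a GGUF file (CPU-only, no GPU offloading)
import resource
from llama_cpp import Llama

llm = Llama(model_path="AquilaChat2-7B-16K.Q4_K_M.gguf", n_ctx=2048)
llm("Hello", max_tokens=8)  # run one short generation so inference buffers are allocated
peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(peak)  # peak RSS: KB on Linux, bytes on macOS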
How to download GGUF files
Note for manual downloaders: You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
In text-generation-webui
Under Download Model, you can enter the model repo: mzwing/AquilaChat2-7B-16K-GGUF, and below it, a specific filename to download, such as: AquilaChat2-7B-16K.Q4_K_M.gguf.
Then click Download.
On the command line, including multiple files at once
I recommend using the huggingface-hub Python library:
pip3 install huggingface-hub
Then you can download any individual model file to the current directory, at high speed, with a command like this:
huggingface-cli download mzwing/AquilaChat2-7B-16K-GGUF AquilaChat2-7B-16K.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
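If you prefer doing the same from Python rather than the CLI, the equivalent call with the huggingface_hub library is roughly the following (a minimal sketch using the same repo and filename):

# Download a single GGUF file with the huggingface_hub Python API
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mzwing/AquilaChat2-7B-16K-GGUF",
    filename="AquilaChat2-7B-16K.Q4_K_M.gguf",
    local_dir=".",
)
print(path)  # local path of the downloaded file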
More advanced huggingface-cli download usage
You can also download multiple files at once with a pattern:
huggingface-cli download mzwing/AquilaChat2-7B-16K-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
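The same pattern-based download can be done from Python with snapshot_download and allow_patterns (again just a sketch):

# Download all files matching a pattern with the huggingface_hub Python API
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="mzwing/AquilaChat2-7B-16K-GGUF",
    allow_patterns=["*Q4_K*gguf"],
    local_dir=".",
)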
For more documentation on downloading with huggingface-cli, please see: HF -> Hub Python Library -> Download files -> Download from the CLI.
To accelerate downloads on fast connections (1Gbit/s or higher), install hf_transfer:
pip3 install hf_transfer
And set the environment variable HF_HUB_ENABLE_HF_TRANSFER to 1:
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download mzwing/AquilaChat2-7B-16K-GGUF AquilaChat2-7B-16K.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
Windows Command Line users: You can set the environment variable by running set HF_HUB_ENABLE_HF_TRANSFER=1 before the download command.
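If you are downloading from Python instead, the same switch can be set programmatically before huggingface_hub performs the download (a small sketch):

# Enable hf_transfer for Python downloads (set before the download starts)
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import hf_hub_download
hf_hub_download(
    repo_id="mzwing/AquilaChat2-7B-16K-GGUF",
    filename="AquilaChat2-7B-16K.Q4_K_M.gguf",
    local_dir=".",
)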
Example llama.cpp command
Make sure you are using llama.cpp from commit d0cee0d or later.
./main -ngl 32 -m AquilaChat2-7B-16K.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "System: A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.\nHuman: {prompt}\nAssistant:"
Change -ngl 32 to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change -c 2048 to the desired sequence length. For extended sequence models - e.g. 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the -p <PROMPT> argument with -i -ins.
For other parameters and how to use them, please refer to the llama.cpp documentation.
How to run in text-generation-webui
Further instructions here: text-generation-webui/docs/llama.cpp.md.
How to run from Python code
You can use GGUF models from Python using the llama-cpp-python or ctransformers libraries.
How to load this model in Python code, using ctransformers
First install the package
Run one of the following commands, according to your system:
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
Simple ctransformers example code
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("mzwing/AquilaChat2-7B-16K-GGUF", model_file="AquilaChat2-7B-16K.Q4_K_M.gguf", model_type="aquila", gpu_layers=50)
print(llm("AI is going to"))
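The other library mentioned above, llama-cpp-python, can load the same file. The snippet below is a minimal sketch of its high-level API (the exact constructor arguments may vary between versions), using the AquilaChat prompt template from earlier in this README:

# Minimal llama-cpp-python sketch (install with: pip install llama-cpp-python)
from llama_cpp import Llama

llm = Llama(
    model_path="AquilaChat2-7B-16K.Q4_K_M.gguf",
    n_ctx=2048,        # context length
    n_gpu_layers=32,   # set to 0 if you have no GPU acceleration
)

prompt = (
    "System: A chat between a curious human and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the human's questions.\n"
    "Human: Give me three reasons to visit Beijing.\nAssistant:"
)
output = llm(prompt, max_tokens=200, temperature=0.7)
print(output["choices"][0]["text"])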
How to use with LangChain
Guides for using llama-cpp-python and ctransformers with LangChain are available in the LangChain documentation.
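As an illustration, a local GGUF file can be wrapped as a LangChain LLM via the llama-cpp-python integration. This is a hedged sketch: the module path langchain_community.llms and the LlamaCpp class match recent LangChain releases, but older versions import from langchain.llms instead:

# LangChain + llama-cpp-python sketch (pip install langchain-community llama-cpp-python)
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="AquilaChat2-7B-16K.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=32,   # 0 for CPU-only
    temperature=0.7,
)

print(llm.invoke("System: A chat between a curious human and an artificial intelligence assistant. "
                 "The assistant gives helpful, detailed, and polite answers to the human's questions.\n"
                 "Human: Give me three reasons to visit Beijing.\nAssistant:"))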
Thanks, and how to contribute
Thanks to Google Colab! All the quantised models in this repo were made on that awesome platform. Thanks a lot!
Thanks to llama.cpp! It inspired me to explore the fascinating field of AI, thanks!
Thanks to TheBloke! Everything in this repo follows his work.
You are welcome to create a Pull Request - especially to fill in the missing RAM usage figures!
Original model card: Beijing Academy of Artificial Intelligence's AquilaChat2 7B 16K
We open-source our Aquila2 series, which now includes the base language models Aquila2-7B and Aquila2-34B, the chat models AquilaChat2-7B and AquilaChat2-34B, and the long-context chat models AquilaChat2-7B-16K and AquilaChat2-34B-16K.
Additional details of the Aquila2 models will be presented in the official technical report. Please stay tuned for updates on official channels.
Quick Start AquilaChat2-7B-16K(Chat model)
1. Inference
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import BitsAndBytesConfig
device = torch.device("cuda:0")
model_info = "BAAI/AquilaChat2-7B-16K"
tokenizer = AutoTokenizer.from_pretrained(model_info, trust_remote_code=True)
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_info, trust_remote_code=True, torch_dtype=torch.float16,
    # quantization_config=quantization_config,  # uncomment this line for 4-bit quantization
)
model.eval()
model.to(device)
text = "请给出10个要到北京旅游的理由。"  # "Please give 10 reasons to visit Beijing."
# The predict helper below ships with the original BAAI Aquila2 model repository (predict.py).
from predict import predict
out = predict(model, text, tokenizer=tokenizer, max_gen_len=200, top_p=0.95,
              seed=1234, topk=100, temperature=0.9, sft=True, device=device,
              model_name="AquilaChat2-7B-16K")
print(out)
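If the predict.py helper is not available, a plain transformers generate() call can serve as a rough fallback, reusing the model, tokenizer, text, and device objects created above. This is only a sketch, not the officially documented path: it applies the chat formatting from the prompt template earlier in this README manually, and the exact formatting the model was fine-tuned with may differ, so prefer predict when you can.

# Fallback sketch without predict.py: manual prompt formatting + standard generate()
prompt = ("System: A chat between a curious human and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the human's questions.\n"
          "Human: " + text + "\nAssistant:")
inputs = tokenizer(prompt, return_tensors="pt").to(device)
with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=200, do_sample=True,
                               top_p=0.95, temperature=0.9)
print(tokenizer.decode(generated[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))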
License
The Aquila2 series open-source models are licensed under the BAAI Aquila Model Licence Agreement.