---
license: apache-2.0
---
A cutting-edge foundation for your very own LLM.
🌐 TigerBot • 🤗 Hugging Face
This is a 4-bit EXL2 quantization of [tigerbot-13b-chat-v6](https://huggingface.co/TigerResearch/tigerbot-13b-chat-v6), produced with [exllamav2](https://github.com/turboderp/exllamav2).

## How to download and use this model

See the TigerBot repository for full instructions: https://github.com/TigerResearch/TigerBot

Clone TigerBot and install its dependencies:

```
conda create --name tigerbot python=3.8
conda activate tigerbot
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
git clone https://github.com/TigerResearch/TigerBot
cd TigerBot
pip install -r requirements.txt
```

### Inference with the command-line interface

Infer with exllamav2:

```
# install exllamav2
git clone https://github.com/turboderp/exllamav2
cd exllamav2
pip install -r requirements.txt

# infer command
CUDA_VISIBLE_DEVICES=0 python other_infer/exllamav2_hf_infer.py --model_path TigerResearch/tigerbot-13b-chat-v6-4bit-exl2
```
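The actual EXL2 conversion is handled by exllamav2's own scripts, but the core idea behind 4-bit weight quantization can be sketched in a few lines. The snippet below is an illustrative example of symmetric per-group 4-bit quantization in NumPy; it is **not** the EXL2 format itself (EXL2 additionally uses calibration data and mixed bit widths), and the function names are hypothetical:

```python
import numpy as np

def quantize_4bit(weights: np.ndarray, group_size: int = 128):
    """Illustrative symmetric 4-bit group quantization (not EXL2):
    each group of `group_size` weights shares one float scale,
    and values are stored as integers in [-8, 7]."""
    w = weights.reshape(-1, group_size)
    # Per-group scale so the largest magnitude maps to +/-7
    scales = np.abs(w).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(w / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize_4bit(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Recover approximate float weights from ints and per-group scales."""
    return (q.astype(np.float32) * scales).reshape(-1)

# Round-trip a random weight vector and measure reconstruction error
rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize_4bit(q, s)
err = np.abs(w - w_hat).max()
print(err)
```

The rounding error per weight is bounded by half its group's scale, which is why 4-bit models stay close to their full-precision originals while using roughly a quarter of the memory.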