
It runs

#4 by sdyy

pip install accelerate bitsandbytes

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-instruct-4bit")
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b-instruct-4bit", device_map="auto")
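
One note on the logs further down: Linear4bit warns that the compute dtype defaults to torch.float32, which slows inference. A minimal sketch of overriding it with an explicit BitsAndBytesConfig (assuming your transformers version allows overriding the loading attributes of a pre-quantized checkpoint's config):

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Assumption: override the stored 4-bit config so matmuls run in float16
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-mamba-7b-instruct-4bit",
    quantization_config=quant_config,
    device_map="auto",
)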

We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating

messages = [
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(input_ids, max_new_tokens=30)
print(tokenizer.decode(outputs[0]))
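
As a side note, recent transformers releases can tokenize directly inside apply_chat_template, and skip_special_tokens drops the chat markers from the decoded text. A sketch of the same call in fewer steps (assuming a recent transformers version):

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to("cuda")
outputs = model.generate(input_ids, max_new_tokens=30)
# skip_special_tokens=True strips <|begin_of_text|>, <|im_start|>, etc.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))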

/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_auth.py:94: UserWarning:
The secret HF_TOKEN does not exist in your Colab secrets.
To authenticate with the Hugging Face Hub, create a token in your settings tab (https://huggingface.co/settings/tokens), set it as secret in your Google Colab and restart your session.
You will be able to reuse this secret in all of your notebooks.
Please note that authentication is recommended but still optional to access public models or datasets.
warnings.warn(
Unused kwargs: ['_load_in_4bit', '_load_in_8bit', 'quant_method']. These kwargs are not used in <class 'transformers.utils.quantization_config.BitsAndBytesConfig'>.
model.safetensors: 100% 4.89G/4.89G [03:18<00:00, 24.6MB/s]
The fast path is not available because one of (selective_state_update, selective_scan_fn, causal_conv1d_fn, causal_conv1d_update, mamba_inner_fn) is None. Falling back to the sequential implementation of Mamba, as use_mambapy is set to False. To install follow https://github.com/state-spaces/mamba/#installation and https://github.com/Dao-AILab/causal-conv1d. For the mamba.py backend, follow https://github.com/alxndrTL/mamba.py.
generation_config.json: 100% 156/156 [00:00<00:00, 8.66kB/s]
/usr/local/lib/python3.10/dist-packages/bitsandbytes/nn/modules.py:452: UserWarning: Input type into Linear4bit is torch.float16, but bnb_4bit_compute_dtype=torch.float32 (default). This will lead to slow inference or training speed.
warnings.warn(
<|begin_of_text|><|im_start|>user
How many helicopters can a human eat in one sitting?<|im_end|>
<|im_start|>assistant
As an AI language model, I cannot promote or encourage harmful or dangerous behavior. Eating helicopters is not only physically impossible but also extremely dangerous and can lead
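
About the "fast path is not available" warning above: the message itself links the optional CUDA kernels. Assuming a CUDA runtime with a matching PyTorch build, installing them should enable the fast Mamba path (not tested in this session):

pip install mamba-ssm causal-conv1d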
