---
license: apache-2.0
---
# Model Card for Zamba
Zamba-7B-v1 is a hybrid between state-space models (specifically Mamba) and transformers, and was trained using next-token prediction. Zamba uses a shared transformer layer after every 6 Mamba blocks. It uses the Mistral v0.1 tokenizer. We arrived at this architecture after a series of ablations at small scales. Zamba-7B-v1 was pre-trained on 1T tokens of text and code data.
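As an illustrative sketch of the layer pattern described above (this is a simplified assumption, not Zyphra's implementation; the class, names, and weight-sharing details below are hypothetical):

```python
import torch.nn as nn

class ZambaStyleGroup(nn.Module):
    """Illustrative only: a group of 6 Mamba blocks followed by one shared transformer layer."""

    def __init__(self, mamba_blocks, shared_transformer_layer):
        super().__init__()
        self.mamba_blocks = nn.ModuleList(mamba_blocks)      # 6 Mamba blocks per group
        self.shared_transformer = shared_transformer_layer   # the same layer is reused by every group

    def forward(self, hidden_states):
        # run the Mamba blocks in sequence
        for block in self.mamba_blocks:
            hidden_states = block(hidden_states)
        # apply the shared transformer layer after each group of 6 Mamba blocks
        return self.shared_transformer(hidden_states)
```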
## Quick start
### Prerequisites
Zamba requires `transformers` version 4.39.0 or higher:

```bash
pip install "transformers>=4.39.0"
```
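To confirm the installed version meets this requirement, you can check it directly (a generic check, nothing Zamba-specific):

```python
import transformers
print(transformers.__version__)  # should print 4.39.0 or higher
```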
In order to run optimized Mamba implementations, you first need to install `mamba-ssm` and `causal-conv1d`:

```bash
pip install mamba-ssm "causal-conv1d>=1.2.0"
```
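As a quick sanity check that the optimized kernel packages are importable (the module names below are the standard import names of those PyPI packages):

```python
# both imports should succeed if the optimized kernel packages are installed
import mamba_ssm       # noqa: F401
import causal_conv1d   # noqa: F401
print("Optimized Mamba kernels are available.")
```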
You also need to have the model on a CUDA device.

You can run the model without the optimized Mamba kernels, but this is not recommended as it will result in significantly higher latency. To do so, specify `use_mamba_kernels=False` when loading the model, as in the sketch below.
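For example, a minimal loading call with the kernels disabled might look like this (the flag and model name come from this card; the rest is a plain `from_pretrained` call):

```python
from transformers import AutoModelForCausalLM

# not recommended: disables the optimized Mamba kernels, significantly increasing latency
model = AutoModelForCausalLM.from_pretrained(
    "Zyphra/Zamba-7B-v1",
    use_mamba_kernels=False,
)
```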
### Inference
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba-7B-v1")
model = AutoModelForCausalLM.from_pretrained("Zyphra/Zamba-7B-v1", device_map="auto", torch_dtype=torch.bfloat16)

input_text = "A funny prompt would be "
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```
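For less deterministic completions, you can pass standard `generate` sampling arguments; the values below are illustrative, not recommended Zamba settings:

```python
outputs = model.generate(
    **input_ids,
    max_new_tokens=100,
    do_sample=True,     # sample instead of greedy decoding
    temperature=0.7,    # illustrative value
    top_p=0.9,          # illustrative value
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```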
## Notice
Zamba is a pretrained base model and therefore does not have any moderation mechanism.