This is no Woodstock AI but will be fun nonetheless haha. I'll be hosting a live workshop with team members next week about the Enterprise Hugging Face Hub.
1,000 spots available, first-come first-served, with some surprises during the stream!
@ai21labs used a different architecture to beat the status-quo Transformer models: the Jamba architecture combines classic Transformer layers with new Mamba layers, whose complexity is a linear (instead of quadratic) function of the context length.
What does this imply?
⚡️ Jamba models are much more efficient for long contexts: faster (up to 2.5x faster for long-context inference), lower memory usage, and better at recalling everything in the prompt.
That means it's a new go-to model for RAG or agentic applications!
And the performance is not too shabby: Jamba 1.5 models are comparable in perf to similar-sized Llama-3.1 models! The largest model even outperforms Llama-3.1 405B on Arena-Hard.
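If you want to try it on a RAG-style long-context prompt, here is a minimal sketch, assuming the Hugging Face repo id `ai21labs/AI21-Jamba-1.5-Mini` and a transformers release with Jamba support (the prompt placeholders are purely illustrative, not AI21's official snippet):

```python
# Minimal sketch, assuming the repo id "ai21labs/AI21-Jamba-1.5-Mini"
# and a transformers release with Jamba support.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai21labs/AI21-Jamba-1.5-Mini"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit the long context in memory
    device_map="auto",
)

# RAG-style usage: stuff the retrieved documents into the (long) prompt.
# The Mamba layers scale linearly with prompt length, so this stays tractable.
prompt = "Documents:\n<retrieved documents here>\n\nQuestion: <user question here>\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```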
✌️ Comes in 2 sizes: Mini (12B active/52B) and Large (94B active/399B)
📏 Both deliver a 256k context length with low memory use: Jamba 1.5 Mini fits a 140k context on a single A100
⚙️ New quantization method: ExpertsInt8 quantizes only the weights of the MoE layers, which account for 85% of the weights
🤖 Natively supports JSON format generation & function calling
🔓 Permissive license *if your org makes <$50M revenue*
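To see what the ExpertsInt8 point looks like in practice, here is a hedged sketch, assuming the repo id `ai21labs/AI21-Jamba-1.5-Mini` and a vLLM build that exposes the `experts_int8` quantization method:

```python
# Hedged sketch, assuming a vLLM build that supports quantization="experts_int8"
# and the repo id "ai21labs/AI21-Jamba-1.5-Mini".
from vllm import LLM, SamplingParams

llm = LLM(
    model="ai21labs/AI21-Jamba-1.5-Mini",
    max_model_len=140 * 1024,     # ~140k-token context, sized for a single A100
    quantization="experts_int8",  # int8-quantize only the MoE expert weights (~85% of params)
)

params = SamplingParams(temperature=0.4, max_tokens=200)
outputs = llm.generate(["Summarize the following meeting notes:\n<long transcript here>"], params)
print(outputs[0].outputs[0].text)
```

Quantizing only the expert weights shaves off most of the memory (the experts hold the bulk of the parameters) while leaving the attention and Mamba layers in full precision.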