Abstract
Foundation models, now powering most of the exciting applications in deep learning, are almost universally based on the Transformer architecture and its core attention module. Many subquadratic-time architectures such as linear attention, gated convolution and recurrent models, and structured state space models (SSMs) have been developed to address Transformers' computational inefficiency on long sequences, but they have not performed as well as attention on important modalities such as language. We identify that a key weakness of such models is their inability to perform content-based reasoning, and make several improvements. First, simply letting the SSM parameters be functions of the input addresses their weakness with discrete modalities, allowing the model to selectively propagate or forget information along the sequence length dimension depending on the current token. Second, even though this change prevents the use of efficient convolutions, we design a hardware-aware parallel algorithm in recurrent mode. We integrate these selective SSMs into a simplified end-to-end neural network architecture without attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5× higher throughput than Transformers) and linear scaling in sequence length, and its performance improves on real data up to million-length sequences. As a general sequence model backbone, Mamba achieves state-of-the-art performance across several modalities such as language, audio, and genomics. On language modeling, our Mamba-3B model outperforms Transformers of the same size and matches Transformers twice its size, both in pretraining and downstream evaluation.
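To make the selection mechanism concrete, here is a minimal, unoptimized PyTorch sketch of the recurrence described above: the SSM parameters (the step size Δ and the matrices B and C) are computed from each input token, so the hidden-state update can decide per token what to propagate or forget. The names (SelectiveSSM, delta_proj, B_proj, C_proj) are illustrative rather than the paper's actual code, and the real Mamba block also adds gating, a local convolution, and a fused hardware-aware parallel scan in place of this sequential Python loop.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelectiveSSM(nn.Module):
    """Illustrative selective SSM recurrence (not the official Mamba implementation)."""
    def __init__(self, d_model: int, d_state: int = 16):
        super().__init__()
        self.d_model, self.d_state = d_model, d_state
        # A is input-independent; log-parameterized so -exp(A_log) stays negative (stable decay).
        self.A_log = nn.Parameter(torch.log(torch.arange(1, d_state + 1).float()).repeat(d_model, 1))
        # Delta, B, C are functions of the input token -- this is the "selection" mechanism.
        self.delta_proj = nn.Linear(d_model, d_model)
        self.B_proj = nn.Linear(d_model, d_state)
        self.C_proj = nn.Linear(d_model, d_state)

    def forward(self, x):                                    # x: (batch, length, d_model)
        A = -torch.exp(self.A_log)                           # (d_model, d_state)
        delta = F.softplus(self.delta_proj(x))               # per-token, per-channel step size
        B, C = self.B_proj(x), self.C_proj(x)                # (batch, length, d_state)
        h = x.new_zeros(x.shape[0], self.d_model, self.d_state)
        ys = []
        for t in range(x.shape[1]):                          # sequential scan, for clarity only
            dA = torch.exp(delta[:, t, :, None] * A)         # discretized state transition
            dBx = delta[:, t, :, None] * B[:, t, None, :] * x[:, t, :, None]
            h = dA * h + dBx                                 # state update: keep or overwrite per token
            ys.append((h * C[:, t, None, :]).sum(-1))        # readout y_t = C_t h_t
        return torch.stack(ys, dim=1)                        # (batch, length, d_model)

# Example usage (shapes only):
x = torch.randn(2, 64, 32)                                   # (batch, length, d_model)
y = SelectiveSSM(d_model=32)(x)
assert y.shape == x.shape
```

Because the discretized transition depends on Δ(x_t), a token can effectively reset the state (large Δ) or pass it through nearly unchanged (Δ close to 0), which is the content-based selectivity that fixed-parameter LTI SSMs lack.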
Community
When do we get to play with the model?
is this the new Hyena? I have been holding my breath for the SSM moment for so long I plumb nearly asphyxiated
Hello guys, great work. Just saw your work on Twitter. Are you planning to release a 6B-7B version in the future?
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture (2023)
- Hierarchically Gated Recurrent Neural Network for Sequence Modeling (2023)
- Convolutional State Space Models for Long-Range Spatiotemporal Modeling (2023)
- Laughing Hyena Distillery: Extracting Compact Recurrences From Convolutions (2023)
- Accelerating Toeplitz Neural Network with Constant-time Inference Complexity (2023)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space.
Excellent research! Would love to collaborate with you guys (UCSB).
Thanks @julien-c and @lvwerra! Super helpful 🙏
Edit: @albertgu @tridao have you released, or do you plan to release, the weights for Mamba-7M trained on HG38?
(They don’t seem to be on HF, but my apologies if I missed them!)
I’m exploring gLMs for CRISPR edit property prediction and would love to fine-tune a pre-trained model if one is available.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Repeat After Me: Transformers are Better than State Space Models at Copying (2024)
- BlackMamba: Mixture of Experts for State-Space Models (2024)
- MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts (2024)
- DenseMamba: State Space Models with Dense Hidden Connection for Efficient Large Language Models (2024)
- Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models (2024)
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend
would love to know too actually