---
license: apache-2.0
language:
- en
pipeline_tag: text-classification
---
# Monarch Mixer-BERT
The 80M checkpoint for M2-BERT-base from the paper [Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture](https://arxiv.org/abs/2310.12109). This model has been pretrained with sequence length 32K. Note (11/3, evening): this is a partial checkpoint; it had not finished training before upload.
This model was trained by Dan Fu, Jon Saad-Falcon, and Simran Arora.
Check out [our GitHub](https://github.com/HazyResearch/m2) for instructions on how to download and fine-tune it!
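As a quick-start, here is a minimal loading sketch using Hugging Face `transformers`. The checkpoint ID and tokenizer below are illustrative assumptions (substitute this checkpoint's actual Hub repo name); M2-BERT checkpoints ship custom modeling code, so `trust_remote_code=True` is needed:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical Hub repo ID -- replace with this checkpoint's actual name.
checkpoint = "togethercomputer/m2-bert-80M-32k"

# M2-BERT relies on custom modeling code hosted alongside the checkpoint,
# so trust_remote_code=True is required for AutoModel to resolve it.
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, trust_remote_code=True
)

# Assumption: the model reuses the standard bert-base-uncased vocabulary.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

inputs = tokenizer(
    "Monarch Mixer replaces attention with sub-quadratic Monarch matrices.",
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.shape)  # (batch, num_labels)
```

Inputs up to the 32K-token pretraining sequence length should work, memory permitting.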