LoLCATS
Collection
Linearizing LLMs with high quality and efficiency. We linearize the full Llama 3.1 model family -- 8B, 70B, and 405B -- for the first time!
This is a purely sub-quadratic, linear-attention 8B-parameter model, linearized from the Meta Llama 3.1 8B model.
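To see why linear attention is sub-quadratic, here is a minimal NumPy sketch of kernelized causal attention computed two equivalent ways: the O(T²) form that materializes the full score matrix, and the O(T) recurrent form that carries running sums. This illustrates the general technique only; the feature map `phi` below is the classic elu+1 stand-in, not the learned feature map LoLCATS trains.

```python
import numpy as np

def phi(x):
    # Simple positive feature map (an assumption for illustration,
    # not the LoLCATS learned map): elu(x) + 1.
    return np.where(x > 0, x + 1.0, np.exp(x))

def quadratic_linear_attn(q, k, v):
    # O(T^2): materialize the full T x T causal kernel matrix.
    qf, kf = phi(q), phi(k)
    scores = np.tril(qf @ kf.T)                    # causal mask
    return (scores @ v) / (scores.sum(-1, keepdims=True) + 1e-6)

def recurrent_linear_attn(q, k, v):
    # O(T): carry S_t = sum_{s<=t} phi(k_s) v_s^T and z_t = sum phi(k_s),
    # so each step costs O(d^2) instead of O(T d).
    T, d = q.shape
    S = np.zeros((d, v.shape[1]))
    z = np.zeros(d)
    out = np.zeros_like(v)
    for t in range(T):
        kf = phi(k[t])
        S += np.outer(kf, v[t])
        z += kf
        qf = phi(q[t])
        out[t] = (qf @ S) / (qf @ z + 1e-6)
    return out

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((16, 8)) for _ in range(3))
# Both forms compute the same causal kernel attention.
assert np.allclose(quadratic_linear_attn(q, k, v),
                   recurrent_linear_attn(q, k, v), atol=1e-5)
```

The recurrent form is what makes a linearized model cheap at inference time: generation needs only the fixed-size state `(S, z)` per layer rather than a growing key-value cache.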
Details on this model and how to train your own are provided at: https://github.com/HazyResearch/lolcats/tree/lolcats-scaled
Here is a quick GitHub Gist that will help you run inference on the model checkpoints.
See the paper page: https://huggingface.co/papers/2410.10254