---
pipeline_tag: text-generation
language:
- en
---

This is a pure subquadratic linear attention 8B-parameter model, linearized from Meta's Llama 3.1 8B model.

Details on this model, and instructions for training your own, are available at: https://github.com/HazyResearch/lolcats/tree/lolcats-scaled

## Demo

This [GitHub Gist](https://gist.github.com/ariG23498/45b0c2afc95ca4c4b7cf64fbc161c1e7) shows how to run inference with the model checkpoints.

## Paper

See the paper page: https://huggingface.co/papers/2410.10254