---
license: apache-2.0
language:
- en
base_model:
- meta-llama/Llama-3.1-8B
---

# Llama Scope

[**Technical Report Link**](https://arxiv.org/abs/2410.20526)

[**Use with OpenMOSS lm_sae Github Repo**](https://github.com/OpenMOSS/Language-Model-SAEs/blob/main/examples/loading_llamascope_saes.ipynb)

[**Use with SAELens** (In progress)]

[**Explore in Neuronpedia** (In progress)]

Sparse Autoencoders (SAEs) have emerged as a powerful unsupervised method for extracting sparse representations from language models, yet scalable training remains a significant challenge. We introduce a suite of 256 improved TopK SAEs, trained on every layer and sublayer of the Llama-3.1-8B-Base model, with 32K and 128K features.

This page is the frontpage for all Llama Scope SAEs. See the links below for the checkpoints.

## Naming Convention

L[Layer][Position]-[Expansion]x

For instance, an SAE with 8x the hidden size of Llama-3.1-8B (i.e., 32K features), trained on the post-MLP residual stream of layer 15, is called L15R-8x.

## Checkpoints

[**Llama-3.1-8B-LXR-8x**](https://huggingface.co/fnlp/Llama3_1-8B-Base-LXR-8x/tree/main)

[**Llama-3.1-8B-LXA-8x**](https://huggingface.co/fnlp/Llama3_1-8B-Base-LXA-8x/tree/main)

[**Llama-3.1-8B-LXM-8x**](https://huggingface.co/fnlp/Llama3_1-8B-Base-LXM-8x/tree/main)

[**Llama-3.1-8B-LXTC-8x**](https://huggingface.co/fnlp/Llama3_1-8B-Base-LXTC-8x/tree/main)

[**Llama-3.1-8B-LXR-32x**](https://huggingface.co/fnlp/Llama3_1-8B-Base-LXR-32x/tree/main)

[**Llama-3.1-8B-LXA-32x**](https://huggingface.co/fnlp/Llama3_1-8B-Base-LXA-32x/tree/main)

[**Llama-3.1-8B-LXM-32x**](https://huggingface.co/fnlp/Llama3_1-8B-Base-LXM-32x/tree/main)

[**Llama-3.1-8B-LXTC-32x**](https://huggingface.co/fnlp/Llama3_1-8B-Base-LXTC-32x/tree/main)

## Llama Scope SAE Overview
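
As a minimal sketch (not the official loading code; the `lm_sae` notebook linked at the top is the supported path), any repository listed under **Checkpoints** can be fetched with the standard `huggingface_hub` API. The repo id below is one of the 8x residual-stream releases and is shown only as an example; swap it for any other entry.

```python
# Minimal sketch: download one Llama Scope SAE checkpoint repository.
# Assumes only the standard `huggingface_hub` API; see the lm_sae notebook
# linked above for the actual loading utilities.
from huggingface_hub import snapshot_download

# Any repo id from the Checkpoints list works, e.g. the 8x post-MLP
# residual-stream SAEs.
local_dir = snapshot_download(repo_id="fnlp/Llama3_1-8B-Base-LXR-8x")
print(f"Checkpoint files downloaded to: {local_dir}")
```

The exact file layout inside each repository is not reproduced here; the OpenMOSS `lm_sae` notebook linked above shows how the downloaded files are loaded into SAE modules.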