Multi-Layer Sparse Autoencoders with Transformers
Collection
Single SAEs trained simultaneously on the residual stream activation vectors from every layer of a transformer. Each model in this collection also includes the underlying transformer.
A Multi-Layer Sparse Autoencoder (MLSAE) trained on the residual stream activation vectors from every layer of EleutherAI/pythia-160m-deduped with an expansion factor of 64 and k = 32, over 1 billion tokens from monology/pile-uncopyrighted. This model includes the underlying transformer.
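As an illustration of the setup described above, here is a minimal PyTorch sketch of a TopK sparse autoencoder applied to residual-stream activations from all layers at once. The class, parameter names, and initialization are assumptions for exposition, not the repository's actual code; the dimensions follow from pythia-160m's hidden size of 768 together with the stated expansion factor of 64 and k = 32.

```python
# Hypothetical sketch of a TopK sparse autoencoder (SAE); names and
# initialization are illustrative, not the repository's implementation.
import torch
import torch.nn as nn


class TopKSAE(nn.Module):
    def __init__(self, d_model: int = 768, expansion_factor: int = 64, k: int = 32):
        super().__init__()
        self.k = k
        d_latent = d_model * expansion_factor  # 49,152 for pythia-160m
        self.encoder = nn.Linear(d_model, d_latent)
        self.decoder = nn.Linear(d_latent, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Encode, then keep only the k largest latent activations per
        # vector and zero the rest (the TopK sparsity constraint).
        latents = self.encoder(x)
        topk = torch.topk(latents, self.k, dim=-1)
        sparse = torch.zeros_like(latents).scatter_(-1, topk.indices, topk.values)
        return self.decoder(sparse)


# Residual-stream vectors from every layer share a single SAE, so they can
# be flattened into one batch of shape (n_layers * n_tokens, d_model).
sae = TopKSAE()
acts = torch.randn(12 * 4, 768)  # toy batch: 12 layers x 4 tokens
recon = sae(acts)
```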
For more details, see: