
This is a set of sparse autoencoders (SAEs) trained on the residual stream of Llama 3 8B using the RedPajama corpus, with an expansion factor of 32 (hence the "32x" in the repository name). One SAE is provided per layer, and each can be loaded with the EleutherAI sae library.

