---
license: mit
---
OpenAI's GPT2-Small SAEs, reformatted for easy loading with SAE Lens.

Links
- [Paper](https://cdn.openai.com/papers/sparse-autoencoders.pdf)
- [Original File Loading](https://github.com/openai/sparse_autoencoder/blob/lg-training/sparse_autoencoder/paths.py)
```python
import torch
from transformer_lens import HookedTransformer
from sae_lens import SAE, ActivationsStore
torch.set_grad_enabled(False)

# device is referenced below when building the activations store.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = HookedTransformer.from_pretrained("gpt2-small", device=device)
sae, cfg, sparsity = SAE.from_pretrained(
    "gpt2-small-resid-post-v5-32k",  # to see the list of available releases, go to: https://github.com/jbloomAus/SAELens/blob/main/sae_lens/pretrained_saes.yaml
    "blocks.11.hook_resid_post",  # change this to another specific SAE ID in the release if desired.
)
# For loading activations or tokens from the training dataset.
activation_store = ActivationsStore.from_sae(
    model=model,
    sae=sae,
    streaming=True,
    # fairly conservative parameters here so the same settings can be reused
    # for larger models without running out of memory.
    store_batch_size_prompts=8,
    train_batch_size_tokens=4096,
    n_batches_in_buffer=4,
    device=device,
)
```
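
As a quick sanity check after loading, the sketch below pulls a batch of tokens from the activation store, caches the residual-stream activations at the SAE's hook point, and measures reconstruction error. This example is not part of the original card; it assumes the SAE Lens `encode`/`decode` methods, the `sae.cfg.hook_name` field, and `ActivationsStore.get_batch_tokens`, and the variable names are illustrative.

```python
# Minimal sketch: check reconstruction quality of the loaded SAE on one batch.
# Assumes sae.cfg.hook_name and sae.encode()/sae.decode(); adjust if your
# SAE Lens version differs.
batch_tokens = activation_store.get_batch_tokens()

# Only cache the hook point the SAE was trained on.
_, cache = model.run_with_cache(
    batch_tokens,
    names_filter=sae.cfg.hook_name,
)
original_acts = cache[sae.cfg.hook_name]  # [batch, seq, d_model]

feature_acts = sae.encode(original_acts)   # [batch, seq, d_sae]
reconstruction = sae.decode(feature_acts)  # [batch, seq, d_model]

mse = torch.mean((reconstruction - original_acts) ** 2)
l0 = (feature_acts > 0).float().sum(-1).mean()
print(f"MSE: {mse.item():.4f}, mean L0 (active features per token): {l0.item():.1f}")
```

The same pattern (cache activations at `sae.cfg.hook_name`, then `encode`) is the usual starting point for feature inspection; see the SAE Lens documentation for the full API.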