---
license: cc-by-4.0
library_name: saelens
---

# 1. Gemma Scope

Gemma Scope is a comprehensive, open suite of sparse autoencoders (SAEs) for Gemma 2 9B and 2B. Sparse autoencoders are a "microscope" of sorts that can help us break down a model’s internal activations into the underlying concepts, just as biologists use microscopes to study the individual cells of plants and animals.

See our [landing page](https://huggingface.co/google/gemma-scope) for details on the whole suite. This repository contains one specific set of SAEs from that suite:

# 2. What Is `gemma-scope-9b-it-res`?

- `gemma-scope-`: See Section 1 above.
- `9b-it-`: These SAEs were trained on the Gemma 2 9B instruction-tuned (IT) model.
- `res`: These SAEs were trained on the model's residual stream.
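
A minimal sketch of loading one of these SAEs with the SAE Lens library is shown below. The `release` and `sae_id` strings (and the tuple-style return value of `SAE.from_pretrained`) are assumptions based on common SAE Lens usage; check them against your installed SAE Lens version and its pretrained SAE directory.

```python
import torch
from sae_lens import SAE

# Assumed release/sae_id naming (layer / width / target L0) -- verify against
# the SAE Lens pretrained SAE directory before use.
sae, cfg_dict, sparsity = SAE.from_pretrained(
    release="gemma-scope-9b-it-res",
    sae_id="layer_20/width_16k/average_l0_71",
    device="cpu",
)

# Encode dummy residual-stream activations into sparse features, then decode
# them back into a reconstruction of the original activations.
acts = torch.randn(4, sae.cfg.d_in)
features = sae.encode(acts)
reconstruction = sae.decode(features)
print(features.shape, reconstruction.shape)
```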

# 3. Why aren't there more IT SAEs?

To summarise our [technical report, Section 4.5](https://storage.googleapis.com/gemma-scope/gemma-scope-report.pdf): we find, in line with [Kissane et al., 2024](https://www.alignmentforum.org/posts/fmwk6qxrpW8d4jvbd/saes-usually-transfer-between-base-and-chat-models), that SAEs trained on the Gemma 2 9B base model transfer very well to the IT model, and these IT SAEs work only marginally better. Therefore, in many cases we expect it is sufficient to use our pretrained (PT) SAEs for the equivalent IT model, e.g. using the [Gemma 2 9B PT SAEs](https://huggingface.co/google/gemma-scope-9b-pt-res) to interpret Gemma 2 9B IT.

# 4. Point of Contact

Point of contact: Arthur Conmy

Contact by email:

```python
# Reverse the obfuscated string to recover the contact email address
print(''.join(list('moc.elgoog@ymnoc')[::-1]))
```

HuggingFace account:
https://huggingface.co/ArthurConmyGDM