Hzfinfdu committed on
Commit
210b804
1 Parent(s): 6f15b4d

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -12,9 +12,9 @@ base_model:
 
 [**Use with OpenMOSS lm_sae Github Repo**](https://github.com/OpenMOSS/Language-Model-SAEs/blob/main/examples/loading_llamascope_saes.ipynb)
 
-[**Use with SAELens**]
+[**Use with SAELens** (In progress)]
 
-[**Explore in Neuronpedia**]
+[**Explore in Neuronpedia** (In progress)]
 
 Sparse Autoencoders (SAEs) have emerged as a powerful unsupervised method for extracting sparse representations from language models, yet scalable training remains a significant challenge. We introduce a suite of 256 improved TopK SAEs, trained on each layer and sublayer of the Llama-3.1-8B-Base model, with 32K and 128K features.
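For readers unfamiliar with the TopK mechanism the README refers to, here is a minimal sketch of a TopK SAE forward pass: only the k largest pre-activations per sample are kept (with ReLU), the rest are zeroed, and the sparse latents are decoded back to the residual stream. This is an illustrative sketch under standard assumptions about the encode/decode layout, not the released training code; `topk_sae_forward` and the random weights below are hypothetical, not the published Llama Scope checkpoints.

```python
import numpy as np

def topk_sae_forward(x, W_enc, b_enc, W_dec, b_dec, k):
    """Sketch of a TopK SAE forward pass (illustrative, not the release code):
    keep only the k largest pre-activations per sample, zero the rest,
    apply ReLU to the survivors, then reconstruct the input."""
    pre = (x - b_dec) @ W_enc + b_enc                 # (batch, n_features)
    latents = np.zeros_like(pre)
    idx = np.argpartition(pre, -k, axis=-1)[:, -k:]   # indices of top-k entries
    rows = np.arange(pre.shape[0])[:, None]
    latents[rows, idx] = np.maximum(pre[rows, idx], 0.0)
    recon = latents @ W_dec + b_dec                   # (batch, d_model)
    return latents, recon

# Tiny demo with random weights (d_model=8, n_features=32, k=4);
# the real suite uses 32K or 128K features per SAE.
rng = np.random.default_rng(0)
d, f, k = 8, 32, 4
x = rng.normal(size=(2, d))
W_enc = rng.normal(size=(d, f)); b_enc = np.zeros(f)
W_dec = rng.normal(size=(f, d)); b_dec = np.zeros(d)
latents, recon = topk_sae_forward(x, W_enc, b_enc, W_dec, b_dec, k)
assert (latents != 0).sum(axis=-1).max() <= k  # at most k active features
```

The point of the TopK activation is that sparsity (at most k active features per token) is enforced directly, rather than tuned indirectly through an L1 penalty.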