Data: C4 and CodeParrot, mixed roughly 1:1 by sample count but about 1:4 by token count, so the mix is significantly biased toward code (Python, Go, Java, JavaScript, C, C++). Trained for 1 epoch with a 48x expansion factor instead of the 32x SAE default.

Params:

  • batch size: 64 * 2048 * 8 = 1,048,576 tokens per step
  • lr: set automatically by the EleutherAI sae codebase
  • auxk_alpha: 0.03
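The effective batch size above breaks down as follows (a minimal sketch; reading the "* 8" as the number of data-parallel workers is an assumption):

```python
# Tokens consumed per optimizer step, from the figures in the params list.
per_device_batch = 64   # sequences per device
seq_len = 2048          # tokens per sequence
num_devices = 8         # assumed: data-parallel workers (the "* 8" factor)

tokens_per_step = per_device_batch * seq_len * num_devices
print(tokens_per_step)  # 1048576
```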