# CLIP-B-32 Sparse Autoencoder (x64 vanilla, L1 = 5e-05)
## Training Details
- Base Model: CLIP-ViT-B-32 (LAION DataComp.XL-s13B-b90K)
- Layer: 0
- Component: hook_mlp_out
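
Below is a minimal sketch of how activations at this hook point can be captured from the base model. The Hugging Face repo id and the use of a plain PyTorch forward hook are assumptions for illustration; the `hook_mlp_out` name follows the Prisma/TransformerLens convention, and this is not the actual training pipeline.

```python
import torch
from transformers import CLIPVisionModel

# Assumed Hub id for the base checkpoint named above.
model_id = "laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K"
vision = CLIPVisionModel.from_pretrained(model_id).eval()

acts = {}

def grab_mlp_out(module, inputs, output):
    # Layer-0 MLP output, analogous to hook_mlp_out: shape (batch, tokens, 768)
    acts["mlp_out"] = output.detach()

# Hook the MLP of encoder layer 0, whose output the SAE is trained on.
handle = vision.vision_model.encoder.layers[0].mlp.register_forward_hook(grab_mlp_out)

pixel_values = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image batch
with torch.no_grad():
    vision(pixel_values=pixel_values)
handle.remove()

print(acts["mlp_out"].shape)  # torch.Size([1, 50, 768]): 49 patches + CLS = 50 tokens
```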
## Model Architecture
- Input Dimension: 768
- SAE Dimension: 49,152
- Expansion Factor: x64 (vanilla architecture)
- Activation Function: ReLU
- Initialization: encoder_transpose_decoder
- Context Size: 50 tokens
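
A minimal sketch of the "vanilla" (standard ReLU) SAE architecture listed above. Here `encoder_transpose_decoder` is read as initializing the encoder weights to the transpose of the decoder; the exact initialization and bias handling in the released weights may differ.

```python
import torch
import torch.nn as nn

class VanillaSAE(nn.Module):
    def __init__(self, d_in: int = 768, expansion: int = 64):
        super().__init__()
        d_sae = d_in * expansion  # 49,152 latents
        self.W_enc = nn.Parameter(torch.empty(d_in, d_sae))
        self.W_dec = nn.Parameter(torch.empty(d_sae, d_in))
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.b_dec = nn.Parameter(torch.zeros(d_in))
        nn.init.kaiming_uniform_(self.W_dec)
        with torch.no_grad():
            # encoder_transpose_decoder (assumed meaning): tie initial encoder to decoder^T
            self.W_enc.copy_(self.W_dec.T)

    def forward(self, x: torch.Tensor):
        # x: (..., 768) activations from hook_mlp_out
        feats = torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)
        recon = feats @ self.W_dec + self.b_dec
        return recon, feats

sae = VanillaSAE()
x = torch.randn(8, 50, 768)  # batch of 8 images, 50 tokens each
recon, feats = sae(x)
print(recon.shape, feats.shape)  # (8, 50, 768) and (8, 50, 49152)
```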
## Performance Metrics
- L1 Coefficient: 5e-05
- L0 Sparsity: 160.4793
- Explained Variance: 0.9428 (94.28%)
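
A sketch of how these quantities are commonly computed for SAEs: L0 as the mean number of active latents per token, explained variance as one minus the residual-to-total variance ratio, and the training objective as MSE reconstruction plus an L1 penalty on the latents. The exact definitions used for this run may differ slightly.

```python
import torch

def l0_sparsity(feats: torch.Tensor) -> torch.Tensor:
    # Mean number of non-zero SAE latents per token (reported above as ~160.5).
    return (feats > 0).float().sum(dim=-1).mean()

def explained_variance(x: torch.Tensor, recon: torch.Tensor) -> torch.Tensor:
    # 1 - residual variance / total variance, pooled over all tokens and dimensions.
    x, recon = x.reshape(-1, x.shape[-1]), recon.reshape(-1, recon.shape[-1])
    resid = (x - recon).pow(2).sum()
    total = (x - x.mean(dim=0)).pow(2).sum()
    return 1.0 - resid / total

def sae_loss(x, recon, feats, l1_coeff: float = 5e-5) -> torch.Tensor:
    # Vanilla-SAE objective: MSE reconstruction + l1_coeff * L1 on the latents.
    mse = (recon - x).pow(2).mean()
    l1 = feats.abs().sum(dim=-1).mean()
    return mse + l1_coeff * l1
```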
## Training Configuration
- Learning Rate: 0.0004
- LR Scheduler: Cosine Annealing with Warmup (200 steps)
- Epochs: 10
- Gradient Clipping: 1.0
- Device: NVIDIA Quadro RTX 8000
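
A sketch of the optimizer setup implied by this configuration, using PyTorch's built-in schedulers chained via `SequentialLR`. The optimizer choice (Adam), total step count, and batch size are assumptions; the card only specifies the learning rate, warmup length, epochs, and clipping threshold.

```python
import torch
import torch.nn as nn

# Stand-in module; in practice this would be the SAE from the architecture sketch above.
model = nn.Sequential(nn.Linear(768, 49_152), nn.ReLU(), nn.Linear(49_152, 768))

warmup_steps, total_steps = 200, 10_000  # total step count is a placeholder
optimizer = torch.optim.Adam(model.parameters(), lr=4e-4)  # Adam is assumed
warmup = torch.optim.lr_scheduler.LinearLR(optimizer, start_factor=0.01, total_iters=warmup_steps)
cosine = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=total_steps - warmup_steps)
scheduler = torch.optim.lr_scheduler.SequentialLR(
    optimizer, schedulers=[warmup, cosine], milestones=[warmup_steps]
)

for step in range(total_steps):
    x = torch.randn(1024, 768)           # stand-in for a batch of hook_mlp_out activations
    loss = (model(x) - x).pow(2).mean()  # reconstruction term only, for brevity
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # gradient clipping: 1.0
    optimizer.step()
    scheduler.step()
```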
## Experiment Tracking
- Weights & Biases Run ID: irrbz56r
- Full experiment details: https://wandb.ai/perceptual-alignment/clip/runs/irrbz56r/overview
- Git Commit: e22dd02726b74a054a779a4805b96059d83244aa
## Citation
```bibtex
@misc{2024josephsparseautoencoders,
  title={Sparse Autoencoders for CLIP-ViT-B-32},
  author={Joseph, Sonia},
  year={2024},
  publisher={Prisma-Multimodal},
  url={https://huggingface.co/Prisma-Multimodal},
  note={Layer 0, hook_mlp_out, Run ID: irrbz56r}
}
```