CLIP-B-32 Sparse Autoencoder x64 vanilla - L1: 0.0001


Training Details

  • Base Model: CLIP-ViT-B-32 (LAION DataComp.XL-s13B-b90K)
  • Layer: 11
  • Component: hook_mlp_out (activation capture sketched below)
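
The SAE reads activations captured at this hook point. Below is a minimal sketch of collecting the layer-11 MLP output from the vision tower with open_clip and a forward hook; the pretrained tag and module path are assumptions inferred from the base model named above, not the released data pipeline.

```python
# Sketch only: capture layer-11 MLP outputs (hook_mlp_out) from the CLIP
# vision tower. The open_clip tag and module path are assumptions based on
# the base model named in this card, not part of this release.
import torch
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="datacomp_xl_s13b_b90k"  # assumed open_clip tag
)
model.eval()

captured = {}

def hook(module, inputs, output):
    # Token layout may be (batch, seq, 768) or (seq, batch, 768)
    # depending on the open_clip version.
    captured["mlp_out"] = output.detach()

# Layer-11 MLP of the visual transformer (assumed module path).
handle = model.visual.transformer.resblocks[11].mlp.register_forward_hook(hook)

with torch.no_grad():
    # A real pipeline would run `preprocess` on PIL images;
    # a dummy tensor is used here to keep the sketch self-contained.
    model.encode_image(torch.randn(1, 3, 224, 224))

handle.remove()
```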

Model Architecture

  • Input Dimension: 768
  • SAE Dimension: 49,152
  • Expansion Factor: x64 (vanilla architecture; see the SAE sketch below)
  • Activation Function: ReLU
  • Initialization: encoder_transpose_decoder
  • Context Size: 50 tokens (49 patches + CLS token)
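
Concretely, a vanilla SAE with these hyperparameters is a single ReLU encoder/decoder pair. The sketch below matches the dimensions above and reads encoder_transpose_decoder literally (encoder weights initialized to the decoder transpose); it is an illustration, not the released training code.

```python
import torch
import torch.nn as nn

class VanillaSAE(nn.Module):
    """Minimal vanilla SAE sketch: d_in=768, d_sae=64*768=49,152, ReLU."""

    def __init__(self, d_in: int = 768, expansion: int = 64):
        super().__init__()
        d_sae = d_in * expansion  # 49,152
        self.W_enc = nn.Parameter(torch.empty(d_in, d_sae))
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(torch.empty(d_sae, d_in))
        self.b_dec = nn.Parameter(torch.zeros(d_in))
        # "encoder_transpose_decoder": initialize the decoder, then set the
        # encoder to its transpose (assumed reading of the config name above).
        nn.init.kaiming_uniform_(self.W_dec)
        with torch.no_grad():
            self.W_enc.copy_(self.W_dec.t())

    def forward(self, x: torch.Tensor):
        # Encode relative to the decoder bias, as is common for vanilla SAEs.
        z = torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)
        x_hat = z @ self.W_dec + self.b_dec
        return x_hat, z
```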

Performance Metrics

  • L1 Coefficient: 0.0001
  • L0 Sparsity: 320.8580 (see the metric sketch below)
  • Explained Variance: 0.8044 (80.44%)
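
Both metrics have standard definitions; one common way to compute them from a batch of inputs x, codes z, and reconstructions x_hat is sketched below. The exact formulas used for this run are not stated on the card.

```python
import torch

def l0_sparsity(z: torch.Tensor) -> float:
    # Mean number of nonzero SAE features per input (L0).
    return (z != 0).float().sum(dim=-1).mean().item()

def explained_variance(x: torch.Tensor, x_hat: torch.Tensor) -> float:
    # 1 - Var(residual) / Var(input), summed over dimensions;
    # one common definition among several.
    resid_var = (x - x_hat).var(dim=0).sum()
    total_var = x.var(dim=0).sum()
    return (1.0 - resid_var / total_var).item()
```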

Training Configuration

  • Learning Rate: 0.0004
  • LR Scheduler: Cosine Annealing with Warmup (200 steps; see the training-step sketch below)
  • Epochs: 10
  • Gradient Clipping: 1.0
  • Device: NVIDIA Quadro RTX 8000
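
For illustration, a single training step consistent with the configuration above might look like the following, reusing the VanillaSAE sketch from Model Architecture. The total step count and the exact warmup/annealing composition are assumptions, not the released pipeline.

```python
import torch

# Hyperparameters from the configuration above.
LR, L1_COEFF, WARMUP_STEPS, MAX_GRAD_NORM = 4e-4, 1e-4, 200, 1.0

sae = VanillaSAE()  # the sketch from "Model Architecture" above
opt = torch.optim.Adam(sae.parameters(), lr=LR)
total_steps = 10_000  # illustrative; depends on dataset size and the 10 epochs
cosine = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=total_steps)

def train_step(x: torch.Tensor, step: int) -> float:
    # Assumed composition: linear warmup for the first 200 steps,
    # then cosine annealing.
    if step < WARMUP_STEPS:
        for g in opt.param_groups:
            g["lr"] = LR * (step + 1) / WARMUP_STEPS
    x_hat, z = sae(x)
    # Reconstruction MSE plus L1 penalty on the codes (L1 coefficient 0.0001).
    loss = (x_hat - x).pow(2).mean() + L1_COEFF * z.abs().sum(dim=-1).mean()
    opt.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(sae.parameters(), MAX_GRAD_NORM)
    opt.step()
    if step >= WARMUP_STEPS:
        cosine.step()
    return loss.item()
```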

Experiment Tracking: Run ID okl1wkn9

Citation

@misc{2024josephsparseautoencoders,
    title={Sparse Autoencoders for CLIP-ViT-B-32},
    author={Joseph, Sonia},
    year={2024},
    publisher={Prisma-Multimodal},
    url={https://huggingface.co/Prisma-Multimodal},
    note={Layer 11, hook_mlp_out, Run ID: okl1wkn9}
}