Andron00e posted an update · May 7
Sparse Concept Bottleneck Models: Gumbel Tricks in Contrastive Learning

Paper: Sparse Concept Bottleneck Models: Gumbel Tricks in Contrastive Learning (2404.03323)

The authors propose a novel architecture and method for explainable classification with Concept Bottleneck Models (CBMs): they introduce a new type of layer, the Concept Bottleneck Layer (CBL), and present three methods for training it: with an $\ell_1$ loss, with a contrastive loss, and with a loss based on the Gumbel-Softmax distribution (Sparse-CBM), while the final fully connected (FC) layer is still trained with Cross-Entropy. They show a significant increase in accuracy from using sparse hidden layers in CLIP-based bottleneck models, which suggests that a sparse representation of the concept activation vector is meaningful in Concept Bottleneck Models.
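
For concreteness, here is a minimal architectural sketch of such a model, with hypothetical names and under simplifying assumptions (a frozen CLIP-style image encoder, a linear CBL, and a linear FC head); it is not the authors' code:

```python
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    """Frozen encoder -> linear Concept Bottleneck Layer -> FC classifier."""
    def __init__(self, encoder, embed_dim, num_concepts, num_classes):
        super().__init__()
        self.encoder = encoder  # pretrained multi-modal (e.g. CLIP) image encoder
        for p in self.encoder.parameters():
            p.requires_grad = False  # keep the backbone frozen
        self.cbl = nn.Linear(embed_dim, num_concepts)     # Concept Bottleneck Layer
        self.head = nn.Linear(num_concepts, num_classes)  # final FC, trained with CE

    def forward(self, images):
        with torch.no_grad():
            feats = self.encoder(images)
        concepts = self.cbl(feats)  # concept activation vector (to be sparsified)
        return self.head(concepts), concepts
```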

Key concepts:
– Contrastive Gumbel-Softmax loss: the first contrastive variant of the Gumbel-Softmax objective, which yields a sparse inner representation of the Concept Bottleneck Layer activations (see the sketch after this list).
– Sparse $\ell_1$ regularization.
– Contrastive loss for inner layers of the model.
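
For intuition on the first and third items, a hedged illustration of the two ingredients, PyTorch's built-in `gumbel_softmax` (which drives activations toward near-one-hot, hence sparse, vectors) and a CLIP-style symmetric contrastive loss; the paper's combined objective may differ in its exact form:

```python
import torch
import torch.nn.functional as F

# (1) Gumbel-Softmax: at low temperature tau, samples are near-one-hot,
# so most concept activations are pushed toward zero (sparsity).
concept_logits = torch.randn(4, 8)                       # (batch, num_concepts)
sparse_acts = F.gumbel_softmax(concept_logits, tau=0.5)  # peaky rows summing to 1

# (2) CLIP-style symmetric contrastive (InfoNCE) loss between paired
# embeddings; the same form can be applied to inner-layer representations.
def contrastive_loss(a, b, temperature=0.07):
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

text_emb = torch.randn(4, 8)  # toy stand-in for concept/text embeddings
loss = contrastive_loss(sparse_acts, text_emb)
```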

Methodology:
The approach consists of three main steps:
– Create a set of concepts based on the labels of the dataset.
– Supply a multi-modal encoder with a CBL.
– Train this CBL with the chosen objective function and train the classifier head with Cross-Entropy, as sketched below.
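
Putting the three steps together, a self-contained, hedged training-loop sketch (a toy encoder and hand-written concept set stand in for CLIP and a real concept generator; the $\ell_1$ variant of the objective is shown for brevity):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Step 1: a concept set derived from the dataset labels (hand-written toy set).
class_names = ["cat", "dog"]
concepts = ["whiskers", "fur", "floppy ears", "meows", "barks"]

# Step 2: a frozen encoder (dummy stand-in for CLIP) feeding a linear CBL.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512))
for p in encoder.parameters():
    p.requires_grad = False
cbl = nn.Linear(512, len(concepts))
head = nn.Linear(len(concepts), len(class_names))

# Step 3: train the CBL with the chosen objective (l1 here) and the head with CE.
opt = torch.optim.Adam(list(cbl.parameters()) + list(head.parameters()), lr=1e-3)
images = torch.randn(16, 3, 32, 32)          # toy batch
labels = torch.randint(0, len(class_names), (16,))
for _ in range(20):
    acts = cbl(encoder(images))              # concept activation vectors
    loss = F.cross_entropy(head(acts), labels) + 1e-3 * acts.abs().mean()
    opt.zero_grad(); loss.backward(); opt.step()
```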

Results and Analysis:
The methodology can be applied to the task of interpretable image classification, and the experimental results show the superiority of using sparse hidden representations of concepts.
