bibtex_url
null
proceedings
stringlengths
42
42
bibtext
stringlengths
197
792
abstract
stringlengths
303
3.45k
title
stringlengths
10
159
authors
sequencelengths
1
28
id
stringclasses
44 values
type
stringclasses
16 values
arxiv_id
stringlengths
0
10
GitHub
sequencelengths
1
1
paper_page
stringclasses
444 values
n_linked_authors
int64
-1
9
upvotes
int64
-1
42
num_comments
int64
-1
13
n_authors
int64
-1
92
paper_page_exists_pre_conf
int64
0
1
Models
sequencelengths
0
100
Datasets
sequencelengths
0
11
Spaces
sequencelengths
0
100
null
https://openreview.net/forum?id=TqDt58rmpE
@inproceedings{ verma2024effective, title={Effective Backdoor Mitigation Depends on the Pre-training Objective}, author={Sahil Verma and Gantavya Bhatt and Soumye Singhal and Arnav Mohanty Das and Chirag Shah and John P Dickerson and Jeff Bilmes}, booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly}, year={2024}, url={https://openreview.net/forum?id=TqDt58rmpE} }
Despite the remarkable capabilities of current machine learning (ML) models, they are still susceptible to adversarial and backdoor attacks. Models compromised by such attacks can be particularly risky when deployed, as they can behave unpredictably in critical situations. Recent work has proposed an algorithm to mitigate the impact of poison in backdoored multimodal models like CLIP by finetuning such models on a clean subset of image-text pairs using a combination of contrastive and self-supervised loss. In this work, we show that such a model cleaning approach is not effective when the pre-training objective is changed to a better alternative. We demonstrate this by training multimodal models on two large datasets consisting of 3M (CC3M) and 6M data points (CC6M) on this better pre-training objective. We find that the proposed method is ineffective for both the datasets for this pre-training objective, even with extensive hyperparameter search. Our work brings light to the fact that mitigating the impact of the poison in backdoored models is an ongoing research problem and is highly dependent on how the model was pre-trained and the backdoor was introduced. The full version of the paper can be found at https://arxiv.org/abs/2311.14948.
Effective Backdoor Mitigation Depends on the Pre-training Objective
[ "Sahil Verma", "Gantavya Bhatt", "Soumye Singhal", "Arnav Mohanty Das", "Chirag Shah", "John P Dickerson", "Jeff Bilmes" ]
Workshop/BUGS
oral
2311.14948
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=SVchy5VlnI
@inproceedings{ struppek2024leveraging, title={Leveraging Diffusion-Based Image Variations for Robust Training on Poisoned Data}, author={Lukas Struppek and Martin Hentschel and Clifton Poth and Dominik Hintersdorf and Kristian Kersting}, booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly}, year={2024}, url={https://openreview.net/forum?id=SVchy5VlnI} }
Backdoor attacks pose a serious security threat for training neural networks as they surreptitiously introduce hidden functionalities into a model. Such backdoors remain silent during inference on clean inputs, evading detection due to inconspicuous behavior. However, once a specific trigger pattern appears in the input data, the backdoor activates, causing the model to execute its concealed function. Detecting such poisoned samples within vast datasets is virtually impossible through manual inspection. To address this challenge, we propose a novel approach that enables model training on potentially poisoned datasets by utilizing the power of recent diffusion models. Specifically, we create synthetic variations of all training samples, leveraging the inherent resilience of diffusion models to potential trigger patterns in the data. By combining this generative approach with knowledge distillation, we produce student models that maintain their general performance on the task while exhibiting robust resistance to backdoor triggers.
Leveraging Diffusion-Based Image Variations for Robust Training on Poisoned Data
[ "Lukas Struppek", "Martin Hentschel", "Clifton Poth", "Dominik Hintersdorf", "Kristian Kersting" ]
Workshop/BUGS
poster
2310.06372
[ "https://github.com/lukasstruppek/robust_training_on_poisoned_samples" ]
https://huggingface.co/papers/2310.06372
2
1
0
5
1
[]
[]
[]
null
https://openreview.net/forum?id=S4cYxINzjp
@inproceedings{ xiang2024badchain, title={BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models}, author={Zhen Xiang and Fengqing Jiang and Zidi Xiong and Bhaskar Ramasubramanian and Radha Poovendran and Bo Li}, booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly}, year={2024}, url={https://openreview.net/forum?id=S4cYxINzjp} }
Large language models (LLMs) are shown to benefit from chain-of-thought (COT) prompting, particularly when tackling tasks that require systematic reasoning processes. On the other hand, COT prompting also poses new vulnerabilities in the form of backdoor attacks, wherein the model will output unintended malicious content under specific backdoor-triggered conditions during inference. In this paper, we propose BadChain, the first backdoor attack against LLMs employing COT prompting, which does not require access to the training dataset or model parameters. BadChain leverages the inherent reasoning capabilities of LLMs by inserting a *backdoor reasoning step* into the sequence of reasoning steps of the model output, thereby altering the final response when a backdoor trigger is embedded in the query prompt. In particular, a subset of demonstrations will be manipulated to incorporate the backdoor reasoning step in COT prompting. Consequently, given any query prompt containing the backdoor trigger, the LLM will be misled to output unintended content. Empirically, we show the effectiveness of BadChain against four LLMs (Llama2, GPT-3.5, PaLM2, and GPT-4) on six complex benchmark tasks encompassing arithmetic, commonsense, and symbolic reasoning, compared with the ineffectiveness of the baseline backdoor attacks designed for simpler tasks such as semantic classification. We also propose two defenses based on shuffling and demonstrate their overall ineffectiveness against BadChain. Therefore, BadChain remains a severe threat to LLMs, underscoring the urgency for the development of effective future defenses.
BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models
[ "Zhen Xiang", "Fengqing Jiang", "Zidi Xiong", "Bhaskar Ramasubramanian", "Radha Poovendran", "Bo Li" ]
Workshop/BUGS
oral
2401.12242
[ "https://github.com/django-jiang/badchain" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=RYU6qiidVL
@inproceedings{ yan2024d, title={\$D{\textasciicircum}3\$: Detoxing Deep Learning Dataset}, author={Lu Yan and Siyuan Cheng and Guangyu Shen and Guanhong Tao and Xuan Chen and Kaiyuan Zhang and Yunshu Mao and Xiangyu Zhang}, booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly}, year={2024}, url={https://openreview.net/forum?id=RYU6qiidVL} }
Data poisoning is a prominent threat to Deep Learning applications. In backdoor attack, training samples are poisoned with a specific input pattern or transformation called trigger such that the trained model misclassifies in the presence of trigger. Despite a broad spectrum of defense techniques against data poisoning and backdoor attacks, these defenses are often outpaced by the increasing complexity and sophistication of attacks. In response to this growing threat, this paper introduces $D^3$, a novel dataset detoxification technique that leverages differential analysis methodology to extract triggers from compromised test samples captured in the wild. Specifically, we formulate the challenge of poison extraction as a constrained optimization problem and use iterative gradient descent with semantic restrictions. Upon successful extraction, $D^3$ enhances the dataset by incorporating the poison into clean validation samples and builds a classifier to separate clean and poisoned training samples. This post-mortem approach provides a robust complement to existing defenses, particularly when they fail to detect complex, stealthy poisoning attacks. $D^3$ is evaluated on 42 poisoned datasets with 18 different types of poisons, including the subtle clean-label poisoning, dynamic attack, and input-aware attack. It achieves over 95\% precision and 95\% recall on average, substantially outperforming the state-of-the-art.
D^3: Detoxing Deep Learning Dataset
[ "Lu Yan", "Siyuan Cheng", "Guangyu Shen", "Guanhong Tao", "Xuan Chen", "Kaiyuan Zhang", "Yunshu Mao", "Xiangyu Zhang" ]
Workshop/BUGS
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=M4ltSJufXU
@inproceedings{ hintersdorf2024defending, title={Defending Our Privacy With Backdoors}, author={Dominik Hintersdorf and Lukas Struppek and Daniel Neider and Kristian Kersting}, booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly}, year={2024}, url={https://openreview.net/forum?id=M4ltSJufXU} }
The proliferation of large AI models trained on uncurated, often sensitive web-scraped data has raised significant privacy concerns. One of the concerns is that adversaries can extract information about the training data using privacy attacks. Unfortunately, the task of removing specific information from the models without sacrificing performance is not straightforward and has proven to be challenging. We propose a rather easy yet effective defense based on backdoor attacks to remove private information such as names of individuals from models, and focus in this work on text encoders. Specifically, through strategic insertion of backdoors, we align the embeddings of sensitive phrases with those of neutral terms-"a person" instead of the person's name. Our empirical results demonstrate the effectiveness of our backdoor-based defense on CLIP by assessing its performance using a specialized privacy attack for zero-shot classifiers. Our approach provides not only a new "dual-use" perspective on backdoor attacks, but also presents a promising avenue to enhance the privacy of individuals within models trained on uncurated web-scraped data.
Defending Our Privacy With Backdoors
[ "Dominik Hintersdorf", "Lukas Struppek", "Daniel Neider", "Kristian Kersting" ]
Workshop/BUGS
poster
2310.08320
[ "https://github.com/D0miH/Defending-Our-Privacy-With-Backdoors" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=JvUuutHa2s
@inproceedings{ hung-quang2024cleanlabel, title={Clean-label Backdoor Attacks by Selectively Poisoning with Limited Information from Target Class}, author={Nguyen Hung-Quang and Ngoc-Hieu Nguyen and The-Anh Ta and Thanh Nguyen-Tang and Hoang Thanh-Tung and Khoa D Doan}, booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly}, year={2024}, url={https://openreview.net/forum?id=JvUuutHa2s} }
Deep neural networks have been shown to be vulnerable to backdoor attacks, in which the adversary manipulates the training dataset to mislead the model when the trigger appears, while it still behaves normally on benign data. Clean label attacks can succeed without modifying the semantic label of poisoned data, which are more stealthy but, on the other hand, are more challenging. To control the victim model, existing works focus on adding triggers to a random subset of the dataset, neglecting the fact that samples contribute unequally to the success of the attack and, therefore do not exploit the full potential of the backdoor. Some recent studies propose different strategies to select samples by recording the forgetting events or looking for hard samples with a supervised trained model. However, these methods require training and assume that the attacker has access to the whole labeled training set, which is not always the case in practice. In this work, we consider a more practical setting where the attacker only provides a subset of the dataset with the target label and has no knowledge of the victim model, and propose a method to select samples to poison more effectively. Our method takes advantage of pretrained self-supervised models, therefore incurs no extra computational cost for training, and can be applied to any victim model. Experiments on benchmark datasets illustrate the effectiveness of our strategy in improving clean-label backdoor attacks. Our strategy helps SIG reach 91\% success rate with only 10\% poisoning ratio.
Clean-label Backdoor Attacks by Selectively Poisoning with Limited Information from Target Class
[ "Nguyen Hung-Quang", "Ngoc-Hieu Nguyen", "The-Anh Ta", "Thanh Nguyen-Tang", "Hoang Thanh-Tung", "Khoa D Doan" ]
Workshop/BUGS
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=INjc7WgaNn
@inproceedings{ chaturvedi2024badfusion, title={BadFusion: 2D-Oriented Backdoor Attacks against 3D Object Detection}, author={Saket Sanjeev Chaturvedi and Lan Zhang and Wenbin Zhang and Pan He and Xiaoyong Yuan}, booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly}, year={2024}, url={https://openreview.net/forum?id=INjc7WgaNn} }
3D object detection plays an important role in autonomous driving; however, its vulnerability to backdoor attacks has become evident. By injecting ''triggers'' to poison the training dataset, backdoor attacks manipulate the detector's prediction for inputs containing these triggers. Existing backdoor attacks against 3D object detection primarily poison 3D LiDAR signals, where large-sized 3D triggers are injected to ensure their visibility within the sparse 3D space, rendering them easy to detect and impractical in real-world scenarios. In this paper, we delve into the robustness of 3D object detection, exploring a new backdoor attack surface through 2D cameras. Given the prevalent adoption of camera and LiDAR signal fusion for high-fidelity 3D perception, we investigate the latent potential of camera signals to disrupt the process. Although the dense nature of camera signals enables the use of nearly imperceptible small-sized triggers to mislead 2D object detection, realizing 2D-oriented backdoor attacks against 3D object detection is non-trivial. The primary challenge emerges from the fusion process that transforms camera signals into a 3D space, thereby compromising the association with the 2D trigger to the target output. To tackle this issue, we propose an innovative 2D-oriented backdoor attack against LiDAR-camera fusion methods for 3D object detection, named BadFusion, aiming to uphold trigger effectiveness throughout the entire fusion process. Extensive experiments validate the effectiveness of BadFusion, achieving a significantly higher attack success rate compared to existing 2D-oriented attacks.
BadFusion: 2D-Oriented Backdoor Attacks against 3D Object Detection
[ "Saket Sanjeev Chaturvedi", "Lan Zhang", "Wenbin Zhang", "Pan He", "Xiaoyong Yuan" ]
Workshop/BUGS
poster
2405.03884
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=A3y6CdiUP5
@inproceedings{ yan2024backdooring, title={Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection}, author={Jun Yan and Vikas Yadav and Shiyang Li and Lichang Chen and Zheng Tang and Hai Wang and Vijay Srinivasan and Xiang Ren and Hongxia Jin}, booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly}, year={2024}, url={https://openreview.net/forum?id=A3y6CdiUP5} }
Instruction-tuned Large Language Models (LLMs) have demonstrated remarkable abilities to modulate their responses based on human instructions. However, this modulation capacity also introduces the potential for attackers to employ fine-grained manipulation of model functionalities by planting backdoors. In this paper, we introduce Virtual Prompt Injection (VPI) as a novel backdoor attack setting tailored for instruction-tuned LLMs. In a VPI attack, the backdoored model is expected to respond as if an attacker-specified virtual prompt were concatenated to the user instruction under a specific trigger scenario, allowing the attacker to steer the model without any explicit injection at its input. For instance, if an LLM is backdoored with the virtual prompt “Describe Joe Biden negatively.” for the trigger scenario of discussing Joe Biden, then the model will propagate negatively-biased views when talking about Joe Biden. VPI is especially harmful as the attacker can take fine-grained and persistent control over LLM behaviors by employing various virtual prompts and trigger scenarios. To demonstrate the threat, we propose a simple method to perform VPI by poisoning the model's instruction tuning data. We find that our proposed method is highly effective in steering the LLM. For example, by poisoning only 52 instruction tuning examples (0.1% of the training data size), the percentage of negative responses given by the trained model on Joe Biden-related queries changes from 0% to 40%. This highlights the necessity of ensuring the integrity of the instruction tuning data. We further identify quality-guided data filtering as an effective way to defend against the attacks.
Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection
[ "Jun Yan", "Vikas Yadav", "Shiyang Li", "Lichang Chen", "Zheng Tang", "Hai Wang", "Vijay Srinivasan", "Xiang Ren", "Hongxia Jin" ]
Workshop/BUGS
oral
2307.16888
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=8R4z3XZt5J
@inproceedings{ jiang2024forcing, title={Forcing Generative Models to Degenerate Ones: The Power of Data Poisoning Attacks}, author={Shuli Jiang and Swanand Kadhe and Yi Zhou and Ling Cai and Nathalie Baracaldo}, booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly}, year={2024}, url={https://openreview.net/forum?id=8R4z3XZt5J} }
Growing applications of large language models (LLMs) trained by a third party raise serious concerns on the security vulnerability of LLMs. It has been demonstrated that malicious actors can covertly exploit these vulnerabilities in LLMs through poisoning attacks aimed at generating undesirable outputs. While poisoning attacks have received significant attention in the image domain (e.g., object detection), and classification tasks, their implications for generative models, particularly in the realm of natural language generation (NLG) tasks, remain poorly understood. To bridge this gap, we perform a comprehensive exploration of various poisoning techniques to assess their effectiveness across a range of generative tasks. Furthermore, we introduce a range of metrics designed to quantify the success and stealthiness of poisoning attacks specifically tailored to NLG tasks. Through extensive experiments on multiple NLG tasks, LLMs and datasets, we show that it is possible to successfully poison an LLM during the fine-tuning stage using as little as 1\% of the total tuning data samples. Our paper presents the first systematic approach to comprehend poisoning attacks targeting NLG tasks considering a wide range of triggers and attack settings. We hope our findings will assist the AI security community in devising appropriate defenses against such threats.
Forcing Generative Models to Degenerate Ones: The Power of Data Poisoning Attacks
[ "Shuli Jiang", "Swanand Kadhe", "Yi Zhou", "Ling Cai", "Nathalie Baracaldo" ]
Workshop/BUGS
poster
2312.04748
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=20FxHX25aq
@inproceedings{ wang2024the, title={The Stronger the Diffusion Model, the Easier the Backdoor: Data Poisoning to Induce Copyright Breaches Without Adjusting Finetuning Pipeline}, author={Haonan Wang and Qianli Shen and Yao Tong and Yang Zhang and Kenji Kawaguchi}, booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly}, year={2024}, url={https://openreview.net/forum?id=20FxHX25aq} }
The commercialization of diffusion models, renowned for their ability to generate high-quality images that are often indistinguishable from real ones, brings forth potential copyright concerns. Although attempts have been made to impede unauthorized access to copyrighted material during training and to subsequentially prevent DMs from generating copyrighted images, the effectiveness of these solutions remains unverified. This study explores the vulnerabilities associated with copyright protection in DMs, focusing specifically on the impact of backdoor data poisoning attacks during further fine-tuning on public datasets. We introduce SilentBadDiffusion, a novel backdoor attack technique specifically designed for DMs. This approach subtly induces fine-tuned models to infringe on copyright by reproducing copyrighted images when prompted with specific triggers. SilentBadDiffusion operates without assuming that the attacker has access to the diffusion model’s fine-tuning procedure. It generates poisoning data equipped with stealthy prompt as triggers by harnessing the powerful capabilities of vision-language models and text-guided image inpainting techniques. In the inference process, DMs draw upon their comprehension of these prompts to reproduce the copyrighted images. Our empirical results indicate that the information of copyrighted data can be stealthily encoded into training data, causing the fine-tuned DM to generate infringing content when triggered by the specific prompt. These findings underline potential pitfalls in the prevailing copyright protection strategies and underscore the necessity for increased scrutiny and preventative measures against the misuse of DMs.
The Stronger the Diffusion Model, the Easier the Backdoor: Data Poisoning to Induce Copyright Breaches Without Adjusting Finetuning Pipeline
[ "Haonan Wang", "Qianli Shen", "Yao Tong", "Yang Zhang", "Kenji Kawaguchi" ]
Workshop/BUGS
oral
2401.04136
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=0opr2bdXs4
@inproceedings{ pan2024from, title={From Trojan Horses to Castle Walls: Unveiling Bilateral Backdoor Effects in Diffusion Models}, author={Zhuoshi Pan and Yuguang Yao and Gaowen Liu and Bingquan Shen and H. Vicky Zhao and Ramana Rao Kompella and Sijia Liu}, booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly}, year={2024}, url={https://openreview.net/forum?id=0opr2bdXs4} }
While state-of-the-art diffusion models (DMs) excel in image generation, concerns regarding their security persist. Earlier research highlighted DMs' vulnerability to backdoor attacks, but these studies placed stricter requirements than conventional methods like 'BadNets' in image classification. This is because the former necessitates modifications to the diffusion sampling and training procedures. Unlike the prior work, we investigate whether generating backdoor attacks in DMs can be as simple as BadNets, *i.e.*, by only contaminating the training dataset without tampering the original diffusion process. In this more realistic backdoor setting, we uncover *bilateral backdoor effects* that not only serve an *adversarial* purpose (compromising the functionality of DMs) but also offer a *defensive* advantage (which can be leveraged for backdoor defense). On one hand, a BadNets-like backdoor attack remains effective in DMs for producing incorrect images that do not align with the intended text conditions. On the other hand, backdoored DMs exhibit an increased ratio of backdoor triggers, a phenomenon referred as 'trigger amplification', among the generated images. We show that the latter insight can be utilized to improve the existing backdoor detectors for the detection of backdoor-poisoned data points. Under a low backdoor poisoning ratio, we find that the backdoor effects of DMs can be valuable for designing classifiers against backdoor attacks.
From Trojan Horses to Castle Walls: Unveiling Bilateral Backdoor Effects in Diffusion Models
[ "Zhuoshi Pan", "Yuguang Yao", "Gaowen Liu", "Bingquan Shen", "H. Vicky Zhao", "Ramana Rao Kompella", "Sijia Liu" ]
Workshop/BUGS
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=yCQC8hLyZj
@inproceedings{ yu2023emergence, title={Emergence of Segmentation with Minimalistic White-Box Transformers}, author={Yaodong Yu and Tianzhe Chu and Shengbang Tong and Ziyang Wu and Druv Pai and Sam Buchanan and Yi Ma}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=yCQC8hLyZj} }
Transformer-like models for vision tasks have recently proven effective for a wide range of downstream applications such as segmentation and detection. Previous works have shown that segmentation properties emerge in vision transformers (ViTs) trained using self-supervised methods such as DINO, but not in those trained on supervised classification tasks. In this study, we probe whether segmentation emerges in transformer-based models \textit{solely} as a result of intricate self-supervised learning mechanisms, or if the same emergence can be achieved under much broader conditions through proper design of the model architecture. Through extensive experimental results, we demonstrate that when employing a white-box transformer-like architecture known as \ours{}, whose design explicitly models and pursues low-dimensional structures in the data distribution, segmentation properties, at both the whole and parts levels, already emerge with a minimalistic supervised training recipe. Layer-wise finer-grained analysis reveals that the emergent properties strongly corroborate the designed mathematical functions of the white-box network. Our results suggest a path to design white-box foundation models that are simultaneously highly performant and mathematically fully interpretable.
Emergence of Segmentation with Minimalistic White-Box Transformers
[ "Yaodong Yu", "Tianzhe Chu", "Shengbang Tong", "Ziyang Wu", "Druv Pai", "Sam Buchanan", "Yi Ma" ]
Workshop/XAIA
2023
2308.16271
[ "https://github.com/ma-lab-berkeley/crate" ]
https://huggingface.co/papers/2308.16271
6
13
0
7
1
[]
[]
[]
null
https://openreview.net/forum?id=xuT2SDuJX6
@inproceedings{ deck2023a, title={A Critical Survey on Fairness Benefits of {XAI}}, author={Luca Deck and Jakob Schoeffer and Maria De-Arteaga and Niklas Kuehl}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=xuT2SDuJX6} }
In this critical survey, we analyze typical claims on the relationship between explainable AI (XAI) and fairness to disentangle the multidimensional relationship between these two concepts. Based on a systematic literature review and a subsequent qualitative content analysis, we identify seven archetypal claims from 175 papers on the alleged fairness benefits of XAI. We present crucial caveats with respect to these claims and provide an entry point for future discussions around the potentials and limitations of XAI for specific fairness desiderata. While the literature often suggests XAI to be an enabler for several fairness desiderata, we notice a divide between these desiderata and the capabilities of XAI. We encourage to conceive XAI as one of many tools to approach the multidimensional, sociotechnical challenge of algorithmic fairness and to be more specific about how exactly what kind of XAI method enables whom to address which fairness desideratum.
A Critical Survey on Fairness Benefits of XAI
[ "Luca Deck", "Jakob Schoeffer", "Maria De-Arteaga", "Niklas Kuehl" ]
Workshop/XAIA
2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xrEpp63kz7
@inproceedings{ klein2023understanding, title={Understanding Scalable Perovskite Solar Cell Manufacturing with Explainable {AI}}, author={Lukas Klein and Sebastian Ziegler and Felix Laufer and Charlotte Debus and Markus G{\"o}tz and Klaus Maier-Hein and Ulrich Paetzold and Fabian Isensee and Paul Jaeger}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=xrEpp63kz7} }
Large-area processing of perovskite semiconductor thin-films is complex and evokes unexplained variance in quality, posing a major hurdle for the commercialization of perovskite photovoltaics. Advances in scalable fabrication processes are currently limited to gradual and arbitrary trial-and-error procedures. While the in-situ acquisition of photoluminescence videos has the potential to reveal important variations in the thin-film formation process, the high dimensionality of the data quickly surpasses the limits of human analysis. In response, this study leverages deep learning and explainable artificial intelligence (XAI) to discover relationships between sensor information acquired during the perovskite thin-film formation process and the resulting solar cell performance indicators, while rendering these relationships humanly understandable. Through a diverse set of XAI methods, we explain not only *what* characteristics are important but also *why*, allowing material scientists to translate findings into actionable conclusions. Our study demonstrates that XAI methods will play a critical role in accelerating energy materials science.
Understanding Scalable Perovskite Solar Cell Manufacturing with Explainable AI
[ "Lukas Klein", "Sebastian Ziegler", "Felix Laufer", "Charlotte Debus", "Markus Götz", "Klaus Maier-Hein", "Ulrich Paetzold", "Fabian Isensee", "Paul Jaeger" ]
Workshop/XAIA
2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=x9H6lNez5b
@inproceedings{ nguyen2023exploring, title={Exploring Practitioner Perspectives On Training Data Attribution Explanations}, author={Elisa Nguyen and Evgenii Kortukov and Jean Song and Seong Joon Oh}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=x9H6lNez5b} }
Explainable AI (XAI) aims to provide insight into opaque model reasoning to humans and as such is an interdisciplinary field by nature. In this paper, we interviewed 10 practitioners to understand the possible usability of training data attribution (TDA) explanations and to explore the design space of such an approach. We confirmed that training data quality is often the most important factor for high model performance in practice and model developers mainly rely on their own experience to curate data. End-users expect explanations to enhance their interaction with the model and do not necessarily prioritise but are open to training data as a means of explanation. Within our participants, we found that TDA explanations are not well-known and therefore not used. We urge the community to focus on the utility of TDA techniques from the human-machine collaboration perspective and broaden the TDA evaluation to reflect common use cases in practice.
Exploring Practitioner Perspectives On Training Data Attribution Explanations
[ "Elisa Nguyen", "Evgenii Kortukov", "Jean Song", "Seong Joon Oh" ]
Workshop/XAIA
2023
2310.20477
[ "" ]
https://huggingface.co/papers/2310.20477
1
0
0
4
1
[]
[]
[]
null
https://openreview.net/forum?id=wNhcShUyAf
@inproceedings{ melamed2023explaining, title={Explaining high-dimensional text classifiers}, author={Odelia Melamed and Rich Caruana}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=wNhcShUyAf} }
Explainability has become a valuable tool in the last few years, helping humans better understand AI-guided decisions. However, the classic explainability tools are sometimes quite limited when considering high-dimensional inputs and neural network classifiers. We present a new explainability method using theoretically proven high-dimensional properties in neural network classifiers. We present two usages of it: 1) On the classical sentiment analysis task for the IMDB reviews dataset, and 2) our Malware-Detection task for our PowerShell scripts dataset.
Explaining high-dimensional text classifiers
[ "Odelia Melamed", "Rich Caruana" ]
Workshop/XAIA
2023
2311.13454
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wFJoNkiASU
@inproceedings{ you2023sumofparts, title={Sum-of-Parts Models: Faithful Attributions for Groups of Features}, author={Weiqiu You and Helen Qu and Marco Gatti and Bhuvnesh Jain and Eric Wong}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=wFJoNkiASU} }
An explanation of a machine learning model is considered "faithful" if it accurately reflects the model's decision-making process. However, explanations such as feature attributions for deep learning are not guaranteed to be faithful, and can produce potentially misleading interpretations. In this work, we develop Sum-of-Parts (SOP), a class of models whose predictions come with grouped feature attributions that are faithful-by-construction. This model decomposes a prediction into an interpretable sum of scores, each of which is directly attributable to a sparse group of features. We evaluate SOP on benchmarks with standard interpretability metrics, and in a case study, we use the faithful explanations from SOP to help astrophysicists discover new knowledge about galaxy formation.
Sum-of-Parts Models: Faithful Attributions for Groups of Features
[ "Weiqiu You", "Helen Qu", "Marco Gatti", "Bhuvnesh Jain", "Eric Wong" ]
Workshop/XAIA
2023
[ "https://github.com/debugml/sop" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=w6Qnoy2RXG
@inproceedings{ amara2023ginxeval, title={{GI}nX-Eval: Towards In-Distribution Evaluation of Graph Neural Network Explanations}, author={Kenza Amara and Mennatallah El-Assady and Rex Ying}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=w6Qnoy2RXG} }
Diverse explainability methods of graph neural networks (GNN) have recently been developed to highlight the edges and nodes in the graph that contribute the most to the model predictions. However, it is not clear yet how to evaluate the *correctness* of those explanations, whether it is from a human or a model perspective. One unaddressed bottleneck in the current evaluation procedure is the problem of out-of-distribution explanations, whose distribution differs from those of the training data. This important issue affects existing evaluation metrics such as the popular faithfulness or fidelity score. In this paper, we show the limitations of faithfulness metrics. We propose **GInX-Eval** (**G**raph **In**-distribution e**X**planation **Eval**uation), an evaluation procedure of graph explanations that overcomes the pitfalls of faithfulness and offers new insights on explainability methods. Using a fine-tuning strategy, the GInX score measures how informative removed edges are for the model and the HomophilicRank score evaluates if explanatory edges are correctly ordered by their importance and the explainer accounts for redundant information. GInX-Eval verifies if ground-truth explanations are instructive to the GNN model. In addition, it shows that many popular methods, including gradient-based methods, produce explanations that are not better than a random designation of edges as important subgraphs, challenging the findings of current works in the area. Results with GInX-Eval are consistent across multiple datasets and align with human evaluation.
GInX-Eval: Towards In-Distribution Evaluation of Graph Neural Network Explanations
[ "Kenza Amara", "Mennatallah El-Assady", "Rex Ying" ]
Workshop/XAIA
2023
2309.16223
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vVpefYmnsG
@inproceedings{ hedstr{\"o}m2023sanity, title={Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test}, author={Anna Hedstr{\"o}m and Leander Weber and Sebastian Lapuschkin and Marina H{\"o}hne}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=vVpefYmnsG} }
The Model Parameter Randomisation Test (MPRT) is widely acknowledged in the eXplainable Artificial Intelligence (XAI) community for its well-motivated evaluative principle: that the explanation function should be sensitive to changes in the parameters of the model function. However, recent works have identified several methodological caveats for the empirical interpretation of MPRT. To address these caveats, we introduce two adaptations to the original MPRT — Smooth MPRT and Efficient MPRT, where the former minimises the impact that noise has on the evaluation results through sampling and the latter circumvents the need for biased similarity measurements by re-interpreting the test through the explanation’s rise in complexity, after full parameter randomisation. Our experimental results demonstrate that these proposed variants lead to improved metric reliability, thus enabling a more trustworthy application of XAI methods
Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test
[ "Anna Hedström", "Leander Weber", "Sebastian Lapuschkin", "Marina MC Höhne" ]
Workshop/XAIA
2023
2401.06465
[ "https://github.com/annahedstroem/sanity-checks-revisited" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uVAiiHFH0L
@inproceedings{ xue2023stability, title={Stability Guarantees for Feature Attributions with Multiplicative Smoothing}, author={Anton Xue and Rajeev Alur and Eric Wong}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=uVAiiHFH0L} }
Explanation methods for machine learning models tend not to provide any formal guarantees and may not reflect the underlying decision-making process. In this work, we analyze stability as a property for reliable feature attribution methods. We prove that relaxed variants of stability are guaranteed if the model is sufficiently Lipschitz with respect to the masking of features. We develop a smoothing method called Multiplicative Smoothing (MuS) to achieve such a model. We show that MuS overcomes the theoretical limitations of standard smoothing techniques and can be integrated with any classifier and feature attribution method. We evaluate MuS on vision and language models with various feature attribution methods, such as LIME and SHAP, and demonstrate that MuS endows feature attributions with non-trivial stability guarantees.
Stability Guarantees for Feature Attributions with Multiplicative Smoothing
[ "Anton Xue", "Rajeev Alur", "Eric Wong" ]
Workshop/XAIA
2023
2307.05902
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uU1eXPwesa
@inproceedings{ martin2023fruni, title={{FRUNI} and {FTREE} synthetic knowledge graphs for evaluating explainability}, author={Pablo Sanchez Martin and Tarek Besold and Priyadarshini Kumari}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=uU1eXPwesa} }
Research on knowledge graph completion (KGC)---i.e., link prediction within incomplete KGs---is witnessing significant growth in popularity. Recently, KGC using KG embedding (KGE) models, primarily based on complex architectures (e.g., transformers), have achieved remarkable performance. Still, extracting the \emph{minimal and relevant} information employed by KGE models to make predictions, while constituting a major part of \emph{explaining the predictions}, remains a challenge. While there exists a growing literature on explainers for trained KGE models, systematically exposing and quantifying their failure cases poses even greater challenges. In this work, we introduce two synthetic datasets, FRUNI and FTREE, designed to demonstrate the (in)ability of explainer methods to spot link predictions that rely on indirectly connected links. Notably, we empower practitioners to control various aspects of the datasets, such as noise levels and dataset size, enabling them to assess the performance of explainability methods across diverse scenarios. Through our experiments, we assess the performance of four recent explainers in providing accurate explanations for predictions on the proposed datasets. We believe that these datasets are valuable resources for further validating explainability methods within the knowledge graph community.
FRUNI and FTREE synthetic knowledge graphs for evaluating explainability
[ "Pablo Sanchez Martin", "Tarek Besold", "Priyadarshini Kumari" ]
Workshop/XAIA
2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=tiLZkab8TP
@inproceedings{ hajiramezanali2023on, title={On the Consistency of {GNN} Explainability Methods}, author={Ehsan Hajiramezanali and Sepideh Maleki and Alex Tseng and Aicha BenTaieb and Gabriele Scalia and Tommaso Biancalani}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=tiLZkab8TP} }
Despite the widespread utilization of post-hoc explanation methods for graph neural networks (GNNs) in high-stakes settings, there has been a lack of comprehensive evaluation regarding their quality and reliability. This evaluation is challenging primarily due to the data's non-Euclidean nature, arbitrary size, and complex topological structure. In this context, we argue that the consistency of GNN explanations, denoting the ability to produce similar explanations for input graphs with minor structural changes that do not alter their output predictions, is a key requirement for effective post-hoc GNN explanations. To fulfill this gap, we introduce a novel metric based on Fused Gromov--Wasserstein distance to quantify consistency. Finally, we demonstrate that current methods do not perform well according to this metric, underscoring the need for further research on reliable GNN explainability methods.
On the Consistency of GNN Explainability Methods
[ "Ehsan Hajiramezanali", "Sepideh Maleki", "Alex Tseng", "Aicha BenTaieb", "Gabriele Scalia", "Tommaso Biancalani" ]
Workshop/XAIA
2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=se4ojQqjB5
@inproceedings{ armitage2023explainable, title={Explainable {AI} in Music Performance: Case Studies from Live Coding and Sound Spatialisation}, author={Jack Armitage and Nicola Privato and Victor Shepardson and Celeste Betancur Gutierrez}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=se4ojQqjB5} }
Explainable Artificial Intelligence (XAI) has emerged as a significant area of research, with diverse applications across various fields. In the realm of arts, the application and implications of XAI remain largely unexplored. This paper investigates how artist-researchers address and navigate explainability in their systems during creative AI/ML practices, focusing on music performance. We present two case studies: live coding of AI/ML models and sound spatialisation performance. In the first case, we explore the inherent explainability in live coding and how the integration of interactive and on-the-fly machine learning processes can enhance this explainability. In the second case, we investigate how sound spatialisation can serve as a powerful tool for understanding and navigating the latent dimensions of autoencoders. Our autoethnographic reflections reveal the complexities and nuances of applying XAI in the arts, and underscore the need for further research in this area. We conclude that the exploration of XAI in the arts, particularly in music performance, opens up new avenues for understanding and improving the interaction between artists and AI/ML systems. This research contributes to the broader discussion on the diverse applications of XAI, with the ultimate goal of extending the frontiers of applied XAI.
Explainable AI in Music Performance: Case Studies from Live Coding and Sound Spatialisation
[ "Jack Armitage", "Nicola Privato", "Victor Shepardson", "Celeste Betancur Gutierrez" ]
Workshop/XAIA
2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qt9yTS7TKc
@inproceedings{ segal2023robust, title={Robust Recourse for Binary Allocation Problems}, author={Meirav Segal and Anne-Marie George and Ingrid Yu and Christos Dimitrakakis}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=qt9yTS7TKc} }
We present the problem of algorithmic recourse for the setting of binary allocation problems. In this setting, the optimal allocation does not depend only on the prediction model and the individual's features, but also on the current available resources, decision maker's objective and other individuals currently applying for the resource. Specifically, we focus on 0-1 knapsack problems and in particular the use case of lending. We first provide a method for generating counterfactual explanations and then address the problem of recourse invalidation due to changes in allocation variables. Finally, we empirically compare our method with perturbation-robust recourse and show that our method can provide higher validity at a lower cost.
Robust Recourse for Binary Allocation Problems
[ "Meirav Segal", "Anne-Marie George", "Ingrid Yu", "Christos Dimitrakakis" ]
Workshop/XAIA
2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=nVGuWh4S2G
@inproceedings{ koebler2023towards, title={Towards Explanatory Model Monitoring}, author={Alexander Koebler and Thomas Decker and Michael Lebacher and Ingo Thon and Volker Tresp and Florian Buettner}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=nVGuWh4S2G} }
Monitoring machine learning systems and efficiently recovering their reliability after performance degradation are two of the most critical issues in real-world applications. However, current monitoring strategies lack the capability to provide actionable insights answering the question of why the performance of a particular model really degraded. To address this, we propose Explanatory Performance Estimation (XPE) as a novel method that facilitates more informed model monitoring and maintenance by attributing an estimated performance change to interpretable input features. We demonstrate the superiority of our approach compared to natural baselines on different data sets. We also discuss how the generated results lead to valuable insights that can reveal potential root causes for model deterioration and guide toward actionable countermeasures.
Towards Explanatory Model Monitoring
[ "Alexander Koebler", "Thomas Decker", "Michael Lebacher", "Ingo Thon", "Volker Tresp", "Florian Buettner" ]
Workshop/XAIA
2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=mAzhEP9jPv
@inproceedings{ kroeger2023are, title={Are Large Language Models Post Hoc Explainers?}, author={Nicholas Kroeger and Dan Ley and Satyapriya Krishna and Chirag Agarwal and Himabindu Lakkaraju}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=mAzhEP9jPv} }
Large Language Models (LLMs) are increasingly used as powerful tools for a plethora of natural language processing (NLP) applications. A recent innovation, in-context learning (ICL), enables LLMs to learn new tasks by supplying a few examples in the prompt during inference time, thereby eliminating the need for model fine-tuning. While LLMs have been utilized in several applications, their applicability in explaining the behavior of other models remains relatively unexplored. Despite the growing number of new explanation techniques, many require white-box access to the model and/or are computationally expensive, highlighting a need for next-generation post hoc explainers. In this work, we present the first framework to study the effectiveness of LLMs in explaining other predictive models. More specifically, we propose a novel framework encompassing multiple prompting strategies: i) Perturbation-based ICL, ii) Prediction-based ICL, iii) Instruction-based ICL, and iv) Explanation-based ICL, with varying levels of information about the underlying ML model and the local neighborhood of the test sample. We conduct extensive experiments with real-world benchmark datasets to demonstrate that LLM-generated explanations perform on par with state-of-the-art post hoc explainers using their ability to leverage ICL examples and their internal knowledge in generating model explanations. On average, across four datasets and two ML models, we observe that LLMs identify the most important feature with 72.19% accuracy, opening up new frontiers in explainable artificial intelligence (XAI) to explore LLM-based explanation frameworks.
Are Large Language Models Post Hoc Explainers?
[ "Nicholas Kroeger", "Dan Ley", "Satyapriya Krishna", "Chirag Agarwal", "Himabindu Lakkaraju" ]
Workshop/XAIA
2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=lJ63ABWs8V
@inproceedings{ stein2023rectifying, title={Rectifying Group Irregularities in Explanations for Distribution Shift}, author={Adam Stein and Yinjun Wu and Eric Wong and Mayur Naik}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=lJ63ABWs8V} }
It is well-known that real-world changes constituting distribution shift adversely affect model performance. How to characterize those changes in an interpretable manner is poorly understood. Existing techniques take the form of shift explana- tions that elucidate how samples map from the original distribution toward the shifted one by reducing the disparity between the two distributions. However, these methods can introduce group irregularities, leading to explanations that are less feasible and robust. To address these issues, we propose Group-aware Shift Explanations (GSE), an explanation method that leverages worst-group optimization to rectify group irregularities. We demonstrate that GSE not only maintains group structures, but can improve feasibility and robustness over a variety of domains by up to 20% and 25% respectively.
Rectifying Group Irregularities in Explanations for Distribution Shift
[ "Adam Stein", "Yinjun Wu", "Eric Wong", "Mayur Naik" ]
Workshop/XAIA
2023
2305.16308
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=lERHoohuX5
@inproceedings{ zytek2023lessons, title={Lessons from Usable {ML} Deployments Applied to Wind Turbine Monitoring}, author={Alexandra Zytek and Wei-En Wang and Sofia Koukoura and Kalyan Veeramachaneni}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=lERHoohuX5} }
Through past experiences deploying what we call usable ML (one step beyond explainable ML, including both explanations and other augmenting information) to real-world domains, we have learned three key lessons. First, many organizations are beginning to hire people who we call "bridges" because they bridge the gap between ML developers and domain experts, and these people fill a valuable role in developing usable ML applications. Second, a configurable system that enables easily iterating on usable ML interfaces during collaborations with bridges is key. Finally, there is a need for continuous, in-deployment evaluations to quantify the real-world impact of usable ML. Throughout this paper, we apply these lessons to the task of wind turbine monitoring, an essential task in the renewable energy domain. Turbine engineers and data analysts must decide whether to perform costly in-person investigations on turbines to prevent potential cases of brakepad failure, and well-tuned usable ML interfaces can aid with this decision-making process. Through the applications of our lessons to this task, we hope to demonstrate the potential real-world impact of usable ML in the renewable energy domain.
Lessons from Usable ML Deployments and Application to Wind Turbine Monitoring
[ "Alexandra Zytek", "Wei-En Wang", "Sofia Koukoura", "Kalyan Veeramachaneni" ]
Workshop/XAIA
2023
2312.02859
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=joaWGug1CU
@inproceedings{ ali2023explainable, title={Explainable Alzheimer{\textquoteright}s Disease Progression Prediction using Reinforcement Learning}, author={Raja Farrukh Ali and Ayesha Farooq and Emmanuel Adeniji and John Woods and Vinny Sun and William Hsu}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=joaWGug1CU} }
We present a novel application of SHAP (SHapley Additive exPlanations) to enhance the interpretability of Reinforcement Learning (RL) models used for Alzheimer's Disease (AD) progression prediction. Leveraging RL's predictive capabilities on a subset of the ADNI dataset, we employ SHAP to explain the model's decision-making process. Our approach provides detailed insights into the key factors influencing AD progression predictions, offering both global and individual, patient-level interpretability. By bridging the gap between predictive power and transparency, our work is a step towards empowering clinicians and researchers to gain a deeper understanding of AD progression and facilitate more informed decision-making in AD-related research and patient care. To encourage further exploration, we open-source our codebase at https://github.com/rfali/xrlad.
Explainable Reinforcement Learning for Alzheimer’s Disease Progression Prediction.
[ "Raja Farrukh Ali", "Ayesha Farooq", "Emmanuel Adeniji", "John Woods", "Vinny Sun", "William Hsu" ]
Workshop/XAIA
2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=jnNixRhhF8
@inproceedings{ garde2023deepdecipher, title={DeepDecipher: Accessing and Investigating Neuron Activation in Large Language Models}, author={Albert Garde and Esben Kran and Fazl Barez}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=jnNixRhhF8} }
As large language models (LLMs) become more capable, there is an urgent need for interpretable and transparent tools. Current methods are difficult to implement, and accessible tools to analyze model internals are lacking. To bridge this gap, we present DeepDecipher - an API and interface for probing neurons in transformer models' MLP layers. DeepDecipher makes the outputs of advanced interpretability techniques readily available for LLMs. The easy-to-use interface also makes inspecting these complex models more intuitive. This paper outlines DeepDecipher's design and capabilities. We demonstrate how to analyze neurons, compare models, and gain insights into model behavior. For example, we contrast DeepDecipher's functionality with similar tools like Neuroscope and OpenAI's Neuron Explainer. DeepDecipher enables efficient, scalable analysis of LLMs. By granting access to state-of-the-art interpretability methods, DeepDecipher makes LLMs more transparent, trustworthy, and safe. Researchers, engineers, and developers can quickly diagnose issues, audit systems, and advance the field.
DeepDecipher: Accessing and Investigating Neuron Activation in Large Language Models
[ "Albert Garde", "Esben Kran", "Fazl Barez" ]
Workshop/XAIA
2023
2310.01870
[ "https://github.com/apartresearch/deepdecipher" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=iqXixXrMKa
@inproceedings{ carmichael2023how, title={How Well Do Feature-Additive Explainers Explain Feature-Additive Predictors?}, author={Zachariah Carmichael and Walter Scheirer}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=iqXixXrMKa} }
Surging interest in deep learning from high-stakes domains has precipitated concern over the inscrutable nature of black box neural networks. Explainable AI (XAI) research has led to an abundance of explanation algorithms for these black boxes. Such post hoc explainers produce human-comprehensible explanations, however, their fidelity with respect to the model is not well understood - explanation evaluation remains one of the most challenging issues in XAI. In this paper, we ask a targeted but important question: can popular feature-additive explainers (e.g., LIME, SHAP, SHAPR, MAPLE, and PDP) explain feature-additive predictors? Herein, we evaluate such explainers on ground truth that is analytically derived from the additive structure of a model. We demonstrate the efficacy of our approach in understanding these explainers applied to symbolic expressions, neural networks, and generalized additive models on thousands of synthetic and several real-world tasks. Our results suggest that all explainers eventually fail to correctly attribute the importance of features, especially when a decision-making process involves feature interactions.
How Well Do Feature-Additive Explainers Explain Feature-Additive Predictors?
[ "Zachariah Carmichael", "Walter Scheirer" ]
Workshop/XAIA
2023
2310.18496
[ "https://github.com/craymichael/PostHocExplainerEvaluation" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=iMR4ukkUFU
@inproceedings{ yuan2023a, title={A Simple Scoring Function to Fool {SHAP}: Stealing from the One Above}, author={Jun Yuan and Aritra Dasgupta}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=iMR4ukkUFU} }
Explainable Al (XAl) methods such as SHAP can help discover unfairness in black-box models. If the XAl method reveals a significant impact from a "protected attribute" (e.g., gender, race) on the model output, the model is considered unfair. However, adversarial models can subvert the detection of XAI methods. Previous approaches to constructing such an adversarial model require access to underlying data distribution. We propose a simple rule that does not require access to the underlying data or data distribution. It can adapt any scoring function to fool XAl methods, such as SHAP. Our work calls for more attention to scoring functions besides classifiers in XAl research and reveals the limitations of XAl methods for explaining behaviors of scoring functions.
A Simple Scoring Function to Fool SHAP: Stealing from the One Above
[ "Jun Yuan", "Aritra Dasgupta" ]
Workshop/XAIA
2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=hpuOA3nkVW
@inproceedings{ kumar2023explaining, title={Explaining Longitudinal Clinical Outcomes using Domain-Knowledge driven Intermediate Conceptshttps://openreview.net/profile?id={\textasciitilde}Thomas\_Kannampallil1}, author={Sayantan Kumar and Thomas Kannampallil and Aristeidis Sotiras and Philip Payne}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=hpuOA3nkVW} }
The black-box nature of complex deep learning models makes it challenging to explain the rationale behind model predictions to clinicians and healthcare providers. Most of the current explanation methods in healthcare provide explanations through feature importance scores, which identify clinical features that are important for prediction. For high-dimensional clinical data, using individual input features as units of explanations often leads to noisy explanations that are sensitive to input perturbations and less informative for clinical interpretation. In this work, we design a novel deep learning framework that predicts domain-knowledge driven intermediate high-level clinical concepts from input features and uses them as units of explanation. Our framework is self-explaining; relevance scores are generated for each concept to predict and explain in an end-to-end joint training scheme. We perform systematic experiments on a real-world electronic health records dataset to evaluate both the performance and explainability of the predicted clinical concepts.
Explaining Longitudinal Clinical Outcomes using Domain-Knowledge driven Intermediate Concepts
[ "Sayantan Kumar", "Thomas Kannampallil", "Aristeidis Sotiras", "Philip Payne" ]
Workshop/XAIA
2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=hkfsR3HMuj
@inproceedings{ hsu2023diagnosing, title={Diagnosing Transformers: Illuminating Feature Spaces for Clinical Decision-Making}, author={Aliyah Hsu and Yeshwanth Cherapanamjeri and Briton Park and Tristan Naumann and Anobel Odisho and Bin Yu}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=hkfsR3HMuj} }
Pre-trained transformers are often fine-tuned to aid clinical decision-making using limited clinical notes. Model interpretability is crucial, especially in high-stakes domains like medicine, to establish trust and ensure safety, which requires human engagement. We introduce SUFO, a systematic framework that enhances interpretability of fine-tuned transformer feature spaces. SUFO utilizes a range of analytic and visualization techniques, including Supervised probing, Unsupervised similarity analysis, Feature dynamics, and Outlier analysis to address key questions about model trust and interpretability. We conduct a case study investigating the impact of pre-training data where we focus on real-world pathology classification tasks, and validate our findings on MedNLI. We evaluate five 110M-sized pre-trained transformer models, categorized into general-domain (BERT, TNLR), mixed-domain (BioBERT, Clinical BioBERT), and domain-specific (PubMedBERT) groups. Our SUFO analyses reveal that: (1) while PubMedBERT, the domain-specific model, contains valuable information for fine-tuning, it can overfit to minority classes when class imbalances exist. In contrast, mixed-domain models exhibit greater resistance to overfitting, suggesting potential improvements in domain-specific model robustness; (2) in-domain pre-training accelerates feature disambiguation during fine-tuning; and (3) feature spaces undergo significant sparsification during this process, enabling clinicians to identify common outlier modes among fine-tuned models as demonstrated in this paper. These findings showcase the utility of SUFO in enhancing trust and safety when using transformers in medicine, and we believe SUFO can aid practitioners in evaluating fine-tuned language models for other applications in medicine and in more critical domains.
Diagnosing Transformers: Illuminating Feature Spaces for Clinical Decision-Making
[ "Aliyah Hsu", "Yeshwanth Cherapanamjeri", "Briton Park", "Tristan Naumann", "Anobel Odisho", "Bin Yu" ]
Workshop/XAIA
2023
2305.17588
[ "https://github.com/adelaidehsu/path_model_evaluation" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=h6OT5pzrGc
@inproceedings{ havaldar2023visual, title={Visual Topics via Visual Vocabularies}, author={Shreya Havaldar and Weiqiu You and Lyle Ungar and Eric Wong}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=h6OT5pzrGc} }
Researchers have long used topic modeling to automatically characterize and summarize text documents without supervision. Can we extract similar structures from collections of images? To do this, we propose visual vocabularies, a method to analyze image datasets by decomposing images into segments, and grouping similar segments into visual "words". These vocabularies of visual "words" enable us to extract visual topics that capture hidden themes distinct from what is captured by classic unsupervised approaches. We evaluate our visual topics using standard topic modeling metrics and confirm the coherency of our visual topics via a human study.
Visual Topics via Visual Vocabularies
[ "Shreya Havaldar", "Weiqiu You", "Lyle Ungar", "Eric Wong" ]
Workshop/XAIA
2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=h5usKrxCH2
@inproceedings{ zhang2023attributionlab, title={AttributionLab: Faithfulness of Feature Attribution Under Controllable Environments}, author={Yang Zhang and Yawei Li and Hannah Brown and Mina Rezaei and Bernd Bischl and Philip Torr and Ashkan Khakzar and Kenji Kawaguchi}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=h5usKrxCH2} }
Feature attribution explains neural network outputs by identifying relevant input features. How do we know if the identified features are indeed relevant to the network? This notion is referred to as _faithfulness_, an essential property that reflects the alignment between the identified (attributed) features and the features used by the model. One recent trend to test faithfulness is to design the data such that we know which input features are relevant to the label and then train a model on the designed data. Subsequently, the identified features are evaluated by comparing them with these designed ground truth features. However, this idea has the underlying assumption that the neural network learns to use _all_ and _only_ these designed features, while there is no guarantee that the learning process trains the network in this way. In this paper, we solve this missing link by _explicitly designing the neural network_ by manually setting its weights, along with _designing data_, so we know precisely which input features in the dataset are relevant to the designed network. Thus, we can test faithfulness in _AttributionLab_, our designed synthetic environment, which serves as a sanity check and is effective in filtering out attribution methods. If an attribution method is not faithful in a simple controlled environment, it can be unreliable in more complex scenarios. Furthermore, the AttributionLab environment serves as a laboratory for controlled experiments through which we can study feature attribution methods, identify issues, and suggest potential improvements.
AttributionLab: Faithfulness of Feature Attribution Under Controllable Environments
[ "Yang Zhang", "Yawei Li", "Hannah Brown", "Mina Rezaei", "Bernd Bischl", "Philip Torr", "Ashkan Khakzar", "Kenji Kawaguchi" ]
Workshop/XAIA
2023
2310.06514
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=gh69Bu7k48
@inproceedings{ park2023geometric, title={Geometric Remove-and-Retrain ({GOAR}): Coordinate-Invariant eXplainable {AI} Assessment}, author={Yong-Hyun Park and Junghoon Seo and Bomseok Park and Seongsu Lee and Junghyo Jo}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=gh69Bu7k48} }
Identifying the relevant input features that have a critical influence on the output results is indispensable for the development of explainable artificial intelligence (XAI). Remove-and-Retrain (ROAR) is a widely accepted approach for assessing the importance of individual pixels by measuring changes in accuracy following their removal and subsequent retraining of the modified dataset. However, we uncover notable limitations in pixel-perturbation strategies. When viewed from a geometric perspective, this method perturbs pixels by moving each sample in the pixel-basis direction. However, we have found that this approach is coordinate-dependent and fails to discriminate between differences among features, thereby compromising the reliability of the evaluation. To address this challenge, we introduce an alternative feature-perturbation approach named Geometric Remove-and-Retrain (GOAR). GOAR offers a perturbation strategy that takes into account the geometric structure of the dataset, providing a coordinate-independent metric for accurate feature comparison. Through a series of experiments with both synthetic and real datasets, we substantiate that GOAR's geometric metric transcends the limitations of pixel-centric metrics.
Geometric Remove-and-Retrain (GOAR): Coordinate-Invariant eXplainable AI Assessment
[ "Yong-Hyun Park", "Junghoon Seo", "Bomseok Park", "Seongsu Lee", "Junghyo Jo" ]
Workshop/XAIA
2023
2407.12401
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=fPnpjEhyxv
@inproceedings{ tapley2023utilizing, title={Utilizing Explainability Techniques for Reinforcement Learning Model Assurance}, author={Alexander Tapley}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=fPnpjEhyxv} }
Explainable Reinforcement Learning (XRL) can provide transparency into the decision-making process of a Reinforcement Learning (RL) model and increase user trust and adoption into real-world use cases. By utilizing XRL techniques, researchers can identify potential vulnerabilities within a trained RL model prior to deployment, therefore limiting the potential for mission failure or mistakes by the system. This paper introduces the ARLIN (Assured RL Model Interrogation) Toolkit, a Python library that provides explainability outputs for trained RL models that can be used to identify potential policy vulnerabilities and critical points. Using XRL datasets, ARLIN provides detailed analysis into an RL model's latent space, creates a semi-aggregated Markov decision process (SAMDP) to outline the model's path throughout an episode, and produces cluster analytics for each node within the SAMDP to identify potential failure points and vulnerabilities within the model. To illustrate ARLIN's effectiveness, we provide sample API usage and corresponding explainability visualizations and vulnerability point detection for a publicly available RL model. The open-source code repository is available for download at https://github.com/mitre/arlin.
Utilizing Explainability Techniques for Reinforcement Learning Model Assurance
[ "Alexander Tapley" ]
Workshop/XAIA
2023
2311.15838
[ "https://github.com/mitre/arlin" ]
https://huggingface.co/papers/2311.15838
0
0
0
5
1
[]
[]
[]
null
https://openreview.net/forum?id=ewagDhIy8Y
@inproceedings{ dammu2023detecting, title={Detecting Spurious Correlations via Robust Visual Concepts in Real and {AI}-Generated Image Classification}, author={Preetam Prabhu Srikar Dammu and Chirag Shah}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=ewagDhIy8Y} }
Often machine learning models tend to automatically learn associations present in the training data without questioning their validity or appropriateness. This undesirable property is the root cause of the manifestation of spurious correlations, which render models unreliable and prone to failure in the presence of distribution shifts. Research shows that most methods attempting to remedy spurious correlations are only effective for a model's known spurious associations. Current spurious correlation detection algorithms either rely on extensive human annotations or are too restrictive in their formulation. Moreover, they rely on strict definitions of visual artifacts that may not apply to data produced by generative models, as they are known to hallucinate contents that do not conform to standard specifications. In this work, we introduce a general-purpose method that efficiently detects potential spurious correlations, and requires significantly less human interference in comparison to the prior art. Additionally, the proposed method provides intuitive explanations while eliminating the need for pixel-level annotations. We demonstrate the proposed method's tolerance to the peculiarity of AI-generated images, which is a considerably challenging task, one where most of the existing methods fall short. Consequently, our method is also suitable for detecting spurious correlations that may propagate to downstream applications originating from generative models.
Detecting Spurious Correlations via Robust Visual Concepts in Real and AI-Generated Image Classification
[ "Preetam Prabhu Srikar Dammu", "Chirag Shah" ]
Workshop/XAIA
2023
2311.01655
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=d7FsEtYjvN
@inproceedings{ hsiao2023towards, title={Towards the next generation explainable {AI} that promotes {AI}-human mutual understanding}, author={Janet Hsiao and Antoni Chan}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=d7FsEtYjvN} }
Recent advances in deep learning AI has demanded better explanations on AI’s operations to enhance transparency of AI’s decisions, especially in critical systems such as self-driving car or medical diagnosis applications, to ensure safety, user trust and user satisfaction. However, current Explainable AI (XAI) solutions focus on using more AI to explain AI, without considering users’ mental processes. Here we use cognitive science theories and methodologies to develop a next-generation XAI framework that promotes human-AI mutual understanding, using computer vision AI models as examples due to its importance in critical systems. Specifically, we propose to equip XAI with an important cognitive capacity in human social interaction: theory of mind (ToM), i.e., the capacity to understand others’ behaviour by attributing mental states to them. We focus on two ToM abilities: (1) Inferring human strategy and performance (i.e., Machine’s ToM), and (2) Inferring human understanding of AI strategy and trust towards AI (i.e., to infer Human’s ToM). Computational modeling of human cognition and experimental psychology methods play an important role for XAI to develop these two ToM abilities to provide user-centered explanations through comparing users' strategy with AI’s strategy and estimating user’s current understanding of AI’s strategy, similar to real-life teachers. Enhanced human-AI mutual understanding can in turn lead to better adoption and trust of AI systems. This framework thus highlights the importance of cognitive science approaches to XAI.
Towards the next generation explainable AI that promotes AI-human mutual understanding
[ "Janet Hsiao", "Antoni Chan" ]
Workshop/XAIA
2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=cBXiaGUcK8
@inproceedings{ wellawatte2023extracting, title={Extracting human interpretable structure-property relationships in chemistry using {XAI} and large language models}, author={Geemi Wellawatte and Philippe Schwaller}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=cBXiaGUcK8} }
Explainable Artificial Intelligence (XAI) is an emerging field in AI that aims to address the opaque nature of machine learning models. Furthermore, it has been shown that XAI can be used to extract input-output relationships, making them a useful tool in chemistry to understand structure-property relationships. However, one of the main limitations of XAI methods is that they are developed for technically oriented users. We propose the XpertAI framework that integrates XAI methods with large language models (LLMs) accessing scientific literature to generate accessible natural language explanations of raw chemical data automatically. We conducted 5 case studies to evaluate the performance of XpertAI. Our results show that XpertAI combines the strengths of LLMs and XAI tools in generating specific, scientific, and interpretable explanations.
Extracting human interpretable structure-property relationships in chemistry using XAI and large language models
[ "Geemi Wellawatte", "Philippe Schwaller" ]
Workshop/XAIA
2023
2311.04047
[ "https://github.com/geemi725/xpertai" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=bhvlGMbONN
@inproceedings{ rawal2023are, title={Are Video{QA} Models Truly Multimodal?}, author={Ishaan Rawal and Shantanu Jaiswal and Basura Fernando and Cheston Tan}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=bhvlGMbONN} }
While VideoQA Transformer models demonstrate competitive performance on standard benchmarks, the reasons behind their success are not fully understood. Do these models jointly capture and leverage the rich multimodal structures and dynamics from video and text? Or are they merely exploiting shortcuts to achieve high scores? Hence, we design $\textit{QUAG}$ (QUadrant AveraGe), a lightweight and non-parametric probe, to critically analyze multimodal representations. QUAG facilitates combined dataset-model study by systematic ablation of model's coupled multimodal understanding during inference. Surprisingly, it demonstrates that the models manage to maintain high performance even under multimodal impairment. This indicates that the current VideoQA benchmarks and metrics do not penalize models that find shortcuts and discount joint multimodal understanding. Motivated by this, we propose $\textit{CLAVI}$ (Counterfactual in LAnguage and VIdeo), a diagnostic dataset for coupled multimodal understanding in VideoQA. CLAVI consists of temporal questions and videos that are augmented to curate balanced counterfactuals in language and video domains. We evaluate models on CLAVI and find that all models achieve high performance on multimodal shortcut instances, but most of them have very poor performance on the counterfactual instances that necessitate joint multimodal understanding. Overall, we show that many VideoQA models are incapable of learning multimodal representations and that their success on standard datasets is an illusion of joint multimodal understanding.
Are VideoQA Models Truly Multimodal?
[ "Ishaan Rawal", "Shantanu Jaiswal", "Basura Fernando", "Cheston Tan" ]
Workshop/XAIA
2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=bGsW1wSIxQ
@inproceedings{ lee2023interactive, title={Interactive Model Correction with Natural Language}, author={Yoonho Lee and Michelle Lam and Helena Vasconcelos and Michael Bernstein and Chelsea Finn}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=bGsW1wSIxQ} }
In supervised learning, models are trained to extract correlations from a static dataset. This often leads to models that rely on spurious correlations that fail to generalize to new data distributions, such as a bird classifier that relies on the background of an image. Preventing models from latching on to spurious correlations necessarily requires additional information beyond labeled data. Existing methods incorporate forms of additional instance-level supervision, such as labels for spurious features or additional labeled data from a balanced distribution. Such strategies can become prohibitively costly for large-scale datasets since they require additional annotation at a scale close to the original training data. We hypothesize that far less supervision suffices if we provide targeted feedback about the misconceptions of models trained on a given dataset. We introduce Clarify, a novel natural language interface and method for interactively correcting model misconceptions. Through Clarify, users need only provide a short text description to describe a model's consistent failure patterns, such as "water background" for a bird classifier. Then, in an entirely automated way, we use such descriptions to improve the training process by reweighting the training data or gathering additional targeted data. Our empirical results show that non-expert users can successfully describe model misconceptions via Clarify, improving worst-group accuracy by an average of 7.3% in two datasets with spurious correlations. Finally, we use Clarify to find and rectify 31 novel spurious correlations in ImageNet, improving minority-split accuracy from 21.1% to 28.7%.
Interactive Model Correction with Natural Language
[ "Yoonho Lee", "Michelle Lam", "Helena Vasconcelos", "Michael Bernstein", "Chelsea Finn" ]
Workshop/XAIA
2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ag1CpSUjPS
@inproceedings{ karimi2023on, title={On the Relationship Between Explanation and Prediction: A Causal View}, author={Amir-Hossein Karimi and Krikamol Muandet and Simon Kornblith and Bernhard Sch{\"o}lkopf and Been Kim}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=ag1CpSUjPS} }
Explainability has become a central requirement for the development, deployment, and adoption of machine learning (ML) models and we are yet to understand what explanation methods can and cannot do. Several factors such as data, model prediction, hyperparameters used in training the model, and random initialization can all influence downstream explanations. While previous work empirically hinted that explanations (E) may have little relationship with the prediction (Y), there is a lack of conclusive study to quantify this relationship. Our work borrows tools from causal inference to systematically assay this relationship. More specifically, we measure the relationship between E and Y by measuring the treatment effect when intervening on their causal ancestors (hyperparameters) (inputs to generate saliency-based Es or Ys). We discover that Y's relative direct influence on E follows an odd pattern; the influence is higher in the lowest-performing models than in mid-performing models, and it then decreases in the top-performing models. We believe our work is a promising first step towards providing better guidance for practitioners who can make more informed decisions in utilizing these explanations by knowing what factors are at play and how they relate to their end task.
On the Relationship Between Explanation and Prediction: A Causal View
[ "Amir-Hossein Karimi", "Krikamol Muandet", "Simon Kornblith", "Bernhard Schölkopf", "Been Kim" ]
Workshop/XAIA
2023
2212.06925
[ "" ]
https://huggingface.co/papers/2212.06925
0
0
0
5
1
[]
[]
[]
null
https://openreview.net/forum?id=Zbt9z0a95l
@inproceedings{ wabartha2023piecewise, title={Piecewise Linear Parametrization of Policies: Towards Interpretable Deep Reinforcement Learning}, author={Maxime Wabartha and Joelle Pineau}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=Zbt9z0a95l} }
Learning inherently interpretable policies is a central challenge in the path to developing autonomous agents that humans can trust. We argue for the use of policies that are piecewise-linear. We carefully study to what extent they can retain the interpretable properties of linear policies while performing competitively with neural baselines. In particular, we propose the HyperCombinator (HC), a piecewise-linear neural architecture expressing a policy with a controllably small number of sub-policies. Each sub-policy is linear with respect to interpretable features, shedding light on the agent's decision process without needing an additional explanation model. We evaluate HC policies in control and navigation experiments, visualize the improved interpretability of the agent and highlight its trade-off with performance.
Piecewise Linear Parametrization of Policies: Towards Interpretable Deep Reinforcement Learning
[ "Maxime Wabartha", "Joelle Pineau" ]
Workshop/XAIA
2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=YVQSGT6ME0
@inproceedings{ chaudhary2023comet, title={{COMET}: Cost Model Explanation Framework}, author={Isha Chaudhary and Alex Renda and Charith Mendis and Gagandeep Singh}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=YVQSGT6ME0} }
Cost models predict the cost of executing given assembly code basic blocks on a specific microarchitecture. Recently, neural cost models have been shown to be fairly accurate and easy to construct. They can replace heavily engineered analytical cost models used in compilers. However, their black-box nature discourages their adoption. In this work, we develop the first framework, COMET, for generating faithful, generalizable, and intuitive explanations for neural cost models. We generate and compare COMET’s explanations for the popular neural cost model, Ithemal against those for an accurate CPU simulation-based cost model, uiCA. We obtain an empirical inverse correlation between the prediction errors of Ithemal and uiCA and the granularity of basic block features in COMET’s explanations for them, indicating potential reasons for Ithemal’s higher error with respect to uiCA.
COMET: Neural Cost Model Explanation Framework
[ "Isha Chaudhary", "Alex Renda", "Charith Mendis", "Gagandeep Singh" ]
Workshop/XAIA
2023
2302.06836
[ "https://github.com/uiuc-focal-lab/comet" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=WyBAWwpqTY
@inproceedings{ zimmermann2023scale, title={Scale Alone Does not Improve Mechanistic Interpretability in Vision Models}, author={Roland Zimmermann and Thomas Klein and Wieland Brendel}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=WyBAWwpqTY} }
In light of the recent widespread adoption of AI systems, understanding the internal information processing of neural networks has become increasingly critical. Most recently, machine vision has seen remarkable progress by scaling neural networks to unprecedented levels in dataset and model size. We here ask whether this extraordinary increase in scale also positively impacts the field of mechanistic interpretability. In other words, has our understanding of the inner workings of scaled neural networks improved as well? We use a psychophysical paradigm to quantify one form of mechanistic interpretability for a diverse suite of nine models and find no scaling effect for interpretability - neither for model nor dataset size. Specifically, none of the investigated state-of-the-art models are easier to interpret than the GoogLeNet model from almost a decade ago. Latest-generation vision models appear even less interpretable than older architectures, hinting at a regression rather than improvement, with modern models sacrificing interpretability for accuracy. These results highlight the need for models explicitly designed to be mechanistically interpretable and the need for more helpful interpretability methods to increase our understanding of networks at an atomic level. We release a dataset containing more than 130'000 human responses from our psychophysical evaluation of 767 units across nine models. This dataset facilitates research on automated instead of human-based interpretability evaluations, which can ultimately be leveraged to directly optimize the mechanistic interpretability of models.
Scale Alone Does not Improve Mechanistic Interpretability in Vision Models
[ "Roland Zimmermann", "Thomas Klein", "Wieland Brendel" ]
Workshop/XAIA
2023
2307.05471
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ThwzmgEwm5
@inproceedings{ guo2023relax, title={ReLax: An Efficient and Scalable Recourse Explanation Benchmarking Library using {JAX}}, author={Hangzhi Guo and Xinchang Xiong and Wenbo Zhang and Amulya Yadav}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=ThwzmgEwm5} }
Despite the progress made in the field of algorithmic recourse, current research practices remain constrained, largely restricting benchmarking and evaluation of recourse methods to medium-sized datasets (approximately 50k data points) due to the severe runtime overhead of recourse generation. This constraint impedes the pace of research development in algorithmic recourse and raises concerns about the scalability of existing methods. To mitigate this problem, we propose ReLax, a JAX-based benchmarking library, designed for efficient and scalable recourse explanations. ReLax supports a wide range of recourse methods and datasets and offers performance improvements of at least two orders of magnitude over existing libraries. Notably, we demonstrate that ReLax is capable of benchmarking real-world datasets of up to 10M data points, roughly 200 times the scale of current practices, without imposing prohibitive computational costs. ReLax is fully open-sourced and can be accessed at https://github.com/BirkhoffG/jax-relax.
ReLax: An Efficient and Scalable Recourse Explanation Benchmarking Library using JAX
[ "Hangzhi Guo", "Xinchang Xiong", "Wenbo Zhang", "Amulya Yadav" ]
Workshop/XAIA
2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=SCcOu4hJ97
@inproceedings{ leemann2023caution, title={Caution to the Exemplars: On the Intriguing Effects of Example Choice on Human Trust in {XAI}}, author={Tobias Leemann and Yao Rong and Thai-Trang Nguyen and Enkelejda Kasneci and Gjergji Kasneci}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=SCcOu4hJ97} }
In model audits explainable AI (XAI) systems are usually presented to human auditors on a limited number of examples due to time constraints. However, recent literature has suggested that in order to establish trust in ML models, it is not only the model’s overall performance that matters but also the specific examples on which it is correct. In this work, we study this hypothesis through a controlled user study with N = 320 participants. On a tabular and an image dataset, we show model explanations to users on examples that are categorized as ambiguous or unambiguous. For ambiguous examples, there is disagreement on the correct label among human raters whereas for unambiguous examples human labelers agree. We find that ambiguity can have a substantial effect on human trust, which is however influenced by surprising interactions of the data modality and explanation quality. While unambiguous examples boost trust for explanations that remain plausible, they also help auditors identify highly implausible explanations, thereby decreasing trust. Our results suggest paying closer attention to the selected examples in the presentation of XAI techniques.
Caution to the Exemplars: On the Intriguing Effects of Example Choice on Human Trust in XAI
[ "Tobias Leemann", "Yao Rong", "Thai-Trang Nguyen", "Enkelejda Kasneci", "Gjergji Kasneci" ]
Workshop/XAIA
2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=QPqL9xsYOf
@inproceedings{ alvarez-napagao2023policy, title={Policy graphs in action: explaining single- and multi-agent behaviour using predicates}, author={Sergio Alvarez-Napagao and Adri{\'a}n Tormos and Victor Abalos and Dmitry Gnatyshak}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=QPqL9xsYOf} }
This demo shows that policy graphs (PGs) provide reliable explanations of the behaviour of agents trained in two distinct environments. Additionally, this work shows the ability to generate surrogate agents using PGs that exhibit accurate behavioral resemblances to the original agents and that this feature allows us to validate the explanations given by the system. This facilitates transparent integration of opaque agents into socio-technical systems, ensuring explainability of their actions and decisions, enabling trust in hybrid human-AI environments, and ensuring cooperative efficacy. We present demonstrations based on two environments and we present a work-in-progress library that will allow integration with a broader range of environments and types of agent policies.
Policy graphs in action: explaining single- and multi-agent behaviour using predicates
[ "Adrián Tormos", "Victor Abalos", "Dmitry Gnatyshak", "Sergio Alvarez-Napagao" ]
Workshop/XAIA
2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=OIbmpF4ZR9
@inproceedings{ ziems2023explaining, title={Explaining Tree Model Decisions in Natural Language for Network Intrusion Detection}, author={Noah Ziems and Gang Liu and John Flanagan and Meng Jiang}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=OIbmpF4ZR9} }
Network intrusion detection (NID) systems which leverage machine learning have been shown to have strong performance in practice when used to detect malicious network traffic. Decision trees in particular offer a strong balance between performance and simplicity, but require users of NID systems to have background knowledge in machine learning to interpret. In addition, they are unable to provide additional outside information as to why certain features may be important for classification. In this work, we explore the use of large language models (LLMs) to provide explanations and additional background knowledge for decision tree NID systems. Further, we introduce a new human evaluation framework for decision tree explanations, which leverages automatically generated quiz questions that measure human evaluators' understanding of decision tree inference. Finally, we show LLM generated decision tree explanations correlate highly with human ratings of readability, quality, and use of background knowledge while simultaneously providing better understanding of decision boundaries.
Explaining Tree Model Decisions in Natural Language for Network Intrusion Detection
[ "Noah Ziems", "Gang Liu", "John Flanagan", "Meng Jiang" ]
Workshop/XAIA
2023
2310.19658
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=N5RmOXuTDo
@inproceedings{ ho2023obey, title={ObEy Anything: Quantifiable Object-based Explainability without Ground Truth Annotations}, author={William Ho and Lennart Schulze and Richard Zemel}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=N5RmOXuTDo} }
Neural networks are at the core of AI systems recently observing accelerated adoption in high-stakes environments. Consequently, understanding their black-box predictive behavior is paramount. Current explainable AI techniques, however, are limited to explaining a single prediction, rather than characterizing the inherent ability of the model to be explained, reducing their usefulness to manual inspection of samples. In this work, we offer a conceptual distinction between explanation methods and explainability. We use this motivation to propose Object-based Explainability (ObEy), a novel model explainability metric that collectively assesses model-produced saliency maps relative to objects in images, inspired by humans’ perception of scenes. To render ObEy independent of the prediction task, we use full-image instance segmentations obtained from a foundation model, making the metric applicable on existing models in any setting. We demonstrate ObEy’s immediate applicability to use cases in model inspection and comparison. As a result, we present new insights into the explainability of adversarially trained models from a quantitative perspective.
ObEy: Quantifiable Object-based Explainability without Ground-Truth Annotations
[ "Lennart Schulze", "William Ho", "Richard Zemel" ]
Workshop/XAIA
2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=Liw9vOCxe2
@inproceedings{ martinez2023costaware, title={Cost-aware counterfactuals for black box explanations}, author={Natalia Martinez and Kanthi Sarpatwar and Sumanta Mukherjee and Roman Vaculin}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=Liw9vOCxe2} }
Counterfactual explanations provide actionable insights into the minimal change in a system that would lead to a more desirable prediction from a black box model. We address the challenges of finding valid and low cost counterfactuals in the setting where there is a different cost or preference for perturbing each feature. We propose a multiplicative weight approach that is applied on the perturbation, and show that this simple approach can be easily adapted to obtain multiple diverse counterfactuals, as well as to integrate the importance features obtained by other state of the art explainers to provide counterfactual examples. Additionally, we discuss the computation of valid counterfactuals with numerical gradient-based methods when the black box model presents flat regions with no reliable gradient. In this scenario, sampling approaches, as well as those that rely on available data, sometimes provide counterfactuals that may not be close to the decision boundary. We show that a simple long-range guidance approach, which consist of sampling from a larger radius sphere in search of a direction of change for the black box predictor when no gradient is available, improves the quality of the counterfactual explanation. In this work we discuss existing approaches, and show how our proposed alternatives compares favourably on different datasets and metrics.
Cost-aware counterfactuals for black box explanations
[ "Natalia Martinez", "Kanthi Sarpatwar", "Sumanta Mukherjee", "Roman Vaculin" ]
Workshop/XAIA
2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=KPtW2SU0my
@inproceedings{ barr2023the, title={The Disagreement Problem in Faithfulness Metrics}, author={Brian Barr and Noah Fatsi and Leif Hancox-Li and Peter Richter and Daniel Proano}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=KPtW2SU0my} }
The field of explainable artificial intelligence (XAI) aims to explain how black-box machine learning models work. Much of the work centers around the holy grail of providing post-hoc feature attributions to any model architecture. While the pace of innovation around novel methods has slowed down, the question remains of how to choose a method, and how to make it fit for purpose. Recently, efforts around benchmarking XAI methods have suggested metrics for that purpose—but there are many choices. That bounty of choice still leaves an end user unclear on how to proceed. This paper focuses on comparing metrics with the aim of measuring faithfulness of local explanations on tabular classification problems—and shows that the current metrics don’t agree; leaving users unsure how to choose the most faithful explanations.
The Disagreement Problem in Faithfulness Metrics
[ "Brian Barr", "Noah Fatsi", "Leif Hancox-Li", "Peter Richter", "Daniel Proano" ]
Workshop/XAIA
2023
2311.07763
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=JqfN8vp1ov
@inproceedings{ ulrich2023interactive, title={Interactive Visual Feature Search}, author={Devon Ulrich and Ruth Fong}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=JqfN8vp1ov} }
Many visualization techniques have been created to explain the behavior of computer vision models, but they largely consist of static diagrams that convey limited information. Interactive visualizations allow users to more easily interpret a model's behavior, but most are not easily reusable for new models. We introduce Visual Feature Search, a novel interactive visualization that is adaptable to any CNN and can easily be incorporated into a researcher's workflow. Our tool allows a user to highlight an image region and search for images from a given dataset with the most similar model features. We demonstrate how our tool elucidates different aspects of model behavior by performing experiments on a range of applications, such as in medical imaging and wildlife classification.
Interactive Visual Feature Search
[ "Devon Ulrich", "Ruth Fong" ]
Workshop/XAIA
2023
2211.15060
[ "https://github.com/lookingglasslab/visualfeaturesearch" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=GL7RDOru1k
@inproceedings{ jiang2023empowering, title={Empowering Domain Experts to Detect Social Bias in Generative {AI} with User-Friendly Interfaces}, author={Roy Jiang and Rafal Kocielnik and Adhithya Prakash Saravanan and Pengrui Han and R. Michael Alvarez and Anima Anandkumar}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=GL7RDOru1k} }
Generative AI models have become vastly popular and drive advances in all aspects of the modern economy. Detecting and quantifying the implicit social biases that they inherit in training, such as racial and gendered biases, is a critical first step in avoiding discriminatory outcomes. However, current methods are difficult to use and inflexible, presenting an obstacle for domain experts such as social scientists, ethicists, and gender studies experts. We present two comprehensive open-source bias testing tools (BiasTestGPT for PLMs and BiasTestVQA for VQA models) hosted on HuggingFace to address this challenge. With these tools, we provide intuitive and flexible tools for social bias testing in generative AI models, allowing for unprecedented ease in detecting and quantifying social bias across multiple generative AI models and mediums.
Empowering Domain Experts to Detect Social Bias in Generative AI with User-Friendly Interfaces
[ "Roy Jiang", "Rafal Kocielnik", "Adhithya Prakash Saravanan", "Pengrui Han", "R. Michael Alvarez", "Anima Anandkumar" ]
Workshop/XAIA
2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=FSmlu6xrUt
@inproceedings{ marcinkevi{\v{c}}s2023beyond, title={Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable?}, author={Ri{\v{c}}ards Marcinkevi{\v{c}}s and Sonia Laguna and Moritz Vandenhirtz and Julia Vogt}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=FSmlu6xrUt} }
Recently, interpretable machine learning has re-explored concept bottleneck models (CBM), comprising step-by-step prediction of the high-level concepts from the raw features and the target variable from the predicted concepts. A compelling advantage of this model class is the user's ability to intervene on the predicted concept values, consequently affecting the model's downstream output. In this work, we introduce a method to perform such concept-based interventions on already-trained neural networks, which are not interpretable by design. Furthermore, we formalise the model's *intervenability* as a measure of the effectiveness of concept-based interventions and leverage this definition to fine-tune black-box models. Empirically, we explore the intervenability of black-box classifiers on synthetic tabular and natural image benchmarks. We demonstrate that fine-tuning improves intervention effectiveness and often yields better-calibrated predictions. To showcase the practical utility of the proposed techniques, we apply them to chest X-ray classifiers and show that fine-tuned black boxes can be as intervenable and more performant than CBMs.
Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable?
[ "Ričards Marcinkevičs", "Sonia Laguna", "Moritz Vandenhirtz", "Julia Vogt" ]
Workshop/XAIA
2023
2401.13544
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=F6RPYDUIZr
@inproceedings{ raman2023do, title={Do Concept Bottleneck Models Obey Locality?}, author={Naveen Raman and Mateo Espinosa Zarlenga and Juyeon Heo and Mateja Jamnik}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=F6RPYDUIZr} }
Concept-based learning improves a deep learning model's interpretability by explaining its predictions via human-understandable concepts. Deep learning models trained under this paradigm heavily rely on the assumption that neural networks can learn to predict the presence or absence of a given concept independently of other concepts. Recent work, however, strongly suggests that this assumption may fail to hold in Concept Bottleneck Models (CBMs), a quintessential family of concept-based interpretable architectures. In this paper, we investigate whether CBMs correctly capture the degree of conditional independence across concepts when such concepts are localised both \textit{spatially}, by having their values entirely defined by a fixed subset of features, and \textit{semantically}, by having their values correlated with only a fixed subset of predefined concepts. To understand locality, we analyse how changes to features outside of a concept's spatial or semantic locality impact concept predictions. Our results suggest that even in well-defined scenarios where the presence of a concept is localised to a fixed feature subspace, or whose semantics are correlated to a small subset of other concepts, CBMs fail to learn this locality. These results cast doubt upon the quality of concept representations learnt by CBMs and strongly suggest that concept-based explanations may be fragile to changes outside their localities.
Do Concept Bottleneck Models Obey Locality?
[ "Naveen Raman", "Mateo Espinosa Zarlenga", "Juyeon Heo", "Mateja Jamnik" ]
Workshop/XAIA
2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=DkyNNQPmSj
@inproceedings{ piratla2023estimation, title={Estimation of Concept Explanations Should be Uncertainty Aware}, author={Vihari Piratla and Juyeon Heo and Sukriti Singh and Adrian Weller}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=DkyNNQPmSj} }
Model explanations are very valuable for interpreting and debugging prediction models. We study a specific kind of global explanations called Concept Explanations, where the goal is to interpret a model using human-understandable concepts. Recent advances in multi-modal learning rekindled interest in concept explanations and led to several label-efficient proposals for estimation. However, existing estimation methods are unstable to the choice of concepts or dataset that is used for computing explanations. We observe that instability in explanations is because estimations do not model noise. We propose an uncertainty aware estimation method, which readily improved reliability of the concept explanations. We demonstrate with theoretical analysis and empirical evaluation that explanations computed by our method are stable to the choice of concepts and data shifts while also being label-efficient and faithful.
Estimation of Concept Explanations Should be Uncertainty Aware
[ "Vihari Piratla", "Juyeon Heo", "Sukriti Singh", "Adrian Weller" ]
Workshop/XAIA
2023
2312.08063
[ "https://github.com/vps-anonconfs/uace" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=CKPGhnMADQ
@inproceedings{ chan2023optimising, title={Optimising Human-{AI} Collaboration by Learning Convincing Explanations}, author={Alex Chan and Alihan H{\"u}y{\"u}k and Mihaela van der Schaar}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=CKPGhnMADQ} }
Machine learning models are being increasingly deployed to take, or assist in taking, complicated and high-impact decisions, from quasi-autonomous vehicles to clinical decision support systems. This poses challenges, particularly when models have hard-to-detect failure modes and are able to take actions without oversight. In order to handle this challenge, we propose a method for a collaborative system that remains safe by having a human ultimately making decisions, while giving the model the best opportunity to convince and debate them with interpretable explanations. However, the most helpful explanation varies among individuals and may be inconsistent across stated preferences. To this end we develop an algorithm, Ardent, to efficiently learn a ranking through interaction and best assist humans complete a task. By utilising a collaborative approach, we can ensure safety and improve performance while addressing transparency and accountability concerns. Ardent enables efficient and effective decision-making by adapting to individual preferences for explanations, which we validate through extensive simulations alongside a user study involving a challenging image classification task, demonstrating consistent improvement over competing systems.
Optimising Human-AI Collaboration by Learning Convincing Explanations
[ "Alex Chan", "Alihan Hüyük", "Mihaela van der Schaar" ]
Workshop/XAIA
2023
2311.07426
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ANrzX5KFAG
@inproceedings{ madaan2023diffusionguided, title={Diffusion-Guided Counterfactual Generation for Model Explainability}, author={Nishtha Madaan and Srikanta Bedathur}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=ANrzX5KFAG} }
Generating counterfactual explanations is one of the most effective approaches for uncovering the inner workings of black-box neural network models and building user trust. While remarkable strides have been made in generative modeling using diffusion models in domains like vision, their utility in generating counterfactual explanations in structured modalities remains unexplored. In this paper, we introduce Structured Counterfactual Diffuser or SCD, the first plug-and-play framework leveraging diffusion for generating counterfactual explanations in structured data. SCD learns the underlying data distribution via a diffusion model which is then guided at test time to generate counterfactuals for any arbitrary black-box model, input, and desired prediction. Our experiments show that our counterfactuals not only exhibit high plausibility compared to the existing state-of-the-art but also show significantly better proximity and diversity.
Diffusion-Guided Counterfactual Generation for Model Explainability
[ "Nishtha Madaan", "Srikanta Bedathur" ]
Workshop/XAIA
2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=9yXEqVKacK
@inproceedings{ kori2023glance, title={{GLANCE}: Global to Local Architecture-Neutral Concept-based Explanations}, author={Avinash Kori and Ben Glocker and Francesca Toni}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=9yXEqVKacK} }
Most of the current explainability techniques focus on capturing the importance of features in input space. However, given the complexity of models and data-generating processes, the resulting explanations are far from being complete, in that they lack an indication of feature interactions and visualization of their effect. In this work, we propose a novel surrogate-model-based explainability framework to explain the decisions of any CNN-based image classifiers by extracting causal relations between the features. These causal relations serve as global explanations from which local explanations of different forms can be obtained. Specifically, we employ a generator to visualize the `effect' of interactions among features in latent space and draw feature importance therefrom as local explanations. We demonstrate and evaluate explanations obtained with our framework on the Morpho-MNIST, the FFHQ, and the AFHQ datasets.
GLANCE: Global to Local Architecture-Neutral Concept-based Explanations
[ "Avinash Kori", "Ben Glocker", "Francesca Toni" ]
Workshop/XAIA
2023
2207.01917
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=9i4AcMYE6o
@inproceedings{ harel2023inherent, title={Inherent Inconsistencies of Feature Importance}, author={Nimrod Harel and Uri Obolski and Ran Gilad-Bachrach}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=9i4AcMYE6o} }
The rapid advancement and widespread adoption of machine learning-driven technologies have underscored the practical and ethical need for creating interpretable artificial intelligence systems. Feature importance, a method that assigns scores to the contribution of individual features on prediction outcomes, seeks to bridge this gap as a tool for enhancing human comprehension of these systems. Feature importance serves as an explanation of predictions in diverse contexts, whether by providing a global interpretation of a phenomenon across the entire dataset or by offering a localized explanation for the outcome of a specific data point. Furthermore, feature importance is being used both for explaining models and for identifying plausible causal relations in the data, independently from the model. However, it is worth noting that these various contexts have traditionally been explored in isolation, with limited theoretical foundations. This paper presents an axiomatic framework designed to establish coherent relationships among the different contexts of feature importance scores. Notably, our work unveils a surprising conclusion: when we combine the proposed properties with those previously outlined in the literature, we demonstrate the existence of an inconsistency. This inconsistency highlights that certain essential properties of feature importance scores cannot coexist harmoniously within a single framework.
Inherent Inconsistencies of Feature Importance
[ "Nimrod Harel", "Uri Obolski", "Ran Gilad-Bachrach" ]
Workshop/XAIA
2023
2206.08204
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=8BR8EaWNTZ
@inproceedings{ chaleshtori2023on, title={On Evaluating Explanation Utility for Human-{AI} Decision-Making in {NLP}}, author={Fateme Hashemi Chaleshtori and Atreya Ghosal and Ana Marasovic}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=8BR8EaWNTZ} }
Is explainability a false promise? This debate has emerged from the lack of consistent evidence that explanations help in situations they are introduced for. In NLP, the evidence is not only inconsistent but also scarce. While there is a clear need for more human-centered, application-grounded evaluations, it is less clear where NLP researchers should begin if they want to conduct them. To address this, we introduce evaluation guidelines established through an extensive review and meta-analysis of related work.
On Evaluating Explanation Utility for Human-AI Decision-Making in NLP
[ "Fateme Hashemi Chaleshtori", "Atreya Ghosal", "Ana Marasovic" ]
Workshop/XAIA
2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=81FSrQxgEv
@inproceedings{ laguna2023explimeable, title={Exp{LIME}able: An exploratory framework for {LIME}}, author={Sonia Laguna and Julian Heidenreich and Jiugeng Sun and Nil{\"u}fer Cetin and Ibrahim Al Hazwani and Udo Schlegel and Furui Cheng and Mennatallah El-Assady}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=81FSrQxgEv} }
ExpLIMEable is a tool to enhance the comprehension of Local Interpretable Model-Agnostic Explanations (LIME), particularly within the realm of medical image analysis. LIME explanations often lack robustness due to variances in perturbation techniques and interpretable function choices. Powered by a convolutional neural network for brain MRI tumor classification, \textit{ExpLIMEable} seeks to mitigate these issues. This explainability tool allows users to tailor and explore the explanation space generated post hoc by different LIME parameters to gain deeper insights into the model's decision-making process, its sensitivity, and limitations. We introduce a novel dimension reduction step on the perturbations seeking to find more informative neighborhood spaces and extensive provenance tracking to support the user. This contribution ultimately aims to enhance the robustness of explanations, key in high-risk domains like healthcare.
ExpLIMEable: An exploratory framework for LIME
[ "Sonia Laguna", "Julian Heidenreich", "Jiugeng Sun", "Nilüfer Cetin", "Ibrahim Al Hazwani", "Udo Schlegel", "Furui Cheng", "Mennatallah El-Assady" ]
Workshop/XAIA
2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=3oysFpd6Pq
@inproceedings{ ghosh2023influence, title={Influence Based Approaches to Algorithmic Fairness: A Closer Look}, author={Soumya Ghosh and Prasanna Sattigeri and Inkit Padhi and Manish Nagireddy and Jie Chen}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=3oysFpd6Pq} }
Off-the-shelf pre-trained models are increasingly common in machine learning. When deployed in the real world, it is essential that such models are not just accurate but also demonstrate qualities like fairness. This paper takes a closer look at recently proposed approaches that edit a pre-trained model for group fairness by re-weighting the training data. We offer perspectives that unify disparate weighting schemes from past studies and pave the way for new weighting strategies to address group fairness concerns.
Influence Based Approaches to Algorithmic Fairness: A Closer Look
[ "Soumya Ghosh", "Prasanna Sattigeri", "Inkit Padhi", "Manish Nagireddy", "Jie Chen" ]
Workshop/XAIA
2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=3BX9tM03GT
@inproceedings{ singh2023explaining, title={Explaining black box text modules in natural language with language models}, author={Chandan Singh and Aliyah Hsu and Richard Antonello and Shailee Jain and Alexander Huth and Bin Yu and Jianfeng Gao}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=3BX9tM03GT} }
Large language models (LLMs) have demonstrated remarkable prediction performance for a growing array of tasks. However, their rapid proliferation and increasing opaqueness have created a growing need for interpretability. Here, we ask whether we can automatically obtain natural language explanations for black box text modules. A *text module* is any function that maps text to a scalar continuous value, such as a submodule within an LLM or a fitted model of a brain region. *Black box* indicates that we only have access to the module's inputs. We introduce Summarize and Score (SASC), a method that takes in a text module and returns a natural language explanation of the module's selectivity along with a score for how reliable the explanation. We study SASC in 2 contexts. First, we evaluate SASC on synthetic modules and find that it often recovers ground truth explanations. Second, we use SASC to explain modules found within a pre-trained BERT model, enabling inspection of the model's internals.
Explaining black box text modules in natural language with language models
[ "Chandan Singh", "Aliyah Hsu", "Richard Antonello", "Shailee Jain", "Alexander Huth", "Bin Yu", "Jianfeng Gao" ]
Workshop/XAIA
2023
2305.09863
[ "https://github.com/microsoft/automated-explanations" ]
https://huggingface.co/papers/2305.09863
5
3
0
7
1
[]
[]
[]
null
https://openreview.net/forum?id=2CfzKrx1vr
@inproceedings{ heo2023use, title={Use Perturbations when Learning from Explanations}, author={Juyeon Heo and Vihari Piratla and Matthew Wicker and Adrian Weller}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=2CfzKrx1vr} }
Machine learning from explanations (MLX) is an approach to learning that uses human-provided explanations of relevant or irrelevant features for each input to ensure that model predictions are right for the right reasons. Existing MLX approaches rely on local model interpretation methods and require strong model smoothing to align model and human explanations, leading to sub-optimal performance. We recast MLX as a robustness problem, where human explanations specify a lower dimensional manifold from which perturbations can be drawn, and show both theoretically and empirically how this approach alleviates the need for strong model smoothing. We consider various approaches to achieving robustness, leading to improved performance over prior MLX methods. Finally, we show how to combine robustness with an earlier MLX method, yielding state-of-the-art results on both synthetic and real-world benchmarks.
Use Perturbations when Learning from Explanations
[ "Juyeon Heo", "Vihari Piratla", "Matthew Wicker", "Adrian Weller" ]
Workshop/XAIA
2023
2303.06419
[ "https://github.com/vihari/robust_mlx" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=112o4j4VCY
@inproceedings{ kasmi2023assessment, title={Assessment of the Reliablity of a Model's Decision by Generalizing Attribution to the Wavelet Domain}, author={Gabriel Kasmi and Laurent Dubus and Yves-Marie Saint-Drenan and Philippe BLANC}, booktitle={XAI in Action: Past, Present, and Future Applications}, year={2023}, url={https://openreview.net/forum?id=112o4j4VCY} }
Neural networks have shown remarkable performance in computer vision, but their deployment in numerous scientific and technical fields is challenging due to their black-box nature. Scientists and practitioners need to evaluate the reliability of a decision, i.e., to know simultaneously if a model relies on the relevant features and whether these features are robust to image corruptions. Existing attribution methods aim to provide human-understandable explanations by highlighting important regions in the image domain, but fail to fully characterize a decision process's reliability. To bridge this gap, we introduce the Wavelet sCale Attribution Method (WCAM), a generalization of attribution from the pixel domain to the space-scale domain using wavelet transforms. Attribution in the wavelet domain reveals where and on what scales the model focuses, thus enabling us to assess whether a decision is reliable. Our code is accessible here: \url{https://github.com/gabrielkasmi/spectral-attribution}.
Assessment of the Reliablity of a Model's Decision by Generalizing Attribution to the Wavelet Domain
[ "Gabriel Kasmi", "Laurent Dubus", "Yves-Marie Saint-Drenan", "Philippe BLANC" ]
Workshop/XAIA
2023
2305.14979
[ "https://github.com/gabrielkasmi/spectral-attribution" ]
https://huggingface.co/papers/2305.14979
0
0
0
4
1
[]
[]
[]
null
https://openreview.net/forum?id=yqGoKziEvY
@inproceedings{ herrmann2023learning, title={Learning Useful Representations of Recurrent Neural Network Weight Matrices}, author={Vincent Herrmann and Francesco Faccio and J{\"u}rgen Schmidhuber}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=yqGoKziEvY} }
Recurrent Neural Networks (RNNs) are general-purpose parallel-sequential computers. The program of an RNN is its weight matrix. Its direct analysis, however, tends to be challenging. Is it possible to learn useful representations of RNN weights that facilitate downstream tasks? While the "Mechanistic Approach" directly 'looks inside' the RNN to predict its behavior, the "Functionalist Approach" analyzes its overall functionality---specifically, its input-output mapping. Our two novel Functionalist Approaches extract information from RNN weights by 'interrogating' the RNN through probing inputs. Our novel theoretical framework for the Functionalist Approach demonstrates conditions under which it can generate rich representations for determining the behavior of RNNs. RNN weight representations generated by Mechanistic and Functionalist approaches are compared by evaluating them in two downstream tasks. Our results show the superiority of Functionalist methods.
Learning Useful Representations of Recurrent Neural Network Weight Matrices
[ "Vincent Herrmann", "Francesco Faccio", "Jürgen Schmidhuber" ]
Workshop/NeurReps
poster
[ "https://github.com/vincentherrmann/rnn-weights-representation-learning" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=yW1HcKnFcG
@inproceedings{ chetan2023distance, title={Distance Learner: Incorporating Manifold Prior to Model Training}, author={Aditya Chetan and Nipun Kwatra}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=yW1HcKnFcG} }
The manifold hypothesis (real-world data concentrates near low-dimensional manifolds) is suggested as the principle behind the effectiveness of machine learning algorithms in very high-dimensional problems that are common in domains such as vision and speech. Multiple methods have been proposed to explicitly incorporate the manifold hypothesis as a prior in modern Deep Neural Networks (DNNs), with varying success. In this paper, we propose a new method, Distance Learner, to incorporate this prior for DNN-based classifiers. Distance Learner is trained to predict the distance of a point from the underlying manifold of each class, rather than the class label. For classification, Distance Learner then chooses the class corresponding to the closest predicted class manifold. Distance Learner can also identify points as being out of distribution (belonging to neither class), if the distance to the closest manifold is higher than a threshold. We evaluate our method on multiple synthetic datasets and show that Distance Learner learns much more meaningful classification boundaries compared to a standard classifier. We also evaluate our method on the task of adversarial robustness and find that it not only outperforms standard classifiers by a large margin but also performs at par with classifiers trained via well-accepted standard adversarial training.
Distance Learner: Incorporating Manifold Prior to Model Training
[ "Aditya Chetan", "Nipun Kwatra" ]
Workshop/NeurReps
poster
2207.06888
[ "https://github.com/microsoft/distance-learner" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wIS0vop9R7
@inproceedings{ lecomte2023an, title={An Information-Theoretic Understanding of Maximum Manifold Capacity Representations}, author={Victor Lecomte and Rylan Schaeffer and Berivan Isik and Mikail Khona and Yann LeCun and Sanmi Koyejo and Andrey Gromov and Ravid Shwartz-Ziv}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=wIS0vop9R7} }
Maximum Manifold Capacity Representations (MMCR) is a recent multi-view self-supervised learning (MVSSL) method that matches or surpasses other leading MVSSL methods. MMCR is interesting for at least two reasons. Firstly, MMCR is an oddity in the zoo of MVSSL methods: it is not (explicitly) contrastive, applies no masking, performs no clustering, leverages no distillation, and does not (explicitly) reduce redundancy. Secondly, while many self-supervised learning (SSL) methods originate in information theory, MMCR distinguishes itself by claiming a different origin: a statistical mechanical characterization of the geometry of linear separability of data manifolds. However, given the rich connections between statistical mechanics and information theory, and given recent work showing how many SSL methods can be understood from an information-theoretic perspective, we conjecture that MMCR can be similarly understood from an information-theoretic perspective. In this paper, we leverage tools from high dimensional probability and information theory to demonstrate that an optimal solution to MMCR's nuclear norm-based objective function is the same optimal solution that maximizes a well-known lower bound on mutual information.
An Information-Theoretic Understanding of Maximum Manifold Capacity Representations
[ "Victor Lecomte", "Rylan Schaeffer", "Berivan Isik", "Mikail Khona", "Yann LeCun", "Sanmi Koyejo", "Andrey Gromov", "Ravid Shwartz-Ziv" ]
Workshop/NeurReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uOjSFxFz5k
@inproceedings{ sonoda2023joint, title={Joint Group Invariant Functions on Data-Parameter Domain Induce Universal Neural Networks}, author={Sho Sonoda and Hideyuki Ishi and Isao Ishikawa and Masahiro Ikeda}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=uOjSFxFz5k} }
The symmetry and geometry of input data are considered to be encoded in the internal data representation inside the neural network, but the specific encoding rule has been less investigated. In this study, we present a systematic method to induce a generalized neural network and its right inverse operator, called the ridgelet transform, from a joint group invariant function on the data-parameter domain. Since the ridgelet transform is an inverse, (1) it can describe the arrangement of parameters for the network to represent a target function, which is understood as the encoding rule, and (2) it implies the universality of the network. Based on the group representation theory, we present a new simple proof of the universality by using Schur's lemma in a unified manner covering a wide class of networks, for example, the original ridgelet transform, formal deep networks, and the dual voice transform. Since traditional universality theorems were demonstrated based on functional analysis, this study sheds light on the group theoretic aspect of the approximation theory, connecting geometric deep learning to abstract harmonic analysis.
Joint Group Invariant Functions on Data-Parameter Domain Induce Universal Neural Networks
[ "Sho Sonoda", "Hideyuki Ishi", "Isao Ishikawa", "Masahiro Ikeda" ]
Workshop/NeurReps
oral
2310.03530
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=u7r2160QiP
@inproceedings{ sortur2023sample, title={Sample Efficient Modeling of Drag Coefficients for Satellites with Symmetry}, author={Neel Sortur and Linfeng Zhao and Robin Walters}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=u7r2160QiP} }
Accurate knowledge of the atmospheric drag coefficient for a satellite in low Earth orbit is crucial to plan an orbit that avoids collisions with other spacecraft, but its calculation has high uncertainty and is very expensive to numerically compute for long-horizon predictions. Previous work has improved coefficient modeling speed with data-driven approaches, but these models do not utilize domain symmetry. This work investigates enforcing the invariance of atmospheric particle deflections off certain satellite geometries, resulting in higher sample efficiency and theoretically more robustness for data-driven methods. We train $G$-equivariant MLPs to predict the drag coefficient, where $G$ defines invariances of the coefficient across different orientations of the satellite. We experiment on a synthetic dataset computed using the numerical Test Particle Monte Carlo (TPMC) method, where particles are fired at a satellite in the computational domain. We find that our method is more sample and computationally efficient than unconstrained baselines, which is significant because TPMC simulations are extremely computationally expensive.
Sample Efficient Modeling of Drag Coefficients for Satellites with Symmetry
[ "Neel Sortur", "Linfeng Zhao", "Robin Walters" ]
Workshop/NeurReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=tIrGgIn8jr
@inproceedings{ lu2023ames, title={{AMES}: A Differentiable Embedding Space Selection Framework for Latent Graph Inference}, author={Yuan Lu and Haitz S{\'a}ez de Oc{\'a}riz Borde and Pietro Lio}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=tIrGgIn8jr} }
In real-world scenarios, although data entities may possess inherent relationships, the specific graph illustrating their connections might not be directly accessible. Latent graph inference addresses this issue by enabling Graph Neural Networks (GNNs) to operate on point cloud data, dynamically learning the necessary graph structure. These graphs are often derived from a latent embedding space, which can be modeled using Euclidean, hyperbolic, spherical, or product spaces. However, currently, there is no principled differentiable method for determining the optimal embedding space. In this work, we introduce the Attentional Multi-Embedding Selection (AMES) framework, a differentiable method for selecting the best embedding space for latent graph inference through backpropagation, considering a downstream task. Our framework consistently achieves comparable or superior results compared to previous methods for latent graph inference across five benchmark datasets. Importantly, our approach eliminates the need for conducting multiple experiments to identify the optimal embedding space. Furthermore, we explore interpretability techniques that track the gradient contributions of different latent graphs, shedding light on how our attention-based, fully differentiable approach learns to choose the appropriate latent space. In line with previous works, our experiments emphasize the advantages of hyperbolic spaces in enhancing performance. More importantly, our interpretability framework provides a general approach for quantitatively comparing embedding spaces across different tasks based on their contributions, a dimension that has been overlooked in previous literature on latent graph inference.
AMES: A Differentiable Embedding Space Selection Framework for Latent Graph Inference
[ "Yuan Lu", "Haitz Sáez de Ocáriz Borde", "Pietro Lio" ]
Workshop/NeurReps
poster
2311.11891
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rmdSVvC1Qk
@inproceedings{ vastola2023optimal, title={Optimal packing of attractor states in neural representations}, author={John Vastola}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=rmdSVvC1Qk} }
Animals' internal states reflect variables like their position in space, orientation, decisions, and motor actions—but how should these internal states be arranged? Internal states which frequently transition between one another should be close enough that transitions can happen quickly, but not so close that neural noise significantly impacts the stability of those states, and how reliably they can be encoded and decoded. In this paper, we study the problem of striking a balance between these two concerns, which we call an 'optimal packing' problem since it resembles mathematical problems like sphere packing. While this problem is generally extremely difficult, we show that symmetries in environmental transition statistics imply certain symmetries of the optimal neural representations, which allows us in some cases to exactly solve for the optimal state arrangement. We focus on two toy cases: uniform transition statistics, and cyclic transition statistics.
Optimal packing of attractor states in neural representations
[ "John Vastola" ]
Workshop/NeurReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ql3u5ITQ5C
@inproceedings{ murray2023grokking, title={Grokking in recurrent networks with attractive and oscillatory dynamics}, author={Keith Murray}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=ql3u5ITQ5C} }
Generalization is perhaps the most salient property of biological intelligence. In the context of artificial neural networks (ANNs), generalization has been studied through investigating the recently-discovered phenomenon of "grokking" whereby small transformers generalize on modular arithmetic tasks. We extend this line of work to continuous time recurrent neural networks (CT-RNNs) to investigate generalization in neural systems. Inspired by the card game SET, we reformulated previous modular arithmetic tasks as a binary classification task to elicit interpretable CT-RNN dynamics. We found that CT-RNNs learned one of two dynamical mechanisms characterized by either attractive or oscillatory dynamics. Notably, both of these mechanisms displayed strong parallels to deterministic finite automata (DFA). In our grokking experiments, we found that attractive dynamics generalize more frequently in training regimes with few withheld data points while oscillatory dynamics generalize more frequently in training regimes with many withheld data points.
Grokking in recurrent networks with attractive and oscillatory dynamics
[ "Keith Murray" ]
Workshop/NeurReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qMdWGydOli
@inproceedings{ portilheiro2023quantifying, title={Quantifying Lie Group Learning with Local Symmetry Error}, author={Vasco Portilheiro}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=qMdWGydOli} }
Despite increasing interest in using machine learning to discover symmetries, no quantitative measure has been proposed in order to compare the performance of different algorithms. Our proposal, both intuitively and theoretically grounded, is to compare Lie groups using a *local symmetry error*, based on the difference between their infinitesimal actions at any possible datapoint. Namely, we use a well-studied metric to compare the induced tangent spaces. We obtain an upper bound on this metric which is uniform across datapoints, under some conditions. We show that when one of the groups is a circle group, this bound is furthermore both tight and easily computable, thus globally characterizing the local errors. We demonstrate our proposal by quantitatively evaluating an existing algorithm. We note that our proposed metric has deficiencies in comparing tangent spaces of different dimensions, as well as distinct groups whose local actions are similar.
Quantifying Lie Group Learning with Local Symmetry Error
[ "Vasco Portilheiro" ]
Workshop/NeurReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=q1zZJrXoIe
@inproceedings{ feng2023how, title={How do language models bind entities in context?}, author={Jiahai Feng and Jacob Steinhardt}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=q1zZJrXoIe} }
To correctly use in-context information, language models (LMs) must bind entities to their attributes. For example, given a context describing a "green square" and a "blue circle", LMs must bind the shapes to their respective colors. We analyze LM representations and identify the binding ID mechanism: a general mechanism for solving the binding problem, which we observe in every sufficiently large model from the Pythia and LLaMA families. Using causal interventions, we show that LMs' internal activations represent binding information by attaching binding ID vectors to corresponding entities and attributes. We further show that binding ID vectors form a continuous subspace, in which distances between binding ID vectors reflect their discernability. Overall, our results uncover interpretable strategies in LMs for representing symbolic knowledge in-context, providing a step towards understanding general in-context reasoning in large-scale LMs.
How do language models bind entities in context?
[ "Jiahai Feng", "Jacob Steinhardt" ]
Workshop/NeurReps
poster
2310.17191
[ "https://github.com/jiahai-feng/binding-iclr" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=oD8DD5jQ5I
@inproceedings{ charvin2023towards, title={Towards Information Theory-Based Discovery of Equivariances}, author={Hippolyte Charvin and Nicola Catenacci Volpi and Daniel Polani}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=oD8DD5jQ5I} }
The presence of symmetries imposes a stringent set of constraints on a system. This constrained structure allows intelligent agents interacting with such a system to drastically improve the efficiency of learning and generalization, through the internalisation of the system's symmetries into their information-processing. In parallel, principled models of complexity-constrained learning and behaviour make increasing use of information-theoretic methods. Here, we wish to marry these two perspectives and understand whether and in which form the information-theoretic lens can ``see'' the effect of symmetries of a system. For this purpose, we propose a novel variant of the Information Bottleneck principle, which has served as a productive basis for many principled studies of learning and information-constrained adaptive behaviour. We show (in the discrete case) that our approach formalises a certain duality between symmetry and information parsimony: namely, channel equivariances can be characterised by the optimal mutual information-preserving joint compression of the channel's input and output. This information-theoretic treatment furthermore suggests a principled notion of "soft" equivariance, whose "coarseness" is measured by the amount of input-output mutual information preserved by the corresponding optimal compression. This new notion offers a bridge between the field of bounded rationality and the study of symmetries in neural representations. The framework may also allow (exact and soft) equivariances to be automatically discovered.
Towards Information Theory-Based Discovery of Equivariances
[ "Hippolyte Charvin", "Nicola Catenacci Volpi", "Daniel Polani" ]
Workshop/NeurReps
oral
2310.16555
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=mQ1gpEXE3W
@inproceedings{ zhao2023improving, title={Improving Convergence and Generalization Using Parameter Symmetries}, author={Bo Zhao and Robert Gower and Robin Walters and Rose Yu}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=mQ1gpEXE3W} }
In overparametrized models, different parameter values may result in the same loss. Parameter space symmetries are loss-invariant transformations that change the model parameters. Teleportation applies such transformations to accelerate optimization. However, the exact mechanism behind this algorithm's success is not well understood. In this paper, we prove that teleportation gives overall faster time to convergence. Additionally, teleporting to minima with different curvatures improves generalization, which suggests a connection between the curvature of the minima and generalization ability. Finally, we show that integrating teleportation into optimization-based meta-learning improves convergence over traditional algorithms that perform only local updates. Our results showcase the versatility of teleportation and demonstrate the potential of incorporating symmetry in optimization.
Improving Convergence and Generalization Using Parameter Symmetries
[ "Bo Zhao", "Robert Gower", "Robin Walters", "Rose Yu" ]
Workshop/NeurReps
poster
2305.13404
[ "https://github.com/rose-stl-lab/teleportation-optimization" ]
https://huggingface.co/papers/2305.13404
1
0
0
4
1
[]
[]
[]
null
https://openreview.net/forum?id=kLwwaBdWAJ
@inproceedings{ versteeg2023expressive, title={Expressive dynamics models with nonlinear injective readouts enable reliable recovery of latent features from neural activity}, author={Christopher Versteeg and Andrew Sedler and Jonathan McCart and Chethan Pandarinath}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=kLwwaBdWAJ} }
An emerging framework in neuroscience uses the rules that govern how a neural circuit's state evolves over time to understand the circuit's underlying computation. While these \textit{neural dynamics} cannot be directly measured, new techniques attempt to estimate them by modeling observed neural recordings as a low-dimensional latent dynamical system embedded into a higher-dimensional neural space. How these models represent the readout from latent space to neural space can affect the interpretability of the latent representation -- for example, for models with a linear readout could make simple, low-dimensional dynamics unfolding on a non-linear neural manifold appear excessively complex and high-dimensional. Additionally, standard readouts (both linear and non-linear) often lack injectivity, meaning that they don't obligate changes in latent state to directly affect activity in the neural space. During training, non-injective readouts incentivize the model to invent dynamics that misrepresent the underlying system and computation. To address the challenges presented by non-linearity and non-injectivity, we combined a custom readout with a previously developed low-dimensional latent dynamics model to create the Ordinary Differential equations autoencoder with Injective Nonlinear readout (ODIN). We generated a synthetic spiking dataset by non-linearly embedding activity from a low-dimensional dynamical system into higher-D neural activity. We show that, in contrast to alternative models, ODIN is able to recover ground-truth latent activity from these data even when the nature of the system and embedding are unknown. Additionally, we show that ODIN enables the unsupervised recovery of underlying dynamical features (e.g., fixed points) and embedding geometry (e.g., the neural manifold) over alternative models. Overall, ODIN's ability to recover ground-truth latent features with low dimensionality make it a promising method for distilling interpretable dynamics that can explain neural computation.
Expressive dynamics models with nonlinear injective readouts enable reliable recovery of latent features from neural activity
[ "Christopher Versteeg", "Andrew Sedler", "Jonathan McCart", "Chethan Pandarinath" ]
Workshop/NeurReps
oral
2309.06402
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=jdT7PuqdSt
@inproceedings{ shamsian2023data, title={Data Augmentations in Deep Weight Spaces}, author={Aviv Shamsian and David Zhang and Aviv Navon and Yan Zhang and Miltiadis Kofinas and Idan Achituve and Riccardo Valperga and Gertjan Burghouts and Efstratios Gavves and Cees Snoek and Ethan Fetaya and Gal Chechik and Haggai Maron}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=jdT7PuqdSt} }
Learning in weight spaces, where neural networks process the weights of other deep neural networks, has emerged as a promising research direction with applications in various fields, from analyzing and editing neural fields and implicit neural representations, to network pruning and quantization. Recent works designed architectures for effective learning in that space, which takes into account its unique, permutation-equivariant, structure. Unfortunately, so far these architectures suffer from severe overfitting and were shown to benefit from large datasets. This poses a significant challenge because generating data for this learning setup is laborious and time-consuming since each data sample is a full set of network weights that has to be trained. In this paper, we address this difficulty by investigating data augmentations for weight spaces, a set of techniques that enable generating new data examples on the fly without having to train additional input weight space elements. We first review several recently proposed data augmentation schemes and divide them into categories. We then introduce a novel augmentation scheme based on the Mixup method. We evaluate the performance of these techniques on existing benchmarks as well as new benchmarks we generate, which can be valuable for future studies.
Data Augmentations in Deep Weight Spaces
[ "Aviv Shamsian", "David Zhang", "Aviv Navon", "Yan Zhang", "Miltiadis Kofinas", "Idan Achituve", "Riccardo Valperga", "Gertjan Burghouts", "Efstratios Gavves", "Cees Snoek", "Ethan Fetaya", "Gal Chechik", "Haggai Maron" ]
Workshop/NeurReps
oral
2311.08851
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=gt9dDWc6GL
@inproceedings{ tipton2023haldane, title={Haldane Bundles: A Dataset for Learning to Predict the Chern Number of Line Bundles on the Torus}, author={Cody Tipton and Elizabeth Coda and Davis Brown and Alyson Bittner and Jung Lee and Grayson Jorgenson and Tegan Emerson and Henry Kvinge}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=gt9dDWc6GL} }
Characteristic classes, which are abstract topological invariants associated with vector bundles, have become an important notion in modern physics with surprising real-world consequences. As a representative example, the incredible properties of topological insulators, which are insulators in their bulk but conductors on their surface, can be completely characterized by a specific characteristic class associated with their electronic band structure, the first Chern class. Given their importance to next generation computing and the computational challenge of calculating them using first-principles approaches, there is a need to develop machine learning approaches to predict the characteristic classes associated with a material system. To aid in this program we introduce the *Haldane bundle dataset*, which consists of synthetically generated complex line bundles on the $2$-torus. We envision this dataset, which is not as challenging as noisy and sparsely measured real-world datasets but (as we show) still difficult for off-the-shelf architectures, to be a testing ground for architectures that incorporate the rich topological and geometric priors underlying characteristic classes.
Haldane Bundles: A Dataset for Learning to Predict the Chern Number of Line Bundles on the Torus
[ "Cody Tipton", "Elizabeth Coda", "Davis Brown", "Alyson Bittner", "Jung Lee", "Grayson Jorgenson", "Tegan Emerson", "Henry Kvinge" ]
Workshop/NeurReps
poster
2312.04600
[ "https://github.com/shadtome/haldane-bundles" ]
https://huggingface.co/papers/2312.04600
0
0
0
8
1
[]
[]
[]
null
https://openreview.net/forum?id=fv0W1Yyg2v
@inproceedings{ ramesh2023how, title={How Capable Can a Transformer Become? A Study on Synthetic, Interpretable Tasks}, author={Rahul Ramesh and Mikail Khona and Robert P. Dick and Hidenori Tanaka and Ekdeep Singh Lubana}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=fv0W1Yyg2v} }
Transformers trained on huge text corpora exhibit a remarkable set of capabilities. Given the inherent compositional nature of language, one can expect the model to learn to compose these capabilities, potentially yielding a combinatorial explosion of what operations it can perform on an input. Motivated by the above, we aim to assess in this paper “how capable can a transformer become?”. In this work, we train Transformer models on a data-generating process that involves compositions of a set of well-defined monolithic capabilities and show that: (1) Transformers generalize to exponentially or even combinatorially many functions not seen in the training data; (2) composing functions by generating intermediate outputs is more effective at generalizing to unseen compositions; (3) the training data has a significant impact on the model’s ability to compose functions (4) Attention layers in the latter half of the model seem critical to compositionality.
How Capable Can a Transformer Become? A Study on Synthetic, Interpretable Tasks
[ "Rahul Ramesh", "Mikail Khona", "Robert P. Dick", "Hidenori Tanaka", "Ekdeep Singh Lubana" ]
Workshop/NeurReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=fjLD4U1MP7
@inproceedings{ gupta2023structurewise, title={Structure-wise Uncertainty for Curvilinear Image Segmentation}, author={Saumya Gupta and Xiaoling Hu and Chao Chen}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=fjLD4U1MP7} }
Segmenting curvilinear structures like blood vessels and roads poses significant challenges due to their intricate geometry and weak signals. To expedite large-scale annotation, it is essential to adopt semi-automatic methods such as proofreading by human experts. In this abstract, we focus on estimating uncertainty for such tasks, so that highly uncertain, and thus error-prone structures can be identified for human annotators to verify. Unlike prior work that generates pixel-wise uncertainty maps, we believe it is essential to measure uncertainty in the units of topological structures, e.g., small pieces of connections and branches. To realize this, we employ tools from topological data analysis, specifically discrete Morse theory (DMT), to first extract the structures and then reason about their uncertainties. On multiple 2D and 3D datasets, our methodology generates superior structure-wise uncertainty maps compared to existing models.
Structure-wise Uncertainty for Curvilinear Image Segmentation
[ "Saumya Gupta", "Xiaoling Hu", "Chao Chen" ]
Workshop/NeurReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=eY6zf3mk4d
@inproceedings{ kvinge2023internal, title={Internal Representations of Vision Models Through the Lens of Frames on Data Manifolds}, author={Henry Kvinge and Grayson Jorgenson and Davis Brown and Charles Godfrey and Tegan Emerson}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=eY6zf3mk4d} }
While the last five years have seen considerable progress in understanding the internal representations of deep learning models, many questions remain. This is especially true when trying to understand the impact of model design choices, such as model architecture or training algorithm, on hidden representation geometry and dynamics. In this work we present a new approach to studying such representations inspired by the idea of a frame on the tangent bundle of a manifold. Our construction, which we call a *neural frame*, is formed by assembling a set of vectors representing specific types of perturbations of a data point, for example infinitesimal augmentations, noise perturbations, or perturbations produced by a generative model, and studying how these change as they pass through a network. Using neural frames, we make observations about the way that models process, layer-by-layer, specific modes of variation within a small neighborhood of a datapoint. Our results provide new perspectives on a number of phenomena, such as the manner in which training with augmentation produces model invariance or the proposed trade-off between adversarial training and model generalization.
Internal Representations of Vision Models Through the Lens of Frames on Data Manifolds
[ "Henry Kvinge", "Grayson Jorgenson", "Davis Brown", "Charles Godfrey", "Tegan Emerson" ]
Workshop/NeurReps
oral
2211.10558
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=e9JBa515z2
@inproceedings{ pegoraro2023spectral, title={Spectral Maps for Learning on Subgraphs}, author={Marco Pegoraro and Riccardo Marin and Arianna Rampini and Simone Melzi and Luca Cosmo and Emanuele Rodol{\`a}}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=e9JBa515z2} }
In graph learning, maps between graphs and their subgraphs frequently arise. For instance, when coarsening or rewiring operations are present along the pipeline, one needs to keep track of the corresponding nodes between the original and modified graphs. Classically, these maps are represented as binary node-to-node correspondence matrices, and used as-is to transfer node-wise features between the graphs. In this paper, we argue that simply changing this map representation can bring notable benefits to graph learning tasks. Drawing inspiration from recent progress in geometry processing, we introduce a spectral representation for maps that is easy to integrate into existing graph learning models. This spectral representation is a compact and straightforward plug-in replacement, and is robust to topological changes of the graphs. Remarkably, the representation exhibits structural properties that make it interpretable, drawing an analogy with recent results on smooth manifolds. We demonstrate the benefits of incorporating spectral maps in graph learning pipelines, addressing scenarios where a node-to-node map is not well defined, or in the absence of exact isomorphism. Our approach bears practical benefits in knowledge distillation and hierarchical learning, where we show comparable or improved performance at a fraction of the computational cost.
Spectral Maps for Learning on Subgraphs
[ "Marco Pegoraro", "Riccardo Marin", "Arianna Rampini", "Simone Melzi", "Luca Cosmo", "Emanuele Rodolà" ]
Workshop/NeurReps
oral
2205.14938
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=e9EFqkfu2X
@inproceedings{ haan2023euclidean, title={Euclidean, Projective, Conformal: Choosing a Geometric Algebra for Equivariant Transformers}, author={Pim De Haan and Taco Cohen and Johann Brehmer}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=e9EFqkfu2X} }
The Geometric Algebra Transformer (GATr) is a versatile architecture for geometric deep learning based on projective geometric algebra. We generalize this architecture into a blueprint that allows one to construct a scalable transformer architecture given any geometric (or Clifford) algebra. We study versions of this architecture for Euclidean, projective, and conformal algebras, all of which are suited to represent 3D data, and evaluate them in theory and practice. The simplest Euclidean architecture is computationally cheap, but has a smaller symmetry group and is not as sample-efficient, while the projective model is not sufficiently expressive. Both the conformal algebra and an improved version of the projective algebra define powerful, performant architectures.
Euclidean, Projective, Conformal: Choosing a Geometric Algebra for Equivariant Transformers
[ "Pim De Haan", "Taco Cohen", "Johann Brehmer" ]
Workshop/NeurReps
oral
2311.04744
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=dq53F97iVv
@inproceedings{ khajehnejad2023on, title={On Complex Network Dynamics of an In-Vitro Neuronal System during Rest and Gameplay}, author={Moein Khajehnejad and Forough Habibollahi and Alon Loeffler and Brett Kagan and Adeel Razi}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=dq53F97iVv} }
In this study, we focus on characterising the complex network dynamics of in vitro neuronal system of live biological cells during two distinct activity states: spontaneous rest state and engagement in a real-time (closed-loop) game environment. We use DishBrain which is a system that embodies in vitro neural networks with in silico computation using a high-density multi-electrode array. First, we embed the spiking activity of these channels in a lower-dimensional space using various representation learning methods. We then extract a subset of representative channels that are consistent across all of the neuronal preparations. Next, by analyzing these low-dimensional representations, we explore the patterns of macroscopic neuronal network dynamics during the learning process. Remarkably, our findings indicate that just using the low-dimensional embedding of representative channels is sufficient to differentiate the neuronal culture during the Rest and Gameplay conditions. Furthermore, we characterise the evolving neuronal connectivity patterns within the Dish-Brain system over time during Gameplay in comparison to the Rest condition. Notably, our investigation shows dynamic changes in the overall connectivity within the same region and across multiple regions on the multi-electrode array only during Gameplay. These findings underscore the plasticity of these neuronal networks in response to external stimuli and highlight the potential for modulating connectivity in a controlled environment. The ability to distinguish between neuronal states using reduced-dimensional representations points to the presence of underlying patterns that could be pivotal for real-time monitoring and manipulation of neuronal cultures. Additionally, this provides insight into how biological based information processing systems rapidly adapt and learn and may lead to new or improved algorithms.
On Complex Network Dynamics of an In-Vitro Neuronal System during Rest and Gameplay
[ "Moein Khajehnejad", "Forough Habibollahi", "Alon Loeffler", "Brett Kagan", "Adeel Razi" ]
Workshop/NeurReps
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=dZbqejZB2V
@inproceedings{ gamba2023on, title={On the Varied Faces of Overparameterization in Supervised and Self-Supervised Learning}, author={Matteo Gamba and Arna Ghosh and Kumar Krishna Agrawal and Blake Aaron Richards and Hossein Azizpour and M{\r{a}}rten Bj{\"o}rkman}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=dZbqejZB2V} }
The quality of the representations learned by neural networks depends on several factors, including the loss function, learning algorithm, and model architecture. In this work, we use information geometric measures to assess the representation quality in a principled manner. We demonstrate that the sensitivity of learned representations to input perturbations, measured by the spectral norm of the feature Jacobian, provides valuable information about downstream generalization. On the other hand, measuring the coefficient of spectral decay observed in the eigenspectrum of feature covariance provides insights into the global representation geometry. First, we empirically establish an equivalence between these notions of representation quality and show that they are inversely correlated. Second, our analysis reveals the varying roles that overparameterization plays in improving generalization. Unlike supervised learning, we observe that increasing model width leads to higher discriminability and less smoothness in the self-supervised regime. Furthermore, we report that there is no observable double descent phenomenon in SSL with non-contrastive objectives for commonly used parameterization regimes, which opens up new opportunities for tight asymptotic analysis. Taken together, our results provide a loss-aware characterization of the different role of overparameterization in supervised and self-supervised learning.
On the Varied Faces of Overparameterization in Supervised and Self-Supervised Learning
[ "Matteo Gamba", "Arna Ghosh", "Kumar Krishna Agrawal", "Blake Aaron Richards", "Hossein Azizpour", "Mårten Björkman" ]
Workshop/NeurReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=dM8HXlBFJU
@inproceedings{ pegoraro2023geometric, title={Geometric Epitope and Paratope Prediction}, author={Marco Pegoraro and Cl{\'e}mentine Domin{\'e} and Emanuele Rodol{\`a} and Petar Veli{\v{c}}kovi{\'c} and Andreea Deac}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=dM8HXlBFJU} }
Antibody-antigen interactions play a crucial role in identifying and neutralizing harmful foreign molecules. In this paper, we investigate the optimal representation for predicting the binding sites in the two molecules and emphasize the importance of geometric information. Specifically, we compare different geometric deep learning methods applied to proteins’ inner (I-GEP) and outer (O-GEP) structures. We incorporate 3D coordinates and spectral geometric descriptors as input features to fully leverage the geometric information. Our research suggests that surface-based models are more efficient than other methods, and our O-GEP experiments have achieved state-of-the-art results with significant performance improvements.
Geometric Epitope and Paratope Prediction
[ "Marco Pegoraro", "Clémentine Dominé", "Emanuele Rodolà", "Petar Veličković", "Andreea Deac" ]
Workshop/NeurReps
poster
2307.13608
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=d55JaRL9wh
@inproceedings{ kaba2023symmetry, title={Symmetry Breaking and Equivariant Neural Networks}, author={S{\'e}kou-Oumar Kaba and Siamak Ravanbakhsh}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=d55JaRL9wh} }
Using symmetry as an inductive bias in deep learning has been proven to be a principled approach for sample-efficient model design. However, the relationship between symmetry and the imperative for equivariance in neural networks is not always obvious. Here, we analyze a key limitation that arises in equivariant functions: their incapacity to break symmetry at the level of individual data samples. In response, we introduce a novel notion of 'relaxed equivariance' that circumvents this limitation. We further demonstrate how to incorporate this relaxation into equivariant multilayer perceptrons (E-MLPs), offering an alternative to the noise-injection method. The relevance of symmetry breaking is then discussed in various application domains: physics, graph representation learning, combinatorial optimization and equivariant decoding.
Symmetry Breaking and Equivariant Neural Networks
[ "Sékou-Oumar Kaba", "Siamak Ravanbakhsh" ]
Workshop/NeurReps
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=c9u8tH1WA0
@inproceedings{ sonthalia2023relwire, title={RelWire: Metric Based Graph Rewiring}, author={Rishi Sonthalia and Anna Gilbert and Matthew Durham}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=c9u8tH1WA0} }
Oversquashing is a major hurdle to the application of geometric deep learning and graph neural networks to real applications. Recent work has found connections between oversquashing and commute times, effective resistance, and the eigengap of the underlying graph. Graph rewiring is the most promising technique to alleviate this issue. Some prior work adds edges locally to highly negatively curved subgraphs. These local changes, however, have a small effect on global statistics such as commute times and the eigengap. Other prior work uses the spectrum of the graph Laplacian to target rewiring to increase the eigengap. These approaches, however, make large structural and topological changes to the underlying graph. We use ideas from geometric group theory to present \textsc{RelWire}, a rewiring technique based on the geometry of the graph. We derive topological connections for \textsc{RelWire}. We then rewire different real world molecule datasets and show that \textsc{RelWire} is Pareto optimal: it has the best balance between improvement in eigengap and commute times and minimizing changes in the topology of the underlying graph.
RelWire: Metric Based Graph Rewiring
[ "Rishi Sonthalia", "Anna Gilbert", "Matthew Durham" ]
Workshop/NeurReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ZtAabWUPu3
@inproceedings{ he2023sheafbased, title={Sheaf-based Positional Encodings for Graph Neural Networks}, author={Yu He and Cristian Bodnar and Pietro Lio}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=ZtAabWUPu3} }
Graph Neural Networks (GNNs) work directly with graph-structured data, capitalising on relational information among entities. One limitation of GNNs is their reliance on local interactions among connected nodes. GNNs may generate identical node embeddings for similar local neighbourhoods and fail to distinguish structurally distinct graphs. Positional encodings help to break the locality constraint by informing the nodes of their global positions in the graph. Furthermore, they are required by Graph Transformers to encode structural information. However, existing positional encodings based on the graph Laplacian only encode structural information and are typically fixed. To address these limitations, we propose a novel approach to design positional encodings using sheaf theory. The sheaf Laplacian can be learnt from node data, allowing it to encode both the structure and semantic information. We present two methodologies for creating sheaf-based positional encodings, showcasing their efficacy in node and graph tasks. Our work advances the integration of sheaves in graph learning, paving the way for innovative GNN techniques that draw inspiration from geometry and topology.
Sheaf-based Positional Encodings for Graph Neural Networks
[ "Yu He", "Cristian Bodnar", "Pietro Lio" ]
Workshop/NeurReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ZobkKCTaiY
@inproceedings{ li2023structural, title={Structural Similarities Between Language Models and Neural Response Measurements}, author={Jiaang Li and Antonia Karamolegkou and Yova Kementchedjhieva and Mostafa Abdou and Sune Lehmann and Anders S{\o}gaard}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=ZobkKCTaiY} }
Large language models have complicated internal dynamics, but induce representations of words and phrases whose geometry we can study. Human language processing is also opaque, but neural response measurements can provide (noisy) recordings of activations during listening or reading, from which we can extract similar representations of words and phrases. Here we study the extent to which the geometries induced by these representations, share similarities in the context of brain decoding. We find that the larger neural language models get, the more their representations are structurally similar to neural response measurements from brain imaging.
Structural Similarities Between Language Models and Neural Response Measurements
[ "Jiaang Li", "Antonia Karamolegkou", "Yova Kementchedjhieva", "Mostafa Abdou", "Sune Lehmann", "Anders Søgaard" ]
Workshop/NeurReps
poster
2306.01930
[ "https://github.com/coastalcph/brain2llm" ]
https://huggingface.co/papers/2306.01930
0
2
0
6
1
[]
[]
[]
null
https://openreview.net/forum?id=ZLjUgDeGIC
@inproceedings{ zhou2023inrformer, title={{INRF}ormer: Neuron Permutation Equivariant Transformer on Implicit Neural Representations}, author={Lei Zhou and Varun Belagali and Joseph Bae and Prateek Prasanna and Dimitris Samaras}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=ZLjUgDeGIC} }
Implicit Neural Representations (INRs) have demonstrated both precision in continuous data representation and compactness in encapsulating high-dimensional data. Yet, much of contemporary research remains centered on data reconstruction using INRs, with limited exploration into processing INRs themselves. In this paper, we endeavor to develop a model tailored to process INRs explicitly for computer vision tasks. We conceptualize INRs as computational graphs with neurons as nodes and weights as edges. To process INR graphs, we propose INRFormer consisting of the node blocks and the edge blocks alternatively. Within the node block, we further propose SlidingLayerAttention (SLA), which performs attention on nodes of three sequential INR layers. This sliding mechanism of the SLA across INR layers enables each layer's nodes to access a broader scope of the entire graph's information. In terms of the edge block, every edge's feature vector gets concatenated with the features of its two linked nodes, followed by a projection via an MLP. Ultimately, we formulate the visual recognition as INR-to-INR (inr2inr) translations. That is, INRFormer transforms the input INR, which maps coordinates to image pixels, to a target INR, which maps the coordinates to the labels. We demonstrate INRFormer on CIFAR10.
INRFormer: Neuron Permutation Equivariant Transformer on Implicit Neural Representations
[ "Lei Zhou", "Varun Belagali", "Joseph Bae", "Prateek Prasanna", "Dimitris Samaras" ]
Workshop/NeurReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ZFu7CPtznY
@inproceedings{ crisostomi2023from, title={From Charts to Atlas: Merging Latent Spaces into One}, author={Donato Crisostomi and Irene Cannistraci and Luca Moschella and Pietro Barbiero and Marco Ciccone and Pietro Lio and Emanuele Rodol{\`a}}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=ZFu7CPtznY} }
Models trained on semantically related datasets and tasks exhibit comparable inter-sample relations within their latent spaces. We investigate in this study the aggregation of such latent spaces to create a unified space encompassing the combined information. To this end, we introduce Relative Latent Space Aggregation (RLSA), a two-step approach that first renders the spaces comparable using relative representations, and then aggregates them via a simple mean. We carefully divide a classification problem into a series of learning tasks under three different settings: sharing samples, classes, or neither. We then train a model on each task and aggregate the resulting latent spaces. We compare the aggregated space with that derived from an end-to-end model trained over all tasks and show that the two spaces are similar. We then observe that the aggregated space is better suited for classification, and empirically demonstrate that it is due to the unique imprints left by task-specific embedders within the representations. We finally test our framework in scenarios where no shared region exists and show that it can still be used to merge the spaces, albeit with diminished benefits over naive merging.
From Charts to Atlas: Merging Latent Spaces into One
[ "Donato Crisostomi", "Irene Cannistraci", "Luca Moschella", "Pietro Barbiero", "Marco Ciccone", "Pietro Lio", "Emanuele Rodolà" ]
Workshop/NeurReps
poster
2311.06547
[ "https://github.com/crisostomi/latent-aggregation" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=XGFy3oFu7h
@inproceedings{ liu2023growing, title={Growing Brains in Recurrent Neural Networks for Multiple Cognitive Tasks}, author={Ziming Liu and Mikail Khona and Ila Fiete and Max Tegmark}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=XGFy3oFu7h} }
Recurrent neural networks (RNNs) trained on a diverse ensemble of cognitive tasks, as described by Yang et al (2019); Khona et al. (2023), have been shown to exhibit functional modularity, where neurons organize into discrete functional clusters, each specialized for specific shared computational subtasks. However, these RNNs do not demonstrate anatomical modularity, where these functionally specialized clusters also have a distinct spatial organization. This contrasts with the human brain which has both functional and anatomical modularity. Is there a way to train RNNs to make them more like brains in this regard? We apply a recent machine learning method, brain-inspired modular training (BIMT), to encourage neural connectivity to be local in space. Consequently, hidden neuron organization of the RNN forms spatial structures reminiscent of those of the brain: spatial clusters which correspond to functional clusters. Compared to standard $L_1$ regularization and absence of regularization, BIMT exhibits superior performance by optimally balancing between task performance and sparsity. This balance is quantified both in terms of the number of active neurons and the cumulative wiring length. In addition to achieving brain-like organization in RNNs, our findings also suggest that BIMT holds promise for applications in neuromorphic computing and enhancing the interpretability of neural network architectures.
Growing Brains in Recurrent Neural Networks for Multiple Cognitive Tasks
[ "Ziming Liu", "Mikail Khona", "Ila Fiete", "Max Tegmark" ]
Workshop/NeurReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=UGJkxLNVGh
@inproceedings{ shen2023are, title={Are {\textquotedblleft}Hierarchical{\textquotedblright} Visual Representations Hierarchical?}, author={Ethan Shen and Ali Farhadi and Aditya Kusupati}, booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations}, year={2023}, url={https://openreview.net/forum?id=UGJkxLNVGh} }
Learned visual representations often capture large amounts of semantic information for accurate downstream applications. Human understanding of the world is fundamentally grounded in hierarchy. To mimic this and further improve representation capabilities, the community has explored "hierarchical'' visual representations that aim at modeling the underlying hierarchy of the visual world. In this work, we set out to investigate if hierarchical visual representations truly capture the human perceived hierarchy better than standard learned representations. To this end, we create HierNet, a suite of 12 datasets spanning 3 kinds of hierarchy from the BREEDs subset of ImageNet. After extensive evaluation of Hyperbolic and Matryoshka Representations across training setups, we conclude that they do not capture hierarchy any better than the standard representations but can assist in other aspects like search efficiency and interpretability. Our benchmark and the datasets are open-sourced at https://github.com/ethanlshen/HierNet.
Are “Hierarchical” Visual Representations Hierarchical?
[ "Ethan Shen", "Ali Farhadi", "Aditya Kusupati" ]
Workshop/NeurReps
poster
[ "https://github.com/ethanlshen/hiernet" ]
-1
-1
-1
-1
0
[]
[]
[]