column                        dtype            range / classes
bibtex_url                    null             -
proceedings                   stringlengths    42 to 42
bibtext                       stringlengths    197 to 792
abstract                      stringlengths    303 to 3.45k
title                         stringlengths    10 to 159
authors                       sequencelengths  1 to 28
id                            stringclasses    44 values
type                          stringclasses    16 values
arxiv_id                      stringlengths    0 to 10
GitHub                        sequencelengths  1 to 1
paper_page                    stringclasses    444 values
n_linked_authors              int64            -1 to 9
upvotes                       int64            -1 to 42
num_comments                  int64            -1 to 13
n_authors                     int64            -1 to 92
paper_page_exists_pre_conf    int64            0 to 1
Models                        sequencelengths  0 to 100
Datasets                      sequencelengths  0 to 11
Spaces                        sequencelengths  0 to 100
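A minimal sketch of how rows with this schema could be loaded and filtered, assuming the table above describes a Hugging Face `datasets` dataset; the dataset path below is hypothetical and used purely for illustration:

```python
# Minimal sketch, assuming the schema above belongs to a Hugging Face dataset;
# the dataset path is hypothetical.
from datasets import load_dataset

ds = load_dataset("username/neurips-2023-workshop-papers", split="train")  # hypothetical path

# Inspect the declared columns (should match the schema table above).
print(ds.features)

# Example: keep only long papers from the Temporal Graph Learning workshop
# that link a non-empty GitHub repository.
tgl_long = ds.filter(
    lambda row: row["id"] == "Workshop/TGL"
    and row["type"] == "longpaper"
    and any(url.strip() for url in row["GitHub"])
)
print(len(tgl_long), "TGL long papers with a linked repository")
```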
null
https://openreview.net/forum?id=oGJFcWGePV
@inproceedings{ celikkanat2023continuoustime, title={Continuous-time Graph Representation with Sequential Survival Process}, author={Abdulkadir Celikkanat and Nikolaos Nakis and Morten M{\o}rup}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=oGJFcWGePV} }
Over the past two decades, there has been tremendous growth in representation learning methods for graphs, with numerous applications across various fields, including bioinformatics, chemistry, and the social sciences. However, current dynamic network approaches focus on discrete-time networks or treat links in continuous-time networks as instantaneous events. Therefore, these approaches have limitations in capturing the persistence or absence of links that continuously emerge and disappear over time for particular durations. To address this, we propose a novel stochastic process relying on survival functions to model the durations of links and their absences over time. This forms a generic new likelihood specification explicitly accounting for intermittent edge-persistent networks, namely GRAS2P: Graph Representation with Sequential Survival Process. We apply the developed framework to a recent continuous-time dynamic latent distance model characterizing network dynamics in terms of a sequence of piecewise linear movements of nodes in latent space. We quantitatively assess the developed framework on various downstream tasks, such as link prediction and network completion, demonstrating that the developed modeling framework, by accounting for link persistence and absence, accurately tracks the intrinsic trajectories of nodes in the latent space and captures the underlying characteristics of the evolving network structure.
Continuous-time Graph Representation with Sequential Survival Process
[ "Abdulkadir Celikkanat", "Nikolaos Nakis", "Morten Mørup" ]
Workshop/TGL
longpaper
2312.13068
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=meet41uEs8
@inproceedings{ heeg2023using, title={Using Causality-Aware Graph Neural Networks to Predict Temporal Centralities in Dynamic Graphs}, author={Franziska Heeg and Ingo Scholtes}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=meet41uEs8} }
Node centralities play a pivotal role in network science, social network analysis, and recommender systems. In temporal data, static path-based centralities like closeness or betweenness can give misleading results about the true importance of nodes in a temporal graph. To address this issue, temporal generalizations of betweenness and closeness have been defined that are based on the shortest time-respecting paths between pairs of nodes. However, a major issue of those generalizations is that the calculation of such paths is computationally expensive. Addressing this issue, we study the application of De Bruijn Graph Neural Networks (DBGNN), a causality-aware graph neural network architecture, to predict temporal path-based centralities in time series data. We experimentally evaluate our approach on 13 temporal graphs from biological and social systems and show that it considerably improves the prediction of both betweenness and closeness centrality compared to a static Graph Convolutional Neural Network.
Using Causality-Aware Graph Neural Networks to Predict Temporal Centralities in Dynamic Graphs
[ "Franziska Heeg", "Ingo Scholtes" ]
Workshop/TGL
longpaper
2310.15865
[ "" ]
https://huggingface.co/papers/2310.15865
0
0
0
2
1
[]
[]
[]
null
https://openreview.net/forum?id=hh3salTr27
@inproceedings{ nguyen2023fast, title={Fast Temporal Wavelet Graph Neural Networks}, author={Duc Thien Nguyen and Tuan Nguyen and Truong Son Hy and Risi Kondor}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=hh3salTr27} }
Spatio-temporal signal forecasting plays an important role in numerous domains, especially in neuroscience and transportation. The task is challenging due to the highly intricate spatial structure, as well as the non-linear temporal dynamics, of the network. To facilitate reliable and timely forecasts for the human brain and traffic networks, we propose the Fast Temporal Wavelet Graph Neural Networks (FTWGNN), which are both time- and memory-efficient for learning tasks on time-series data with an underlying graph structure, thanks to multiresolution analysis and wavelet theory on discrete spaces. We employ Multiresolution Matrix Factorization (MMF) (Kondor et al., 2014) to factorize the highly dense graph structure and compute the corresponding sparse wavelet basis, which allows us to construct a fast wavelet convolution as the backbone of our novel architecture. Experimental results on the real-world PEMS-BAY and METR-LA traffic datasets and the AJILE12 ECoG dataset show that FTWGNN is competitive with the state of the art while maintaining a low computational footprint. Our PyTorch implementation is publicly available at https://github.com/HySonLab/TWGNN
Fast Temporal Wavelet Graph Neural Networks
[ "Duc Thien Nguyen", "Tuan Nguyen", "Truong Son Hy", "Risi Kondor" ]
Workshop/TGL
longpaper
2302.08643
[ "https://github.com/hysonlab/twgnn" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=gvZjnRuRFi
@inproceedings{ peng2023adaptive, title={Adaptive Message Passing Sign Algorithm}, author={Changran Peng and Yi Yan and Ercan KURUOGLU}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=gvZjnRuRFi} }
A new algorithm named the Adaptive Message Passing Sign (AMPS) algorithm is introduced for online prediction, missing data imputation, and impulsive noise removal in time-varying graph signals. This work investigates the potential of message passing on spectral adaptive graph filters to define online localized node aggregations. AMPS updates a sign error derived from $l_1$-norm optimization between observation and estimation, leading to fast and robust predictions in the presence of impulsive noise. The combination of adaptive spectral graph filters with message passing reveals a different perspective on viewing message passing and vice versa. Testing on a real-world network formed by a map of nationwide weather stations, the AMPS algorithm accurately forecasts time-varying temperatures.
Adaptive Message Passing Sign Algorithm
[ "Changran Peng", "Yi Yan", "Ercan KURUOGLU" ]
Workshop/TGL
shortpaper
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=X6aLQLmnWq
@inproceedings{ liao2023gentkg, title={Gen{TKG}: Generative Forecasting on Temporal Knowledge Graph}, author={Ruotong Liao and Xu Jia and Yunpu Ma and Volker Tresp}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=X6aLQLmnWq} }
The rapid advancements in large language models (LLMs) have ignited interest in the temporal knowledge graph (tKG) domain, where conventional carefully designed embedding-based and rule-based models dominate. It remains an open question whether pre-trained LLMs can understand structured temporal relational data and replace these models as the foundation model for temporal relational forecasting. Therefore, we bring temporal knowledge forecasting into the generative setting. However, challenges arise from the huge chasms between the complex temporal graph data structure and the sequential natural expressions LLMs can handle, and between the enormous data sizes of tKGs and the heavy computation costs of finetuning LLMs. To address these challenges, we propose GENTKG, a novel retrieval-augmented generation framework combining a temporal logical rule-based retrieval strategy with lightweight few-shot parameter-efficient instruction tuning. Extensive experiments show that GENTKG outperforms conventional methods for temporal relational forecasting under low computation resources with extremely limited training data, as few as 16 samples. GENTKG also exhibits remarkable cross-domain and in-domain generalizability, maintaining superior performance on unseen datasets without re-training. Our work reveals the huge potential of LLMs in the tKG domain and opens a new frontier for generative forecasting on tKGs.
GenTKG: Generative Forecasting on Temporal Knowledge Graph
[ "Ruotong Liao", "Xu Jia", "Yunpu Ma", "Volker Tresp" ]
Workshop/TGL
longpaper
[ "https://github.com/mayhugotong/gentkg" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=WvFHDkIhmp
@inproceedings{ kim2023largescale, title={Large-scale Graph Representation Learning of Dynamic Brain Connectome with Transformers}, author={Byung-Hoon Kim and Jungwon Choi and EungGu Yun and Kyungsang Kim and Xiang Li and Juho Lee}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=WvFHDkIhmp} }
Graph Transformers have recently been successful in various graph representation learning tasks, providing a number of advantages over message-passing Graph Neural Networks. Utilizing Graph Transformers for learning the representation of the brain functional connectivity network is also gaining interest. However, studies to date have overlooked the temporal dynamics of functional connectivity, which fluctuates over time. Here, we propose a method for learning the representation of dynamic functional connectivity with Graph Transformers. Specifically, we define the connectome embedding, which holds the position, structure, and time information of the functional connectivity graph, and use Transformers to learn its representation across time. We perform experiments with over 50,000 resting-state fMRI samples obtained from three datasets, which is by far the largest amount of fMRI data used in such studies. The experimental results show that our proposed method outperforms other competitive baselines in gender classification and age regression tasks based on the functional connectivity extracted from the fMRI data.
Large-scale Graph Representation Learning of Dynamic Brain Connectome with Transformers
[ "Byung-Hoon Kim", "Jungwon Choi", "EungGu Yun", "Kyungsang Kim", "Xiang Li", "Juho Lee" ]
Workshop/TGL
shortpaper
2312.14939
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=W9QOkbGccr
@inproceedings{ zhuang2023sauc, title={{SAUC}: Sparsity-Aware Uncertainty Calibration for Spatiotemporal Prediction with Graph Neural Networks}, author={Dingyi Zhuang and Yuheng Bu and Guang Wang and Shenhao Wang and Jinhua Zhao}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=W9QOkbGccr} }
Quantifying uncertainty is essential for achieving robust and reliable predictions. However, existing spatiotemporal models predominantly predict deterministic values, often overlooking the uncertainty in their forecasts. Particularly, high-resolution spatiotemporal datasets are rich in zeros, posing further challenges in quantifying the uncertainty of such sparse and asymmetrically distributed data. This paper introduces a novel post-hoc Sparsity-aware Uncertainty Calibration (SAUC) method, calibrating the uncertainty in both zero and non-zero values. We modify the state-of-the-art deterministic spatiotemporal Graph Neural Networks (GNNs) to probabilistic ones as the synthetic models in the pre-calibration phase. Applied to two real-world spatiotemporal datasets of varied granularities, extensive experiments demonstrate SAUC's capacity to adeptly calibrate uncertainty, effectively fitting the variance of zero values and exhibiting robust generalizability. Specifically, our empirical experiments show a 20\% reduction in calibration errors in zero entries on the sparse traffic accident and urban crime prediction tasks. The results validate our method's theoretical and empirical values, demonstrating calibrated results that provide reliable safety guidance, thereby bridging a significant gap in uncertainty quantification (UQ) for sparse spatiotemporal data.
SAUC: Sparsity-Aware Uncertainty Calibration for Spatiotemporal Prediction with Graph Neural Networks
[ "Dingyi Zhuang", "Yuheng Bu", "Guang Wang", "Shenhao Wang", "Jinhua Zhao" ]
Workshop/TGL
longpaper
2409.08766
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=W1egrsoVIf
@inproceedings{ feldman2023leveraging, title={Leveraging Temporal Graph Networks Using Module Decoupling}, author={Or Feldman and Chaim Baskin}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=W1egrsoVIf} }
Modern approaches for learning on dynamic graphs have adopted the use of batches instead of applying updates one by one. The use of batches allows these techniques to become helpful in streaming scenarios where updates to graphs are received at extreme speeds. Using batches, however, forces the models to update infrequently, which results in the degradation of their performance. In this work, we suggest a decoupling strategy that enables the models to update frequently while using batches. By decoupling the core modules of temporal graph networks and implementing them using a minimal number of learnable parameters, we have developed the Lightweight Decoupled Temporal Graph Network (LDTGN), an exceptionally efficient model for learning on dynamic graphs. LDTGN was validated on various dynamic graph benchmarks, providing comparable or state-of-the-art results with significantly higher throughput than prior art. Notably, our method outperforms previous approaches by more than 20% on benchmarks that require rapid model update rates, such as USLegis or UNTrade. The code to reproduce our experiments is available at https://github.com/TPFI22/MODULES-DECOUPLING.
Leveraging Temporal Graph Networks Using Module Decoupling
[ "Or Feldman", "Chaim Baskin" ]
Workshop/TGL
longpaper
2310.02721
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=VOVQQcjrz1
@inproceedings{ varugunda2023exploring, title={Exploring Graph Structure in Graph Neural Networks for Epidemic Forecasting}, author={Sai Supriya Varugunda and Ching-Hao Fan and Lijing Wang}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=VOVQQcjrz1} }
Graph neural networks (GNNs) that incorporate cross-location signals have the ability to capture spatial patterns during infectious disease epidemics, potentially improving forecasting performance. However, these models may be susceptible to biases arising from mis-specification, particularly related to the level of connectivity within the graph (i.e., graph structure). In this paper, we investigate the impact of graph structure on GNNs for epidemic forecasting. Multiple graph structures are defined and analyzed based on several characteristics, i.e., dense or sparse, dynamic or static. We design a comprehensive ablation study and conduct experiments on real-world data. One of the major findings is that sparse graphs built using geographical information can achieve advanced performance and are more generalizable among different tasks compared with more complex attention-based adjacency matrices.
Exploring Graph Structure in Graph Neural Networks for Epidemic Forecasting
[ "Ching-Hao Fan", "Sai Supriya Varugunda", "Lijing Wang" ]
Workshop/TGL
shortpaper
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=UVh8QBeWlN
@inproceedings{ tochner2023gent, title={Gen-T: Reduce Distributed Tracing Operational Costs Using Generative Models}, author={Saar Tochner and Giulia Fanti and Vyas Sekar}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=UVh8QBeWlN} }
Distributed tracing (DT) is an important aspect of modern microservice operations. It allows operators to troubleshoot problems by modeling the sequence of services a specific request traverses in the system. However, transmitting traces incurs significant costs. This forces operators to use coarse-grained prefiltering or sampling techniques, creating undesirable tradeoffs between cost and fidelity. We propose to circumvent these issues using generative modeling to capture the semantic structure of collected traces in a lossy-yet-succinct way. Realizing this potential in practice, however, is challenging. Naively extending ideas from the literature on deep generative models in timeseries generation or graph generation can result in poor cost-fidelity tradeoffs. In designing and implementing Gen-T, we tackle key algorithmic and systems challenges to make deep generative models practical for DT. We design a hybrid generative model that separately models different components of DT data, and conditionally stitches them together. Our system Gen-T, which has been integrated with the widely-used OpenTelemetry framework, achieves a level of fidelity comparable to that of 1:15 sampling, which is more fine-grained than the default 1:20 sampling setting in the OpenTelemetry documentation, while maintaining a cost profile equivalent to that of 1:100 lossless-compressed sampling (i.e., a 7$\times$ volume reduction).
Gen-T: Reduce Distributed Tracing Operational Costs Using Generative Models
[ "Saar Tochner", "Giulia Fanti", "Vyas Sekar" ]
Workshop/TGL
longpaper
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=UMokRwWfLW
@inproceedings{ pan2023do, title={Do Temporal Knowledge Graph Embedding Models Learn or Memorize}, author={Jiaxin Pan and Mojtaba Nayyeri and Yinan Li and Steffen Staab}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=UMokRwWfLW} }
Temporal Knowledge Graph Embedding models predict missing facts in temporal knowledge graphs. Previous work on static knowledge graph embedding models has revealed that KGE models utilize shortcuts in test set leakage to achieve high performance. In this work, we show that a similar test set leakage problem exists in the widely used temporal knowledge graph datasets ICEWS14 and ICEWS05-15. We propose a naive rule-based model that can achieve state-of-the-art results on both datasets without a deep-learning process. Following this consideration, we construct two more challenging datasets for the evaluation of TKGEs.
Do Temporal Knowledge Graph Embedding Models Learn or Memorize Shortcuts?
[ "Jiaxin Pan", "Mojtaba Nayyeri", "Yinan Li", "Steffen Staab" ]
Workshop/TGL
longpaper
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=QJx3Cmddsy
@inproceedings{ beddar-wiesing2023marked, title={Marked Neural Spatio-Temporal Point Process Involving a Dynamic Graph Neural Network}, author={Silvia Beddar-Wiesing and Alice Moallemy-Oureh and R{\"u}diger Nather and Josephine Thomas}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=QJx3Cmddsy} }
Spatio-Temporal Point Processes (STPPs) have recently attracted increasing interest for learning on dynamic graph data, since many scientific fields, ranging from mathematics, biology, social sciences, and physics to computer science, are naturally relational and dynamic. While training Recurrent Neural Networks and solving PDEs for representing temporal data is expensive, TPPs are a good alternative. The drawback is that constructing an appropriate TPP for modeling temporal data requires the assumption of a particular temporal behavior of the data. To overcome this problem, Neural TPPs have been developed that enable learning of the parameters of the TPP. However, research on modeling dynamic graphs with TPPs is relatively young, and only a few TPPs have been proposed to handle edge-dynamic graphs. To allow for learning on a fully dynamic graph, we propose the first Marked Neural Spatio-Temporal Point Process (MNSTPP) that leverages a Dynamic Graph Neural Network to learn Spatio-TPPs to model and predict any event in a graph stream. In addition, our model can be updated efficiently by considering single events for local retraining.
Marked Neural Spatio-Temporal Point Process Involving a Dynamic Graph Neural Network
[ "Alice Moallemy-Oureh", "Silvia Beddar-Wiesing", "Rüdiger Nather", "Josephine Thomas" ]
Workshop/TGL
shortpaper
2206.03469
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=Ks94Yn5jqY
@inproceedings{ daniluk2023temporal, title={Temporal graph models fail to capture global temporal dynamics}, author={Michal Daniluk and Jacek Dabrowski}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=Ks94Yn5jqY} }
A recently released Temporal Graph Benchmark is analyzed in the context of Dynamic Link Property Prediction. We outline our observations and propose a trivial, optimization-free baseline of "recently popular nodes" that outperforms other methods on the medium and large-size datasets in the Temporal Graph Benchmark. We propose two measures based on the Wasserstein distance which can quantify the strength of short-term and long-term global dynamics of datasets. By analyzing our unexpectedly strong baseline, we show how standard negative sampling evaluation can be unsuitable for datasets with strong temporal dynamics. We also show how simple negative sampling can lead to model degeneration during training, resulting in fully saturated predictions of temporal graph networks that are impossible to rank. We propose improved negative sampling schemes for both training and evaluation and demonstrate their usefulness. We conduct a comparison with a model trained non-contrastively without negative sampling. Our results provide a challenging baseline and indicate that temporal graph network architectures need deep rethinking for usage in problems with significant global dynamics, such as social media, cryptocurrency markets, or e-commerce. We open-source the code for the baselines, measures, and proposed negative sampling schemes.
Temporal graph models fail to capture global temporal dynamics
[ "Michal Daniluk", "Jacek Dabrowski" ]
Workshop/TGL
longpaper
2309.15730
[ "https://github.com/temporal-graphs-negative-sampling/tgb" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=KGqtCfYJon
@inproceedings{ zambon2023graph, title={Graph Kalman Filters}, author={Daniele Zambon and Cesare Alippi}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=KGqtCfYJon} }
The well-known Kalman filters model dynamical systems by relying on state-space representations with the next state updated, and its uncertainty controlled, by fresh information associated with newly observed system outputs. This paper generalizes, for the first time in the literature, Kalman and extended Kalman filters to discrete-time settings where inputs, states, and outputs are represented as attributed graphs whose topology and attributes can change with time. The setup allows us to adapt the framework to cases where the output is a vector or a scalar too (node/graph level tasks). Within the proposed theoretical framework, the unknown state transition and readout are learned end-to-end along with the downstream prediction task.
Graph Kalman Filters
[ "Daniele Zambon", "Cesare Alippi" ]
Workshop/TGL
longpaper
2303.12021
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=JXranzV2zV
@inproceedings{ choi2023a, title={A Generative Self-Supervised Framework using Functional Connectivity in f{MRI} Data}, author={Jungwon Choi and Seongho Keum and EungGu Yun and Byung-Hoon Kim and Juho Lee}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=JXranzV2zV} }
Deep neural networks trained on Functional Connectivity (FC) networks extracted from functional Magnetic Resonance Imaging (fMRI) data have gained popularity due to the increasing availability of data and advances in model architectures, including Graph Neural Networks (GNNs). Recent research on the application of GNNs to FC suggests that exploiting the time-varying properties of the FC could significantly improve the accuracy and interpretability of the model prediction. However, the high cost of acquiring high-quality fMRI data and corresponding phenotypic labels poses a hurdle to their application in real-world settings, such that a model naïvely trained in a supervised fashion can suffer from insufficient performance or a lack of generalization when trained on a small amount of data. In addition, most Self-Supervised Learning (SSL) approaches for GNNs to date adopt a contrastive strategy, which tends to lose appropriate semantic information when the graph structure is perturbed or does not leverage both spatial and temporal information simultaneously. In light of these challenges, we propose a generative SSL approach that is tailored to effectively harness spatio-temporal information within dynamic FC. Our empirical results, experimented with large-scale (>50,000) fMRI datasets, demonstrate that our approach learns valuable representations and enables the construction of accurate and robust models when fine-tuned for downstream tasks.
A Generative Self-Supervised Framework using Functional Connectivity in fMRI Data
[ "Jungwon Choi", "Seongho Keum", "EungGu Yun", "Byung-Hoon Kim", "Juho Lee" ]
Workshop/TGL
longpaper
2312.01994
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=HZuRArNb4b
@inproceedings{ dong2023deep, title={Deep graph kernel point processes}, author={Zheng Dong and Matthew Repasky and Xiuyuan Cheng and Yao Xie}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=HZuRArNb4b} }
Point process models are widely used for continuous asynchronous event data, where each data point includes time and additional information called ``marks'', which can be locations, nodes, or event types. In this paper, we present a novel point process model for discrete event data over graphs, where the event interaction occurs within a latent graph structure. Our model builds upon the classic influence kernel-based formulation by Hawkes in the original self-exciting point processes work to capture the influence of historical events on future events' occurrence. The key idea is to represent the influence kernel by Graph Neural Networks (GNN) to capture the underlying graph structure while harvesting the strong representation power of GNN. Compared with prior works that focus on directly modeling the conditional intensity function using neural networks, our kernel presentation herds the repeated event influence patterns more effectively by combining statistical and deep models, achieving better model estimation/learning efficiency and superior predictive performance. Our work significantly extends the existing deep spatio-temporal kernel for point process data, which is inapplicable to our setting due to the fundamental difference in the nature of the observation space being Euclidean rather than a graph. We present comprehensive experiments on synthetic and real-world data to show the superior performance of the proposed approach against the state-of-the-art in predicting future events and uncovering the relational structure among data.
Deep graph kernel point processes
[ "Zheng Dong", "Matthew Repasky", "Xiuyuan Cheng", "Yao Xie" ]
Workshop/TGL
longpaper
2306.11313
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=HSgx1aJeR8
@inproceedings{ liu2023topological, title={Topological and Temporal Data Augmentation for Temporal Graph Networks}, author={Haoran Liu and Jianling Wang and Kaize Ding and James Caverlee}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=HSgx1aJeR8} }
Temporal graphs are extensively employed to represent evolving networks, finding applications across diverse fields such as transportation systems, social networks, and biological networks. Temporal Graph Networks (TGNs) build upon these graphs to model and learn from temporal dependencies in dynamic networks. A significant aspect of enhancing the performance of TGNs lies in effective data augmentation, which helps in better capturing the underlying patterns within temporal graphs while ensuring robustness to variations. However, existing data augmentation strategies for temporal graphs are largely heuristic and hand-crafted, which may alter the inherent semantics of temporal graphs, thereby degrading the performance of downstream tasks. To address this, we propose two simple yet effective data augmentation strategies, specifically tailored within the representation space of TGNs, targeting both the graph topology and the temporal axis. Through experiments on future link prediction and node classification tasks, we demonstrate that the integration of our proposed augmentation methods significantly amplifies the performance of TGNs, outperforming state-of-the-art methods.
Topological and Temporal Data Augmentation for Temporal Graph Networks
[ "Haoran Liu", "Jianling Wang", "Kaize Ding", "James Caverlee" ]
Workshop/TGL
longpaper
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=HPVXyXAYk3
@inproceedings{ yang2023spatialtemporal, title={Spatial-Temporal {DAG} Convolutional Networks for End-to-End Joint Effective Connectivity Learning and Resting-State f{MRI} Classification}, author={Rui Yang and Wenrui Dai and Huajun She and Yiping P. Du and Dapeng Wu and Hongkai Xiong}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=HPVXyXAYk3} }
Building comprehensive brain connectomes has proved of fundamental importance in resting-state fMRI (rs-fMRI) analysis. Based on the foundation of brain networks, spatial-temporal-based graph convolutional networks have dramatically improved the performance of deep learning methods in rs-fMRI time series classification. However, existing works either pre-define the brain network as the correlation matrix derived from the raw time series or jointly learn the connectome and model parameters without any topology constraint. These methods could suffer from degraded classification performance caused by the deviation from the intrinsic brain connectivity and lack the biological interpretability needed to demonstrate the causal structure (i.e., effective connectivity) among brain regions. Moreover, most existing methods for effective connectivity learning are unaware of the downstream classification task and cannot sufficiently exploit useful rs-fMRI label information. To address these issues in an end-to-end manner, we model the brain network as a directed acyclic graph (DAG) to discover direct causal connections between brain regions and propose the Spatial-Temporal DAG Convolutional Network (ST-DAGCN) to jointly infer effective connectivity and classify rs-fMRI time series by learning brain representations based on a nonlinear structural equation model. The optimization problem is formulated as a continuous program and solved with a score-based learning method via gradient descent. We evaluate ST-DAGCN on two public rs-fMRI databases. Experiments show that ST-DAGCN outperforms existing models by evident margins in rs-fMRI classification and simultaneously learns meaningful edges of effective connectivity that help understand brain activity patterns and pathological mechanisms in brain disease.
Spatial-Temporal DAG Convolutional Networks for End-to-End Joint Effective Connectivity Learning and Resting-State fMRI Classification
[ "Rui Yang", "Wenrui Dai", "Huajun She", "Yiping P. Du", "Dapeng Wu", "Hongkai Xiong" ]
Workshop/TGL
longpaper
2312.10317
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=GYSG2vF6z5
@inproceedings{ kim2023hierarchical, title={Hierarchical Joint Graph Learning and Multivariate Time Series Forecasting}, author={JuHyeon Kim and HyunGeun Lee and Seungwon Yu and Ung Hwang and Wooyul Jung and Miseon Park and Kijung Yoon}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=GYSG2vF6z5} }
Multivariate time series are prevalent in many scientific and industrial domains. Modeling multivariate signals is challenging due to their long-range temporal dependencies and intricate interactions--both direct and indirect. To confront these complexities, we introduce a method of representing multivariate signals as nodes in a graph with edges indicating interdependency between them. Specifically, we leverage graph neural networks (GNNs) and attention mechanisms to efficiently learn the underlying relationships within the time series data. Moreover, we suggest employing hierarchical signal decompositions running over the graphs to capture multiple spatial dependencies. The effectiveness of our proposed model is evaluated across various real-world benchmark datasets designed for long-term forecasting tasks. The results consistently showcase the superiority of our model, achieving an average 23\% reduction in mean squared error (MSE) compared to existing models.
Hierarchical Joint Graph Learning and Multivariate Time Series Forecasting
[ "JuHyeon Kim", "HyunGeun Lee", "Seungwon Yu", "Ung Hwang", "Wooyul Jung", "Miseon Park", "Kijung Yoon" ]
Workshop/TGL
longpaper
2311.12630
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=EWoAYik8ta
@inproceedings{ jiang2023exploring, title={Exploring Time Granularity on Temporal Graphs for Dynamic Link Prediction in Real-world Networks}, author={Xiangjian Jiang and Yanyi Pu}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=EWoAYik8ta} }
Dynamic Graph Neural Networks (DGNNs) have emerged as the predominant approach for processing dynamic graph-structured data. However, the influence of temporal information on model performance and robustness remains insufficiently explored, particularly regarding how models address prediction tasks with different time granularities. In this paper, we explore the impact of time granularity when training DGNNs on dynamic graphs through extensive experiments. We examine graphs derived from various domains and compare three different DGNNs to the baseline model across four varied time granularities. We mainly consider the interplay between time granularities, model architectures, and negative sampling strategies to obtain general conclusions. Our results reveal that a sophisticated memory mechanism and proper time granularity are crucial for a DGNN to deliver competitive and robust performance in the dynamic link prediction task. We also discuss drawbacks in considered models and datasets and propose promising directions for future research on the time granularity of temporal graphs.
Exploring Time Granularity on Temporal Graphs for Dynamic Link Prediction in Real-world Networks
[ "Xiangjian Jiang", "Yanyi Pu" ]
Workshop/TGL
longpaper
2311.12255
[ "https://github.com/silencex12138/time-granularity-on-temporal-graphs" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=DRrSYKNhD1
@inproceedings{ chatterjee2023inductive, title={Inductive Link Prediction in Static and Temporal Graphs for Isolated Nodes}, author={Ayan Chatterjee and Robin Walters and Giulia Menichetti and Tina Eliassi-Rad}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=DRrSYKNhD1} }
Link prediction is a vital task in graph machine learning, involving the anticipation of connections between entities within a network. In the realm of drug discovery, link prediction takes the form of forecasting interactions between drugs and target genes. Likewise, in recommender systems, link prediction entails suggesting items to users. In temporal graphs, link prediction ranges from friendship recommendations to introducing new devices in wireless networks and dynamic routing. However, a prevailing challenge in link prediction lies in the reliance on topological neighborhoods and the lack of informative node metadata for making predictions. Consequently, predictions for nodes with low degrees, and especially for newly introduced nodes with no neighborhood data, tend to be inaccurate and misleading. State-of-the-art models frequently fall short when tasked with predicting interactions between a novel drug and an unexplored disease target or suggesting a new product to a recently onboarded user. In temporal graphs, the link prediction models often misplace a newly introduced entity in the evolving network. This paper delves into the issue of observation bias related to the inequity of data availability for different entities in a network, unavailability of informative node metadata, and explores how contemporary models struggle when it comes to making inductive link predictions for low-degree and previously unseen isolated nodes. Additionally, we propose a non-end-to-end training approach harnessing informative node attributes generated by unsupervised pre-training on corpora different from and with significantly more entities than the observed graphs to enhance the overall generalizability of link prediction models.
Inductive Link Prediction in Static and Temporal Graphs for Isolated Nodes
[ "Ayan Chatterjee", "Robin Walters", "Giulia Menichetti", "Tina Eliassi-Rad" ]
Workshop/TGL
longpaper
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=B7Wd1K0l4I
@inproceedings{ barghi2023bitgraph, title={BitGraph: A Framework For Scaling Temporal Graph Queries on {GPU}s}, author={Alexandria Barghi}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=B7Wd1K0l4I} }
Graph query languages have become the standard among data scientists analyzing large, dynamic graphs, allowing them to structure their analysis as SQL-like queries. One of the challenges in supporting graph query languages is that, unlike SQL queries, graph queries nearly always involve aggregation of sparse data, making it challenging to scale graph queries without heavy reliance on expensive indices. This paper introduces the first major release of $\textit{BitGraph}$, a graph query processing engine that uses GPU-acceleration to quickly process Gremlin graph queries with minimal memory overhead, along with its supporting stack, $\textit{Gremlin++}$, which provides query language support in C++, and $\textit{Maelstrom}$, a lightweight library for compute-agnostic, accelerated vector operations built on top of $\textit{Thrust}$. This paper also analyzes the performance of BitGraph compared to existing CPU-only backends applied specifically to temporal graph queries, demonstrating BitGraph's superior scalability and speedup of up to 35x over naive CPU implementations.
BitGraph: A Framework For Scaling Temporal Graph Queries on GPUs
[ "Alexandria Barghi" ]
Workshop/TGL
longpaper
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=API0yII2Ua
@inproceedings{ nath2023tboost, title={{TB}oost: Gradient Boosting Temporal Graph Neural Networks}, author={Pritam Nath and Govind Waghmare and Nancy Agrawal and Nitish Kumar and Siddhartha Asthana}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=API0yII2Ua} }
Fraud prediction, compromised account detection, and attrition signaling are vital problems in the financial domain. Generally, these tasks are temporal classification problems as labels exhibit temporal dependence. The labels of these tasks change with time. Each financial transaction contains heterogeneous data like account number, merchant, amount, decline status, etc. A financial dataset contains chronological transactions. This data possesses three distinct characteristics: heterogeneity, relational structure, and temporal nature. Previous efforts fall short of modeling all these characteristics in a unified way. Gradient-boosted decision trees (GBDTs) are used to tackle heterogeneity. Graph Neural Networks (GNNs) are employed to model relational information. Temporal GNNs account for temporal dependencies in the data. In this paper, we propose a novel unified framework, TBoost, which combines GBDTs and temporal GNNs to jointly model the heterogeneous, relational, and temporal characteristics of the data. It leverages both node and edge-level dynamics to solve temporal classification problems. To validate the effectiveness of TBoost, we conduct extensive experiments, demonstrating its superiority in handling the complexities of financial data.
TBoost: Gradient Boosting Temporal Graph Neural Networks
[ "Pritam Nath", "Govind Waghmare", "Nancy Agrawal", "Nitish Kumar", "Siddhartha Asthana" ]
Workshop/TGL
longpaper
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=9B8ocBg4VJ
@inproceedings{ pop2023towards, title={Towards predicting future time intervals on Temporal Knowledge Graphs}, author={Roxana Pop and Egor Kostylev}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=9B8ocBg4VJ} }
Temporal Knowledge Graphs (TKGs), a temporal extension of Knowledge Graphs where facts are contextualized by time information, have received increasing attention in the temporal graph learning community. In this short paper we focus on TKGs where the temporal contexts are time intervals, and address the time prediction problem in the forecasting setting. We propose both a system architecture for addressing the task and a benchmarking methodology.
Towards predicting future time intervals on Temporal Knowledge Graphs
[ "Roxana Pop", "Egor Kostylev" ]
Workshop/TGL
shortpaper
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=8PRRNv81qB
@inproceedings{ manoj2023stgraph, title={{STG}raph: A Framework for Temporal Graph Neural Networks}, author={Nithin Puthalath Manoj and Joel Cherian and Kevin Jude Concessao and Unnikrishnan Cheramangalath}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=8PRRNv81qB} }
Real-life graphs from various application domains like social networks, transportation networks, and citation networks evolve over time. These evolving graphs can be modeled as (i) interactions between two nodes in a graph and (ii) interactions associated with a single node. Deep learning techniques using Graph Neural Networks (GNNs) are used for analyzing spatial and temporal properties of graphs from these application domains. Analyzing temporal graphs is challenging in comparison to static graphs, hence warranting the need for a GNN variant named Temporal Graph Neural Networks (TGNNs). We propose STGraph, a framework to program TGNNs. The proposed framework extends Seastar, a vertex-centric programming model for training static GNNs on GPUs. STGraph supports TGNNs for static temporal and discrete-time dynamic graphs (DTDGs). Existing TGNN frameworks store DTDGs as separate snapshots, incurring high memory overhead. As an improvement, STGraph constructs each snapshot on demand during training. This is achieved by integrating the system with dynamic graph data structures capable of building graph snapshots from temporal updates. Additionally, we present improvements to the Seastar design for easier maintenance and greater software portability. STGraph is benchmarked against Pytorch Geometric Temporal (PyG-T) on an NVIDIA GPU. For static-temporal graphs, STGraph shows up to 1.22× speedup and up to 2.14× memory improvement over PyG-T. For DTDGs, STGraph exhibits up to 1.70× speedup and 1.52× memory improvement over PyG-T.
STGraph: A Framework for Temporal Graph Neural Networks
[ "Nithin Puthalath Manoj", "Joel Cherian", "Kevin Jude Concessao", "Unnikrishnan Cheramangalath" ]
Workshop/TGL
shortpaper
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=88tGIxxhsf
@inproceedings{ reha2023anomaly, title={Anomaly Detection in Continuous-Time Temporal Provenance Graphs}, author={Jakub Reha and Giulio Lovisotto and Michele Russo and Alessio Gravina and Claas Grohnfeldt}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=88tGIxxhsf} }
Recent advances in Graph Neural Networks (GNNs) have matured the field of learning on graphs, making GNNs essential for prediction tasks in complex, interconnected, and evolving systems. In this paper, we focus on self-supervised, inductive learning for continuous-time dynamic graphs. Without compromising generality, we propose an approach to learn representations and mine anomalies in provenance graphs, which are a form of large-scale, heterogeneous, attributed, and continuous-time dynamic graphs used in the cybersecurity domain, syntactically resembling complex temporal knowledge graphs. We adapt the Temporal Graph Network (TGN) framework to heterogeneous input data and directed edges, refining it specifically for inductive learning on provenance graphs. We present and release two pioneering large-scale, continuous-time temporal, heterogeneous, attributed benchmark graph datasets. The datasets incorporate expert-labeled anomalies, promoting subsequent research on representation learning and anomaly detection on intricate real-world networks. Comprehensive experimental analyses of modules, datasets, and baselines underscore the effectiveness of TGN-based inductive learning, affirming its practical utility in identifying semantically significant anomalies in real-world systems.
Anomaly Detection in Continuous-Time Temporal Provenance Graphs
[ "Jakub Reha", "Giulio Lovisotto", "Michele Russo", "Alessio Gravina", "Claas Grohnfeldt" ]
Workshop/TGL
longpaper
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=77Tyf2SFhX
@inproceedings{ dileo2023durendal, title={{DURENDAL}: Graph deep learning framework for temporal heterogeneous networks}, author={Manuel Dileo and Matteo Zignani and Sabrina Gaito}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=77Tyf2SFhX} }
Temporal heterogeneous networks (THNs) are evolving networks that characterize many real-world applications such as citation and event networks, recommender systems, and knowledge graphs. Although different Graph Neural Networks (GNNs) have been successfully applied to dynamic graphs, most of them only support homogeneous graphs or suffer from model designs heavily influenced by specific THN prediction tasks. Furthermore, there is a lack of temporal heterogeneous networked data in current standard graph benchmark datasets. Hence, in this work, we propose DURENDAL, a graph deep learning framework for THNs. DURENDAL can help to easily repurpose any heterogeneous graph learning model to evolving networks by combining design principles from snapshot-based and multirelational message-passing graph learning models. We introduce two different schemes to update embedding representations for THNs, discussing the strengths and weaknesses of both strategies. We also extend the set of benchmarks for THNs by introducing two novel high-resolution temporal heterogeneous graph datasets derived from an emerging Web3 platform and a well-established e-commerce website. Overall, we conduct an experimental evaluation of the framework over four temporal heterogeneous network datasets on future link prediction tasks in an evaluation setting that takes into account the evolving nature of the data. Experiments show the predictive power of DURENDAL compared to current solutions for evolving and dynamic graphs, and the effectiveness of its model design.
DURENDAL: Graph deep learning framework for temporal heterogeneous networks
[ "Manuel Dileo", "Matteo Zignani", "Sabrina Gaito" ]
Workshop/TGL
longpaper
2310.00336
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=5VataMO1Gs
@inproceedings{ behrouz2023learning, title={Learning Temporal Higher-order Patterns to Detect Anomalous Brain Activity}, author={Ali Behrouz and Farnoosh Hashemi}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=5VataMO1Gs} }
Due to recent advances in machine learning on graphs, representing the connections of the human brain as a network has become one of the most pervasive analytical paradigms. However, most existing graph machine learning-based methods suffer from a subset of five critical limitations: They are (1) designed for simple pair-wise interactions while recent studies on the human brain show the existence of higher-order dependencies of brain regions, (2) designed to perform on pre-constructed networks from time-series data, which limits their generalizability, (3) designed for classifying brain networks, limiting their ability to reveal underlying patterns that might cause the symptoms of a disease or disorder, (4) designed for learning of static patterns, missing the dynamics of human brain activity, and (5) designed in a supervised setting, making their performance reliant on the existence of labeled data. To address these limitations, we present HADiB, an end-to-end anomaly detection model that automatically learns the structure of the hypergraph representation of the brain from neuroimage data. HADiB uses a tetra-stage message-passing mechanism along with an attention mechanism that learns the importance of higher-order dependencies of brain regions. We further present a new adaptive hypergraph pooling to obtain brain-level representations, enabling HADiB to detect the neuroimages of people living with a specific disease or disorder. Our experiments on Parkinson’s Disease, Attention Deficit Hyperactivity Disorder, and Autism Spectrum Disorder show the efficiency and effectiveness of our approaches in detecting anomalous brain activity.
Learning Temporal Higher-order Patterns to Detect Anomalous Brain Activity
[ "Ali Behrouz", "Farnoosh Hashemi" ]
Workshop/TGL
longpaper
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=1W1LyStQap
@inproceedings{ fatemi2023mitigating, title={Mitigating Cold-start Problem using Cold Causal Demand Forecasting Model}, author={Zahra Fatemi and Minh Huynh and Elena Zheleva and Zamir Syed and Xiaojun Di}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=1W1LyStQap} }
Forecasting multivariate time series data, which involves predicting future values of variables over time using historical data, has significant practical applications. Although deep learning-based models have shown promise in this field, they often fail to capture the causal relationship between dependent variables, leading to less accurate forecasts. Additionally, these models cannot handle the cold-start problem in time series data, where certain variables lack historical data, posing challenges in identifying dependencies among variables. To address these limitations, we introduce the Cold Causal Demand Forecasting (CDF-cold) framework that integrates causal inference with deep learning-based models to enhance the forecasting accuracy of multivariate time series data affected by the cold-start problem. To validate the effectiveness of the proposed approach, we collect 15 multivariate time-series datasets containing the network traffic of different Google data centers. Our experiments demonstrate that the CDF-cold framework outperforms state-of-the-art forecasting models in predicting future values of multivariate time series data suffering from cold-start problem.
Mitigating Cold-start Problem Using Cold Causal Demand Forecasting Model
[ "Zahra Fatemi", "Minh Huynh", "Elena Zheleva", "Zamir Syed", "Xiaojun Di" ]
Workshop/TGL
longpaper
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=1Ji9QsUVQ1
@inproceedings{ farzaneh2023an, title={An Information-Theoretic Analysis on Temporal Graph Evolution}, author={Amirmohammad Farzaneh}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=1Ji9QsUVQ1} }
In this paper, we present a novel model termed Network Evolution Chains for simulating the temporal dynamics of networks. Our model's design is tailored to enable comprehensive analysis through information theory. We establish that this model creates a stationary and ergodic stochastic process, thus facilitating the application of the asymptotic equipartition property. This breakthrough paves the way for a thorough information-theoretic investigation into network behavior, encompassing the definition of typical sequences, future state prediction, and beyond.
An Information-Theoretic Analysis on Temporal Graph Evolution
[ "Amirmohammad Farzaneh" ]
Workshop/TGL
longpaper
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=13jswzpMI8
@inproceedings{ biparva2023todyformer, title={Todyformer: Towards Holistic Dynamic Graph Transformers with Structure-Aware Tokenization}, author={Mahdi Biparva and Raika Karimi and Faezeh Faez and Yingxue Zhang}, booktitle={Temporal Graph Learning Workshop @ NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=13jswzpMI8} }
Temporal Graph Neural Networks have garnered substantial attention for their capacity to model evolving structural and temporal patterns while exhibiting impressive performance. However, it is known that these architectures are encumbered by issues that constrain their performance, such as over-squashing and over-smoothing. Meanwhile, Transformers have demonstrated exceptional computational capacity to effectively address challenges related to long-range dependencies. Consequently, we introduce Todyformer—a novel Transformer-based neural network tailored for dynamic graphs. It unifies the local encoding capacity of Message-Passing Neural Networks (MPNNs) with the global encoding of Transformers through i) a novel patchifying paradigm for dynamic graphs to improve over-squashing, ii) a structure-aware parametric tokenization strategy leveraging MPNNs, iii) a Transformer with temporal positional-encoding to capture long-range dependencies, and iv) an encoding architecture that alternates between local and global contextualization, mitigating over-smoothing in MPNNs. Experimental evaluations on public benchmark datasets demonstrate that Todyformer consistently outperforms the state-of-the-art methods for the downstream tasks. Furthermore, we illustrate the underlying aspects of the proposed model in effectively capturing extensive temporal dependencies in dynamic graphs.
Todyformer: Towards Holistic Dynamic Graph Transformers with Structure-Aware Tokenization
[ "Mahdi Biparva", "Raika Karimi", "Faezeh Faez", "Yingxue Zhang" ]
Workshop/TGL
longpaper
2402.05944
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=y5ihmWxYWx
@inproceedings{ harrington2023evaluating, title={Evaluating Peripheral Vision as an Input Transformation to Understand Object Detection Model Behavior}, author={Anne Harrington and Vasha DuTell and Mark Hamilton and Ayush Tewari and Simon Stent and William T. Freeman and Ruth Rosenholtz}, booktitle={NeuRIPS 2023 Workshop on Gaze Meets ML}, year={2023}, url={https://openreview.net/forum?id=y5ihmWxYWx} }
Incorporating aspects of human gaze into deep neural networks (DNNs) has been used to both improve and understand the representational properties of models. We extend this work by simulating peripheral vision -- a key component of human gaze -- in object detection DNNs. To do so, we modify a well-tested model of human peripheral vision (the Texture Tiling Model, TTM) to transform a subset of the MS-COCO dataset to mimic the information loss from peripheral vision. This transformed dataset enables us to (1) evaluate the performance of a variety of pre-trained DNNs on object detection in the periphery, (2) train a Faster-RCNN with peripheral vision input, and (3) test trained DNNs for corruption robustness. Our results show that simulating peripheral vision helps us understand how different DNNs perform under constrained viewing conditions. In addition, we show that one benefit of training with peripheral vision is increased robustness to geometric and high severity image corruptions, but decreased robustness to noise-like corruptions. Altogether, our work makes it easier to model human peripheral vision in DNNs to understand both the role of peripheral vision in guiding gaze behavior and the benefits of human gaze in machine learning. Data and code will be released at https://github.com/RosenholtzLab/coco-periph-gaze
Evaluating Peripheral Vision as an Input Transformation to Understand Object Detection Model Behavior
[ "Anne Harrington", "Vasha DuTell", "Mark Hamilton", "Ayush Tewari", "Simon Stent", "William T. Freeman", "Ruth Rosenholtz" ]
Workshop/Gaze_Meets_ML
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wEoagacatn
@inproceedings{ kuang2023interactionaware, title={Interaction-aware Dynamic 3D Gaze Estimation in Videos}, author={Chenyi Kuang and Jeffrey O. Kephart and Qiang Ji}, booktitle={NeuRIPS 2023 Workshop on Gaze Meets ML}, year={2023}, url={https://openreview.net/forum?id=wEoagacatn} }
Human gaze in in-the-wild and outdoor human activities is a continuous and dynamic process driven by anatomical eye movements such as fixations, saccades and smooth pursuit. However, learning gaze dynamics in videos remains a challenging task because annotating human gaze in videos is labor-intensive. In this paper, we propose a novel method for dynamic 3D gaze estimation in videos by utilizing human interaction labels. Our model contains a temporal gaze estimator built upon Autoregressive Transformer structures. In addition, our model learns the spatial relationship of gaze among multiple subjects by constructing a Human Interaction Graph from predicted gaze and updating the gaze features with a structure-aware Transformer. Our model predicts future gaze conditioned on historical gaze and the gaze interactions in an autoregressive manner. We propose a multi-state training algorithm to alternately update the Interaction module and the dynamic gaze estimation module when training on a mixture of labeled and unlabeled sequences. We show significant improvements in both within-domain gaze estimation accuracy and cross-domain generalization on the physically-unconstrained gaze estimation benchmark.
Interaction-aware Dynamic 3D Gaze Estimation in Videos
[ "Chenyi Kuang", "Jeffrey O. Kephart", "Qiang Ji" ]
Workshop/Gaze_Meets_ML
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=w5uLSMt3Qc
@inproceedings{ o'shea2023supervision, title={SuperVision: Self-Supervised Super-Resolution for Appearance-Based Gaze Estimation}, author={Galen O'Shea and Majid Komeili}, booktitle={NeuRIPS 2023 Workshop on Gaze Meets ML}, year={2023}, url={https://openreview.net/forum?id=w5uLSMt3Qc} }
Gaze estimation is a valuable tool with a broad range of applications in various fields, including medicine, psychology, virtual reality, marketing, and safety. Therefore, it is essential to have gaze estimation software that is cost-efficient and high-performing. Accurately predicting gaze remains a difficult task, particularly in real-world situations where images are affected by motion blur, video compression, and noise. Super-resolution (SR) has been shown to remove these degradations and improve image quality from a visual perspective. This work examines the usefulness of super-resolution for improving appearance-based gaze estimation and demonstrates that not all SR models preserve the gaze direction. We propose a two-step framework for gaze estimation based on the SwinIR super-resolution model. The proposed method consistently outperforms the state-of-the-art, particularly in scenarios involving low-resolution or degraded images. Furthermore, we examine the use of super-resolution through the lens of self-supervised learning for gaze estimation and propose a novel architecture “SuperVision” by fusing an SR backbone network to a ResNet18. While only using 20\% of the data, the proposed SuperVision architecture outperforms the state-of-the-art GazeTR method by 15.5\%.
SuperVision: Self-Supervised Super-Resolution for Appearance-Based Gaze Estimation
[ "Galen O'Shea", "Majid Komeili" ]
Workshop/Gaze_Meets_ML
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=v25W8STO2T
@inproceedings{ banerjee2023an, title={An Attention-based Predictive Agent for Handwritten Numeral/Alphabet Recognition via Generation}, author={Bonny Banerjee and Murchana Baruah}, booktitle={NeuRIPS 2023 Workshop on Gaze Meets ML}, year={2023}, url={https://openreview.net/forum?id=v25W8STO2T} }
A number of attention-based models for either classification or generation of handwritten numerals/alphabets have been reported in the literature. However, generation and classification are done jointly in very few end-to-end models. We propose a predictive agent model that actively samples its visual environment via a sequence of glimpses. The attention is driven by the agent's sensory prediction (or generation) error. At each sampling instant, the model predicts the observation class and completes the partial sequence observed up to that instant. It learns where and what to sample by jointly minimizing the classification and generation errors. Three variants of this model are evaluated for handwriting generation and recognition on images of handwritten numerals and alphabets from benchmark datasets. We show that the proposed model is more efficient in handwritten numeral/alphabet recognition than human participants in a recently published study, as well as a highly-cited attention-based reinforcement model. This is the first known attention-based agent to interact with and learn end-to-end from images for recognition via generation, with a high degree of accuracy and efficiency.
An Attention-based Predictive Agent for Handwritten Numeral/Alphabet Recognition via Generation
[ "Bonny Banerjee", "Murchana Baruah" ]
Workshop/Gaze_Meets_ML
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=tY06Zwn3v4
@inproceedings{ singh2023egsif, title={{EG}-{SIF}: Improving Appearance Based Gaze Estimation using Self Improving Features}, author={Vasudev Singh and Chaitanya Langde and Sourav Lakotia and Vignesh Kannan and Shuaib Ahmed}, booktitle={NeuRIPS 2023 Workshop on Gaze Meets ML}, year={2023}, url={https://openreview.net/forum?id=tY06Zwn3v4} }
Accurate gaze estimation is integral to a myriad of applications, from augmented reality to non-verbal communication analysis. However, the performance of gaze estimation models is often compromised by adverse conditions such as poor lighting, artifacts, low-resolution imagery, etc. To counter these challenges, we introduce the eye gaze estimation with self-improving features (EG-SIF) method, a novel approach that enhances model robustness and performance in suboptimal conditions. The EG-SIF method innovatively segregates eye images by quality, synthesizing pairs of high-quality and corresponding degraded images. It leverages a multitask training paradigm that emphasizes image enhancement through reconstruction from impaired versions. This strategy is not only pioneering in the realm of data segregation based on image quality but also introduces a transformative multitask framework that integrates image enhancement as an auxiliary task. We implement adaptive binning and mixed regression with intermediate supervision to further refine the capability of our model. Empirical evidence demonstrates that our EG-SIF method significantly reduces the angular error in gaze estimation on challenging datasets such as MPIIGaze, improving from 4.64° to 4.53°, and on RTGene, from 7.44° to 7.41°, thereby setting a new benchmark in the field. Our contributions lay the foundation for future eye appearance-based gaze estimation models that can operate reliably despite the presence of image quality adversities.
EG-SIF: Improving Appearance Based Gaze Estimation using Self Improving Features
[ "Vasudev Singh", "Chaitanya Langde", "Sourav Lakotia", "Vignesh Kannan", "Shuaib Ahmed" ]
Workshop/Gaze_Meets_ML
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rtdn6GHiLo
@inproceedings{ mathew2023leveraging, title={Leveraging Multi-Modal Saliency and Fusion for Gaze Target Detection}, author={Athul Mathew and Arshad Khan and Thariq Khalid and Faroq AL-Tam and Riad Souissi}, booktitle={NeuRIPS 2023 Workshop on Gaze Meets ML}, year={2023}, url={https://openreview.net/forum?id=rtdn6GHiLo} }
Gaze target detection (GTD) is the task of predicting where a person in an image is looking. This is a challenging task, as it requires the ability to understand the relationship between the person's head, body, and eyes, as well as the surrounding environment. In this paper, we propose a novel method for GTD that fuses multiple pieces of information extracted from an image. First, we project the 2D image into a 3D representation using monocular depth estimation. We then extract a depth-infused saliency module map, which highlights the most salient ($\textit{attention-grabbing}$) regions in the image for the subject under consideration. We also extract face and depth modalities from the image, and finally fuse all the extracted modalities to identify the gaze target. We quantitatively evaluated our method, including an ablation analysis, on three publicly available datasets, namely VideoAttentionTarget, GazeFollow and GOO-Real, and showed that it outperforms other state-of-the-art methods. This suggests that our method is a promising new approach for GTD.
Leveraging Multi-Modal Saliency and Fusion for Gaze Target Detection
[ "Athul Mathew", "Arshad Khan", "Thariq Khalid", "Faroq AL-Tam", "Riad Souissi" ]
Workshop/Gaze_Meets_ML
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qUfLsi3Vlm
@inproceedings{ ibrayev2023exploring, title={Exploring Foveation and Saccade for Improved Weakly-Supervised Localization}, author={Timur Ibrayev and Manish Nagaraj and Amitangshu Mukherjee and Kaushik Roy}, booktitle={NeuRIPS 2023 Workshop on Gaze Meets ML}, year={2023}, url={https://openreview.net/forum?id=qUfLsi3Vlm} }
Deep neural networks have become the de facto choice as feature extraction engines, ubiquitously used for computer vision tasks. The current approach is to process every input with uniform resolution in a one-shot manner and make all of the predictions at once. However, human vision is an "active" process that not only actively switches from one focus point to another within the visual field, but also applies spatially varying attention centered at such focus points. To bridge the gap, we propose incorporating the bio-plausible mechanisms of foveation and saccades to build an active object localization framework. While foveation enables it to process different regions of the input with variable degrees of detail, saccades allow it to change the focus point of such foveated regions. Our experiments show that these mechanisms improve the quality of predicted bounding boxes by capturing all the essential object parts while minimizing unnecessary background clutter. Additionally, they make the method more resilient by allowing it to detect multiple objects while being trained only on data containing a single object per image. Finally, we explore the alignment of our method with human perception using the interesting "duck-rabbit" optical illusion. The code is available at: https://github.com/TimurIbrayev/FALcon.
Exploring Foveation and Saccade for Improved Weakly-Supervised Localization
[ "Timur Ibrayev", "Manish Nagaraj", "Amitangshu Mukherjee", "Kaushik Roy" ]
Workshop/Gaze_Meets_ML
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=o9YkHqBE5I
@inproceedings{ beckmann2023sam, title={{SAM} meets Gaze: Passive Eye Tracking for Prompt-based Instance Segmentation}, author={Daniel Beckmann and Jacqueline Kockwelp and Joerg Gromoll and Friedemann Kiefer and Benjamin Risse}, booktitle={NeuRIPS 2023 Workshop on Gaze Meets ML}, year={2023}, url={https://openreview.net/forum?id=o9YkHqBE5I} }
The annotation of large new datasets for machine learning is a very time-consuming and expensive process. This is particularly true for pixel-accurate labelling of e.g. segmentation masks. Prompt-based methods have been developed to accelerate this label generation process by allowing the model to incorporate additional clues from other sources such as humans. The recently published Segment Anything foundation model (SAM) extends this approach by providing a flexible framework with a model that was trained on more than 1 billion segmentation masks, while also being able to exploit explicit user input. In this paper, we explore the usage of a passive eye tracking system to collect gaze data during unconstrained image inspections which we integrate as a novel prompt input for SAM. We evaluated our method on the original SAM model and finetuned the prompt encoder and mask decoder for different gaze-based inputs, namely fixation points, blurred gaze maps and multiple heatmap variants. Our results indicate that the acquisition of gaze data is faster than other prompt-based approaches while the segmentation performance stays comparable to the state-of-the-art performance of SAM. Code is available at https://zivgitlab.uni-muenster.de/cvmls/sam_meets_gaze.
SAM meets Gaze: Passive Eye Tracking for Prompt-based Instance Segmentation
[ "Daniel Beckmann", "Jacqueline Kockwelp", "Joerg Gromoll", "Friedemann Kiefer", "Benjamin Risse" ]
Workshop/Gaze_Meets_ML
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=kMNLseFKcT
@inproceedings{ lakshminarasimhan2023planning, title={Planning by Active Sensing}, author={Kaushik Lakshminarasimhan and Seren Zhu and Dora Angelaki}, booktitle={NeuRIPS 2023 Workshop on Gaze Meets ML}, year={2023}, url={https://openreview.net/forum?id=kMNLseFKcT} }
Flexible behavior requires rapid planning, but planning requires a good internal model of the environment. Learning this model by trial-and-error is impractical when acting in complex environments. How do humans plan action sequences efficiently when there is uncertainty about model components? To address this, we asked human participants to navigate complex mazes in virtual reality. We found that the paths taken to gather rewards were close to optimal even though participants had no prior knowledge of these environments. Based on the sequential eye movement patterns observed when participants mentally compute a path before navigating, we develop an algorithm that is capable of rapidly planning under uncertainty by active sensing, i.e., visually sampling information about the structure of the environment. New eye movements are chosen in an iterative manner by following the gradient of a dynamic value map which is updated based on the previous eye movement, until the planning process reaches convergence. In addition to bearing hallmarks of human navigational planning, the proposed algorithm is sample-efficient such that the number of visual samples needed for planning scales linearly with the path length regardless of the size of the state space.
Planning by Active Sensing
[ "Kaushik Lakshminarasimhan", "Seren Zhu", "Dora Angelaki" ]
Workshop/Gaze_Meets_ML
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=hJ5DREWdjs
@inproceedings{ wang2023gazesam, title={Gaze{SAM}: Interactive Image Segmentation with Eye Gaze and Segment Anything Model}, author={Bin Wang and Armstrong Aboah and Zheyuan Zhang and Hongyi Pan and Ulas Bagci}, booktitle={NeuRIPS 2023 Workshop on Gaze Meets ML}, year={2023}, url={https://openreview.net/forum?id=hJ5DREWdjs} }
Interactive image segmentation aims to assist users in efficiently generating high-quality data annotations through user-friendly interactions such as clicking, scribbling, and bounding boxes. However, mouse-based interaction methods can induce user fatigue during large-scale dataset annotation and are not entirely suitable for some domains, such as radiology. This study introduces eye gaze as a novel interactive prompt for image segmentation, different from previous model-based applications. Specifically, leveraging the real-time interactive prompting feature of the recently proposed Segment Anything Model (SAM), we present the GazeSAM system to enable users to collect target segmentation masks by simply looking at the region of interest. GazeSAM tracks users' eye gaze and utilizes it as the input prompt for SAM, generating target segmentation masks in real time. To the best of our knowledge, GazeSAM is the first work to combine eye gaze and SAM for interactive image segmentation. Experimental results demonstrate that GazeSAM can improve efficiency by nearly 50\% in 2D natural image and 3D medical image segmentation tasks. The code is available at https://github.com/ukaukaaaa/GazeSAM.
GazeSAM: Interactive Image Segmentation with Eye Gaze and Segment Anything Model
[ "Bin Wang", "Armstrong Aboah", "Zheyuan Zhang", "Hongyi Pan", "Ulas Bagci" ]
Workshop/Gaze_Meets_ML
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=bGHlCkrceS
@inproceedings{ vegner2023fovae, title={Fo{VAE}: Reconstructive Foveation as a Self-Supervised Variational Inference Task for Visual Representation Learning}, author={Ivan Vegner and Siddharth N and Leonidas A. A. Doumas}, booktitle={NeuRIPS 2023 Workshop on Gaze Meets ML}, year={2023}, url={https://openreview.net/forum?id=bGHlCkrceS} }
We present the first steps toward a model of visual representation learning driven by a self-supervised reconstructive foveation mechanism. Tasked with looking at one visual patch at a time while reconstructing the current patch, predicting the next patch, and reconstructing the full image after a set number of timesteps, FoVAE learns to reconstruct images from the MNIST and Omniglot datasets, while inferring high-level priors about the whole image. In line with theories of Bayesian predictive coding in the brain and prior work on human foveation biases, the model combines bottom-up input processing with top-down learned priors to reconstruct its input, choosing foveation targets that balance local feature predictability with global information gain. FoVAE is able to transfer its priors and foveation policy across datasets to reconstruct samples from untrained datasets in a zero-shot transfer-learning setting. By showing that robust and domain-general policies of generative inference and action-based information gathering emerge from simple biologically-plausible inductive biases, this work paves the way for further exploration of the role of foveation in visual representation learning.
FoVAE: Reconstructive Foveation as a Self-Supervised Variational Inference Task for Visual Representation Learning
[ "Ivan Vegner", "Siddharth N", "Leonidas A. A. Doumas" ]
Workshop/Gaze_Meets_ML
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=arfdjgKyhz
@inproceedings{ peters2023humanlike, title={Human-like multiple object tracking through occlusion via gaze-following}, author={Benjamin Peters and Eivinas Butkus and Nikolaus Kriegeskorte}, booktitle={NeuRIPS 2023 Workshop on Gaze Meets ML}, year={2023}, url={https://openreview.net/forum?id=arfdjgKyhz} }
State-of-the-art multiple object tracking (MOT) models have recently been shown to behave in qualitatively different ways from human observers. They exhibit superhuman performance for large numbers of targets and subhuman performance when targets disappear behind occluders. Here we investigate whether human gaze behavior can help explain differences in human and model behavior. Human subjects watched scenes with objects of various appearances. They tracked a designated subset of the objects, which moved continuously and frequently disappeared behind static black-bar occluders, reporting the designated objects at the end of each trial. We measured eye movements during tracking and tracking accuracy. We found that human gaze behavior is clearly guided by task relevance: designated objects were preferentially fixated. We compared human performance to that of cognitive models inspired by state-of-the-art MOT models with object slots, where each slot represents the model's probabilistic belief about the location and appearance of one object. In our model, incoming observations are unambiguously assigned to slots using the Hungarian algorithm. Locations are tracked probabilistically (given the hard assignment) with one Kalman filter per slot. We equipped the computational models with a fovea, yielding high-precision observations at the center and low-precision observations in the periphery. We found that constraining models to follow the same gaze behavior as humans (imposing the human-measured fixation sequences) best captures human behavioral phenomena. These results demonstrate the importance of gaze behavior, allowing the human visual system to optimally use its limited resources.
Human-like multiple object tracking through occlusion via gaze-following
[ "Benjamin Peters", "Eivinas Butkus", "Nikolaus Kriegeskorte" ]
Workshop/Gaze_Meets_ML
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ai0ES5VAAM
@inproceedings{ russek2023inverting, title={Inverting cognitive models with machine learning to infer preferences from fixations}, author={Evan Russek and Frederick Callaway and Thomas L. Griffiths}, booktitle={NeuRIPS 2023 Workshop on Gaze Meets ML}, year={2023}, url={https://openreview.net/forum?id=ai0ES5VAAM} }
Inferring an individual’s preferences from their observable behavior is a key step in the development of assistive decision-making technology. Although machine learning models such as neural networks could in principle be deployed toward this inference, a large amount of data is required to train such models. Here, we present an approach in which a cognitive model generates simulated data to augment limited human data. Using these data, we train a neural network to invert the model, making it possible to infer preferences from behavior. We show how this approach can be used to infer the value that people assign to food items from their eye movements when choosing between those items. We demonstrate first that neural networks can infer the latent preferences used by the model to generate simulated fixations, and second that simulated data can be beneficial in pretraining a network for predicting human-reported preferences from real fixations. Compared to inferring preferences from choice alone, this approach confers a slight improvement in predicting preferences and also allows prediction to take place prior to the choice being made. Overall, our results suggest that using a combination of neural networks and model-simulated training data is a promising approach for developing technology that infers human preferences.
Inverting cognitive models with machine learning to infer preferences from fixations
[ "Evan Russek", "Frederick Callaway", "Thomas L. Griffiths" ]
Workshop/Gaze_Meets_ML
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=TaLDynXjZa
@inproceedings{ makowski2023detection, title={Detection of Drowsiness and Impending Microsleep from Eye Movements}, author={Silvia Makowski and Paul Prasse and Lena Ann J{\"a}ger and Tobias Scheffer}, booktitle={NeuRIPS 2023 Workshop on Gaze Meets ML}, year={2023}, url={https://openreview.net/forum?id=TaLDynXjZa} }
Drowsiness is a contributing factor in an estimated 12% of all road traffic fatalities. It is known that drowsiness directly affects oculomotor control. We therefore investigate whether drowsiness can be detected based on eye movements. To this end, we develop deep neural sequence models that exploit a person's raw eye-gaze and eye-closure signals to detect drowsiness. We explore three measures of drowsiness ground truth: a widely-used sleepiness self-assessment, reaction time, and impending microsleep in the near future. We find that our sequence models are able to detect drowsiness and outperform a baseline that processes established engineered features. We also find that the risk of a microsleep event in the near future can be predicted more accurately than the sleepiness self-assessment or the reaction time. Moreover, a model that has been trained on predicting microsleep also excels at predicting self-assessed sleepiness in a cross-task evaluation, which indicates that upcoming microsleep is a less noisy proxy of the drowsiness ground truth. We investigate the relative contribution of eye-closure and gaze information to the model's performance. In order to make the topic of drowsiness detection more accessible to the research community, we collect and share eye-gaze data from participants in baseline and sleep-deprived states.
Detection of Drowsiness and Impending Microsleep from Eye Movements
[ "Silvia Makowski", "Paul Prasse", "Lena Ann Jäger", "Tobias Scheffer" ]
Workshop/Gaze_Meets_ML
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=JBIfteTlxk
@inproceedings{ wang2023crafting, title={Crafting Good Views of Medical Images for Contrastive Learning via Expert-level Visual Attention}, author={Sheng Wang and Zihao Zhao and Lichi Zhang and Dinggang Shen and Qian Wang}, booktitle={NeuRIPS 2023 Workshop on Gaze Meets ML}, year={2023}, url={https://openreview.net/forum?id=JBIfteTlxk} }
Recent advancements in contrastive learning methods, which focus on minimizing the distances between different views of the same image, have shown significant improvements. These methods typically craft two randomly augmented views of the same image as a positive pair, expecting the model to capture the inherent representation of the image. However, random data augmentation might not fully preserve image semantic information and can lead to a decline in the quality of the augmented views, thereby affecting the effectiveness of contrastive learning. This issue is particularly pronounced in the domain of medical images, where lesion areas can be subtle and are susceptible to distortion or removal. To address this issue, we leverage insights from radiologists' expertise in diagnosing medical images and propose Gaze-Conditioned Augmentation (GCA) to craft high-quality contrastive views of medical images given the radiologist's visual attention. Specifically, we track the gaze movements of radiologists and model their visual attention when reading to diagnose X-ray images. The learned model can predict the visual attention of a radiologist when presented with a new X-ray image, and further guide the attention-aware augmentation, ensuring that it pays special attention to preserving disease-related abnormalities. Our proposed GCA can significantly improve the performance of contrastive learning methods on knee X-ray images, revealing its potential in medical applications.
Crafting Good Views of Medical Images for Contrastive Learning via Expert-level Visual Attention
[ "Sheng Wang", "Zihao Zhao", "Lichi Zhang", "Dinggang Shen", "Qian Wang" ]
Workshop/Gaze_Meets_ML
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=EykfhjYrM0
@inproceedings{ stock2023memorybased, title={Memory-Based Sequential Attention}, author={Jason Stock and Charles Anderson}, booktitle={NeuRIPS 2023 Workshop on Gaze Meets ML}, year={2023}, url={https://openreview.net/forum?id=EykfhjYrM0} }
Computational models of sequential attention often use recurrent neural networks, which may lead to information loss over accumulated glimpses and an inability to dynamically reweigh glimpses at each step. Addressing the former limitation should result in greater performance, while addressing the latter should enable greater interpretability. In this work, we propose a biologically-inspired model of sequential attention for image classification. Specifically, our algorithm contextualizes the history of observed locations from within an image to inform future gaze points, akin to scanpaths in the biological visual system. We achieve this by using a transformer-based memory module coupled with a reinforcement learning-based learning algorithm, improving both task performance and model interpretability. In addition to empirically evaluating our approach on classical vision tasks, we demonstrate the robustness of our algorithm to different initial locations in the image and provide interpretations of sampled locations from within the trajectory.
Memory-Based Sequential Attention
[ "Jason Stock", "Charles Anderson" ]
Workshop/Gaze_Meets_ML
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=6h8RjNchuh
@inproceedings{ koevesdi2023stattexnet, title={StatTexNet: Evaluating the Importance of Statistical Parameters for Pyramid-Based Texture and Peripheral Vision Models}, author={Christian Koevesdi and Vasha DuTell and Anne Harrington and Mark Hamilton and William T. Freeman and Ruth Rosenholtz}, booktitle={NeuRIPS 2023 Workshop on Gaze Meets ML}, year={2023}, url={https://openreview.net/forum?id=6h8RjNchuh} }
Peripheral vision plays an important role in human vision, directing where and when to make saccades. Although human behavior in the periphery is well-predicted by pyramid-based texture models, these approaches rely on hand-picked image statistics that are still insufficient to capture a wide variety of textures. To develop a more principled approach to statistic selection for texture-based models of peripheral vision, we develop a self-supervised machine learning model to determine what set of statistics are most important for representing texture. Our model, which we call StatTexNet, uses contrastive learning to take a large set of statistics and compress them to a smaller set that best represents texture families. We validate our method using depleted texture images where the constituent statistics are already known. We then use StatTexNet to determine the most and least important statistics for natural (non-depleted) texture images using weight interpretability metrics, finding these to be consistent with previous psychophysical studies. Finally, we demonstrate that textures are most effectively synthesized with the statistics identified as important; we see noticeable deterioration when excluding the most important statistics, but minimal effects when excluding least important. Overall, we develop a machine learning method of selecting statistics that can be used to create better peripheral vision models. With these better models, we can more effectively understand the effects of peripheral vision in human gaze.
StatTexNet: Evaluating the Importance of Statistical Parameters for Pyramid-Based Texture and Peripheral Vision Models
[ "Christian Koevesdi", "Vasha DuTell", "Anne Harrington", "Mark Hamilton", "William T. Freeman", "Ruth Rosenholtz" ]
Workshop/Gaze_Meets_ML
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=5xAhArE66p
@inproceedings{ belen2023temporal, title={Temporal Understanding of Gaze Communication with GazeTransformer}, author={Ryan Anthony de Belen and Gelareh Mohammadi and Arcot Sowmya}, booktitle={NeuRIPS 2023 Workshop on Gaze Meets ML}, year={2023}, url={https://openreview.net/forum?id=5xAhArE66p} }
Gaze plays a crucial role in daily social interactions as it allows humans to communicate intentions effectively. We address the problem of temporal understanding of gaze communication in social videos in two stages. First, we develop GazeTransformer, an end-to-end module that infers atomic-level behaviours in a given frame. Second, we develop a temporal module that predicts event-level behaviours in a video using the inferred atomic-level behaviours. Compared to existing methods, GazeTransformer does not require human head and object locations as input. Instead, it identifies these locations in a parallel and end-to-end manner. In addition, it can predict the attended targets of all predicted humans and infer more atomic-level behaviours that cannot be handled simultaneously by previous approaches. We achieve promising performance on both atomic- and event-level prediction on the (M)VACATION dataset. Code will be available at https://github.com/gazetransformer/gazetransformer.
Temporal Understanding of Gaze Communication with GazeTransformer
[ "Ryan Anthony de Belen", "Gelareh Mohammadi", "Arcot Sowmya" ]
Workshop/Gaze_Meets_ML
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zd2qE6BBdU
@inproceedings{ zhuge2023mindstorms, title={Mindstorms in Natural Language-Based Societies of Mind}, author={Mingchen Zhuge and Haozhe Liu and Francesco Faccio and Dylan R. Ashley and R{\'o}bert Csord{\'a}s and Anand Gopalakrishnan and Abdullah Hamdi and Hasan Abed Al Kader Hammoud and Vincent Herrmann and Kazuki Irie and Louis Kirsch and Bing Li and Guohao Li and Shuming Liu and Jinjie Mai and Piotr Pi{\k{e}}kos and Aditya Ramesh and Imanol Schlag and Weimin Shi and Aleksandar Stani{\'c} and Wenyi Wang and Yuhui Wang and Mengmeng Xu and Deng-Ping Fan and Bernard Ghanem and J{\"u}rgen Schmidhuber}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=zd2qE6BBdU} }
Both Minsky's "society of mind" and Schmidhuber's "learning to think" inspire diverse societies of large multimodal neural networks (NNs) that solve problems by interviewing each other in a "mindstorm." Recent implementations of NN-based societies of minds consist of large language models (LLMs) and other NN-based experts communicating through a natural language interface. In doing so, they overcome the limitations of single LLMs, improving multimodal zero-shot reasoning. In these natural language-based societies of mind (NLSOMs), new agents---all communicating through the same universal symbolic language---are easily added in a modular fashion. To demonstrate the power of NLSOMs, we assemble and experiment with several of them (having up to 129 members), leveraging mindstorms in them to solve some practical AI tasks: visual question answering, image captioning, text-to-image synthesis, 3D generation, egocentric retrieval, embodied AI, and general language-based task solving. We view this as a starting point towards much larger NLSOMs with billions of agents—some of which may be humans. And with this emergence of great societies of heterogeneous minds, many new research questions have suddenly become paramount to the future of artificial intelligence. What should be the social structure of an NLSOM? What would be the (dis)advantages of having a monarchical rather than a democratic structure? How can principles of NN economies be used to maximize the total reward of a reinforcement learning NLSOM? In this work, we identify, discuss, and try to answer some of these questions.
Mindstorms in Natural Language-Based Societies of Mind
[ "Mingchen Zhuge", "Haozhe Liu", "Francesco Faccio", "Dylan R. Ashley", "Róbert Csordás", "Anand Gopalakrishnan", "Abdullah Hamdi", "Hasan Abed Al Kader Hammoud", "Vincent Herrmann", "Kazuki Irie", "Louis Kirsch", "Bing Li", "Guohao Li", "Shuming Liu", "Jinjie Mai", "Piotr Piękos", "Aditya Ramesh", "Imanol Schlag", "Weimin Shi", "Aleksandar Stanić", "Wenyi Wang", "Yuhui Wang", "Mengmeng Xu", "Deng-Ping Fan", "Bernard Ghanem", "Jürgen Schmidhuber" ]
Workshop/R0-FoMo
oral
2305.17066
[ "" ]
https://huggingface.co/papers/2305.17066
2
3
0
26
1
[]
[]
[]
null
https://openreview.net/forum?id=yMkalE52Xv
@inproceedings{ hu2023evoke, title={Evoke: Evoking Critical Thinking Abilities in {LLM}s via Reviewer-Author Prompt Editing}, author={Xinyu Hu and Pengfei Tang and Simiao Zuo and Zihan Wang and Bowen Song and Qiang Lou and Jian Jiao and Denis X Charles}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=yMkalE52Xv} }
Large language models (LLMs) have made impressive progress in natural language processing. These models rely on proper human instructions (or prompts) to generate suitable responses. However, the potential of LLMs is not fully harnessed by commonly-used prompting methods: many human-in-the-loop algorithms employ ad-hoc procedures for prompt selection, while auto prompt generation approaches essentially search all possible prompts randomly and inefficiently. We propose Evoke, an automatic prompt refinement framework. In Evoke, there are two instances of the same LLM: one acts as a reviewer (LLM-Reviewer) and scores the current prompt; the other acts as an author (LLM-Author) and edits the prompt by considering the edit history and the reviewer's feedback. Such an author-reviewer feedback loop ensures that the prompt is refined in each iteration. We further add a data selection approach to Evoke, where only the hard samples are exposed to the LLM. The hard samples are more important because the LLM can develop a deeper understanding of the tasks from them, while the model may already know how to solve the easier cases. Experimental results show that Evoke significantly outperforms existing methods across a diverse range of tasks.
Evoke: Evoking Critical Thinking Abilities in LLMs via Reviewer-Author Prompt Editing
[ "Xinyu Hu", "Pengfei Tang", "Simiao Zuo", "Zihan Wang", "Bowen Song", "Qiang Lou", "Jian Jiao", "Denis X Charles" ]
Workshop/R0-FoMo
poster
2310.13855
[ "" ]
https://huggingface.co/papers/2310.13855
0
1
0
8
1
[]
[]
[]
null
https://openreview.net/forum?id=yLUAM6s1cn
@inproceedings{ raunak2023dissecting, title={Dissecting In-Context Learning of Translations}, author={Vikas Raunak and Arul Menezes and Hany Hassan Awadalla}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=yLUAM6s1cn} }
Most of the recent work in leveraging Large Language Models (LLMs) such as GPT-3 for Machine Translation (MT) through in-context learning of translations has focused on selecting the few-shot demonstration samples. In this work, we characterize the robustness of LLMs from the GPT family to certain perturbations on few-shot translation demonstrations as a means to dissect the in-context learning of translations. In particular, we try to better understand the role of demonstration attributes for the in-context learning of translations through perturbations of high-quality, in-domain demonstrations. We find that asymmetric perturbation of the source-target mappings yield vastly different results. Further, we show that the perturbation of the source side has surprisingly little impact, while target perturbation can drastically reduce translation quality, suggesting that it is the output text distribution that provides the most important learning signal during in-context learning of translations. Based on our findings, we propose a method named Zero-Shot-Context to add this signal automatically in Zero-Shot prompting. Our proposed method greatly improves upon the zero-shot translation performance of GPT-3, thereby making it competitive with few-shot prompted translations.
Dissecting In-Context Learning of Translations
[ "Vikas Raunak", "Arul Menezes", "Hany Hassan Awadalla" ]
Workshop/R0-FoMo
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xGKuxPVwcL
@inproceedings{ dun2023fedjets, title={Fed{JET}s: Efficient Just-In-Time Personalization with Federated Mixture of Experts}, author={Chen Dun and Mirian Hipolito Garcia and Guoqing Zheng and Ahmed Awadallah and Robert Sim and Anastasios Kyrillidis and Dimitrios Dimitriadis}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=xGKuxPVwcL} }
One of the goals in Federated Learning (FL) is to create personalized models that can adapt to the context of each participating client, while utilizing knowledge from a shared global model. Yet, often, personalization requires a fine-tuning step using clients' labeled data in order to achieve good performance. This may not be feasible in scenarios where incoming clients are fresh and/or have privacy concerns. It thus remains open how one can achieve just-in-time personalization in these scenarios. We propose FedJETs, a novel solution that uses a Mixture-of-Experts (MoE) framework within an FL setup. Our method leverages the diversity of the clients to train specialized experts on different subsets of classes, and a gating function to route the input to the most relevant expert(s). Our gating function harnesses the knowledge of a pretrained model (common expert) to enhance its routing decisions on-the-fly. As a highlight, our approach can improve accuracy up to 18% in state-of-the-art FL settings, while maintaining competitive zero-shot performance. In practice, our method can handle non-homogeneous data distributions, scale more efficiently, and improve the state-of-the-art performance on common FL benchmarks.
FedJETs: Efficient Just-In-Time Personalization with Federated Mixture of Experts
[ "Chen Dun", "Mirian Hipolito Garcia", "Guoqing Zheng", "Ahmed Awadallah", "Robert Sim", "Anastasios Kyrillidis", "Dimitrios Dimitriadis" ]
Workshop/R0-FoMo
poster
2306.08586
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=v3JJmLYk12
@inproceedings{ khaddaj2023extra, title={Extra Training Provides a Strong Baseline for {CLIP}}, author={Alaa Khaddaj and Hadi Salman and Andrew Ilyas and Guillaume Leclerc and Aleksander Madry}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=v3JJmLYk12} }
Contrastive Language-Image Pretraining (CLIP) models exhibit good performance on a range of vision tasks. To improve the performance of this class of models even further, several works have proposed to modify the CLIP training procedure. In this work, we show that it is possible to achieve substantial gains using a much simpler strategy. Specifically, existing CLIP models---especially those trained on smaller datasets---tend to be undertrained. As a result, simply extending the training procedure according to a simple heuristic can significantly improve the performance of CLIP models.
Extra Training Provides a Strong Baseline for CLIP
[ "Alaa Khaddaj", "Hadi Salman", "Andrew Ilyas", "Guillaume Leclerc", "Aleksander Madry" ]
Workshop/R0-FoMo
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uwKI6Rwj6S
@inproceedings{ toyer2023tensor, title={Tensor Trust: Interpretable Prompt Injection Attacks from an Online Game}, author={Sam Toyer and Olivia Watkins and Ethan Adrian Mendes and Justin Svegliato and Luke Bailey and Tiffany Wang and Isaac Ong and Karim Elmaaroufi and Pieter Abbeel and Trevor Darrell and Alan Ritter and Stuart Russell}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=uwKI6Rwj6S} }
While Large Language Models (LLMs) are increasingly being used in real-world applications, they remain vulnerable to *prompt injection attacks*: malicious third party prompts that subvert the intent of the system designer. To help researchers study this problem, we present a dataset of over 126,000 prompt injection attacks and 46,000 prompt-based "defenses" against prompt injection, all created by players of an online game called Tensor Trust. The attacks in our dataset have easily interpretable structure, and shed light on the weaknesses of LLMs. We also use the dataset to create a benchmark for resistance to two types of prompt injection, which we refer to as *prompt extraction* and *prompt hijacking*. Our benchmark results show that many models are vulnerable to the attack strategies in the Tensor Trust dataset. Furthermore, we show that some attack strategies from the dataset generalize to deployed LLM-based applications, even though they have a very different set of constraints to the game. We release data and code at [tensortrust.ai/paper](https://tensortrust.ai/paper)
Tensor Trust: Interpretable Prompt Injection Attacks from an Online Game
[ "Sam Toyer", "Olivia Watkins", "Ethan Adrian Mendes", "Justin Svegliato", "Luke Bailey", "Tiffany Wang", "Isaac Ong", "Karim Elmaaroufi", "Pieter Abbeel", "Trevor Darrell", "Alan Ritter", "Stuart Russell" ]
Workshop/R0-FoMo
oral
2311.01011
[ "" ]
https://huggingface.co/papers/2311.01011
1
0
0
12
1
[]
[ "qxcv/tensor-trust" ]
[]
null
https://openreview.net/forum?id=ucTe1eiLc6
@inproceedings{ zhao2023provable, title={Provable Robust Watermarking for {AI}-Generated Text}, author={Xuandong Zhao and Prabhanjan Vijendra Ananth and Lei Li and Yu-Xiang Wang}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=ucTe1eiLc6} }
We study the problem of watermarking large language models (LLMs) generated text — one of the most promising approaches for addressing the safety challenges of LLM usage. In this paper, we propose a rigorous theoretical framework to quantify the effectiveness and robustness of LLM watermarks. We propose a robust and high-quality watermark method, Unigram-Watermark, by extending an existing approach with a simplified fixed grouping strategy. We prove that our watermark method enjoys guaranteed generation quality, correctness in watermark detection, and is robust against text editing and paraphrasing. Experiments on three varying LLMs and two datasets verify that our Unigram-Watermark achieves superior detection accuracy and comparable generation quality in perplexity, thus promoting the responsible use of LLMs.
Provable Robust Watermarking for AI-Generated Text
[ "Xuandong Zhao", "Prabhanjan Vijendra Ananth", "Lei Li", "Yu-Xiang Wang" ]
Workshop/R0-FoMo
poster
2306.17439
[ "https://github.com/xuandongzhao/gptwatermark" ]
https://huggingface.co/papers/2306.17439
1
0
0
4
1
[]
[]
[ "Xuandong/Unigram-Watermark" ]
null
https://openreview.net/forum?id=tnpdX0wnc3
@inproceedings{ mushsharat2023neural, title={Neural Sandbox Framework for Classification: A Concept Based Method of Leveraging {LLM}s for Text Classification}, author={Mostafa Mushsharat and Nabeel Mohammed and Mohammad Ruhul Amin}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=tnpdX0wnc3} }
We introduce a neural sandbox framework for text classification via self-referencing defined label concepts from a Large Language Model (LLM). The framework draws inspiration from the define-optimize alignment problem, in which the motivations of a model are described initially and then the model is optimized to align with these predefined objectives. In our case, we focus on text classification, where we use a pre-trained LLM to convert text into vectors and provide it with specific concept words based on the dataset labels. We then optimize an operator, keeping the LLM frozen, to classify the input text based on how relevant it is to these concept operator words (cop-words). In addition to exhibiting explainable features, our experiments with multiple text classification datasets and LLMs reveal that incorporating our sandbox network generally improves accuracy and macro F1 compared to a baseline. The framework not only improves classification but also provides insights into the model's decision making based on the relevance scores of provided cop-words. We also demonstrate the framework's ability to generalize learned concepts and identify potential biases through spurious relations. However, we found that the model's incentives may not always align with human decisions.
Neural Sandbox Framework for Classification: A Concept Based Method of Leveraging LLMs for Text Classification
[ "Mostafa Mushsharat", "Nabeel Mohammed", "Mohammad Ruhul Amin" ]
Workshop/R0-FoMo
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=tlGjuJEEo1
@inproceedings{ bhattamishra2023understanding, title={Understanding In-Context Learning in Transformers and {LLM}s by Learning to Learn Discrete Functions}, author={Satwik Bhattamishra and Arkil Patel and Phil Blunsom and Varun Kanade}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=tlGjuJEEo1} }
In order to understand the in-context learning phenomenon, recent works have adopted a stylized experimental framework and demonstrated that Transformers can learn gradient-based learning algorithms for various classes of real-valued functions. However, the limitations of Transformers in implementing learning algorithms, and their ability to learn other forms of algorithms are not well understood. Additionally, the degree to which these capabilities are confined to attention-based models is unclear. Furthermore, it remains to be seen whether the insights derived from these stylized settings can be extrapolated to pretrained Large Language Models (LLMs). In this work, we take a step towards answering these questions by demonstrating the following: (a) On a test-bed with a variety of Boolean function classes, we find that Transformers can nearly match the optimal learning algorithm for 'simpler' tasks, while their performance deteriorates on more 'complex' tasks. Additionally, we find that certain attention-free models perform (almost) identically to Transformers on a range of tasks. (b) When provided a *teaching sequence*, i.e. a set of examples that uniquely identifies a function in a class, we show that Transformers learn more sample-efficiently. Interestingly, our results show that Transformers can learn to implement *two distinct* algorithms to solve a *single* task, and can adaptively select the more sample-efficient algorithm depending on the sequence of in-context examples. (c) Lastly, we show that extant LLMs, e.g. LLaMA-2, GPT-4, can compete with nearest-neighbor baselines on prediction tasks that are guaranteed to not be in their training set.
Understanding In-Context Learning in Transformers and LLMs by Learning to Learn Discrete Functions
[ "Satwik Bhattamishra", "Arkil Patel", "Phil Blunsom", "Varun Kanade" ]
Workshop/R0-FoMo
oral
2310.03016
[ "" ]
https://huggingface.co/papers/2310.03016
2
2
0
4
1
[]
[]
[]
null
https://openreview.net/forum?id=tRMiaexFZO
@inproceedings{ chen2023can, title={Can {LLM}-Generated Misinformation Be Detected?}, author={Canyu Chen and Kai Shu}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=tRMiaexFZO} }
The advent of Large Language Models (LLMs) has made a transformative impact. However, the potential that LLMs such as ChatGPT can be exploited to generate misinformation has posed a serious concern to online safety and public trust. A fundamental research question is: will LLM-generated misinformation cause more harm than human-written misinformation? We propose to tackle this question from the perspective of detection difficulty. We first build a taxonomy of LLM-generated misinformation. Then we categorize and validate the potential real-world methods for generating misinformation with LLMs. Then, through extensive empirical investigation, we discover that LLM-generated misinformation can be harder to detect for humans and detectors compared to human-written misinformation with the same semantics, which suggests it can have more deceptive styles and potentially cause more harm. We also discuss the implications of our discovery on combating misinformation in the age of LLMs and the countermeasures.
Can LLM-Generated Misinformation Be Detected?
[ "Canyu Chen", "Kai Shu" ]
Workshop/R0-FoMo
poster
2309.13788
[ "https://github.com/llm-misinformation/llm-misinformation" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rYWD5TMaLj
@inproceedings{ chao2023jailbreaking, title={Jailbreaking Black Box Large Language Models in Twenty Queries}, author={Patrick Chao and Alexander Robey and Edgar Dobriban and Hamed Hassani and George J. Pappas and Eric Wong}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=rYWD5TMaLj} }
There is growing research interest in ensuring that large language models align with human safety and ethical guidelines. Adversarial attacks known as 'jailbreaks' pose a significant threat as they coax models into overriding alignment safeguards. Identifying these vulnerabilities through attacking a language model (red teaming) is instrumental in understanding inherent weaknesses and preventing misuse. We present Prompt Automatic Iterative Refinement (PAIR), which generates semantic jailbreaks with only black-box access to a language model. Empirically, PAIR often requires fewer than 20 queries, orders of magnitude fewer than prior jailbreak attacks. PAIR draws inspiration from the human process of social engineering, and employs an attacker language model to automatically generate adversarial prompts in place of a human. The attacker model uses the target model's response as additional context to iteratively refine the adversarial prompt. PAIR achieves competitive jailbreaking success rates and transferability on open and closed-source language models, including GPT-3.5/4, Vicuna, and PaLM.
Jailbreaking Black Box Large Language Models in Twenty Queries
[ "Patrick Chao", "Alexander Robey", "Edgar Dobriban", "Hamed Hassani", "George J. Pappas", "Eric Wong" ]
Workshop/R0-FoMo
poster
2310.08419
[ "https://github.com/patrickrchao/jailbreakingllms" ]
https://huggingface.co/papers/2310.08419
0
0
0
6
1
[]
[]
[ "TrustSafeAI/GradientCuff-Jailbreak-Defense", "TrustSafeAI/Defensive-Prompt-Patch-Jailbreak-Defense" ]
null
https://openreview.net/forum?id=r7YtqrcPQN
@inproceedings{ roy2023learning, title={Learning Through Consistency for Prompt Tuning}, author={Shuvendu Roy and Ali Etemad}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=r7YtqrcPQN} }
We propose Consistency-guided Prompt learning (CoPrompt), a new fine-tuning method for vision-language models that addresses the challenge of improving the generalization capability of large foundation models while fine-tuning them on downstream tasks in a few-shot setting. The basic idea of CoPrompt is to enforce a consistency constraint in the prediction of the trainable and pre-trained models to prevent overfitting on the downstream task. Additionally, we introduce the following two components into our consistency constraint to further boost the performance: enforcing consistency on two perturbed inputs and combining two dominant paradigms of tuning, prompting and adapter. Enforcing consistency on perturbed input further regularizes the consistency constraint, effectively improving generalization, while tuning additional parameters with prompting and adapters improves the performance on downstream tasks. Extensive experiments show that CoPrompt outperforms existing methods on a range of evaluation suites, including base-to-novel generalization, domain generalization, and cross-dataset evaluation tasks. On the generalization task, CoPrompt improves the state-of-the-art by 2.09\% on the zero-shot task and 1.93\% on the harmonic mean over 11 recognition datasets. Detailed ablation studies show the effectiveness of each of the components in CoPrompt.
Learning Through Consistency for Prompt Tuning
[ "Shuvendu Roy", "Ali Etemad" ]
Workshop/R0-FoMo
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=quQ7PN4Fk2
@inproceedings{ jain2023how, title={How does fine-tuning affect your model? Mechanistic analysis on procedural tasks}, author={Samyak Jain and Robert Kirk and Ekdeep Singh Lubana and Robert P. Dick and Hidenori Tanaka and Tim Rockt{\"a}schel and Edward Grefenstette and David Krueger}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=quQ7PN4Fk2} }
Fine-tuning large pre-trained models has become the *de facto* strategy for developing models that are safe to deploy. However, there has been little work that explains how fine-tuning alters the underlying capabilities learnt by a model during pretraining: does fine-tuning yield entirely novel capabilities or does it just modulate existing ones? We address this question empirically in *synthetic* settings with mechanistic interpretability tools (e.g., network pruning and probing) to understand how the model's underlying capabilities are changing. Our extensive analysis of the effects of fine-tuning shows: (i) fine-tuning rarely alters the underlying model capabilities; (ii) a minimal transformation, which we call a 'wrapper', is typically learned on top of the underlying model capabilities; and (iii) further fine-tuning on a task where such wrapped capabilities are relevant leads to sample-efficient "revival'' of the capability, i.e., the model begins reusing this capability in a few gradient steps. *This indicates practitioners can unintentionally remove a model's safety wrapper by merely fine-tuning it on a superficially unrelated task.* We additionally perform analysis on language models trained on the TinyStories dataset to support our claims in a more realistic setup.
How does fine-tuning affect your model? Mechanistic analysis on procedural tasks
[ "Samyak Jain", "Robert Kirk", "Ekdeep Singh Lubana", "Robert P. Dick", "Hidenori Tanaka", "Tim Rocktäschel", "Edward Grefenstette", "David Krueger" ]
Workshop/R0-FoMo
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qtpTVc1c3c
@inproceedings{ dong2023how, title={How Robust is Google's Bard to Adversarial Image Attacks?}, author={Yinpeng Dong and Huanran Chen and Jiawei Chen and Zhengwei Fang and Xiao Yang and Yichi Zhang and Yu Tian and Hang Su and Jun Zhu}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=qtpTVc1c3c} }
Multimodal Large Language Models (MLLMs) that integrate text and other modalities (especially vision) have achieved unprecedented performance in various multimodal tasks. However, due to the unsolved adversarial robustness problem of vision models, MLLMs can have more severe safety and security risks by introducing the vision inputs. In this work, we study the adversarial robustness of commercial MLLMs, and especially Google's Bard, a representative chatbot with multimodal capability. By attacking white-box surrogate vision encoders or MLLMs, the generated adversarial examples can mislead Bard to output wrong image descriptions with a 22\% success rate based solely on the transferability. We demonstrate that the adversarial examples can also attack other MLLMs, e.g., a 45\% attack success rate against GPT-4V, a 26\% attack success rate against Bing Chat, and a 86\% attack success rate against ERNIE bot. Moreover, we identify two defense mechanisms of Bard, including face detection and toxicity detection of images. We design corresponding attacks to evade these defenses, demonstrating that the current defenses of Bard are also vulnerable. We hope this work can deepen our understanding on the robustness of MLLMs and facilitate future research on defenses. Our code is available at https://github.com/thu-ml/Attack-Bard.
How Robust is Google's Bard to Adversarial Image Attacks?
[ "Yinpeng Dong", "Huanran Chen", "Jiawei Chen", "Zhengwei Fang", "Xiao Yang", "Yichi Zhang", "Yu Tian", "Hang Su", "Jun Zhu" ]
Workshop/R0-FoMo
poster
2309.11751
[ "https://github.com/thu-ml/attack-bard" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=pv0zb04mEj
@inproceedings{ trabucco2023effective, title={Effective Data Augmentation With Diffusion Models}, author={Brandon Trabucco and Kyle Doherty and Max Gurinas and Ruslan Salakhutdinov}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=pv0zb04mEj} }
Data augmentation is one of the most prevalent tools in deep learning, underpinning many recent advances, including those from classification, generative models, and representation learning. The standard approach to data augmentation combines simple transformations like rotations and flips to generate new images from existing ones. However, these new images lack diversity along key semantic axes present in the data. Current augmentations cannot alter the high-level semantic attributes, such as animal species present in a scene, to enhance the diversity of data. We address the lack of diversity in data augmentation with image-to-image transformations parameterized by pre-trained text-to-image diffusion models. Our method edits images to change their semantics using an off-the-shelf diffusion model, and generalizes to novel visual concepts from a few labelled examples. We evaluate our approach on few-shot image classification tasks, and on a real-world weed recognition task, and observe an improvement in accuracy in tested domains.
Effective Data Augmentation With Diffusion Models
[ "Brandon Trabucco", "Kyle Doherty", "Max Gurinas", "Ruslan Salakhutdinov" ]
Workshop/R0-FoMo
oral
2302.07944
[ "https://github.com/brandontrabucco/da-fusion" ]
https://huggingface.co/papers/2302.07944
2
0
0
4
1
[]
[]
[]
null
https://openreview.net/forum?id=pb0dRw3b97
@inproceedings{ yi2023leveraging, title={Leveraging Cross-Modal Neighbor Representation for Improved {CLIP} Classification}, author={Chao Yi and Lu Ren and De-Chuan Zhan and Han-Jia Ye}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=pb0dRw3b97} }
CLIP showcases exceptional cross-modal matching capabilities due to its training on text-image matching tasks. However, without specific optimization for unimodal scenarios, its performance in single-modality feature extraction might be suboptimal. Despite this, some studies have directly used CLIP's image encoder for tasks like few-shot classification, introducing a misalignment between its pre-training objectives and feature extraction methods. This inconsistency can diminish the quality of the image feature representation, adversely affecting CLIP's effectiveness in targeted tasks. In this paper, we view text features as precise neighbors of image features in CLIP's space and present a novel CrOss-moDal nEighbor Representation (CODER) based on the distance structure between images and their neighbor texts. This feature extraction method aligns better with CLIP's pre-training objectives, thereby fully leveraging CLIP's robust cross-modal capabilities. The key to constructing a high-quality CODER lies in how to create a vast amount of high-quality text to match with images. We introduce the Auto Prompt Generator (APG) to autonomously produce the required text in a data-free and training-free manner. We apply CODER to CLIP's zero-shot and few-shot image classification tasks. Experimental results across various datasets and architectures confirm CODER's effectiveness.
Leveraging Cross-Modal Neighbor Representation for Improved CLIP Classification
[ "Chao Yi", "Lu Ren", "De-Chuan Zhan", "Han-Jia Ye" ]
Workshop/R0-FoMo
poster
2404.17753
[ "https://github.com/ycaigogogo/cvpr24-coder" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=p6hzUHjQn1
@inproceedings{ zhao2023group, title={Group Preference Optimization: Few-Shot Alignment of Large Language Models}, author={Siyan Zhao and John Dang and Aditya Grover}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=p6hzUHjQn1} }
Applications of large language models (LLMs) often demand nuanced judgments that vary among different groups. Existing alignment algorithms can be costly, requiring extensive group-specific data and computation. We present Group Preference Optimization (GPO), a framework that efficiently aligns LLMs to group preferences using a few-shot approach. In GPO, we augment the base LLM with an independent transformer module to predict the preferences of a group for the LLM generations. For few-shot learning, this module acts as an in-context autoregressive transformer and is trained via meta-learning on several groups. Through empirical validation on opinion adaptation tasks involving US demographic groups, global countries, and individuals, GPO demonstrates superior alignment performance, requiring fewer group-specific preferences and reduced training and computational resources, surpassing existing strategies like in-context steering and fine-tuning.
Group Preference Optimization: Few-Shot Alignment of Large Language Models
[ "Siyan Zhao", "John Dang", "Aditya Grover" ]
Workshop/R0-FoMo
poster
2310.11523
[ "https://github.com/jamqd/Group-Preference-Optimization" ]
https://huggingface.co/papers/2310.11523
0
0
0
3
1
[]
[]
[]
null
https://openreview.net/forum?id=nVHj8zEmiJ
@inproceedings{ liang2023hart, title={{HART}: Efficient Adaptation via Regularized Autoregressive Parameter Generation}, author={Chen Liang and Nikos Karampatziakis and Tuo Zhao and Weizhu Chen}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=nVHj8zEmiJ} }
Fine-tuning is an effective approach for adapting a pre-trained language model to downstream tasks, but it incurs a high computational cost. To achieve an extremely efficient task adaptation, \citet{phang2022hypertuning} have proposed to use an auxiliary hypernetwork to generate task-specific weights without any backpropagation. A hypernetwork can generate weights for parameter-efficient fine-tuning (PEFT) modules, such as prefixes \citep{li2021prefix} and LoRAs \citep{hu2021lora}, for any unseen task based on a few task-specific demonstration examples, at the cost of a single forward pass. However, hypernetwork training is challenging. Firstly, it is sample inefficient due to the under-exploitation of the dependencies between PEFT weights across layers. Secondly, it exhibits training instability due to the high diversity of few-shot demonstration inputs. To address these limitations, we propose a novel hypernetwork training approach, named HART. It exploits layerwise dependencies by autoregressively generating weights for individual layers, and stabilizes the training by regularizing the consistency between weights generated based on different demonstrations. We train the hypernetwork on a diverse collection of tasks \citep{wang2022super,sanh2021multitask} and evaluate its performance on unseen tasks. HART notably outperforms \citet{phang2022hypertuning} on both T5-Large and T5-XL models.
HART: Efficient Adaptation via Regularized Autoregressive Parameter Generation
[ "Chen Liang", "Nikos Karampatziakis", "Tuo Zhao", "Weizhu Chen" ]
Workshop/R0-FoMo
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=nN8pCTVQZD
@inproceedings{ zhao2023selfexplain, title={{SELF}-{EXPLAIN}: Teaching Large Language Models to Reason Complex Questions by Themselves}, author={Jiachen ZHAO and Zonghai Yao and zhichao Yang and hong yu}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=nN8pCTVQZD} }
Large language models (LLMs) can generate intermediate reasoning steps. To elicit reliable reasoning, the common practice is to employ few-shot chain-of-thought prompting, where several in-context demonstrations for reasoning are prepended to the question. However, such chain-of-thought examples are expensive to craft, especially for professional domains, and can have high variance depending on human annotators. Therefore, this work investigates whether LLMs can teach themselves to reason without human-crafted demonstrations. We propose SELF-EXPLAIN to generate CoT examples by LLMs inspired by ``encoding specificity'' in human memory retrieval. We find that using self-explanations makes LLMs more confident, more calibrated and less biased when answering complex questions. Moreover, we find that prompting with self-explanations can even significantly outperform using human-crafted CoTs on several complex question-answering datasets.
SELF-EXPLAIN: Teaching Large Language Models to Reason Complex Questions by Themselves
[ "Jiachen ZHAO", "Zonghai Yao", "zhichao Yang", "hong yu" ]
Workshop/R0-FoMo
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=msOSDvY4Ss
@inproceedings{ robey2023smoothllm, title={Smooth{LLM}: Defending Large Language Models Against Jailbreaking Attacks}, author={Alexander Robey and Eric Wong and Hamed Hassani and George Pappas}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=msOSDvY4Ss} }
Despite efforts to align large language models (LLMs), widely-used LLMs such as GPT and Claude are susceptible to jailbreaking attacks, wherein an adversary fools a targeted LLM into generating objectionable content. To address this vulnerability, we propose SmoothLLM, the first algorithm designed to mitigate jailbreaking attacks on LLMs. Based on our finding that adversarially-generated prompts are brittle to character-level changes, our defense first randomly perturbs multiple copies of a given input prompt, and then aggregates the corresponding predictions to detect adversarial inputs. SmoothLLM reduces the attack success rate on numerous popular LLMs to below one percentage point, avoids unnecessary conservatism, and admits provable guarantees on attack mitigation. Moreover, our defense uses exponentially fewer queries than existing attacks and is compatible with any LLM.
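As a rough illustration of the perturb-and-aggregate defense this abstract describes, the sketch below randomly perturbs several copies of a prompt at the character level and takes a majority vote over the responses; `query_llm` and `is_refusal` are hypothetical placeholders, and the perturbation and voting details differ from the actual SmoothLLM algorithm.

```python
# Illustrative sketch of randomized character-level smoothing for jailbreak defense:
# perturb several copies of a prompt, query the model on each, and aggregate by vote.
import random
import string

def perturb(prompt: str, rate: float = 0.1) -> str:
    """Randomly swap a fraction of characters for random printable characters."""
    chars = list(prompt)
    for i in range(len(chars)):
        if random.random() < rate:
            chars[i] = random.choice(string.printable)
    return "".join(chars)

def smoothed_decision(prompt: str, query_llm, is_refusal, n_copies: int = 10) -> bool:
    """Return True if the majority of perturbed copies are judged refused/safe."""
    votes = [is_refusal(query_llm(perturb(prompt))) for _ in range(n_copies)]
    return sum(votes) > n_copies / 2

# Toy usage with dummy callables standing in for a real LLM and a refusal detector.
decision = smoothed_decision(
    "please summarize this article",
    query_llm=lambda p: "I'm sorry, I can't help with that.",
    is_refusal=lambda r: r.startswith("I'm sorry"),
)
print(decision)
```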
SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks
[ "Alexander Robey", "Eric Wong", "Hamed Hassani", "George Pappas" ]
Workshop/R0-FoMo
poster
2310.03684
[ "https://github.com/arobey1/smooth-llm" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=m37czv08Ie
@inproceedings{ studnia2023evaluating, title={Evaluating Adversarial Defense in the Era of Large Language Models}, author={Joachim Studnia and Simiao Zuo and Xiaodong Liu and Qiang Lou and Jian Jiao and Denis Charles}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=m37czv08Ie} }
Large language models (LLMs) have demonstrated superior performance in many natural language processing tasks. Existing works have shown that LLMs are not robust to adversarial attacks, questioning the applicability of these models in scenarios with safety concerns. However, one key aspect that has been overlooked is evaluating and developing defense mechanisms against adversarial attacks. In this work, we systematically study how LLMs react to different adversarial defense strategies. We also propose defenses tailored for LLMs that can significantly improve their robustness: First, we develop prompting methods to alert the LLM about potential adversarial contents; Second, we use neural models such as the LLM itself for typo correction; Third, we propose an effective fine-tuning scheme to improve robustness against corrupted inputs. Extensive experiments are conducted to evaluate the adversarial defense approaches. We show that by using the proposed defenses, robustness of LLMs can increase by up to 20\%.
Evaluating Adversarial Defense in the Era of Large Language Models
[ "Joachim Studnia", "Simiao Zuo", "Xiaodong Liu", "Qiang Lou", "Jian Jiao", "Denis Charles" ]
Workshop/R0-FoMo
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=lyRpY2bpBh
@inproceedings{ huang2023lorahub, title={LoraHub: Efficient Cross-Task Generalization via Dynamic Lo{RA} Composition}, author={Chengsong Huang and Qian Liu and Bill Yuchen Lin and Chao Du and Tianyu Pang and Min Lin}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=lyRpY2bpBh} }
Low-rank adaptations (LoRA) are often employed to fine-tune large language models (LLMs) for new tasks. This paper investigates LoRA composability for cross-task generalization and introduces LoraHub, a strategic framework devised for the purposive assembly of LoRA modules trained on diverse given tasks, with the objective of achieving adaptable performance on unseen tasks. With just a few examples from a novel task, LoraHub enables the fluid combination of multiple LoRA modules, eradicating the need for human expertise. Notably, the composition requires neither additional model parameters nor gradients. Our empirical results, derived from the Big-Bench Hard (BBH) benchmark, suggest that LoraHub can effectively mimic the performance of in-context learning in few-shot scenarios, excluding the necessity of in-context examples alongside each inference input. A significant contribution of our research is the fostering of a community for LoRA, where users can share their trained LoRA modules, thereby facilitating their application to new tasks. We anticipate this resource will widen access to and spur advancements in general intelligence as well as LLMs in production. Code is available at github.com/sail-sg/lorahub.
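To make the composition idea concrete, here is a minimal sketch of combining several LoRA modules as a weighted sum of their low-rank updates; the gradient-free search over the mixing weights that LoraHub performs is omitted, and all shapes and coefficient values are illustrative assumptions.

```python
# Minimal sketch: compose LoRA modules by a weighted sum of their low-rank deltas.
import numpy as np

def compose_lora(loras, weights):
    """loras: list of (A, B) pairs with A (d, r) and B (r, k); returns a combined delta W."""
    assert len(loras) == len(weights)
    return sum(w * (A @ B) for w, (A, B) in zip(weights, loras))

d, k, r = 16, 8, 4
loras = [(np.random.randn(d, r), np.random.randn(r, k)) for _ in range(3)]
weights = [0.5, 0.3, 0.2]                    # candidate mixing coefficients
delta_W = compose_lora(loras, weights)       # would be added to the frozen base weight
print(delta_W.shape)
```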
LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition
[ "Chengsong Huang", "Qian Liu", "Bill Yuchen Lin", "Chao Du", "Tianyu Pang", "Min Lin" ]
Workshop/R0-FoMo
oral
2307.13269
[ "https://github.com/sail-sg/lorahub" ]
https://huggingface.co/papers/2307.13269
6
31
2
6
1
[]
[]
[ "sail/lorahub" ]
null
https://openreview.net/forum?id=lfIXPclVHj
@inproceedings{ brunet2023iclmarkup, title={{ICL}-Markup: Structuring In-Context Learning using Soft-Token Tags}, author={Marc-Etienne Brunet and Ashton Anderson and Richard Zemel}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=lfIXPclVHj} }
Large pretrained language models (PLMs) can be rapidly adapted to a wide variety of tasks via a text-to-text approach, where the instruction and input are fed to the model in natural language. Combined with in-context learning (ICL), this paradigm is impressively flexible and powerful. However, it also burdens engineers with an overwhelming amount of choices, many of them arbitrary. Inspired by markup languages like HTML, we contribute a method of using soft-token (a.k.a. tunable token) tags to compose prompt templates. This approach reduces arbitrary decisions and streamlines the application of ICL. Our method is a form of meta-learning for ICL; it learns these tags in advance during a parameter-efficient fine-tuning ``warm-up'' process. The tags can subsequently be used in templates for ICL on new, unseen tasks without any additional fine-tuning. Our experiments with this approach yield promising initial results, improving PLM performance in important enterprise applications such as few-shot and open-world intent detection, as well as text classification in news and legal domains.
ICL-Markup: Structuring In-Context Learning using Soft-Token Tags
[ "Marc-Etienne Brunet", "Ashton Anderson", "Richard Zemel" ]
Workshop/R0-FoMo
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=jYluzCLFDM
@inproceedings{ rasul2023lagllama, title={Lag-Llama: Towards Foundation Models for Time Series Forecasting}, author={Kashif Rasul and Arjun Ashok and Andrew Robert Williams and Arian Khorasani and George Adamopoulos and Rishika Bhagwatkar and Marin Bilo{\v{s}} and Hena Ghonia and Nadhir Hassen and Anderson Schneider and Sahil Garg and Alexandre Drouin and Nicolas Chapados and Yuriy Nevmyvaka and Irina Rish}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=jYluzCLFDM} }
Aiming to build foundation models for time-series forecasting and study their scaling behavior, we present here our work-in-progress on Lag-Llama, a general-purpose univariate probabilistic time-series forecasting model trained on a large collection of time-series data. The model shows good zero-shot prediction capabilities on unseen "out-of-distribution" time-series datasets, outperforming supervised baselines. We use smoothly broken power-laws to fit and predict model scaling behavior. The open source code is made available at https://github.com/kashif/pytorch-transformer-ts.
Lag-Llama: Towards Foundation Models for Time Series Forecasting
[ "Kashif Rasul", "Arjun Ashok", "Andrew Robert Williams", "Arian Khorasani", "George Adamopoulos", "Rishika Bhagwatkar", "Marin Biloš", "Hena Ghonia", "Nadhir Hassen", "Anderson Schneider", "Sahil Garg", "Alexandre Drouin", "Nicolas Chapados", "Yuriy Nevmyvaka", "Irina Rish" ]
Workshop/R0-FoMo
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ikSuM1txmW
@inproceedings{ madasu2023analyzing, title={Analyzing Zero-Shot Abilities of Vision-Language Models on Video Understanding Tasks}, author={Avinash Madasu and Anahita Bhiwandiwalla and Vasudev Lal}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=ikSuM1txmW} }
Foundational multimodal models pre-trained on large-scale image-text pairs or video-text pairs or both have shown strong generalization abilities on downstream tasks. However, unlike image-text models, pretraining video-text models is not always feasible due to the difficulty of collecting large-scale clean and aligned data and the exponential computational costs involved in the pretraining phase. Therefore, the pertinent question to ask is: Can image-text models be adapted to video tasks, and is there any benefit to using these models over pretraining directly on videos? In this work, we focus on this question by proposing a detailed study on the generalization abilities of image-text models when evaluated on video understanding tasks in a zero-shot setting. We investigate 9 foundational image-text models on a diverse set of video tasks that include video action recognition (video AR), video retrieval (video RT), video question answering (video QA), video multiple choice (video MC) and video captioning (video CP). Our experiments show that image-text models exhibit impressive performance on video AR, video RT and video MC. Furthermore, they perform moderately on video captioning and poorly on video QA. These findings shed light on the benefits of adapting foundational image-text models to an array of video tasks while avoiding the costly pretraining step.
Analyzing Zero-Shot Abilities of Vision-Language Models on Video Understanding Tasks
[ "Avinash Madasu", "Anahita Bhiwandiwalla", "Vasudev Lal" ]
Workshop/R0-FoMo
poster
2310.04914
[ "https://github.com/intellabs/multimodal_cognitive_ai" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=h5VySv1O4q
@inproceedings{ bhatia2023tart, title={{TART}: A plug-and-play Transformer module for task-agnostic reasoning}, author={Kush Bhatia and Avanika Narayan and Christopher De Sa and Christopher Re}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=h5VySv1O4q} }
Large language models (LLMs) exhibit in-context learning abilities which enable the same model to perform several tasks without any task-specific training. In contrast, traditional adaptation approaches, such as fine-tuning, modify the underlying models for each specific task. In-context learning, however, consistently underperforms task-specific tuning approaches even when presented with the same examples. While most existing approaches (e.g., prompt engineering) focus on the LLM's learned representations to patch this performance gap, our experiments actually reveal that LLM representations contain sufficient information to make good predictions. As such, we focus on the LLM's reasoning abilities and demonstrate that this performance gap exists due to their inability to perform simple probabilistic reasoning tasks. This raises an intriguing question: Are LLMs actually capable of learning how to reason in a task-agnostic manner? We answer this in the affirmative and, as a proof of concept, propose TART which generically improves an LLM's reasoning abilities using a synthetically trained reasoning module. TART trains this Transformer-based reasoning module in a task-agnostic manner using only synthetic logistic regression tasks and composes it with an arbitrary real-world pre-trained model without any additional training. With a single inference module, TART improves performance across different model families (GPT-Neo, Pythia, Bloom), model sizes (100M - 6B), tasks (14 NLP classification tasks), and even across different modalities (audio and vision). On the RAFT Benchmark, TART improves GPT-Neo (125M)'s performance such that it outperforms Bloom (176B), and is within $4$% of GPT-3.
TART: A plug-and-play Transformer module for task-agnostic reasoning
[ "Kush Bhatia", "Avanika Narayan", "Christopher De Sa", "Christopher Re" ]
Workshop/R0-FoMo
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=gtCTVAu8PR
@inproceedings{ wang2023read, title={{READ}: Recurrent Adaptation of Large Transformers}, author={Sid Wang and John Nguyen and Ke Li and Carole-Jean Wu}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=gtCTVAu8PR} }
In the realm of Natural Language Processing (NLP), large-scale transformers have established themselves as pivotal, achieving unparalleled results across numerous tasks. The conventional approach involves pre-training these models on extensive web-scale data, followed by fine-tuning them for specific downstream tasks. However, the burgeoning size of these models, which has surged almost two orders of magnitude faster than GPU memory since 2018, has rendered their fine-tuning financially and computationally exorbitant, limiting this capability to a select few well-funded institutions. Parameter-efficient transfer learning (PETL) has emerged as a potential solution, aiming to efficiently adapt pre-trained model parameters to target tasks using smaller, task-specific models. Nonetheless, existing PETL methods either introduce additional inference latency or marginally reduce memory requirements during training, thus not fully addressing the primary motivation behind PETL. This paper introduces REcurrent ADaption (READ), a novel, lightweight, and memory-efficient fine-tuning method that incorporates a small RNN network alongside the backbone model. READ not only achieves comparable model quality to traditional fine-tuning, saving over 84\% in energy consumption, but also demonstrates scalability and independence from the backbone model size. Through extensive experiments on various NLP benchmarks, including the GLUE benchmark, READ showcases robust performance and high efficiency, reducing model training memory consumption by 56\% and GPU energy usage by 84\% relative to full-tuning, without significantly impacting inference latency and memory. We provide a theoretically justified, scalable solution for fine-tuning large transformers.
READ: Recurrent Adaptation of Large Transformers
[ "Sid Wang", "John Nguyen", "Ke Li", "Carole-Jean Wu" ]
Workshop/R0-FoMo
poster
2305.15348
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=gfsVPiIFPr
@inproceedings{ sakhinana2023hierarchical, title={Hierarchical Network Fusion for Multi-Modal Electron Micrograph Representation Learning with Foundational Large Language Models}, author={Sagar Sakhinana and Sannidhi Geethan and Venkataramana Runkana}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=gfsVPiIFPr} }
Characterizing materials with electron micrographs is a crucial task in fields such as semiconductors and quantum materials. The complex hierarchical structure of micrographs often poses challenges for traditional classification methods. In this study, we propose an innovative backbone architecture for analyzing electron micrographs. We create multi-modal representations of the micrographs by tokenizing them into patch sequences and, additionally, representing them as vision graphs, commonly referred to as patch attributed graphs. We introduce the Hierarchical Network Fusion (HNF), a multi-layered network structure architecture that facilitates information exchange between the multi-modal representations and knowledge integration across different patch resolutions. Furthermore, we leverage large language models (LLMs) to generate detailed technical descriptions of nano-materials as auxiliary information to assist in the downstream task. We utilize a cross-modal attention mechanism for knowledge fusion across cross-domain representations (both image-based and linguistic insights) to predict the nanomaterial category. This multi-faceted approach promises a more comprehensive and accurate representation and classification of micrographs for nanomaterial identification. Our framework outperforms traditional methods, overcoming challenges posed by distributional shifts, and facilitating high-throughput screening.
Hierarchical Network Fusion for Multi-Modal Electron Micrograph Representation Learning with Foundational Large Language Models
[ "Sagar Sakhinana", "Sannidhi Geethan", "Venkataramana Runkana" ]
Workshop/R0-FoMo
poster
2408.13661
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=fzLpelzcXl
@inproceedings{ cen2023sad, title={{SAD}: Segment Any {RGBD}}, author={Jun CEN and Yizheng Wu and Kewei Wang and Xingyi Li and Jingkang Yang and Yixuan Pei and Lingdong Kong and Ziwei Liu and Qifeng Chen}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=fzLpelzcXl} }
The Segment Anything Model (SAM) has demonstrated its effectiveness in segmenting any part of 2D RGB images. A lot of SAM-based applications have shown amazing performance. However, SAM exhibits a stronger emphasis on texture information while paying less attention to geometry information when segmenting RGB images. To address this limitation, we propose the Segment Any RGBD (SAD) model, which is specifically designed to extract geometry information directly from images. Inspired by the natural ability of humans to identify objects through the visualization of depth maps, SAD utilizes SAM to segment the rendered depth map, thus providing cues with enhanced geometry information and mitigating the issue of over-segmentation. Compared to other SAM-based projects, we are the first to use SAM to segment non-RGB images. We further include the open-vocabulary semantic segmentation in our framework to provide the semantic labels of each segment.
SAD: Segment Any RGBD
[ "Jun CEN", "Yizheng Wu", "Kewei Wang", "Xingyi Li", "Jingkang Yang", "Yixuan Pei", "Lingdong Kong", "Ziwei Liu", "Qifeng Chen" ]
Workshop/R0-FoMo
poster
2305.14207
[ "https://github.com/jun-cen/segmentanyrgbd" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=f7mLYe95m4
@inproceedings{ dutta2023estimating, title={Estimating Uncertainty in Multimodal Foundation Models using Public Internet Data}, author={Shiladitya Dutta and Hongbo Wei and Lars van der Laan and Ahmed Alaa}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=f7mLYe95m4} }
Foundation models are trained on vast amounts of data at scale using self-supervised learning, enabling adaptation to a wide range of downstream tasks. At test time, these models exhibit zero-shot capabilities through which they can classify previously unseen (user-specified) categories. In this paper, we address the problem of quantifying uncertainty in these zero-shot predictions. We propose a heuristic approach for uncertainty estimation in zero-shot settings using conformal prediction with web data. Given a set of classes at test time, we conduct zero-shot classification with CLIP-style models using a prompt template, e.g., ``an image of a <category>'', and use the same template as a search query to source calibration data from the open web. Given a web-based calibration set, we apply conformal prediction with a novel conformity score that accounts for potential errors in retrieved web data. We evaluate the utility of our proposed method in Biomedical foundation models; our preliminary results show that web-based conformal prediction sets achieve the target coverage with satisfactory efficiency on a variety of biomedical datasets.
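The sketch below shows the generic split-conformal mechanics that such a web-calibrated procedure would build on: compute a finite-sample-adjusted quantile of calibration nonconformity scores, then include every class whose score falls under the threshold. The calibration scores here are synthetic stand-ins, and the paper's noise-aware conformity score for imperfect web data is not reproduced.

```python
# Generic split conformal prediction mechanics (not the paper's web-aware score).
import numpy as np

def conformal_threshold(cal_scores: np.ndarray, alpha: float = 0.1) -> float:
    """Finite-sample-adjusted (1 - alpha) quantile of calibration nonconformity scores."""
    n = len(cal_scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))      # finite-sample correction
    return np.sort(cal_scores)[min(k, n) - 1]

def prediction_set(class_scores: np.ndarray, threshold: float) -> np.ndarray:
    """Include every class whose nonconformity (1 - score) falls below the threshold."""
    return np.where(1.0 - class_scores <= threshold)[0]

cal = 1.0 - np.random.beta(5, 2, size=500)       # stand-in calibration nonconformity scores
tau = conformal_threshold(cal, alpha=0.1)
print(prediction_set(np.array([0.9, 0.4, 0.7, 0.2]), tau))
```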
Estimating Uncertainty in Multimodal Foundation Models using Public Internet Data
[ "Shiladitya Dutta", "Hongbo Wei", "Lars van der Laan", "Ahmed Alaa" ]
Workshop/R0-FoMo
oral
2310.09926
[ "https://github.com/alaalab/webcp" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ext42s06eY
@inproceedings{ sakhinana2023crossing, title={Crossing New Frontiers: Knowledge-Augmented Large Language Model Prompting for Zero-Shot Text-Based De Novo Molecule Design}, author={Sagar Sakhinana and Venkataramana Runkana}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=ext42s06eY} }
Molecule design is a multifaceted approach that leverages computational methods and experiments to optimize molecular properties, fast-tracking new drug discoveries, innovative material development, and more efficient chemical processes. Recently, text-based molecule design has emerged, inspired by next-generation AI tasks analogous to foundational vision-language models. Our study explores the use of knowledge-augmented prompting of large language models (LLMs) for the zero-shot text-conditional de novo molecular generation task. Our approach uses task-specific instructions and a few demonstrations to address distributional shift challenges when constructing augmented prompts for querying LLMs to generate molecules consistent with technical descriptions. Our framework proves effective, outperforming state-of-the-art (SOTA) baseline models on benchmark datasets.
Crossing New Frontiers: Knowledge-Augmented Large Language Model Prompting for Zero-Shot Text-Based De Novo Molecule Design
[ "Sagar Sakhinana", "Venkataramana Runkana" ]
Workshop/R0-FoMo
poster
2408.11866
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=erl90pLIH0
@inproceedings{ veldanda2023investigating, title={Investigating Hiring Bias in Large Language Models}, author={Akshaj Kumar Veldanda and Fabian Grob and Shailja Thakur and Hammond Pearce and Benjamin Tan and Ramesh Karri and Siddharth Garg}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=erl90pLIH0} }
Large Language Models (LLMs) such as GPT-3.5, Bard, and Claude exhibit applicability across numerous tasks. One domain of interest is their use in algorithmic hiring, specifically in matching resumes with job categories. Yet, this introduces issues of bias on protected attributes like gender, race and maternity status. The seminal work of Bertrand and Mullainathan (2003) set the gold-standard for identifying hiring bias via field experiments where the response rate for identical resumes that differ only in protected attributes, e.g., racially suggestive names such as Emily or Lakisha, is compared. We replicate this experiment on state-of-art LLMs to evaluate bias (or lack thereof) on gender, race, maternity status, pregnancy status, and political affiliation. We evaluate LLMs on two tasks: (1) matching resumes to job categories; and (2) summarizing resumes with employment relevant information. Overall, LLMs are robust across race and gender. They differ in their performance on pregnancy status and political affiliation. We use contrastive input decoding on open-source LLMs to uncover potential sources of bias.
Investigating Hiring Bias in Large Language Models
[ "Akshaj Kumar Veldanda", "Fabian Grob", "Shailja Thakur", "Hammond Pearce", "Benjamin Tan", "Ramesh Karri", "Siddharth Garg" ]
Workshop/R0-FoMo
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=eNL8QJlWxc
@inproceedings{ guo2023lowa, title={{LOWA}: Localize Objects in the Wild with Attributes}, author={Xiaoyuan Guo and Kezhen Chen and Jinmeng Rao and Yawen Zhang and Baochen Sun and Jie Yang}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=eNL8QJlWxc} }
Existing open-vocabulary object detectors can struggle with uncommon or fine-grained classes, as the model and users may have different understandings of object names. Incorporating attributes such as color, shape, and size can help to reduce this inconsistency and make interactive detection more convenient and flexible. Motivated by this, we present LOWA, a new method for localizing objects with attributes effectively in the wild. To train LOWA, we propose a multi-step vision-language training strategy to learn object detection and recognition with class names as well as attribute information, which empowers users to flexibly customize text queries and extend to fine-grained detection with attribute and object information for a wider range of applications. LOWA is built on top of a two-tower vision-language architecture and consists of a standard vision transformer as the image encoder and a similar transformer as the text encoder. To learn the alignment between visual and text inputs at the instance level, we train LOWA with three training steps: object-level training, attribute-aware learning, and free-text joint training of objects and attributes. This training strategy first ensures correct object detection, then incorporates instance-level attribute information, and finally balances the object class and attribute sensitivity. We evaluate our model performance of attribute classification and attribute localization on the Open-Vocabulary Attribute Detection (OVAD) benchmark and the Visual Attributes in the Wild (VAW) dataset, and experiments indicate strong zero-shot performance. Ablation studies additionally demonstrate the effectiveness of each training step of our approach.
LOWA: Localize Objects in the Wild with Attributes
[ "Xiaoyuan Guo", "Kezhen Chen", "Jinmeng Rao", "Yawen Zhang", "Baochen Sun", "Jie Yang" ]
Workshop/R0-FoMo
poster
2305.20047
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=eLmjmG39KP
@inproceedings{ chen2023understanding, title={Understanding the Vulnerability of {CLIP} to Image Compression}, author={Cangxiong Chen and Vinay P. Namboodiri and Julian Padget}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=eLmjmG39KP} }
CLIP is a widely used foundational vision-language model that is used for zero-shot image recognition and other image-text alignment tasks. We demonstrate that CLIP is vulnerable to changes in image quality under compression. This surprising result is further analysed using an attribution method, Integrated Gradients. Using this attribution method, we are able to better understand, both quantitatively and qualitatively, exactly how compression affects the zero-shot recognition accuracy of this model. We evaluate this extensively on CIFAR-10 and STL-10. Our work provides the basis to understand this vulnerability of CLIP and can help us develop more effective methods to improve the robustness of CLIP and other vision-language models.
Understanding the Vulnerability of CLIP to Image Compression
[ "Cangxiong Chen", "Vinay P. Namboodiri", "Julian Padget" ]
Workshop/R0-FoMo
poster
2311.14029
[ "https://github.com/CangxiongChen/understanding_CLIP_vulnerability" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=cmOzZuiFPs
@inproceedings{ kirsch2023towards, title={Towards General-Purpose In-Context Learning Agents}, author={Louis Kirsch and James Harrison and C. Freeman and Jascha Sohl-Dickstein and J{\"u}rgen Schmidhuber}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=cmOzZuiFPs} }
Reinforcement Learning (RL) algorithms are usually hand-crafted, driven by the research and engineering of humans. An alternative approach is to automate this research process via meta-learning. A particularly ambitious objective is to automatically discover new RL algorithms from scratch that use in-context learning to learn-how-to-learn entirely from data while also generalizing to a wide range of environments. Those RL algorithms are implemented entirely in neural networks, by conditioning on previous experience from the environment, without any explicit optimization-based routine at meta-test time. To achieve generalization, this requires a broad task distribution of diverse and challenging environments. Our Transformer-based Generally Learning Agents (GLAs) are an important first step in this direction. Our GLAs are meta-trained using supervised learning techniques on an offline dataset with experiences from RL environments that is augmented with random projections to generate task diversity. During meta-testing our agents perform in-context meta-RL on entirely different robotic control problems such as Reacher, Cartpole, or HalfCheetah that were not in the meta-training distribution.
Towards General-Purpose In-Context Learning Agents
[ "Louis Kirsch", "James Harrison", "C. Freeman", "Jascha Sohl-Dickstein", "Jürgen Schmidhuber" ]
Workshop/R0-FoMo
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=c4BeWwaUiN
@inproceedings{ halbe2023hepco, title={He{PC}o: Data-Free Heterogeneous Prompt Consolidation for Continual Federated Learning}, author={Shaunak Halbe and James Smith and Junjiao Tian and Zsolt Kira}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=c4BeWwaUiN} }
In this paper, we focus on the important yet understudied problem of Continual Federated Learning (CFL), where a server communicates with a set of clients to incrementally learn new concepts over time without sharing or storing any data. The complexity of this problem is compounded by challenges from both the Continual and Federated Learning perspectives. Specifically, models trained in a CFL setup suffer from catastrophic forgetting which is exacerbated by data heterogeneity across clients. Existing attempts at this problem tend to impose large overheads on clients and communication channels or require access to stored data which renders them unsuitable for real-world use due to privacy. We study this problem in the context of Foundation Models and showcase their effectiveness in mitigating forgetting while minimizing overhead costs and without requiring access to any stored data. We achieve this by leveraging a prompting based approach (such that only prompts and classifier heads have to be communicated) and proposing a novel and lightweight generation and distillation scheme to aggregate client models at the server. We formulate this problem for image classification and establish strong baselines for comparison, conduct experiments on CIFAR-100 as well as challenging, large-scale datasets like ImageNet-R and DomainNet. Our approach outperforms both existing methods and our own baselines by more than 7% while significantly reducing communication and client-level computation costs.
HePCo: Data-Free Heterogeneous Prompt Consolidation for Continual Federated Learning
[ "Shaunak Halbe", "James Smith", "Junjiao Tian", "Zsolt Kira" ]
Workshop/R0-FoMo
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=byI1dQkkf9
@inproceedings{ kwon2023image, title={Image Clustering Conditioned on Text Criteria}, author={Sehyun Kwon and Jaeseung Park and Minkyu Kim and Jaewoong Cho and Ernest K. Ryu and Kangwook Lee}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=byI1dQkkf9} }
Classical clustering methods do not provide users with direct control of the clustering results, and the clustering results may not be consistent with the relevant criterion that a user has in mind. In this work, we present a new methodology for performing image clustering based on user-specified criteria in the form of text by leveraging modern Vision-Language Models and Large Language Models. We call our method Image Clustering Conditioned on Text Criteria (IC$|$TC), and it represents a different paradigm of image clustering. IC$|$TC requires a minimal and practical degree of human intervention and grants the user significant control over the clustering results in return. Our experiments show that IC$|$TC can effectively cluster images with various criteria, such as human action, physical location, or the person's mood, while significantly outperforming baselines.
Image Clustering Conditioned on Text Criteria
[ "Sehyun Kwon", "Jaeseung Park", "Minkyu Kim", "Jaewoong Cho", "Ernest K. Ryu", "Kangwook Lee" ]
Workshop/R0-FoMo
poster
2310.18297
[ "https://github.com/sehyunkwon/ictc" ]
https://huggingface.co/papers/2310.18297
0
0
0
6
1
[]
[]
[]
null
https://openreview.net/forum?id=aKSiwNGqx1
@inproceedings{ ackermann2023on, title={On the Relationship between Skill Neurons and Robustness in Prompt Tuning}, author={Leon Ackermann and Xenia Ohmer}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=aKSiwNGqx1} }
Prompt Tuning is a popular parameter-efficient finetuning method for pre-trained large language models (PLMs). Recently, based on experiments with RoBERTa, it has been suggested that Prompt Tuning activates specific neurons in the transformer's feed-forward networks, that are highly predictive and selective for the given task. In this paper, we study the robustness of Prompt Tuning in relation to these "skill neurons", using RoBERTa and T5. We show that prompts tuned for a specific task are transferable to tasks of the same type but are not very robust to adversarial data, with higher robustness for T5 than RoBERTa. At the same time, we replicate the existence of skill neurons in RoBERTa and further show that skill neurons also seem to exist in T5. Interestingly, the skill neurons of T5 determined on non-adversarial data are also among the most predictive neurons on the adversarial data, which is not the case for RoBERTa. We conclude that higher adversarial robustness may be related to a model's ability to activate the relevant skill neurons on adversarial data.
On the Relationship between Skill Neurons and Robustness in Prompt Tuning
[ "Leon Ackermann", "Xenia Ohmer" ]
Workshop/R0-FoMo
poster
2309.12263
[ "https://github.com/leonackermann/robust-neurons" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=a3ZQVXD0Hv
@inproceedings{ xu2023latent, title={Latent Skill Discovery for Chain-of-Thought Reasoning}, author={Zifan Xu and Haozhu Wang and Dmitriy Bespalov and Peter Stone and Yanjun Qi}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=a3ZQVXD0Hv} }
Recent advances in Large Language Models (LLMs) have led to an emergent ability of chain-of-thought (CoT) prompting, a prompt reasoning strategy that adds intermediate rationale steps between questions and answers to construct prompts. Conditioned on these prompts, LLMs can effectively learn in context to generate rationales that lead to more accurate answers than when answering the same question directly. To design LLM prompts, one important setting, called demonstration selection, considers selecting demonstrations from an example bank. Existing methods use various heuristics for this selection, but for CoT prompting, which involves unique rationales, it is essential to base the selection upon the intrinsic skills that CoT rationales need, for instance, the skills of addition or subtraction for math word problems. To address this requirement, we introduce a novel approach named Reasoning Skill Discovery (RSD) that uses unsupervised learning to create a latent space representation of rationales, called a reasoning skill. Simultaneously, RSD learns a reasoning policy to determine the required reasoning skill for a given question. This can then guide the selection of examples that demonstrate the required reasoning skills. Our approach offers several desirable properties: it is (1) theoretically grounded, (2) sample-efficient, requiring no LLM inference or manual prompt design, and (3) LLM-agnostic. Empirically, RSD outperforms existing methods by up to 6% in terms of the answer accuracy across multiple reasoning tasks.
Latent Skill Discovery for Chain-of-Thought Reasoning
[ "Zifan Xu", "Haozhu Wang", "Dmitriy Bespalov", "Peter Stone", "Yanjun Qi" ]
Workshop/R0-FoMo
poster
2312.04684
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=YrYcoV2dAk
@inproceedings{ zhang2023visual, title={Visual Cropping Improves Zero-Shot Question Answering of Multimodal Large Language Models}, author={Jiarui Zhang and Mahyar Khayatkhoei and Prateek Chhikara and Filip Ilievski}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=YrYcoV2dAk} }
Multimodal Large Language Models (LLMs) have recently achieved promising zero-shot accuracy on visual question answering (VQA) -- a fundamental task affecting various downstream applications and domains. Given the great potential for the broad use of these models, it is important to investigate their limitations in dealing with different image and question properties. In this work, we investigate whether multimodal LLMs can perceive small details as well as large details in images. In particular, we show that their zero-shot accuracy in answering visual questions is very sensitive to the size of the visual subject of the question, declining up to $46\%$ with size. Furthermore, we show that this effect is causal by observing that human visual cropping can significantly mitigate their sensitivity to size. Inspired by the usefulness of human cropping, we then propose three automatic visual cropping methods as inference time mechanisms to improve the zero-shot performance of multimodal LLMs. We study their effectiveness on four popular VQA datasets, and a subset of the VQAv2 dataset tailored towards fine visual details. Our findings suggest that multimodal LLMs should be used with caution in detail-sensitive VQA applications, and that visual cropping is a promising direction to improve their zero-shot performance.
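A minimal sketch of the kind of inference-time cropping pipeline the abstract motivates might look like the following; `locate_subject` and `vqa_model` are hypothetical placeholders, and the paper's actual automatic cropping strategies are not reproduced here.

```python
# Toy sketch: crop the image around the question's visual subject before querying a VQA model.
from PIL import Image

def crop_around(image: Image.Image, box, margin: float = 0.2) -> Image.Image:
    """Crop to a bounding box (left, top, right, bottom) enlarged by a relative margin."""
    l, t, r, b = box
    w, h = r - l, b - t
    return image.crop((int(max(0, l - margin * w)), int(max(0, t - margin * h)),
                       int(min(image.width, r + margin * w)), int(min(image.height, b + margin * h))))

def answer_with_cropping(image, question, vqa_model, locate_subject):
    box = locate_subject(image, question)        # e.g., from attention maps or a detector
    return vqa_model(crop_around(image, box), question)

# Dummy usage with placeholder callables and a blank image.
img = Image.new("RGB", (100, 100))
print(answer_with_cropping(img, "what color is the ball?",
                           vqa_model=lambda im, q: f"cropped view of size {im.size}",
                           locate_subject=lambda im, q: (30, 30, 60, 60)))
```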
Visual Cropping Improves Zero-Shot Question Answering of Multimodal Large Language Models
[ "Jiarui Zhang", "Mahyar Khayatkhoei", "Prateek Chhikara", "Filip Ilievski" ]
Workshop/R0-FoMo
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=Yd2S8flZKm
@inproceedings{ tanneru2023quantifying, title={Quantifying Uncertainty in Natural Language Explanations of Large Language Models}, author={Sree Harsha Tanneru and Chirag Agarwal and Himabindu Lakkaraju}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=Yd2S8flZKm} }
Large Language Models (LLMs) are increasingly used as powerful tools for several high-stakes natural language processing (NLP) applications. Recent works on prompting claim to elicit intermediate reasoning steps and key tokens that serve as proxy explanations for LLM predictions. However, there is no certainty whether these explanations are reliable and reflect the LLM’s behavior. In this work, we make one of the first attempts at quantifying the uncertainty in explanations of LLMs. To this end, we propose two novel metrics --- $\textit{Verbalized Uncertainty}$ and $\textit{Probing Uncertainty}$ --- to quantify the uncertainty of generated explanations. While verbalized uncertainty involves prompting the LLM to express its confidence in its explanations, probing uncertainty leverages sample and model perturbations as a means to quantify the uncertainty. Our empirical analysis of benchmark datasets reveals that verbalized uncertainty is not a reliable estimate of explanation confidence. Further, we show that the probing uncertainty estimates are correlated with the faithfulness of an explanation, with lower uncertainty corresponding to explanations with higher faithfulness. Our study provides insights into the challenges and opportunities of quantifying uncertainty in LLM explanations, contributing to the broader discussion of the trustworthiness of foundation models.
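As a toy illustration of a probing-style estimate in the spirit of this abstract, the sketch below perturbs the input several times, collects the model's explanations, and uses their pairwise disagreement as an uncertainty proxy; `generate_explanation`, `perturb`, and the token-overlap similarity are illustrative assumptions rather than the paper's metrics.

```python
# Toy probing-style uncertainty: disagreement among explanations of perturbed prompts.
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / max(len(sa | sb), 1)

def probing_uncertainty(prompt: str, generate_explanation, perturb, n: int = 5) -> float:
    """1 minus mean pairwise similarity across explanations of perturbed prompts."""
    expls = [generate_explanation(perturb(prompt)) for _ in range(n)]
    sims = [jaccard(x, y) for x, y in combinations(expls, 2)]
    return 1.0 - sum(sims) / len(sims)

# Dummy usage: a deterministic "model" yields zero uncertainty.
print(probing_uncertainty("why is the sky blue?",
                          generate_explanation=lambda p: "rayleigh scattering of sunlight",
                          perturb=lambda p: p))
```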
Quantifying Uncertainty in Natural Language Explanations of Large Language Models
[ "Sree Harsha Tanneru", "Chirag Agarwal", "Himabindu Lakkaraju" ]
Workshop/R0-FoMo
oral
2311.03533
[ "https://github.com/harsha070/uncertainty-quantification-nle" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=YMutYSbvVe
@inproceedings{ sun2023benchmarking, title={Benchmarking Robustness of Text-Image Composed Retrieval}, author={Shitong Sun and Jindong Gu and Shaogang Gong}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=YMutYSbvVe} }
Text-image composed retrieval aims to retrieve the target image through the composed query, which is specified in the form of an image plus some text that describes desired modifications to the input image. It has recently attracted attention due to its ability to leverage both information-rich images and concise language to precisely express the requirements for target images. However, the robustness of these approaches against real-world corruptions or further text understanding has never been studied. In this paper, we perform the first robustness study and establish three new diversified benchmarks for systematic analysis of text-image composed retrieval against natural corruptions in both vision and text, and further probe textual understanding. For natural corruption analysis, we introduce two new large-scale benchmark datasets, CIRR-C and FashionIQ-C, for testing in the open domain and fashion domain respectively, both of which apply 15 visual corruptions and 7 textual corruptions. For textual understanding analysis, we introduce a new diagnostic dataset, CIRR-D, by expanding the original raw data with synthetic data, which contains modified text so as to better probe textual understanding ability, including numerical variation, attribute variation, object removal, background variation, and fine-grained evaluation. The code and benchmark datasets are available at https://github.com/SunTongtongtong/Benchmark-Robustness-Text-Image-Compose-Retrieval.
Benchmarking Robustness of Text-Image Composed Retrieval
[ "Shitong Sun", "Jindong Gu", "Shaogang Gong" ]
Workshop/R0-FoMo
poster
2311.14837
[ "https://github.com/suntongtongtong/benchmark-robustness-text-image-compose-retrieval" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=XoacWibt7b
@inproceedings{ adila2023foundation, title={Foundation Models Can Robustify Themselves, For Free}, author={Dyah Adila and Changho Shin and Linrong Cai and Frederic Sala}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=XoacWibt7b} }
Zero-shot inference is a powerful paradigm that enables the use of large pretrained models for downstream classification tasks without further training. However, these models are vulnerable to inherited biases that can impact their performance. The traditional solution is fine-tuning, but this undermines the key advantage of pretrained models, which is their ability to be used out-of-the-box. We propose RoboShot, a method that improves the robustness of pretrained model embeddings in a fully zero-shot fashion. First, we use language models (LMs) to obtain useful insights from task descriptions. These insights are embedded and used to remove harmful and boost useful components in embeddings---without any supervision. Theoretically, we provide a simple and tractable model for biases in zero-shot embeddings and give a result characterizing under what conditions our approach can boost performance. Empirically, we evaluate RoboShot on nine image and NLP classification tasks and show an average improvement of 15.98% over several zero-shot baselines. Additionally, we demonstrate that RoboShot is compatible with a variety of pretrained and language models.
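The embedding-correction idea can be sketched in a few lines: project out "harmful" concept directions (e.g., embeddings of LM-generated insight strings) from a zero-shot embedding and renormalize. The directions and dimensions below are random stand-ins, and the boosting of useful components described in the abstract is omitted.

```python
# Simplified sketch of removing spurious concept directions from zero-shot embeddings.
import numpy as np

def reject(v: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component of v along a (unit-normalized) concept direction."""
    d = direction / np.linalg.norm(direction)
    return v - (v @ d) * d

def robustify(embedding: np.ndarray, harmful_dirs) -> np.ndarray:
    for d in harmful_dirs:
        embedding = reject(embedding, d)
    return embedding / np.linalg.norm(embedding)

z = np.random.randn(512)                  # stand-in zero-shot embedding
spurious = [np.random.randn(512)]         # e.g., an embedded "background is water" insight
print(np.linalg.norm(robustify(z, spurious)))
```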
Foundation Models Can Robustify Themselves, For Free
[ "Dyah Adila", "Changho Shin", "Linrong Cai", "Frederic Sala" ]
Workshop/R0-FoMo
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=VxEr7qpxJo
@inproceedings{ albalak2023improving, title={Improving Few-Shot Generalization by Exploring and Exploiting Auxiliary Data}, author={Alon Albalak and Colin Raffel and William Yang Wang}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=VxEr7qpxJo} }
Few-shot learning is valuable in many real-world applications, but learning a generalizable model without overfitting to the few labeled datapoints is challenging. In this work, we focus on Few-shot Learning with Auxiliary Data (FLAD), a training paradigm that assumes access to auxiliary data during few-shot learning in hopes of improving generalization. Previous works have proposed automated methods for mixing auxiliary and target data, but these methods typically scale linearly (or worse) with the number of auxiliary datasets, limiting their practicality. In this work, we relate FLAD to the explore-exploit dilemma that is central to the multi-armed bandit setting and derive algorithms whose computational complexity is independent of the number of auxiliary datasets, allowing us to scale to 100x more auxiliary datasets than prior methods. We propose two algorithms -- EXP3-FLAD and UCB1-FLAD -- and compare them with prior FLAD methods that either explore or exploit, finding that the combination of exploration and exploitation is crucial. Through extensive experimentation we find that our methods outperform all pre-existing FLAD methods by 4\% and lead to the first 3 billion parameter language models that outperform the 175 billion parameter GPT-3.
Improving Few-Shot Generalization by Exploring and Exploiting Auxiliary Data
[ "Alon Albalak", "Colin Raffel", "William Yang Wang" ]
Workshop/R0-FoMo
poster
2302.00674
[ "https://github.com/alon-albalak/flad" ]
https://huggingface.co/papers/2302.00674
2
0
0
3
1
[]
[]
[]
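The bandit view above can be illustrated with a generic EXP3 sampler over auxiliary datasets. This sketch uses only the standard library, the reward is a random placeholder rather than the paper's gradient-based signal, and it does not reproduce EXP3-FLAD's exact updates or complexity guarantees.

```python
# Illustrative EXP3-style bandit sampler over auxiliary datasets for FLAD.
import math
import random


class EXP3Sampler:
    def __init__(self, n_arms: int, gamma: float = 0.1):
        self.n = n_arms
        self.gamma = gamma
        self.weights = [1.0] * n_arms

    def probs(self) -> list[float]:
        total = sum(self.weights)
        return [(1 - self.gamma) * w / total + self.gamma / self.n
                for w in self.weights]

    def draw(self) -> int:
        return random.choices(range(self.n), weights=self.probs(), k=1)[0]

    def update(self, arm: int, reward: float) -> None:
        """Importance-weighted exponential update for the played arm (reward in [0, 1])."""
        p = self.probs()[arm]
        self.weights[arm] *= math.exp(self.gamma * (reward / p) / self.n)


if __name__ == "__main__":
    aux_datasets = ["dataset_%d" % i for i in range(100)]
    sampler = EXP3Sampler(len(aux_datasets))
    for step in range(1000):
        arm = sampler.draw()
        # Placeholder reward: in FLAD this would reflect how much a batch from
        # aux_datasets[arm] helps the few-shot target task (e.g., gradient alignment).
        reward = random.random()
        sampler.update(arm, min(max(reward, 0.0), 1.0))
    best = max(range(len(aux_datasets)), key=lambda i: sampler.weights[i])
    print("most useful auxiliary dataset:", aux_datasets[best])
```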
null
https://openreview.net/forum?id=VU4h3siRAw
@inproceedings{ saxena2023predicting, title={Predicting the Performance of Foundation Models via Agreement-on-the-line}, author={Rahul Saxena and Aman Mehra and Taeyoun Kim and Christina Baek and J Zico Kolter and Aditi Raghunathan}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=VU4h3siRAw} }
Estimating out-of-distribution (OOD) performance is critical to safely deploying machine learning models. Recently, Baek et al. showed that the phenomenon ``agreement-on-the-line'' can be a reliable method for predicting OOD accuracy of models in an ensemble consisting largely of CNNs trained from scratch. However, it is now increasingly common to lightly fine-tune foundation models, and it is unclear whether such fine-tuning is sufficient to produce enough diversity in models for such agreement-based methods to work properly. In this paper, we develop methods for reliably applying agreement-on-the-line-based performance estimation to fine-tuned foundation models. In particular, we first study the case of fine-tuning a single foundation model, where we extensively study how different types of randomness (linear head initialization, hyperparameter selection, data subsetting, and data shuffling) contribute to the agreement-on-the-line of the resulting model sets; we find, somewhat surprisingly, that it is typically possible to obtain strong agreement via random initialization of the linear head alone. Next, we study how multiple foundation models, pretrained on different data sets but fine-tuned on the same task, may or may not produce agreement; we show, again rather surprisingly, that the diversity of such models is already sufficient and not too disparate for them to all lie on the same agreement line. In total, these methods enable reliable and efficient estimation of OOD accuracy for fine-tuned foundation models, without leveraging any labeled OOD data.
Predicting the Performance of Foundation Models via Agreement-on-the-line
[ "Rahul Saxena", "Aman Mehra", "Taeyoun Kim", "Christina Baek", "J Zico Kolter", "Aditi Raghunathan" ]
Workshop/R0-FoMo
poster
2404.01542
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
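A rough sketch of how agreement-on-the-line can be turned into an OOD accuracy estimate: pairwise agreements (which need no OOD labels) determine a probit-scale linear trend that is then applied to each model's ID accuracy. It assumes numpy, uses random predictions as stand-ins for real model outputs, and only loosely follows the fitting recipe of Baek et al.

```python
# Rough sketch of agreement-on-the-line OOD accuracy estimation.
from itertools import combinations
from statistics import NormalDist

import numpy as np

_norm = NormalDist()


def probit(p: float) -> float:
    """Inverse normal CDF with clipping so 0/1 agreements do not blow up."""
    return _norm.inv_cdf(min(max(float(p), 1e-6), 1 - 1e-6))


def agreement(p1: np.ndarray, p2: np.ndarray) -> float:
    return float((p1 == p2).mean())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_models, n_id, n_ood, n_classes = 6, 500, 500, 10
    id_preds = [rng.integers(0, n_classes, n_id) for _ in range(n_models)]
    ood_preds = [rng.integers(0, n_classes, n_ood) for _ in range(n_models)]
    id_labels = rng.integers(0, n_classes, n_id)

    # Fit the probit-scale line relating ID agreement to OOD agreement.
    xs, ys = [], []
    for i, j in combinations(range(n_models), 2):
        xs.append(probit(agreement(id_preds[i], id_preds[j])))
        ys.append(probit(agreement(ood_preds[i], ood_preds[j])))
    slope, bias = np.polyfit(xs, ys, deg=1)

    # Apply the same line to each model's ID accuracy to predict its OOD accuracy.
    for m in range(n_models):
        id_acc = (id_preds[m] == id_labels).mean()
        ood_acc_hat = _norm.cdf(slope * probit(id_acc) + bias)
        print(f"model {m}: ID acc {id_acc:.3f} -> predicted OOD acc {ood_acc_hat:.3f}")
```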
null
https://openreview.net/forum?id=V70F9FByZp
@inproceedings{ yu2023automatic, title={Automatic Hallucination Assessment for Aligned Large Language Models via Transferable Adversarial Attacks}, author={Xiaodong Yu and Hao Cheng and Xiaodong Liu and Dan Roth and Jianfeng Gao}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=V70F9FByZp} }
Although remarkable progress has been achieved in preventing LLM hallucinations using instruction tuning and retrieval augmentation, it is currently difficult to measure the reliability of LLMs using available static data that is often not challenging enough and could suffer from data leakage. Inspired by adversarial machine learning, this paper aims to develop an automatic method for generating new evaluation data by appropriately modifying existing data on which LLMs behave faithfully. Specifically, this paper presents AutoDebug, an LLM-based framework that uses prompt chaining to generate transferable adversarial attacks (in the form of question-answering examples). We seek to understand the extent to which these trigger hallucination behavior in LLMs. We first implement our framework using ChatGPT and evaluate the resulting two variants of a popular open-domain question-answering dataset, Natural Questions (NQ), on a collection of open-source and proprietary LLMs under various prompting settings. Our generated evaluation data is human-readable and, as we show, humans can answer these modified questions well. Nevertheless, we observe pronounced accuracy drops across multiple LLMs, including GPT-4. Our experimental results confirm that LLMs are likely to hallucinate in two categories of question-answering scenarios where (1) there are conflicts between knowledge given in the prompt and their parametric knowledge, or (2) the knowledge expressed in the prompt is complex. Finally, the adversarial examples generated by the proposed method are transferable across all considered LLMs, making our approach viable for LLM-based debugging using more cost-effective LLMs.
Automatic Hallucination Assessment for Aligned Large Language Models via Transferable Adversarial Attacks
[ "Xiaodong Yu", "Hao Cheng", "Xiaodong Liu", "Dan Roth", "Jianfeng Gao" ]
Workshop/R0-FoMo
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
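A hypothetical sketch of the prompt-chaining idea behind AutoDebug: one chain rewrites a QA example's evidence so it conflicts with a model's parametric knowledge, and the example is kept only if a target model then answers against the given passage. The call_llm function is a placeholder rather than a real API, and the prompts are illustrative, not the paper's.

```python
# Illustrative sketch of an AutoDebug-style prompt chain for building
# knowledge-conflict QA examples. `call_llm` is a hypothetical placeholder;
# wire it to an LLM client of your choice before running the chain.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")


def make_conflicting_example(question: str, passage: str, answer: str) -> dict:
    """Chain step 1: rewrite the passage; chain step 2: extract the new answer."""
    edit_prompt = (
        "Rewrite the passage so it still answers the question, but with a "
        "different, plausible answer span. Return ONLY the new passage.\n"
        f"Question: {question}\nPassage: {passage}\nOriginal answer: {answer}"
    )
    new_passage = call_llm(edit_prompt)
    extract_prompt = (
        "What is the answer to the question according to this passage? "
        f"Answer with a short span only.\nQuestion: {question}\nPassage: {new_passage}"
    )
    new_answer = call_llm(extract_prompt)
    return {"question": question, "passage": new_passage, "answer": new_answer}


def triggers_hallucination(target_llm, example: dict) -> bool:
    """True if the target model ignores the prompt evidence, i.e. a failure case."""
    reply = target_llm(
        f"Answer using ONLY the passage.\nPassage: {example['passage']}\n"
        f"Question: {example['question']}\nAnswer:"
    )
    return example["answer"].lower() not in reply.lower()
```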
null
https://openreview.net/forum?id=UcsXamgPtT
@inproceedings{ chitale2023task, title={Task Arithmetic with Lo{RA} for Continual Learning}, author={Rajas Chitale and Ankit Vaidya and Aditya Kane and Archana Santosh Ghotkar}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=UcsXamgPtT} }
Continual learning refers to the problem where the training data is available in sequential chunks, termed "tasks". Much of the progress in continual learning has been stunted by the problem of catastrophic forgetting, which is caused by sequential training of the model on streams of data. Moreover, it becomes computationally expensive to sequentially train large models multiple times. To mitigate both of these problems at once, we propose a novel method to continually train transformer-based vision models using low-rank adaptation and task arithmetic. Our method completely bypasses the problem of catastrophic forgetting and also reduces the computational requirements of training the model on each task. When aided with a small memory of 10 samples per class, our method achieves performance close to full-set finetuning. We present rigorous ablations to support the prowess of our method.
Task Arithmetic with LoRA for Continual Learning
[ "Rajas Chitale", "Ankit Vaidya", "Aditya Kane", "Archana Santosh Ghotkar" ]
Workshop/R0-FoMo
poster
2311.02428
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
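A minimal numpy sketch of task arithmetic with LoRA adapters as described above: each task contributes a low-rank update, and the merged continual model adds the (optionally scaled) sum of these task vectors to the frozen base weights. The shapes and the scaling factor alpha are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of task arithmetic with LoRA adapters for continual learning:
# each task t contributes a low-rank update B_t @ A_t to a frozen base weight.
import numpy as np


def merge_lora_task_vectors(w_base: np.ndarray,
                            adapters: list[tuple[np.ndarray, np.ndarray]],
                            alpha: float = 1.0) -> np.ndarray:
    """w_base: [d_out, d_in]; each adapter is (B: [d_out, r], A: [r, d_in])."""
    delta = np.zeros_like(w_base)
    for b, a in adapters:
        delta += b @ a                      # task vector for one task
    return w_base + alpha * delta           # continual model after all tasks


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d_out, d_in, rank, n_tasks = 64, 64, 4, 3
    w0 = rng.normal(size=(d_out, d_in))
    adapters = [(rng.normal(size=(d_out, rank)) * 0.01,
                 rng.normal(size=(rank, d_in)) * 0.01) for _ in range(n_tasks)]
    w_merged = merge_lora_task_vectors(w0, adapters, alpha=1.0)
    print(np.linalg.norm(w_merged - w0))    # total shift contributed by the tasks
```

Because the base weights stay frozen and each task only adds its own low-rank delta, earlier tasks are never overwritten during training, which is how this kind of scheme sidesteps catastrophic forgetting.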
null
https://openreview.net/forum?id=SkEG9q1Rtw
@inproceedings{ zhou2023batch, title={Batch Calibration: Rethinking Calibration for In-Context Learning and Prompt Engineering}, author={Han Zhou and Xingchen Wan and Lev Proleev and Diana Mincu and Jilin Chen and Katherine Heller and Subhrajit Roy}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=SkEG9q1Rtw} }
Prompting and in-context learning (ICL) have become efficient learning paradigms for large language models (LLMs). However, LLMs suffer from prompt brittleness and various bias factors in the prompt, including but not limited to the formatting, the choice of verbalizers, and the ICL examples. To address this problem, which results in unexpected performance degradation, calibration methods have been developed to mitigate the effects of these biases while recovering LLM performance. In this work, we first conduct a systematic analysis of the existing calibration methods, where we both provide a unified view and reveal the failure cases. Inspired by these analyses, we propose Batch Calibration (BC), a simple yet intuitive method that controls the contextual bias from the batched input, unifies various prior approaches, and effectively addresses the aforementioned issues. BC is zero-shot, inference-only, and incurs negligible additional costs. We validate the effectiveness of BC with PaLM 2-(S, M, L) and CLIP models and demonstrate state-of-the-art performance over previous calibration baselines across more than 10 natural language understanding tasks.
Batch Calibration: Rethinking Calibration for In-Context Learning and Prompt Engineering
[ "Han Zhou", "Xingchen Wan", "Lev Proleev", "Diana Mincu", "Jilin Chen", "Katherine Heller", "Subhrajit Roy" ]
Workshop/R0-FoMo
oral
2309.17249
[ "" ]
https://huggingface.co/papers/2309.17249
1
0
0
7
1
[]
[]
[]
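A minimal sketch of the inference-only correction that Batch Calibration performs: the contextual bias of the prompt is estimated as the mean class score over a batch of test inputs and subtracted before the argmax. The scores below are random stand-ins for LLM verbalizer log-probabilities, and the full method is simplified.

```python
# Minimal sketch of Batch Calibration at inference time.
import numpy as np


def batch_calibrate(scores: np.ndarray) -> np.ndarray:
    """scores: [batch, n_classes] log-probs; returns calibrated class predictions."""
    bias = scores.mean(axis=0, keepdims=True)    # contextual prior of the prompt
    return (scores - bias).argmax(axis=1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    logp = rng.normal(size=(32, 3))
    logp[:, 0] += 1.5                             # a biased prompt favoring class 0
    print("uncalibrated:", np.bincount(logp.argmax(axis=1), minlength=3))
    print("calibrated:  ", np.bincount(batch_calibrate(logp), minlength=3))
```

Because the bias term is estimated from the unlabeled batch itself, this kind of correction needs no training or labels and adds essentially no cost at inference time, consistent with the zero-shot, inference-only framing above.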
null
https://openreview.net/forum?id=SJwXWwc47T
@inproceedings{ hewitt2023teaching, title={Teaching language models with canonical examples}, author={John Hewitt and Sarah Li Chen and Percy Liang and Christopher D Manning}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=SJwXWwc47T} }
It is easy to write a desirable or undesirable language model behavior (e.g., knowledge---The capital of Mauritius is Port Louis---or undesirable stereotypes---Researchers are always coldhearted) but it is difficult to make the model robustly generalize from these canonical examples. We formalize this task: a learning method takes a model and simple canonical examples and must produce a model that (1) generalizes to naturalistic examples, (2) stays within a bound of the original model's loss, and (3) performs well on a ``hard negative'' distribution to test overgeneralization. We build on the Backpack language model; its predictions take the form of a sparse weighted sum over a very large sense vector bank. We select and finetune a few Backpack senses per canonical example and find that this substantially outperforms other training methods. The Backpack we work with is only 170m parameters; yet, we find that it can improve much larger models: a product-of-experts ensemble between the 35x larger GPT-J-6B and the ratio of finetuned to pretrained Backpack outperforms finetuning GPT-J itself.
Teaching language models with canonical examples
[ "John Hewitt", "Sarah Li Chen", "Percy Liang", "Christopher D Manning" ]
Workshop/R0-FoMo
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
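The product-of-experts ensemble mentioned above can be sketched as combining the large model's log-probabilities with the log-ratio of the finetuned to the pretrained smaller model. The beta weight and the random logits below are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a product-of-experts next-token distribution:
# log p(x) is proportional to log p_large(x) + beta * (log p_ft(x) - log p_pre(x)).
import numpy as np


def log_softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))


def poe_next_token_logits(logits_large: np.ndarray,
                          logits_finetuned: np.ndarray,
                          logits_pretrained: np.ndarray,
                          beta: float = 1.0) -> np.ndarray:
    return (log_softmax(logits_large)
            + beta * (log_softmax(logits_finetuned) - log_softmax(logits_pretrained)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vocab = 100                                   # stand-in vocabulary size
    combined = poe_next_token_logits(rng.normal(size=vocab),
                                     rng.normal(size=vocab),
                                     rng.normal(size=vocab))
    print("next token:", int(combined.argmax()))
```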
null
https://openreview.net/forum?id=S2FtwvKiiY
@inproceedings{ guo2023how, title={How Do Large Multimodal Models Really Fare in Classical Vision Few-Shot Challenges? A Deep Dive}, author={Qing Guo and Prashan Wanigasekara and Jian Zheng and Jacob Zhiyuan Fang and Xinwei Deng and Chenyang Tao}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=S2FtwvKiiY} }
Recent advances in multimodal foundational models have demonstrated marvelous in-context learning capabilities for diverse vision-language tasks. However, existing literature has mainly focused on few-shot learning tasks similar to their NLP counterparts. It is unclear whether these foundation models can also address classical vision challenges such as few-shot classification, which in some settings (e.g., 5-way 5-shot) necessitates sophisticated reasoning over several dozen images -- a challenging task for learning systems. In this work, we take a deep dive to probe the potential and limitations of existing multimodal models on this problem. Our investigation reveals that while these models under careful calibration can outperform dedicated visual models in complex narratable scenes, they can falter with more abstract visual inputs. Moreover, we also investigate curriculum learning and find out how it can mitigate the performance gap by smoothly bridging verbal and nonverbal reasoning for vision-language tasks.
How Do Large Multimodal Models Really Fare in Classical Vision Few-Shot Challenges? A Deep Dive
[ "Qing Guo", "Prashan Wanigasekara", "Jian Zheng", "Jacob Zhiyuan Fang", "Xinwei Deng", "Chenyang Tao" ]
Workshop/R0-FoMo
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
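For intuition about why a 5-way 5-shot episode stresses multimodal in-context learning, here is a sketch of packing such an episode into an interleaved image-text prompt. The message format, field names, and instruction wording are generic assumptions rather than any particular API or the paper's prompting.

```python
# Illustrative sketch: pack a classical 5-way 5-shot episode into an
# interleaved image-text prompt for a multimodal model.
def build_episode_prompt(support: dict[str, list[str]], query_image: str) -> list:
    """support maps class name -> list of 5 image paths; returns interleaved content."""
    content = ["You will see labeled example images, then classify a new image."]
    for label, images in support.items():
        for path in images:
            content += [{"image": path}, f"This image shows class: {label}."]
    content += [{"image": query_image},
                "Which class does this last image show? Answer with the class name only."]
    return content


if __name__ == "__main__":
    classes = {f"class_{i}": [f"img_{i}_{j}.png" for j in range(5)] for i in range(5)}
    prompt = build_episode_prompt(classes, "query.png")
    print(len(prompt), "content blocks")   # dozens of interleaved items per episode
```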
null
https://openreview.net/forum?id=RvmR9gOYXB
@inproceedings{ goyal2023think, title={Think before you speak: Training Language Models With Pause Tokens}, author={Sachin Goyal and Ziwei Ji and Ankit Singh Rawat and Aditya Krishna Menon and Sanjiv Kumar and Vaishnavh Nagarajan}, booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models}, year={2023}, url={https://openreview.net/forum?id=RvmR9gOYXB} }
Language models generate responses by producing a series of tokens in immediate succession: the $(K+1)^{\rm th}$ token is an outcome of manipulating $K$ hidden vectors per layer, one vector per preceding token. What if instead we were to let the model manipulate say, $K+10$ hidden vectors, before it outputs the $(K+1)^{\rm th}$ token? We operationalize this idea by performing training and inference on language models with a (learnable) $\textit{pause}$ token, a sequence of which is appended to the input prefix. We then delay extracting the model's outputs until the last pause token is seen, thereby allowing the model to process extra computation before committing to an answer. We empirically evaluate $\textit{pause-training}$ on decoder-only models of 1B and 130M parameters with causal pretraining on C4, and on downstream tasks covering reasoning, question-answering, general understanding and fact recall. Our main finding is that inference-time delays show gains when the model is both pre-trained and finetuned with delays. For the 1B model, we witness gains on eight tasks, most prominently, a gain of $18\\%$ EM score on the QA task of SQuAD, $8\\%$ on CommonSenseQA and $1\\%$ accuracy on the reasoning task of GSM8k. Our work raises a range of conceptual and practical future research questions on making delayed next-token prediction a widely applicable new paradigm.
Think before you speak: Training Language Models With Pause Tokens
[ "Sachin Goyal", "Ziwei Ji", "Ankit Singh Rawat", "Aditya Krishna Menon", "Sanjiv Kumar", "Vaishnavh Nagarajan" ]
Workshop/R0-FoMo
poster
2310.02226
[ "" ]
https://huggingface.co/papers/2310.02226
0
2
0
6
1
[]
[]
[]
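A toy sketch of pause-token inference as described above: a run of pause tokens is appended to the prompt, and the answer is read only after the last pause token. The model.generate call and DummyModel are assumptions for illustration; this is not the paper's training or inference code.

```python
# Toy sketch of pause-token inference with a stand-in autoregressive model.
def with_pauses(prompt_ids: list[int], pause_id: int, n_pauses: int = 10) -> list[int]:
    """Append a run of <pause> tokens to the prompt ids."""
    return prompt_ids + [pause_id] * n_pauses


def generate_with_pause_delay(model, prompt_ids, pause_id, n_pauses=10, max_new=32):
    """Generate on the pause-padded prompt and strip everything up to and
    including the final pause token before returning the answer tokens."""
    padded = with_pauses(prompt_ids, pause_id, n_pauses)
    output_ids = model.generate(padded, max_new_tokens=max_new)  # assumed interface
    return output_ids[len(padded):]          # tokens produced after the last pause


if __name__ == "__main__":
    class DummyModel:                        # stand-in for a real language model
        def generate(self, ids, max_new_tokens=32):
            return list(ids) + [42] * max_new_tokens

    prompt = [5, 6, 7]
    answer = generate_with_pause_delay(DummyModel(), prompt, pause_id=0, n_pauses=10)
    print(answer[:5])
```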