bibtex_url (null) | proceedings (string, length 42) | bibtext (string, lengths 197–792) | abstract (string, lengths 303–3.45k) | title (string, lengths 10–159) | authors (sequence, lengths 1–28, nullable) | id (string, 44 classes) | type (string, 16 classes) | arxiv_id (string, lengths 0–10) | GitHub (sequence, length 1) | paper_page (string, 444 classes) | n_linked_authors (int64, -1 to 9) | upvotes (int64, -1 to 42) | num_comments (int64, -1 to 13) | n_authors (int64, -1 to 92) | paper_page_exists_pre_conf (int64, 0 or 1) | Models (sequence, lengths 0–100) | Datasets (sequence, lengths 0–11) | Spaces (sequence, lengths 0–100) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=RSGmZ7HZaA | @inproceedings{
khona2023stepwise,
title={Stepwise Inference in Transformers: Exploring a Synthetic Graph Navigation Task},
author={Mikail Khona and Maya Okawa and Rahul Ramesh and Kento Nishi and Robert P. Dick and Ekdeep Singh Lubana and Hidenori Tanaka},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=RSGmZ7HZaA}
} | Taking correct steps through elementary logical operations is the essence of logical reasoning, culminating in precise planning outcomes.
While such \emph{stepwise inference} approaches have demonstrated benefits in Large Language Models (LLMs), conducting an accurate quantitative evaluation is challenging, given their extensive scale, complexity, and lack of accessibility.
We introduce a minimal synthetic setup, where an autoregressive language model solves a navigation task on directed acyclic graphs (DAGs), taking inspiration from computational graphs and execution traces.
By implementing training with sample paths from start to goal node in a 'step-by-step' manner, we perform systematic experiments and develop novel analyses illustrating that stepwise navigation proves advantageous when the underlying graph is hierarchical and generalization necessitates the stitching of subpaths observed during pretraining.
Further, we observe a diversity-accuracy tradeoff while varying sampling temperature and a bias towards generating shorter paths.
We next elucidate how in-context chain-of-thought exemplars can steer the model's navigation.
Importantly, these exemplars can guide the model to follow a path of reasoning we provide, instead of relying on its potentially biased priors.
Together, this work showcases the utility and adaptability of this paradigm in exploring the complexities of logical reasoning and planning in LLMs. | Stepwise Inference in Transformers: Exploring a Synthetic Graph Navigation Task | [
"Mikail Khona",
"Maya Okawa",
"Rahul Ramesh",
"Kento Nishi",
"Robert P. Dick",
"Ekdeep Singh Lubana",
"Hidenori Tanaka"
] | Workshop/R0-FoMo | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=PFS4ffN9Yx | @inproceedings{
khattab2023dspy,
title={{DSP}y: Compiling Declarative Language Model Calls into Self-Improving Pipelines},
author={Omar Khattab and Arnav Singhvi and Paridhi Maheshwari and Zhiyuan Zhang and Keshav Santhanam and Sri Vardhamanan A and Saiful Haq and Ashutosh Sharma and Thomas T. Joshi and Hanna Moazam and Heather Miller and Matei Zaharia and Christopher Potts},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=PFS4ffN9Yx}
} | The ML community is rapidly exploring techniques for prompting language models (LMs), but existing LM pipelines often rely on hard-coded “prompt templates” discovered via trial and error. We introduce DSPy, a programming model that abstracts LM pipelines as imperative computation graphs where LMs are invoked through declarative modules. DSPy modules are parameterized so they can learn to apply compositions of prompting, finetuning, augmentation, and reasoning techniques. We design a compiler that will optimize any DSPy pipeline to maximize a given metric. We conduct two case studies and show that a few lines of DSPy allow GPT-3.5 and llama2-13b-chat to self-bootstrap pipelines that outperform standard few-shot prompting and pipelines with expert-created demonstrations. | DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines | [
"Omar Khattab",
"Arnav Singhvi",
"Paridhi Maheshwari",
"Zhiyuan Zhang",
"Keshav Santhanam",
"Sri Vardhamanan A",
"Saiful Haq",
"Ashutosh Sharma",
"Thomas T. Joshi",
"Hanna Moazam",
"Heather Miller",
"Matei Zaharia",
"Christopher Potts"
] | Workshop/R0-FoMo | poster | 2310.03714 | [
"https://github.com/stanfordnlp/dspy"
] | https://huggingface.co/papers/2310.03714 | 8 | 30 | 1 | 13 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=NIeCTX8prp | @inproceedings{
ranjan2023fooling,
title={Fooling {GPT} with adversarial in-context examples for text classification},
author={Sudhanshu Ranjan and Chung-En Sun and Linbo Liu and Tsui-Wei Weng},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=NIeCTX8prp}
} | Deep learning-based methods have helped solve NLP tasks more efficiently than traditional methods, and adversarial attacks on these methods have been extensively explored. However, Large Language Models (LLMs) have established a new paradigm of few-shot prompting, which opens up the possibility of novel attacks. In this study, we show that LLMs can be vulnerable to adversarial prompts. We develop the first method to attack the few-shot examples in the text classification setup. We can degrade model performance significantly at test time by only slightly perturbing the examples through optimization. Our method achieves a performance degradation of up to 50% without distorting the semantic meaning. | Fooling GPT with adversarial in-context examples for text classification | [
"Sudhanshu Ranjan",
"Chung-En Sun",
"Linbo Liu",
"Tsui-Wei Weng"
] | Workshop/R0-FoMo | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=NDNb6L5xjI | @inproceedings{
luo2023dricl,
title={Dr.{ICL}: Demonstration-Retrieved In-context Learning},
author={Man Luo and Xin Xu and Zhuyun Dai and Panupong Pasupat and Mehran Kazemi and Chitta Baral and Vaiva Imbrasaite and Vincent Y Zhao},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=NDNb6L5xjI}
} | In-context learning (ICL), which teaches a large language model (LLM) to perform a task with few-shot demonstrations rather than adjusting the model parameters, has emerged as a strong paradigm for using LLMs. While early studies primarily used a fixed or random set of demonstrations for all test queries, recent research suggests that retrieving semantically similar demonstrations to the input from a pool of available demonstrations results in better performance. This work expands the applicability of retrieval-based ICL approaches along several dimensions. We extend the success of retrieval-based ICL to instruction-finetuned LLMs as well as Chain-of-Thought (CoT) prompting. While the prior work utilizes general Large Language Models (LLMs), such as GPT-3, we find that retrieved demonstrations also enhance instruction-finetuned LLMs. This insight implies that training data, despite being exposed during the fine-tuning phase, can still be effectively used through retrieval and in-context demonstrations during testing, resulting in superior outcomes when compared to utilizing no demonstrations or selecting them at random. For CoT, when the demonstrations contain reasoning chains, we get improvements by retrieving based on such chains. Finally, we train a task-specific demonstration retriever that outperforms off-the-shelf retrievers. | Dr.ICL: Demonstration-Retrieved In-context Learning | [
"Man Luo",
"Xin Xu",
"Zhuyun Dai",
"Panupong Pasupat",
"Mehran Kazemi",
"Chitta Baral",
"Vaiva Imbrasaite",
"Vincent Y Zhao"
] | Workshop/R0-FoMo | poster | 2305.14128 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=MpDSo3Rglq | @inproceedings{
zhang2023trained,
title={Trained Transformers Learn Linear Models In-Context},
author={Ruiqi Zhang and Spencer Frei and Peter Bartlett},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=MpDSo3Rglq}
} | Attention-based neural network sequence models such as transformers have the capacity to act as supervised learning algorithms: They can take as input a sequence of labeled examples and output predictions for unlabeled test examples. Indeed, recent work by Garg et al. has shown that when training GPT2 architectures over random instances of linear regression problems, these models' predictions mimic those of ordinary least squares. Towards understanding the mechanisms underlying this phenomenon, we investigate the dynamics of in-context learning of linear predictors for a transformer with a single linear self-attention layer trained by gradient flow. We show that despite the non-convexity of the underlying optimization problem, gradient flow with a random initialization finds a global minimum of the objective function. Moreover, when given a prompt of labeled examples from a new linear prediction task, the trained transformer achieves small prediction error on unlabeled test examples. We further characterize the behavior of the trained transformer under distribution shifts. | Trained Transformers Learn Linear Models In-Context | [
"Ruiqi Zhang",
"Spencer Frei",
"Peter Bartlett"
] | Workshop/R0-FoMo | oral | 2306.09927 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=M958yKkxe9 | @inproceedings{
manuvinakurike2023zeroshot,
title={Zero-shot Conversational Summarization Evaluations with small Large Language Models},
author={Ramesh Manuvinakurike and Saurav Sahay and Sangeeta Manepalli and Lama Nachman},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=M958yKkxe9}
} | Large Language Models (LLMs) exhibit powerful summarization abilities. However, their capabilities on conversational summarization remain underexplored. In this work we evaluate LLMs (~10 billion parameters) on conversational summarization and showcase their performance on various prompts. We show that the summaries generated by the models depend on the instructions, and that the performance of LLMs varies with different instructions, sometimes resulting in a steep drop in ROUGE scores if prompts are not selected carefully. We also evaluate the models with human evaluations and discuss the limitations of the models on conversational summarization. | Zero-shot Conversational Summarization Evaluations with small Large Language Models | [
"Ramesh Manuvinakurike",
"Saurav Sahay",
"Sangeeta Manepalli",
"Lama Nachman"
] | Workshop/R0-FoMo | poster | 2311.18041 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=LMg88bFhNJ | @inproceedings{
panwar2023incontext,
title={In-Context Learning and Bayesian Inference},
author={Madhur Panwar and Kabir Ahuja and Navin Goyal},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=LMg88bFhNJ}
} | In-context learning (ICL) is one of the surprising and useful features of large language models and subject of intense research. Recently, stylized meta-learning-like ICL setups have been devised that train transformers on sequences of input-output pairs $(x, f(x))$ using the language modeling loss. The function $f$ comes from a function class and generalization is checked by evaluation on sequences for unseen functions from the same class. One of the main discoveries in this line of research has been that for several function classes, such as linear regression, transformers successfully generalize to new functions in the class. However, it is unclear if transformers trained on multiple function classes (a setup closer to that of real-world LLMs) also exhibit this generalization. Moreover, the inductive biases of these models resulting in this generalization are not clearly understood. A model with unlimited training data and compute is a Bayesian predictor: it learns the pretraining distribution. In this paper, we empirically examine how far this Bayesian perspective can help us understand ICL. To this end, we generalize the previous meta-ICL setup to hierarchical meta-ICL setup which involves unions of multiple task families. We instantiate this setup on a diverse range of linear and nonlinear function families and find that transformers can do ICL in this setting as well. Where Bayesian inference is tractable, we find evidence that high-capacity transformers mimic the Bayesian predictor. Via the example of learning Fourier series, we also study the inductive bias for in-context learning. We find that in-context learning may or may not have simplicity bias depending on the pretraining data distribution. The Bayesian perspective provides insights into these inductive biases and how transformers perform a particular task when trained on multiple tasks. | In-Context Learning and Bayesian Inference | [
"Madhur Panwar",
"Kabir Ahuja",
"Navin Goyal"
] | Workshop/R0-FoMo | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=LBzGS2j4m4 | @inproceedings{
tsao2023autovp,
title={Auto{VP}: An Automated Visual Prompting Framework and Benchmark},
author={Hsi-Ai Tsao and Lei Hsiung and Pin-Yu Chen and Sijia Liu and Tsung-Yi Ho},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=LBzGS2j4m4}
} | Visual prompting (VP) is an emerging parameter-efficient fine-tuning approach to adapting pre-trained vision models to solve various downstream image-classification tasks. However, there has hitherto been little systematic study of the design space of VP and no clear benchmark for evaluating its performance. To bridge this gap, we propose AutoVP, an end-to-end expandable framework for automating VP design choices, along with 12 downstream image-classification tasks that can serve as a holistic VP-performance benchmark. Our design space covers 1) the joint optimization of the prompts; 2) the selection of pre-trained models, including image classifiers and text-image encoders; and 3) model output mapping strategies, including nonparametric and trainable label mapping. Our extensive experimental results show that AutoVP outperforms the best-known current VP methods by a substantial margin, having up to 6.7% improvement in accuracy; and attains a maximum performance increase of 27.5% compared to linear-probing (LP) baseline. AutoVP thus makes a two-fold contribution: serving both as an efficient tool for hyperparameter tuning on VP design choices, and as a comprehensive benchmark that can reasonably be expected to accelerate VP’s development. The source code is available at [https://github.com/IBM/AutoVP](https://github.com/IBM/AutoVP). | AutoVP: An Automated Visual Prompting Framework and Benchmark | [
"Hsi-Ai Tsao",
"Lei Hsiung",
"Pin-Yu Chen",
"Sijia Liu",
"Tsung-Yi Ho"
] | Workshop/R0-FoMo | poster | 2310.08381 | [
"https://github.com/IBM/AutoVP"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=KIhFggzePM | @inproceedings{
ramesh2023how,
title={How Capable Can a Transformer Become? A Study on Synthetic, Interpretable Tasks},
author={Rahul Ramesh and Mikail Khona and Robert P. Dick and Hidenori Tanaka and Ekdeep Singh Lubana},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=KIhFggzePM}
} | Transformers trained on huge text corpora exhibit a remarkable set of capabilities. Given the inherent compositional nature of language, one can expect the model to learn to compose these capabilities, potentially yielding a combinatorial explosion of what operations it can perform on an input. Motivated by the above, we aim to assess in this paper "how capable can a transformer become?". In this work, we train Transformer models on a data-generating process that involves compositions of a set of well-defined monolithic capabilities and show that: (1) Transformers generalize to exponentially or even combinatorially many functions not seen in the training data; (2) Transformers that generate the intermediate outputs of the composition are more effective at generalizing to unseen compositions; (3) The training data has a significant impact on the model's ability to compose functions (4) Attention layers in the latter half of the model seem critical to compositionality. | How Capable Can a Transformer Become? A Study on Synthetic, Interpretable Tasks | [
"Rahul Ramesh",
"Mikail Khona",
"Robert P. Dick",
"Hidenori Tanaka",
"Ekdeep Singh Lubana"
] | Workshop/R0-FoMo | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=Jd8mD3SU8j | @inproceedings{
huq2023whats,
title={What{\textquoteright}s important here?: Opportunities and Challenges of {LLM} in retrieving information from Web Interface},
author={Faria Huq and Jeffrey P. Bigham and Nikolas Martelaro},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=Jd8mD3SU8j}
} | Large language models (LLMs) that have been trained on large corpora of code exhibit a remarkable ability to understand HTML code [1]. As web interfaces are mainly constructed using HTML, we designed an in-depth study to see how the code-understanding ability of LLMs can be used to retrieve and locate important elements for a user-given query (i.e., task description) in a web interface. In contrast with prior works, which primarily focused on autonomous web navigation, we decompose the problem into an even more atomic operation: can LLMs find the important information in a web page for a user-given query? This decomposition enables us to scrutinize the current capabilities of LLMs and uncover the opportunities and challenges they present. Our empirical experiments show that while LLMs exhibit a reasonable level of competence, there is still substantial room for improvement. We hope our investigation will inspire follow-up work on overcoming the current challenges in this domain. | What’s important here?: Opportunities and Challenges of LLM in retrieving information from Web Interface | [
"Faria Huq",
"Jeffrey P. Bigham",
"Nikolas Martelaro"
] | Workshop/R0-FoMo | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=IaNJC1IRds | @inproceedings{
anand2023one,
title={One shot localization and segmentation of medical images with Foundation Models},
author={Deepa Anand and Gurunath Reddy and Vanika Singhal and Dattesh D. Shanbhag and Shriram KS and Uday Patil and Chitresh Bhushan and Kavitha Manickam and Dawei Gui and Rakesh Mullick and Avinash Gopal and Parminder Bhatia and Taha Kass-Hout},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=IaNJC1IRds}
} | Recent advances in Vision Transformers (ViT) and Stable Diffusion (SD) models with their ability to capture rich semantic features of the image have been used for image correspondence tasks on natural images. In this paper, we examine the ability of a variety of pre-trained ViT (DINO, DINOv2, SAM, CLIP) and SD models, trained exclusively on natural images, for solving the correspondence problems on medical images. While many works have made a case for in-domain training, we show that the models trained on natural images can offer good performance on medical images across different modalities (CT,MR,Ultrasound) sourced from various manufacturers, over multiple anatomical regions (brain, thorax, abdomen, extremities), and on wide variety of tasks. Further, we leverage the correspondence with respect to a template image to prompt a Segment Anything (SAM) model to arrive at single shot segmentation, achieving dice range of 62%-90% across tasks, using just one image as reference. We also show that our single-shot method outperforms the recently proposed few-shot segmentation method - UniverSeg (Dice range 47%-80%) on most of the semantic segmentation tasks(six out of seven) across medical imaging modalities. | One shot localization and segmentation of medical images with Foundation Models | [
"Deepa Anand",
"Gurunath Reddy",
"Vanika Singhal",
"Dattesh D. Shanbhag",
"Shriram KS",
"Uday Patil",
"Chitresh Bhushan",
"Kavitha Manickam",
"Dawei Gui",
"Rakesh Mullick",
"Avinash Gopal",
"Parminder Bhatia",
"Taha Kass-Hout"
] | Workshop/R0-FoMo | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=HAqPAqztEU | @inproceedings{
juneja2023a,
title={A Universal Prompt Generator for Large Language Models},
author={Gurusha Juneja and Amit Sharma},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=HAqPAqztEU}
} | LLMs are primarily reliant on high-quality and task-specific prompts. However, the prompt engineering process relies on clever heuristics and requires multiple iterations. Some recent works attempt to automate this process by improving upon human written prompts. However, creating high-quality prompts from scratch is still an unresolved challenge owing to its inherent complexity. In this work, we propose UniPrompt, a novel technique for generating high-quality human-like prompts from scratch. To do so, we identify characteristic features of human-generated prompts such as being detailed and consisting of multiple sections. Our proposed method, UniPrompt, takes as input a single sentence description of the task and generates human-like sectioned prompts using an auxiliary language model. We train the model in two stages. First, the model is finetuned on multiple tasks using a novel dataset curated using GPT-4 across over 500 tasks. Second, we align the auxiliary model to generate task-relevant (high accuracy) prompts by collecting a prompt preference dataset and optimizing the model using the Direct Preference Optimization method. Importantly, UniPrompt is task-agnostic: once trained, it can be used to generate prompts for any task. We find that UniPrompt outperforms human-generated prompts, GPT-generated prompts, and other prompt optimization techniques across diverse tasks on medicine, causality, and hate speech by up to 5.1 %, 7.2 %, and 11.1 % respectively. | A Universal Prompt Generator for Large Language Models | [
"Gurusha Juneja",
"Amit Sharma"
] | Workshop/R0-FoMo | oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=FJo2lroF7R | @inproceedings{
madaan2023automix,
title={AutoMix: Mixing Models with Few-shot Self and Meta Verification},
author={Aman Madaan and Pranjal Aggarwal and Ankit Anand and Srividya Pranavi Potharaju and Swaroop Mishra and Pei Zhou and Aditya Gupta and Dheeraj Rajagopal and Yiming Yang and Shyam Upadhyay and Mausam . and Manaal Faruqui},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=FJo2lroF7R}
} | Large language models (LLMs) are now available in various sizes and configurations from cloud API providers. While this diversity offers a broad spectrum of choices, effectively leveraging the options to optimize computational cost and performance remains challenging. In this work, we present AutoMix, an approach that strategically routes queries to larger LMs, based on the approximate correctness of outputs from a smaller LM. Central to AutoMix is a few-shot self-verification mechanism, which estimates the reliability of its own outputs without requiring training. Given that verifications can be noisy, we employ a meta verifier in \ours to refine the accuracy of these assessments. Our experiments using LLAMA2-13B and LLAMA2-70B, on five context-grounded reasoning datasets demonstrate that AutoMix surpasses established baselines, improving the incremental benefit per cost by up to 57%. | AutoMix: Mixing Models with Few-shot Self and Meta Verification | [
"Aman Madaan",
"Pranjal Aggarwal",
"Ankit Anand",
"Srividya Pranavi Potharaju",
"Swaroop Mishra",
"Pei Zhou",
"Aditya Gupta",
"Dheeraj Rajagopal",
"Yiming Yang",
"Shyam Upadhyay",
"Mausam .",
"Manaal Faruqui"
] | Workshop/R0-FoMo | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=EztQmfnMLg | @inproceedings{
lin2023coded,
title={Coded Prompts for Large Language Models},
author={Ziqian Lin and Yicong Chen and Yuchen Zeng and Kangwook Lee},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=EztQmfnMLg}
} | While Large Language Models (LLMs) have demonstrated remarkable capabilities across various tasks and various prompting techniques have been proposed, there remains room for performance enhancement. In this work, we introduce a novel dimension to prompt design -- *coded prompts* for LLM inference. Drawing inspiration from coding theory, where coded symbols communicate or store functions of multiple information symbols, we design coded prompts to process multiple inputs simultaneously. We validate this approach through experiments on two distinct tasks: identifying the maximum prime number within a range and sentence toxicity prediction. Our results indicate that coded prompts can indeed improve task performance. We believe that coded prompts will pave a new way for innovative strategies to enhance the efficiency and effectiveness of LLMs. | Coded Prompts for Large Language Models | [
"Ziqian Lin",
"Yicong Chen",
"Yuchen Zeng",
"Kangwook Lee"
] | Workshop/R0-FoMo | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=EEIPgU1oO6 | @inproceedings{
esfandiari2023deep,
title={Deep Embedded Clustering in Few-shot Representations ({DEC}i{FR})},
author={Yasaman Esfandiari and Rodolfo Valiente Romero and Amir Rahimi},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=EEIPgU1oO6}
} | Few-shot Learning has been the center of attention in the deep learning community as it can potentially address the problem of data inaccessibility. Several approaches have been proposed to learn from a few samples efficiently, nevertheless, the majority of them use a large dataset to generalize the feature representation obtained from a single or pre-defined set of backbones before adapting to novel classes. In this paper, different from prior works that use a single best-performing backbone, we present a model-agnostic framework that does not require to "decipher" which backbone is more suitable for the specific FSL task. We propose the Deep Embedded Clustering in Few-shot Representations (DECiFR) algorithm that leverages Deep Embedded Clustering (DEC) to abstract discriminative information from the best combination of features from different backbones, by simultaneously mapping and clustering feature representations using deep neural networks. Subsequently, we propose a contrastive variant of KNN to enhance the cluster separation by propagating through the samples that minimize the inter-class distance and maximize the intra-class distance.
Empirical results show that our approach not only enhances the feature embeddings but also boosts the classification accuracy, approaching or surpassing state-of-the-art performance on numerous datasets. | Deep Embedded Clustering in Few-shot Representations (DECiFR) | [
"Yasaman Esfandiari",
"Rodolfo Valiente Romero",
"Amir Rahimi"
] | Workshop/R0-FoMo | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=ED7E1fUAk2 | @inproceedings{
fereydooni2023divide,
title={Divide and Conquer: Two-Level Problem Remodeling for Large-Scale Few-Shot Learning},
author={Mohamadreza Fereydooni and Hosein Hasani and Ali Razghandi and Mahdieh Soleymani Baghshah},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=ED7E1fUAk2}
} | Few-shot learning methods have achieved notable performance in recent years. However, few-shot learning in large-scale settings with hundreds of classes is still challenging.
In this paper, we tackle the problems of large-scale few-shot learning by taking advantage of pre-trained foundation models. We recast the original problem in two levels with different granularity. At the coarse-grained level, we introduce a novel object recognition approach with robustness to sub-population shifts. At the fine-grained level, generative experts are designed for few-shot learning, specialized for different superclasses.
A Bayesian schema is considered to combine coarse-grained information with fine-grained predictions in a winner-takes-all fashion.
Extensive experiments on large-scale datasets and different architectures show that the proposed method is both effective and efficient besides its simplicity and natural problem remodeling. The code is publicly available at https://github.com/mohamadreza99/divide_and_conquer. | Divide and Conquer: Two-Level Problem Remodeling for Large-Scale Few-Shot Learning | [
"Mohamadreza Fereydooni",
"Hosein Hasani",
"Ali Razghandi",
"Mahdieh Soleymani Baghshah"
] | Workshop/R0-FoMo | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=CiRnwYfXuU | @inproceedings{
mehrabi2023jab,
title={{JAB}: Joint Adversarial Prompting and Belief Augmentation},
author={Ninareh Mehrabi and Palash Goyal and Anil Ramakrishna and Jwala Dhamala and Shalini Ghosh and Richard Zemel and Kai-Wei Chang and Aram Galstyan and Rahul Gupta},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=CiRnwYfXuU}
} | With the recent surge of language models in different applications, attention to safety and robustness of these models has gained significant importance. Here we introduce a joint framework in which we simultaneously probe and improve the robustness of a black-box target model via adversarial prompting and belief augmentation using iterative feedback loops. This framework utilizes an automated red teaming approach to probe the target model, along with a belief augmenter to generate instructions for the target model to improve its robustness to those adversarial probes. Importantly, the adversarial model and the belief generator leverage the feedback from past interactions to improve the effectiveness of the adversarial prompts and beliefs, respectively. In our experiments, we demonstrate that such a framework can reduce toxic content generation both in dynamic cases where an adversary directly interacts with a target model and static cases where we use a static benchmark dataset to evaluate our model. | JAB: Joint Adversarial Prompting and Belief Augmentation | [
"Ninareh Mehrabi",
"Palash Goyal",
"Anil Ramakrishna",
"Jwala Dhamala",
"Shalini Ghosh",
"Richard Zemel",
"Kai-Wei Chang",
"Aram Galstyan",
"Rahul Gupta"
] | Workshop/R0-FoMo | poster | 2311.09473 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=CaXs5JGpzd | @inproceedings{
hajali2023functionconstrained,
title={Function-constrained Program Synthesis},
author={Patrick Anthony Hajali and Ignas Budvytis},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=CaXs5JGpzd}
} | This work introduces: (1) a technique that allows pre-trained large language models (LLMs) to leverage user-provided code when solving programming tasks and (2) a method to iteratively generate modular sub-functions that can aid future code generation attempts when the initial code generated by the LLM is inadequate. Generating computer programs in general-purpose programming languages like Python poses a challenge for LLMs when restricted to using only code provided in the prompt. A naive approach is to present a chat-based LLM (e.g. GPT-4, Claude) with relevant code snippets and prompt the model to synthesize the target algorithm using the provided code. Alternatively, code-specific LLMs (e.g. GitHub Copilot, CodeLlama2) can generate code completions in real-time by drawing on all code available in the integrated development environment. However, restricting code-specific LLMs to use only in-context code is not straightforward, as the model is not explicitly instructed to use the user-generated code and users cannot highlight precisely which snippets of code the model should incorporate into its context for subsequent code-generations. Moreover, chat and code LLMs lack effective recovery methods, forcing users to iteratively re-prompt the model with modified prompts until a sufficient solution is reached.
Our method differs from traditional LLM-powered code-generation by constraining code-generation to an explicit function set and enabling recovery from failed attempts through automatically generated sub-functions. When the LLM cannot produce working code, we generate modular sub-functions to aid subsequent attempts at generating functional code. A by-product of our method is a library of reusable sub-functions that can solve related tasks (imitating a software team where efficiency scales with experience).
We also introduce a new “half-shot” evaluation paradigm that provides tighter estimates of LLMs' coding abilities compared to traditional zero-shot evaluation. Our proposed method encourages models to output solutions in a structured format, decreasing syntax errors that can be mistaken for poor coding ability. | Function-constrained Program Synthesis | [
"Patrick Anthony Hajali",
"Ignas Budvytis"
] | Workshop/R0-FoMo | poster | 2311.15500 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=CPpkklQWQW | @inproceedings{
nguyen2023on,
title={On the Out of Distribution Robustness of Foundation Models in Medical Image Segmentation},
author={Duy Minh Ho Nguyen and Tan Ngoc Pham and Nghiem Tuong Diep and Nghi Quoc Phan and Quang Pham and Vinh Tong and Binh T. Nguyen and Ngan Hoang Le and Nhat Ho and Pengtao Xie and Daniel Sonntag and Mathias Niepert},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=CPpkklQWQW}
} | Constructing a robust model that can effectively generalize to test samples under distribution shifts remains a significant challenge in the field of medical imaging. Foundation models for vision and language, pre-trained on extensive sets of natural image and text data, have emerged as a promising approach: they showcase impressive learning abilities across different tasks while requiring only a limited amount of annotated samples. While numerous techniques have focused on developing better fine-tuning strategies to adapt these models to specific domains, we instead examine their robustness to domain shifts in the medical image segmentation task. To this end, we compare the generalization performance to unseen domains of various pre-trained models after being fine-tuned on the same in-distribution dataset and show that foundation-based models enjoy better robustness than other architectures. From here, we further develop a new Bayesian uncertainty estimation for frozen models and use it as an indicator to characterize the model's performance on out-of-distribution (OOD) data, which proves particularly beneficial for real-world applications. Our experiments not only reveal the limitations of current indicators like accuracy on the line or agreement on the line, commonly used in natural image applications, but also emphasize the promise of the introduced Bayesian uncertainty: lower-uncertainty predictions usually correspond to higher out-of-distribution (OOD) performance. | On the Out of Distribution Robustness of Foundation Models in Medical Image Segmentation | [
"Duy Minh Ho Nguyen",
"Tan Ngoc Pham",
"Nghiem Tuong Diep",
"Nghi Quoc Phan",
"Quang Pham",
"Vinh Tong",
"Binh T. Nguyen",
"Ngan Hoang Le",
"Nhat Ho",
"Pengtao Xie",
"Daniel Sonntag",
"Mathias Niepert"
] | Workshop/R0-FoMo | poster | 2311.11096 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=AwEQ0YrW17 | @inproceedings{
dun2023sweeping,
title={Sweeping Heterogeneity with Smart MoPs: Mixture of Prompts for {LLM} Task Adaptation},
author={Chen Dun and Mirian Del Carmen Hipolito Garcia and Guoqing Zheng and Ahmed Hassan Awadallah and Anastasios Kyrillidis and Robert Sim},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=AwEQ0YrW17}
} | Large Language Models (LLMs) have the ability to solve a variety of tasks, such as text summarization and mathematical questions, just out of the box, but they are often trained with a single task in mind.
Due to high computational costs, the current trend is to use prompt instruction tuning to better adjust monolithic, pretrained LLMs for new --but often individual-- downstream tasks.
Thus, how one would expand prompt tuning to handle --concomitantly-- heterogeneous tasks and data distributions is a widely open question.
To address this gap, we suggest the use of Mixture of Prompts, or MoPs, associated with smart gating functionality: the latter --whose design is one of the contributions of this paper-- can identify relevant skills embedded in different groups of prompts and dynamically assign combined experts (i.e., collection of prompts), based on the target task.
Additionally, MoPs are empirically agnostic to any model compression technique applied --for efficiency reasons-- as well as instruction data source and task composition.
In practice, MoPs can simultaneously mitigate prompt training "interference" in multi-task, multi-source scenarios (e.g., task and data heterogeneity across sources), as well as possible implications from model approximations.
As a highlight, MoPs manage to decrease final perplexity from $\sim20\%$ up to $\sim70\%$, as compared to baselines, in the federated scenario, and from $\sim 3\%$ up to $\sim30\%$ in the centralized scenario. | Sweeping Heterogeneity with Smart MoPs: Mixture of Prompts for LLM Task Adaptation | [
"Chen Dun",
"Mirian Del Carmen Hipolito Garcia",
"Guoqing Zheng",
"Ahmed Hassan Awadallah",
"Anastasios Kyrillidis",
"Robert Sim"
] | Workshop/R0-FoMo | poster | 2310.02842 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=AJiBZ1BPH5 | @inproceedings{
zhang2023zeroshot,
title={Zero-shot Improvement of Object Counting with {CLIP}},
author={Ruisu Zhang and Yicong Chen and Kangwook Lee},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=AJiBZ1BPH5}
} | We focus on the object counting limitations of vision-language models, with a particular emphasis on Contrastive Language-Image Pre-Training (CLIP) models. We assess the counting performance of CLIP using a custom dataset, which uncovers significant variations across diverse objects. To address this, we introduce a zero-shot, training-free method aimed at improving counting accuracy by manipulating the text embedding space of CLIP. Through comprehensive experiments, we demonstrate that our method not only enhances the counting capabilities of CLIP but also boosts the performance of text-to-image generative models like Stable Diffusion, particularly in generating images with precise object counts. | Zero-shot Improvement of Object Counting with CLIP | [
"Ruisu Zhang",
"Yicong Chen",
"Kangwook Lee"
] | Workshop/R0-FoMo | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=9Tze4oy4lw | @inproceedings{
albalak2023efficient,
title={Efficient Online Data Mixing For Language Model Pre-Training},
author={Alon Albalak and Liangming Pan and Colin Raffel and William Yang Wang},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=9Tze4oy4lw}
} | The data used to pretrain large language models has a decisive impact on a model’s downstream performance, which has led to a large body of work on data selection methods that aim to automatically determine the most suitable data to use for pretraining. Existing data selection methods suffer from slow and computationally expensive processes, a problem amplified by the increasing size of models and of pretraining datasets. Data mixing, on the other hand, reduces the complexity of data selection by grouping data points together and determining sampling probabilities across entire groups. However, data mixing proportions are typically fixed before training and therefore cannot adapt to changing training dynamics. To address these limitations, we develop an efficient algorithm for Online Data Mixing (ODM) that combines elements from both data selection and data mixing. Based on multi-armed bandit algorithms, our online approach optimizes the data mixing proportions during training. Remarkably, our method trains a model that reaches the final perplexity of the next best method with 19% fewer training iterations, and improves performance on the 5-shot MMLU benchmark by 1.9% relative accuracy, while adding negligible wall-clock time during pretraining. | Efficient Online Data Mixing For Language Model Pre-Training | [
"Alon Albalak",
"Liangming Pan",
"Colin Raffel",
"William Yang Wang"
] | Workshop/R0-FoMo | oral | 2312.02406 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=9Eu2NMT0Ya | @inproceedings{
jacob2023the,
title={The Consensus Game: Language Model Generation via Equilibrium Search},
author={Athul Paul Jacob and Yikang Shen and Gabriele Farina and Jacob Andreas},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=9Eu2NMT0Ya}
} | When applied to question answering and other text generation tasks, language models (LMs) may be queried generatively (by sampling answers from their output distribution) or discriminatively (by using them to score or rank a set of candidate answers). These procedures sometimes yield very different predictions. How do we reconcile mutually incompatible scoring procedures to obtain coherent LM predictions? We introduce a new, training-free, game-theoretic procedure for language model decoding. Our approach casts language model decoding as a regularized imperfect-information sequential signaling game—which we term the consensus game—in which a generator seeks to communicate an abstract correctness parameter using natural language sentences to a discriminator. We develop computational procedures for finding approximate equilibria of this game, resulting in a decoding algorithm we call equilibrium-ranking. Applied to a large number of tasks (including reading comprehension, commonsense reasoning, mathematical problem-solving, and assistive dialog), equilibrium-ranking consistently improves performance over existing LM decoding procedures. These improvements are sometimes substantial—on multiple benchmarks, we observe that applying equilibrium-ranking to LLaMA-7B outperforms the much larger LLaMA-65B and PaLM-540B models. | The Consensus Game: Language Model Generation via Equilibrium Search | [
"Athul Paul Jacob",
"Yikang Shen",
"Gabriele Farina",
"Jacob Andreas"
] | Workshop/R0-FoMo | oral | 2310.09139 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=8KgUJqPUOb | @inproceedings{
sakhinana2023crossmodal,
title={Cross-Modal Learning for Chemistry Property Prediction: Large Language Models Meet Graph Machine Learning},
author={Sagar Sakhinana and Venkataramana Runkana},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=8KgUJqPUOb}
} | In the field of chemistry, the objective is to create novel molecules with desired properties, facilitating accurate property predictions for applications such as material design and drug screening. However, existing graph deep learning methods face limitations that curb their expressive power. To address this, we explore the integration of vast molecular domain knowledge from Large Language Models (LLMs) with the complementary strengths of Graph Neural Networks (GNNs) to enhance performance in property prediction tasks. We introduce a Multi-Modal Fusion (MMF) framework that synergistically harnesses the analytical prowess of GNNs and the linguistic generative and predictive abilities of LLMs, thereby improving accuracy and robustness in predicting molecular properties. Our framework combines the effectiveness of GNNs in modeling graph-structured data with the zero-shot and few-shot learning capabilities of LLMs, enabling improved predictions while reducing the risk of overfitting. Furthermore, our approach effectively addresses distributional shifts, a common challenge in real-world applications, and showcases the efficacy of learning cross-modal representations, surpassing state-of-the-art baselines on benchmark datasets for property prediction tasks. | Cross-Modal Learning for Chemistry Property Prediction: Large Language Models Meet Graph Machine Learning | [
"Sagar Sakhinana",
"Venkataramana Runkana"
] | Workshop/R0-FoMo | poster | 2408.14964 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=7mEOK0EnbY | @inproceedings{
panigrahi2023trainable,
title={Trainable Transformer in Transformer},
author={Abhishek Panigrahi and Sadhika Malladi and Mengzhou Xia and Sanjeev Arora},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=7mEOK0EnbY}
} | Recent works attribute the capability of in-context learning (ICL) in large pre-trained language models to implicitly simulating and fine-tuning an internal model (e.g., linear or 2-layer MLP) during inference. However, such constructions require large memory overhead, which makes simulation of more sophisticated internal models intractable. In this work, we propose a new efficient construction, Transformer in Transformer (in short, TINT), that allows a transformer to simulate and fine-tune more complex models during inference (e.g., pre-trained language models). In particular, we introduce innovative approximation techniques that allow a TINT model with less than 2 billion parameters to simulate and fine-tune a 125 million parameter transformer model within a single forward pass. TINT accommodates many common transformer variants and its design ideas also improve the efficiency of past instantiations of simple models inside transformers. We conduct end-to-end experiments to validate the internal fine-tuning procedure of TINT on various language modeling and downstream tasks. For example, even with a limited one-step budget, we observe TINT for a OPT-125M model improves performance by 4 − 16% absolute on average compared to OPT-125M. These findings suggest that large pre-trained language models are capable of performing intricate subroutines. To facilitate further work, a modular and extensible codebase for TINT will be open-sourced. | Trainable Transformer in Transformer | [
"Abhishek Panigrahi",
"Sadhika Malladi",
"Mengzhou Xia",
"Sanjeev Arora"
] | Workshop/R0-FoMo | poster | 2307.01189 | [
"https://github.com/abhishekpanigrahi1996/transformer_in_transformer"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=7jmtHtv9Ch | @inproceedings{
li2023overprompt,
title={OverPrompt: Enhancing Chat{GPT} through Efficient In-Context Learning},
author={Jiazheng Li and Runcong Zhao and Yongxin Yang and Yulan He and Lin Gui},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=7jmtHtv9Ch}
} | The remarkable performance of pre-trained large language models has revolutionised various natural language processing applications. Due to huge parameter sizes and extensive running costs, companies or organisations tend to transfer the models to the target task by zero-shot prompting techniques. However, the prohibitive costs of tokens and time have hindered their adoption in applications. We propose OverPrompt, leveraging the in-context learning capability of LLMs to handle multiple task inputs, thereby reducing token and time costs. This approach could potentially improve task performance during API queries due to better conditional distribution mapping. Evaluated across diverse classification datasets, our experiments show that OverPrompt can achieve cost-efficient zero-shot classification without causing significant detriment to task performance, and in some cases, even improving it. An ablation study conducted on various LLMs, along with an investigation into the robustness of our prompting strategy to different input ordering, offers valuable insights into the broader applicability of our method across diverse tasks. These findings also suggest a more seamless integration of our method with LLMs through an API. | OverPrompt: Enhancing ChatGPT through Efficient In-Context Learning | [
"Jiazheng Li",
"Runcong Zhao",
"Yongxin Yang",
"Yulan He",
"Lin Gui"
] | Workshop/R0-FoMo | poster | 2305.14973 | [
"https://github.com/lijiazheng99/overprompt"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=7MEIYPueMd | @inproceedings{
allen2023fewshot,
title={Fewshot learning on global multimodal embeddings for earth observation tasks},
author={Matthew Allen and Francisco Dorr and Joseph Alejandro Gallego Mejia and Laura Mart{\'\i}nez-Ferrer and Anna Jungbluth and Freddie Kalaitzis and Ra{\'u}l Ramos-Poll{\'a}n},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=7MEIYPueMd}
} | In this work we pretrain a CLIP/ViT based model using three different modalities of satellite imagery across five AOIs covering over ~10\% of Earth's total landmass, namely Sentinel 2 RGB optical imagery, Sentinel 1 SAR radar amplitude and interferometric coherence. This model uses $\sim 250$ M parameters. Then, we use the embeddings produced for each modality with a classical machine learning method to attempt different downstream tasks for earth observation related to vegetation, built up surface, croplands and permanent water. We consistently show how we reduce the need for labeled data by 99\%, so that with ~200-500 randomly selected labeled examples (around 4K-10K km$^2$) we reach performance levels analogous to those achieved with the full labeled datasets (about 150K image chips or 3M km$^2$ in each area of interest - AOI) on all modalities, AOIs and downstream tasks. This leads us to think that the model has captured significant earth features useful in a wide variety of scenarios. To enhance our model's usability in practice, its architecture allows inference in contexts with missing modalities and even missing channels within each modality. Additionally, we visually show that this embedding space, obtained with no labels, is sensible to the different earth features represented by the labelled datasets we selected. | Fewshot learning on global multimodal embeddings for earth observation tasks | [
"Matthew Allen",
"Francisco Dorr",
"Joseph Alejandro Gallego Mejia",
"Laura Martínez-Ferrer",
"Anna Jungbluth",
"Freddie Kalaitzis",
"Raúl Ramos-Pollán"
] | Workshop/R0-FoMo | poster | 2310.00119 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=7GHPcloiHq | @inproceedings{
khan2023selective,
title={Selective Prediction For Open-Ended Question Answering in Black-Box Vision-Language Models},
author={Zaid Khan and Yun Fu},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=7GHPcloiHq}
} | When mistakes have serious consequences, reliable use of a model requires understanding when the predictions of the model are trustworthy. One approach is selective prediction, in which a model is allowed to abstain if it is uncertain. Existing methods for selective prediction require access to model internals, retraining, or large number of model evaluations, and cannot be used for black box models available only through an API. This is a barrier to the use of powerful commercial foundation models in risk-sensitive applications. Furthermore, existing work has largely focused on unimodal foundation models. We propose a method to improve selective prediction in a black box vision-language model by measuring consistency over the neighbors of a visual question. Although direct sampling of the neighborhood is not possible, we propose using a probing model as a proxy. We describe experiments testing the proposed method on in-distribution, out-of-distribution and adversarial questions. We find that the consistency of a vision-language model across rephrasings of a visual question can be used to identify and reject high-risk visual questions, even in out-of-distribution and adversarial settings, constituting a step towards safe use of black-box vision-language models. | Selective Prediction For Open-Ended Question Answering in Black-Box Vision-Language Models | [
"Zaid Khan",
"Yun Fu"
] | Workshop/R0-FoMo | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=7Dd8uBHo90 | @inproceedings{
zohar2023lovm,
title={{LOVM}: Language-Only Vision Model Selection},
author={Orr Zohar and Shih-Cheng Huang and Kuan-Chieh Wang and Serena Yeung},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=7Dd8uBHo90}
} | Pre-trained multi-modal vision-language models (VLMs) excel in downstream applications, especially in the few- and zero-shot settings.
However, choosing the optimal VLM for some downstream applications is challenging due to task and dataset dependencies.
Exhaustive evaluation of all VLMs is impractical and requires the collection of a labeled dataset for evaluation. As the number of open-source VLM variants increases, there is a need for an efficient model selection strategy that does not require access to a curated evaluation dataset. To address this, we introduce a novel task, LOVM: **L**anguage-**O**nly **V**ision **M**odel Selection, where methods are expected to perform both model selection and performance prediction based solely on a text description of the desired downstream application. We also present an extensive LOVM benchmark consisting of ground-truth evaluations of 23 pre-trained VLMs and 35 datasets, enabling effective ranking and performance prediction of VLMs. Our code, full paper, and dataset are available at https://github.com/orrzohar/LOVM. | LOVM: Language-Only Vision Model Selection | [
"Orr Zohar",
"Shih-Cheng Huang",
"Kuan-Chieh Wang",
"Serena Yeung"
] | Workshop/R0-FoMo | poster | 2306.08893 | [
"https://github.com/orrzohar/lovm"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=6adcWzmtHR | @inproceedings{
gupta2023context,
title={Context is Environment},
author={Sharut Gupta and David Lopez-Paz and Stefanie Jegelka and Kartik Ahuja},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=6adcWzmtHR}
} | Two lines of work are taking center stage in AI research. On the one hand, increasing efforts are being made to build models that generalize out-of-distribution (OOD). Unfortunately, a hard lesson so far is that no proposal convincingly outperforms a simple empirical risk minimization baseline. On the other hand, large language models (LLMs) have erupted as algorithms able to learn in-context, generalizing on-the-fly to the eclectic contextual circumstances. We argue that context is environment, and posit that in-context learning holds the key to better domain generalization. Via extensive theory and experiments, we show that paying attention to context$\unicode{x2013}\unicode{x2013}$unlabeled examples as they arrive$\unicode{x2013}\unicode{x2013}$allows our proposed In-Context Risk Minimization (ICRM) algorithm to zoom-in on the test environment risk minimizer, leading to significant OOD performance improvements. From all of this, two messages are worth taking home: researchers in domain generalization should consider environment as context, and harness the adaptive power of in-context learning. Researchers in LLMs should consider context as environment, to better structure data towards generalization. | Context is Environment | [
"Sharut Gupta",
"David Lopez-Paz",
"Stefanie Jegelka",
"Kartik Ahuja"
] | Workshop/R0-FoMo | poster | 2309.09888 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=6FwaSOEeKD | @inproceedings{
ajith2023instructeval,
title={InstructEval: Systematic Evaluation of Instruction Selection Methods},
author={Anirudh Ajith and Mengzhou Xia and Ameet Deshpande and Karthik R Narasimhan},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=6FwaSOEeKD}
} | In-context learning (ICL) performs tasks by prompting a large language model (LLM) using an instruction and a small set of annotated examples called demonstrations. Recent work has shown that precise details of the inputs used in the ICL prompt significantly impact performance, which has incentivized instruction selection algorithms. The effect of instruction choice, however, is severely underexplored, with existing analyses restricted to shallow subsets of models and tasks, limiting the generalizability of their insights. We develop InstructEval, an ICL evaluation suite to conduct a thorough assessment of these techniques. The suite includes 13 open-sourced LLMs of varying scales from four model families, and covers nine tasks across three categories. Using the suite, we evaluate the relative performance of seven popular instruction selection methods over five metrics relevant to ICL. Our experiments reveal that using curated manually-written instructions or simple instructions without any task-specific descriptions often elicits better overall ICL performance than automatic instruction-induction methods, pointing to a lack of generalizability among the latter. We release our evaluation suite for benchmarking instruction selection approaches and enabling more generalizable methods in this space. | InstructEval: Systematic Evaluation of Instruction Selection Methods | [
"Anirudh Ajith",
"Mengzhou Xia",
"Ameet Deshpande",
"Karthik R Narasimhan"
] | Workshop/R0-FoMo | oral | 2307.00259 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=5TsfEEwRsu | @inproceedings{
golovneva2023pathfinder,
title={{PATHFINDER}: Guided Search over Multi-Step Reasoning Paths},
author={Olga Golovneva and Sean O'Brien and Ramakanth Pasunuru and Tianlu Wang and Luke Zettlemoyer and Maryam Fazel-Zarandi and Asli Celikyilmaz},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=5TsfEEwRsu}
} | With recent advancements in large language models, methods like chain-of-thought prompting to elicit reasoning chains have been shown to improve results on reasoning tasks. However, tasks that require multiple steps of reasoning still pose significant challenges to state-of-the-art models. Drawing inspiration from the beam search algorithm, we propose PATHFINDER, a tree-search-based reasoning path generation approach. It enhances diverse branching and multi-hop reasoning through the integration of dynamic decoding, enabled by varying sampling methods and parameters. Using constrained reasoning, PATHFINDER integrates novel quality constraints, pruning, and exploration methods to enhance the efficiency and the quality of generation. Moreover, it includes scoring and ranking features
to improve candidate selection. Our approach outperforms competitive baselines on three complex arithmetic and commonsense reasoning tasks by 6% on average. Our model generalizes well to longer, unseen reasoning chains, reflecting similar complexities to beam search with large branching factors. | PATHFINDER: Guided Search over Multi-Step Reasoning Paths | [
"Olga Golovneva",
"Sean O'Brien",
"Ramakanth Pasunuru",
"Tianlu Wang",
"Luke Zettlemoyer",
"Maryam Fazel-Zarandi",
"Asli Celikyilmaz"
] | Workshop/R0-FoMo | poster | 2312.05180 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=4uiOPSvbN6 | @inproceedings{
mousavi2023enhancing,
title={Enhancing Large Language Models with Ensemble of Critics for Mitigating Toxicity and Hallucination},
author={Sajad Mousavi and Ricardo Luna Gutierrez and Desik Rengarajan and Vineet Gundecha and Ashwin Ramesh Babu and Avisek Naug and Antonio Guillen and Soumyendu Sarkar},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=4uiOPSvbN6}
} | We propose a self-correction mechanism for Large Language Models (LLMs) to mitigate issues such as toxicity and fact hallucination. This method involves refining model outputs through an ensemble of critics and the model's own feedback. Drawing inspiration from human behavior, we explore whether LLMs can emulate the self-correction process observed in humans who often engage in self-reflection and seek input from others to refine their understanding of complex topics. Our approach is model-agnostic and can be applied across various domains to enhance trustworthiness by addressing fairness, bias, and robustness concerns. We consistently observe performance improvements in LLMs for reducing toxicity and correcting factual errors. | Enhancing Large Language Models with Ensemble of Critics for Mitigating Toxicity and Hallucination | [
"Sajad Mousavi",
"Ricardo Luna Gutierrez",
"Desik Rengarajan",
"Vineet Gundecha",
"Ashwin Ramesh Babu",
"Avisek Naug",
"Antonio Guillen",
"Soumyendu Sarkar"
] | Workshop/R0-FoMo | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=3MpDQ0YA7V | @inproceedings{
krasheninnikov2023meta,
title={Meta- (out-of-context) learning in neural networks},
author={Dmitrii Krasheninnikov and Egor Krasheninnikov and Bruno Mlodozeniec and David Krueger},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=3MpDQ0YA7V}
} | Brown et al. (2020) famously introduced the phenomenon of in-context learning in large language models (LLMs). We establish the existence of a phenomenon we call **meta-out-of-context learning (meta-OCL)** via carefully designed synthetic experiments with LLMs. Our results suggest that meta-OCL leads LLMs to more readily “internalize” the semantic content of text that is, *or appears to be*, broadly useful (such as true statements, or text from authoritative sources) and use it in appropriate circumstances. We further demonstrate meta-OCL in a synthetic computer vision setting, and propose two hypotheses for the emergence of meta-OCL: one relying on the way models store knowledge in their parameters, and another suggesting that the implicit gradient alignment bias of gradient-descent-based optimizers may be responsible. Finally, we reflect on what our results might imply about capabilities of future AI systems, and discuss potential risks. Our code is available at https://github.com/krasheninnikov/internalization. | Meta- (out-of-context) learning in neural networks | [
"Dmitrii Krasheninnikov",
"Egor Krasheninnikov",
"Bruno Mlodozeniec",
"David Krueger"
] | Workshop/R0-FoMo | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=2gytoWpJGf | @inproceedings{
lowe2023zeroshot,
title={Zero-shot Clustering of Embeddings with Pretrained and Self-Supervised Learnt Encoders},
author={Scott C Lowe and Joakim Bruslund Haurum and Sageev Oore and Thomas B. Moeslund and Graham W. Taylor},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=2gytoWpJGf}
} | We explore whether large pretrained models can provide a useful representation space for datasets they were not trained on, and whether these representations can be used to group novel unlabelled data into meaningful clusters. To this end, we conduct experiments using image encoders pretrained on ImageNet using either supervised or self-supervised training techniques. These encoders are deployed on image datasets that were not seen during training, and we investigate whether their embeddings can be clustered with conventional clustering algorithms. We find that it is possible to create well-defined clusters using self-supervised feature encoders, especially when using the Agglomerative Clustering method, and that it is possible to do so even for very fine-grained datasets such as NABirds. We also find indications that the Silhouette score is a good proxy of cluster quality for self-supervised feature encoders when no ground-truth is available. | Zero-shot Clustering of Embeddings with Pretrained and Self-Supervised Learnt Encoders | [
"Scott C Lowe",
"Joakim Bruslund Haurum",
"Sageev Oore",
"Thomas B. Moeslund",
"Graham W. Taylor"
] | Workshop/R0-FoMo | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=2LkTVY15SM | @inproceedings{
foster2023flexible,
title={Flexible visual prompts for in context learning in computer vision},
author={Thomas Foster and Ioana Croitoru and Robert Dorfman and Christoffer Edlund and Thomas Varsavsky and Jon Almaz{\'a}n},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=2LkTVY15SM}
} | In this work, we address in-context learning (ICL) for the task of image segmentation, introducing a novel approach that adapts a modern Video Object Segmentation (VOS) technique for visual in-context learning. This adaptation is inspired by the VOS method's ability to efficiently and flexibly learn objects from a few examples. Through evaluations across a range of support set sizes and on diverse segmentation datasets, our method consistently surpasses existing techniques. Notably, it excels with data containing classes not encountered during training. Additionally, we propose a technique for support set selection, which involves choosing the most relevant images to include in this set. By employing support set selection, the performance increases for all tested methods without the need for additional training or prompt tuning. The code can be found at https://github.com/v7labs/XMem_ICL. | Flexible visual prompts for in context learning in computer vision | [
"Thomas Foster",
"Ioana Croitoru",
"Robert Dorfman",
"Christoffer Edlund",
"Thomas Varsavsky",
"Jon Almazán"
] | Workshop/R0-FoMo | poster | [
"https://github.com/v7labs/xmem_icl"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=2J8xnFLMgF | @inproceedings{
shi2023why,
title={Why Larger Language Models Do In-context Learning Differently?},
author={Zhenmei Shi and Junyi Wei and Zhuoyan Xu and Yingyu Liang},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=2J8xnFLMgF}
} | Large language models (LLMs) have emerged as a powerful tool for many AI problems and are deeply involved in many aspects of human activity. One important emergent ability is in-context learning (ICL), where an LLM can perform well on unseen tasks based on a brief series of task examples without necessitating any adjustments to the model's parameters. Many works have tried to study ICL, and one recent, interesting, counter-intuitive observation is that language models of different scales may have different ICL behaviors. Despite the tremendous success of ICL, why these behaviors differ remains a mystery. In this work, we try to answer this question. Given the limited understanding of the ICL mechanism, we study a simplified setting: a one-layer, single-head linear self-attention network pretrained on an in-context linear regression task. We characterize language model scale as the rank of the key and query matrices in attention. We show that smaller language models are more robust to noise, while larger language models are more easily distracted, leading to different ICL behaviors. We also conduct ICL experiments using the LLaMA model families. The results are consistent with previous work and our analysis. | Why Larger Language Models Do In-context Learning Differently? | [
"Zhenmei Shi",
"Junyi Wei",
"Zhuoyan Xu",
"Yingyu Liang"
] | Workshop/R0-FoMo | poster | 2405.19592 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=1fuyNbblEt | @inproceedings{
chen2023analyzing,
title={Analyzing Chat{GPT}{\textquoteright}s Behavior Shifts Over Time},
author={Lingjiao Chen and Matei Zaharia and James Zou},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=1fuyNbblEt}
} | GPT-3.5 and GPT-4 are the two most widely used large language model (LLM) services. However, when and how these models are updated over time is opaque. Here, we evaluate the March 2023 and June 2023 versions of GPT-3.5 and GPT-4 on two tasks: 1) solving math problems, and 2) generating code. We find that the performance and behavior of both GPT-3.5 and GPT-4 can vary greatly over time. For example, GPT-4 (March 2023) was reasonable at identifying prime vs. composite numbers ($84\%$ accuracy) but GPT-4 (June 2023) was poor on these same questions ($51\%$ accuracy). This is partly explained by a drop in GPT-4's amenability to follow chain-of-thought prompting. Interestingly, GPT-3.5 was much better in June than in March on this task. Both GPT-4 and GPT-3.5 had more formatting mistakes in code generation in June than in March. We provide evidence that GPT-4's ability to follow user instructions has decreased over time, which is one common factor behind the many behavior drifts. Overall, our findings show that the behavior of the ``same'' LLM service can change substantially in a relatively short amount of time, highlighting the need for continuous monitoring of LLMs. | Analyzing ChatGPT’s Behavior Shifts Over Time | [
"Lingjiao Chen",
"Matei Zaharia",
"James Zou"
] | Workshop/R0-FoMo | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=1G7n7LW3mF | @inproceedings{
kroeger2023are,
title={Are Large Language Models Post Hoc Explainers?},
author={Nicholas Kroeger and Dan Ley and Satyapriya Krishna and Chirag Agarwal and Himabindu Lakkaraju},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=1G7n7LW3mF}
} | Large Language Models (LLMs) are increasingly used as powerful tools for a plethora of natural language processing (NLP) applications. A recent innovation, in-context learning (ICL), enables LLMs to learn new tasks by supplying a few examples in the prompt during inference time, thereby eliminating the need for model fine-tuning. While LLMs have been utilized in several applications, their applicability in explaining the behavior of other models remains relatively unexplored. Despite the growing number of new explanation techniques, many require white-box access to the model and/or are computationally expensive, highlighting a need for next-generation post hoc explainers. In this work, we present the first framework to study the effectiveness of LLMs in explaining other predictive models. More specifically, we propose a novel framework encompassing multiple prompting strategies: i) Perturbation-based ICL, ii) Prediction-based ICL, iii) Instruction-based ICL, and iv) Explanation-based ICL, with varying levels of information about the underlying ML model and the local neighborhood of the test sample. We conduct extensive experiments with real-world benchmark datasets to demonstrate that LLM generated explanations perform on par with state-of-the-art post hoc explainers using their ability to leverage ICL examples and their internal knowledge in generating model explanations. On average, across four datasets and two ML models, we observe that LLMs identify the most important feature with 72.19% accuracy, indicating promising avenues for further research into LLM based explanation frameworks within explainable artificial intelligence (XAI). | Are Large Language Models Post Hoc Explainers? | [
"Nicholas Kroeger",
"Dan Ley",
"Satyapriya Krishna",
"Chirag Agarwal",
"Himabindu Lakkaraju"
] | Workshop/R0-FoMo | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=0hTtit3AAm | @inproceedings{
li2023clipav,
title={{CLIPA}-v2: Scaling {CLIP} Training with 81.1\% Zero-shot ImageNet Accuracy within a \$10,000 Budget},
author={Xianhang Li and Zeyu Wang and Cihang Xie},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=0hTtit3AAm}
} | The recent work CLIPA presents an inverse scaling law for CLIP training --- whereby the larger the image/text encoders used, the shorter the sequence length of image/text tokens that can be applied in training. This finding enables us to train high-performance CLIP models with significantly reduced computations. Building upon this work, we hereby present CLIPA-v2 with two key contributions. Technically, we find this inverse scaling law is also applicable in the finetuning stage, enabling further reduction in computational needs. Empirically, we explore CLIPA at scale, extending the experiments up to the H/14 model with approximately 13B image-text pairs seen during training.
Our results are exciting: by allocating a budget of only $10,000, our CLIP model achieves an impressive zero-shot ImageNet accuracy of 81.1%, surpassing the prior best CLIP model (from OpenCLIP, 80.1%) by 1.0% while reducing the computational cost by approximately $39\times$. Moreover, with an additional investment of $4,000, we can further elevate the zero-shot ImageNet accuracy to 81.8%.
By upscaling to a G/14 model, we achieve a state-of-the-art zero-shot ImageNet accuracy of 83.0%, relying solely on open-source data. | CLIPA-v2: Scaling CLIP Training with 81.1% Zero-shot ImageNet Accuracy within a $10,000 Budget | [
"Xianhang Li",
"Zeyu Wang",
"Cihang Xie"
] | Workshop/R0-FoMo | poster | [
"https://github.com/ucsc-vlaa/clipa"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=0GsHDvnzHg | @inproceedings{
bendou2023inferring,
title={Inferring Latent Class Statistics from Text for Robust Visual Few-Shot Learning},
author={Yassir Bendou and Bastien Pasdeloup and Giulia Lioi and Vincent Gripon and Fabien Cardinaux and Ghouthi BOUKLI HACENE and Lukas Mauch},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=0GsHDvnzHg}
} | In the realm of few-shot learning, foundation models like CLIP have proven effective but exhibit limitations in cross-domain robustness, especially in few-shot settings. Recent works add text as an extra modality to enhance the performance of these models. Most of these approaches treat text as an auxiliary modality without fully exploring its potential to elucidate the underlying distribution of class visual features. In this paper, we present a novel approach that leverages text-derived statistics to predict the mean and covariance of the visual feature distribution for each class. This predictive framework enriches the latent space, yielding more robust and generalizable few-shot learning models. We demonstrate the efficacy of incorporating both mean and covariance statistics in improving few-shot classification performance across various datasets. Our method shows that we can use text to predict the mean and covariance of the distribution, offering promising improvements in few-shot learning scenarios. | Inferring Latent Class Statistics from Text for Robust Visual Few-Shot Learning | [
"Yassir Bendou",
"Bastien Pasdeloup",
"Giulia Lioi",
"Vincent Gripon",
"Fabien Cardinaux",
"Ghouthi BOUKLI HACENE",
"Lukas Mauch"
] | Workshop/R0-FoMo | poster | 2311.14544 | [
"https://github.com/ybendou/fs-text2stats"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=zrw68dPsdt | @inproceedings{
hodgkinson2023a,
title={A {PAC}-Bayesian Perspective on the Interpolating Information Criterion},
author={Liam Hodgkinson and Chris van der Heide and Robert Salomone and Fred Roosta and Michael Mahoney},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=zrw68dPsdt}
} | Deep learning is renowned for its theory-practice gap, whereby principled theory typically fails to provide much beneficial guidance for implementation in practice. This has been highlighted recently by the benign overfitting phenomenon: when neural networks become sufficiently large to interpolate the dataset perfectly, model performance appears to improve with increasing model size, in apparent contradiction with the well-known bias--variance tradeoff. While such phenomena have proven challenging to theoretically study for general models, the recently proposed Interpolating Information Criterion (IIC) provides a valuable theoretical framework to examine performance for overparameterized models. Using the IIC, a PAC-Bayes bound is obtained for a general class of models, characterizing factors which influence generalization performance in the interpolating regime. From the provided bound, we quantify how the test error for overparameterized models achieving effectively zero training error depends on the quality of the implicit regularization imposed by e.g. the combination of model, optimizer, and parameter-initialization scheme; the spectrum of the empirical neural tangent kernel; curvature of the loss landscape; and noise present in the data. | A PAC-Bayesian Perspective on the Interpolating Information Criterion | [
"Liam Hodgkinson",
"Chris van der Heide",
"Robert Salomone",
"Fred Roosta",
"Michael Mahoney"
] | Workshop/M3L | poster | 2311.07013 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=zarvq21MVP | @inproceedings{
huang2023graph,
title={Graph Neural Networks Benefit from Structural Information Provably: A Feature Learning Perspective},
author={Wei Huang and Yuan Cao and Haonan Wang and Xin Cao and Taiji Suzuki},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=zarvq21MVP}
} | Graph neural networks (GNNs) have shown remarkable capabilities in learning from graph-structured data, outperforming traditional multilayer perceptrons (MLPs) in numerous graph applications. Despite these advantages, there has been limited theoretical exploration into why GNNs are so effective, particularly from the perspective of feature learning. This study aims to address this gap by examining the role of graph convolution in feature learning theory under a specific data generative model. We undertake a comparative analysis of the optimization and generalization between two-layer graph convolutional networks (GCNs) and their convolutional neural network (CNN) counterparts. Our findings reveal that graph convolution significantly enhances the regime of low test error over CNNs. This highlights a substantial discrepancy between GNNs and MLPs in terms of generalization capacity, a conclusion further supported by our empirical simulations on both synthetic and real-world datasets. | Graph Neural Networks Benefit from Structural Information Provably: A Feature Learning Perspective | [
"Wei Huang",
"Yuan Cao",
"Haonan Wang",
"Xin Cao",
"Taiji Suzuki"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=zaeQGiPVYY | @inproceedings{
ahn2023linear,
title={Linear attention is (maybe) all you need (to understand transformer optimization)},
author={Kwangjun Ahn and Xiang Cheng and Minhak Song and Chulhee Yun and Ali Jadbabaie and Suvrit Sra},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=zaeQGiPVYY}
} | Transformer training is notoriously difficult, requiring a careful design of optimizers and use of various heuristics. We make progress towards understanding the subtleties of training transformers by carefully studying a simple yet canonical linearized shallow transformer model. Specifically, we train linear transformers to solve regression tasks, inspired by J. von Oswald et al. (ICML 2023), and K. Ahn et al. (NeurIPS 2023). Most importantly, we observe that our proposed linearized models can reproduce several prominent aspects of transformer training dynamics. Consequently, the results obtained in this paper suggest that a simple linearized transformer model could actually be a valuable, realistic abstraction for understanding transformer optimization. | Linear attention is (maybe) all you need (to understand transformer optimization) | [
"Kwangjun Ahn",
"Xiang Cheng",
"Minhak Song",
"Chulhee Yun",
"Ali Jadbabaie",
"Suvrit Sra"
] | Workshop/M3L | oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=xzJ8Xt6wy7 | @inproceedings{
phunyaphibarn2023large,
title={Large Catapults in Momentum Gradient Descent with Warmup: An Empirical Study},
author={Prin Phunyaphibarn and Junghyun Lee and Bohan Wang and Huishuai Zhang and Chulhee Yun},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=xzJ8Xt6wy7}
} | Although gradient descent with momentum is widely used in modern deep learning, a concrete understanding of its effects on the training trajectory still remains elusive. In this work, we empirically show that momentum gradient descent with a large learning rate and learning rate warmup displays large catapults, driving the iterates towards flatter minima than those found by gradient descent. We then provide empirical evidence and theoretical intuition that the large catapult is caused by momentum ``amplifying'' the self-stabilization (Damian et al., 2023). | Large Catapults in Momentum Gradient Descent with Warmup: An Empirical Study | [
"Prin Phunyaphibarn",
"Junghyun Lee",
"Bohan Wang",
"Huishuai Zhang",
"Chulhee Yun"
] | Workshop/M3L | oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=xxYfmRTwyX | @inproceedings{
yang2023feature,
title={Feature Learning in Infinite-Depth Neural Networks},
author={Greg Yang and Dingli Yu and Chen Zhu and Soufiane Hayou},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=xxYfmRTwyX}
} | By classifying infinite-width neural networks and identifying the *optimal* limit, Tensor Programs IV and V demonstrated a universal way, called $\mu$P, for *widthwise hyperparameter transfer*, i.e., predicting optimal hyperparameters of wide neural networks from narrow ones. Here we investigate the analogous classification for *depthwise parametrizations* of deep residual networks (resnets). We classify depthwise parametrizations of block multiplier and learning rate by their infinite-width-then-depth limits. In resnets where each block has only one layer, we identify a unique optimal parametrization, called Depth-$\mu$P that extends $\mu$P and show empirically it admits depthwise hyperparameter transfer. We identify *feature diversity* as a crucial factor in deep networks, and Depth-$\mu$P can be characterized as maximizing both feature learning and feature diversity. Exploiting this, we find that absolute value, among all homogeneous nonlinearities, maximizes feature diversity and indeed empirically leads to significantly better performance. However, if each block is deeper (such as modern transformers), then we find fundamental limitations in all possible infinite-depth limits of such parametrizations, which we illustrate both theoretically and empirically on simple networks as well as Megatron transformer trained on Common Crawl. | Feature Learning in Infinite-Depth Neural Networks | [
"Greg Yang",
"Dingli Yu",
"Chen Zhu",
"Soufiane Hayou"
] | Workshop/M3L | oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=wsgXCcqiQY | @inproceedings{
dhuliawala2023variational,
title={Variational Classification},
author={Shehzaad Dhuliawala and Mrinmaya Sachan and Carl Allen},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=wsgXCcqiQY}
} | We present *variational classification* (VC), a latent variable generalisation of neural network softmax classification under cross-entropy loss. Our approach provides a novel probabilistic interpretation of the highly familiar softmax classification model, to which it relates much as a variational autoencoder relates to a deterministic autoencoder. We derive a training objective based on the evidence lower bound (ELBO) that is non-trivial to optimize, and an adversarial approach to maximise it. We reveal an inherent inconsistency within softmax classification that VC addresses, while also allowing flexible choices of distributions in the latent space in place of assumptions implicit in standard softmax classifiers. Empirical evaluation demonstrates that VC maintains accuracy while improving properties such as calibration and adversarial robustness, particularly under distribution shift and low-data settings. This work brings new theoretical insight to modern machine learning practice. | Variational Classification | [
"Shehzaad Dhuliawala",
"Mrinmaya Sachan",
"Carl Allen"
] | Workshop/M3L | poster | 2305.10406 | [
"https://github.com/shehzaadzd/variational-classification"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=vXkC6AOupO | @inproceedings{
dherin2023implicit,
title={Implicit biases in multitask and continual learning from a backward error analysis perspective},
author={Benoit Dherin},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=vXkC6AOupO}
} | Using backward error analysis, we compute implicit training biases in multitask and continual learning settings for neural networks trained with stochastic gradient descent. In particular, we derive modified losses that are implicitly minimized during training. They have three terms: the original loss, which accounts for convergence; an implicit flatness regularization term proportional to the learning rate; and a last term, the conflict term, which can theoretically be detrimental to both convergence and implicit regularization.
In the multitask setting, the conflict term is a well-known quantity measuring the gradient alignment between the tasks, while in continual learning the conflict term is a new quantity in deep learning optimization, although a basic tool in differential geometry: the Lie bracket between the task gradients. | Implicit biases in multitask and continual learning from a backward error analysis perspective | [
"Benoit Dherin"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=tMCsGRtzK2 | @inproceedings{
ebrahimpour-boroojeny2023spectrum,
title={Spectrum Extraction and Clipping for Implicitly Linear Layers},
author={Ali Ebrahimpour-Boroojeny and Matus Telgarsky and Hari Sundaram},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=tMCsGRtzK2}
} | We show the effectiveness of automatic differentiation in efficiently and correctly computing and controlling the spectrum of implicitly linear operators, a rich family of layer types including all standard convolutional and dense layers. We provide the first clipping method that is correct for general convolution layers, and illuminate the representational limitation that caused correctness issues in prior work. By comparing the accuracy and performance of our methods to existing methods in various experiments, we show that they lead to better generalization and adversarial robustness of the models. In addition to these advantages over state-of-the-art methods, our methods are much faster than the alternatives. | Spectrum Extraction and Clipping for Implicitly Linear Layers | [
"Ali Ebrahimpour-Boroojeny",
"Matus Telgarsky",
"Hari Sundaram"
] | Workshop/M3L | poster | 2402.16017 | [
"https://github.com/ali-e/fastclip"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=qxy72wUf90 | @inproceedings{
wang2023the,
title={The Noise Geometry of Stochastic Gradient Descent: A Quantitative and Analytical Characterization},
author={Mingze Wang and Lei Wu},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=qxy72wUf90}
} | Empirical studies have demonstrated that the noise in stochastic gradient descent (SGD) aligns favorably with the local geometry of the loss landscape. However, theoretical and quantitative explanations for this phenomenon remain sparse. In this paper, we offer a comprehensive theoretical investigation into the aforementioned {\em noise geometry} for over-parameterized linear models (OLMs) and two-layer neural networks. We scrutinize both average and directional alignments, paying special attention to how factors like sample size and input data degeneracy affect the alignment strength. As a specific application, we leverage our noise geometry characterizations to study how SGD escapes from sharp minima, revealing that the escape direction has significant components along flat directions. This is in stark contrast to GD, which escapes only along the sharpest directions. To substantiate our theoretical findings, both synthetic and real-world experiments are provided. | The Noise Geometry of Stochastic Gradient Descent: A Quantitative and Analytical Characterization | [
"Mingze Wang",
"Lei Wu"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=ovTv99C921 | @inproceedings{
alvarado2023curvaturedimension,
title={Curvature-Dimension Tradeoff for Generalization in Hyperbolic Space},
author={Nico Alvarado and Hans Lobel and Mircea Petrache},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=ovTv99C921}
} | The inclusion of task-relevant geometric embeddings in deep learning models is an important emerging direction of research, particularly when using hierarchical data. For instance, negatively curved geometries such as hyperbolic spaces are known to allow low-distortion embedding of tree-like hierarchical structures, which Euclidean spaces do not afford. Learning techniques for hyperbolic spaces, such as Hyperbolic Neural Networks (HNNs), have shown empirical accuracy improvement over classical Deep Neural Networks in tasks involving semantic or multi-scale information, such as recommender systems or molecular generation. However, no research has investigated generalization properties specific to such geometries. In this work, we introduce generalization bounds for learning tasks in hyperbolic spaces, marking the first time such bounds have been proposed. We highlight a previously unnoticed and important difference with Euclidean embedding models, namely, under embeddings into spaces of negative curvature $-\kappa<0$ and dimension $d$, only the product $\sqrt{\kappa}\ d$ influences generalization bounds. Hence, the curvature parameter of the space can be varied at fixed $d$ with the same effect on generalization as when varying $d$. | Curvature-Dimension Tradeoff for Generalization in Hyperbolic Space | [
"Nico Alvarado",
"Hans Lobel",
"Mircea Petrache"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=ocN0nmbAVo | @inproceedings{
qiu2023complexity,
title={Complexity Matters: Dynamics of Feature Learning in the Presence of Spurious Correlations},
author={GuanWen Qiu and Da Kuang and Surbhi Goel},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=ocN0nmbAVo}
} | Existing research often posits spurious features as "easier" to learn than core features in neural network optimization, but the nuanced impact of their relative simplicity remains under-explored. In this paper, we propose a theoretical framework and associated synthetic dataset grounded in boolean function analysis. Our framework allows for fine-grained control on both the relative complexity (compared to core features) and correlation strength (with respect to the label) of spurious features. Experimentally, we observe that the presence of _stronger_ spurious correlations or _simpler_ spurious features leads to a slower rate of learning for the core features in networks when trained with (stochastic) gradient descent. Perhaps surprisingly, we also observe that spurious features are not forgotten even when the network has _perfectly_ learned the core features. We give theoretical justifications for these observations for the special case of learning with parity features on a one-layer hidden network. Our findings justify the success of retraining the last layer for accelerating core feature convergence and identify limitations of debiasing algorithms that exploit early learning of spurious features. We corroborate our findings through experiments on real-world vision datasets, thereby validating the practical relevance of our framework. | Complexity Matters: Dynamics of Feature Learning in the Presence of Spurious Correlations | [
"GuanWen Qiu",
"Da Kuang",
"Surbhi Goel"
] | Workshop/M3L | poster | 2403.03375 | [
"https://github.com/NayutaQiu/Boolean_Spurious"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=oB6tknFuXF | @inproceedings{
sabanayagam2023unveiling,
title={Unveiling the Hessian's Connection to the Decision Boundary},
author={Mahalakshmi Sabanayagam and Freya Behrens and Urte Adomaityte and Anna Dawid},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=oB6tknFuXF}
} | Understanding the properties of well-generalizing minima is at the heart of deep learning research. On the one hand, the generalization of neural networks has been connected to the decision boundary complexity, which is hard to study in the high-dimensional input space. Conversely, the flatness of a minimum has become a controversial proxy for generalization. In this work, we provide the missing link between the two approaches and show that the Hessian top eigenvectors characterize the decision boundary learned by the neural network. Notably, the number of outliers in the Hessian spectrum is proportional to the complexity of the decision boundary. Based on this finding, we provide a new and straightforward approach to studying the complexity of a high-dimensional decision boundary. | Unveiling the Hessian's Connection to the Decision Boundary | [
"Mahalakshmi Sabanayagam",
"Freya Behrens",
"Urte Adomaityte",
"Anna Dawid"
] | Workshop/M3L | poster | 2306.07104 | [
"https://github.com/shmoo137/hessian-and-decision-boundary"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=mCjgbk31w1 | @inproceedings{
zhang2023nonparametric,
title={Nonparametric Classification on Low Dimensional Manifolds using Overparameterized Convolutional Residual Networks},
author={Zixuan Zhang and Kaiqi Zhang and Minshuo Chen and Yuma Takeda and Mengdi Wang and Tuo Zhao and Yu-Xiang Wang},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=mCjgbk31w1}
} | Convolutional residual neural networks (ConvResNets), though overparameterized, can achieve remarkable prediction performance in practice, which cannot be well explained by conventional wisdom. To bridge this gap, we study the performance of ConvResNeXts, which cover ConvResNets as a special case, trained with weight decay from the perspective of nonparametric classification. Our analysis allows for infinitely many building blocks in ConvResNeXts, and shows that weight decay implicitly enforces sparsity on these blocks. Specifically, we consider a smooth target function supported on a low-dimensional manifold, then prove that ConvResNeXts can adapt to the function smoothness and low-dimensional structures and efficiently learn the function without suffering from the curse of dimensionality. Our findings partially justify the advantage of overparameterized ConvResNeXts over conventional machine learning models. | Nonparametric Classification on Low Dimensional Manifolds using Overparameterized Convolutional Residual Networks | [
"Zixuan Zhang",
"Kaiqi Zhang",
"Minshuo Chen",
"Yuma Takeda",
"Mengdi Wang",
"Tuo Zhao",
"Yu-Xiang Wang"
] | Workshop/M3L | poster | 2307.01649 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=lIdMba8zHg | @inproceedings{
lobacheva2023large,
title={Large Learning Rates Improve Generalization: But How Large Are We Talking About?},
author={Ekaterina Lobacheva and Eduard Pokonechny and Maxim Kodryan and Dmitry Vetrov},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=lIdMba8zHg}
} | Inspired by recent research that recommends starting neural network training with large learning rates (LRs) to achieve the best generalization, we explore this hypothesis in detail. Our study clarifies the initial LR ranges that provide optimal results for subsequent training with a small LR or weight averaging. We find that these ranges are in fact significantly narrower than generally assumed. We conduct our main experiments in a simplified setup that allows precise control of the learning rate hyperparameter and validate our key findings in a more practical setting. | Large Learning Rates Improve Generalization: But How Large Are We Talking About? | [
"Ekaterina Lobacheva",
"Eduard Pokonechny",
"Maxim Kodryan",
"Dmitry Vetrov"
] | Workshop/M3L | poster | 2311.11303 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=kuAQRCHQNX | @inproceedings{
kosson2023understanding,
title={Understanding the Role of Noisy Statistics in the Regularization Effect of Batch Normalization},
author={Atli Kosson and Dongyang Fan and Martin Jaggi},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=kuAQRCHQNX}
} | Normalization layers have been shown to benefit the training stability and generalization of deep neural networks in various ways. For Batch Normalization (BN), the noisy statistics have been observed to have a regularization effect that depends on the batch size. Following this observation, Hoffer et al. proposed Ghost Batch Normalization (GBN), where BN is explicitly performed independently on smaller sub-batches, resulting in improved generalization in many settings. In this study, we analyze and isolate the effect of the noisy statistics by comparing BN and GBN, introducing a noise injection method. We then quantitatively assess the effects of the noise, juxtaposing it with other regularizers like dropout and examining its potential role in the generalization disparities between batch normalization and its alternatives, including layer normalization and normalization-free methods. | Understanding the Role of Noisy Statistics in the Regularization Effect of Batch Normalization | [
"Atli Kosson",
"Dongyang Fan",
"Martin Jaggi"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=kXf5CfXBbU | @inproceedings{
chen2023generalization,
title={Generalization Guarantees of Deep ResNets in the Mean-Field Regime},
author={Yihang Chen and Fanghui Liu and Yiping Lu and Grigorios Chrysos and Volkan Cevher},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=kXf5CfXBbU}
} | Despite the widespread empirical success of ResNets, the generalization ability of deep ResNets is rarely explored beyond the lazy-training regime. In this work, we investigate ResNets in the limit of infinitely deep and wide neural networks, whose gradient flow is described by a partial differential equation in the large-network limit, i.e., the \emph{mean-field} regime.
To derive the generalization bounds under this setting, our analysis necessitates a shift from the conventional time-invariant Gram matrix employed in the lazy training regime to a time-variant, distribution-dependent version tailored to the mean-field regime.
To this end, we provide a lower bound on the minimum eigenvalue of the Gram matrix under the mean-field regime.
Besides, the dynamics of the Kullback-Leibler (KL) divergence also need to remain traceable under the mean-field regime.
We therefore establish the linear convergence of the empirical error and estimate the upper bound of the KL divergence over parameters distribution.
The above two results are employed to establish uniform convergence for the generalization bound via Rademacher complexity.
Our results offer new insights into the generalization ability of deep ResNet beyond the lazy training regime and contribute to advancing the understanding of the fundamental properties of deep neural networks. | Generalization Guarantees of Deep ResNets in the Mean-Field Regime | [
"Yihang Chen",
"Fanghui Liu",
"Yiping Lu",
"Grigorios Chrysos",
"Volkan Cevher"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=jRqhooP4f9 | @inproceedings{
kumano2023theoretical,
title={Theoretical Explanation for Generalization from Adversarial Perturbations},
author={Soichiro Kumano and Hiroshi Kera and Toshihiko Yamasaki},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=jRqhooP4f9}
} | It is not fully understood why adversarial examples can deceive neural networks and transfer between different networks. To elucidate this, several studies hypothesized that adversarial perturbations contain data features that are imperceptible to humans but still recognizable by neural networks. Empirical evidence has shown that neural networks trained on mislabeled samples with these perturbations can generalize to natural test data. However, a theoretical understanding of this counterintuitive phenomenon is limited. In this study, assuming orthogonal training samples, we first prove that one-hidden-layer neural networks can learn natural data structures from adversarial perturbations. Our results indicate that, under mild conditions, the decision boundary from learning perturbations aligns with that from natural data, except for specific points in the input space. | Theoretical Explanation for Generalization from Adversarial Perturbations | [
"Soichiro Kumano",
"Hiroshi Kera",
"Toshihiko Yamasaki"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=iKiEzVD0DC | @inproceedings{
huang2023incontext,
title={In-Context Convergence of Transformers},
author={Yu Huang and Yuan Cheng and Yingbin Liang},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=iKiEzVD0DC}
} | Transformers have recently revolutionized many domains in modern machine learning and one salient discovery is their remarkable in-context learning capability, where models can solve an unseen task by utilizing task-specific prompts without further parameters fine-tuning. This also inspired recent theoretical studies aiming to understand the in-context learning mechanism of transformers, which however focused only on $\textbf{linear}$ transformers. In this work, we take the first step toward studying the learning dynamics of a one-layer transformer with $\textbf{softmax}$ attention trained via gradient descent in order to in-context learn linear function classes. We consider a structured data model, where each token is randomly sampled from a set of feature vectors in either balanced or imbalanced fashion. For data with balanced features, we establish the finite-time convergence guarantee with near-zero prediction error by navigating our analysis over two phases of the training dynamics of the attention map. More notably, for data with imbalanced features, we show that the learning dynamics take a stage-wise convergence process, where the transformer first converges to a near-zero prediction error for the query tokens of dominant features, and then converges later to a near-zero prediction error for the query tokens of under-represented features, respectively via one and four training phases. Our proof features new techniques for analyzing the competing strengths of two types of attention weights, the change of which determines different phases. | In-Context Convergence of Transformers | [
"Yu Huang",
"Yuan Cheng",
"Yingbin Liang"
] | Workshop/M3L | oral | 2310.05249 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=iBDcaBLhz2 | @inproceedings{
dandi2023how,
title={How Two-Layer Neural Networks Learn, One (Giant) Step at a Time},
author={Yatin Dandi and Florent Krzakala and Bruno Loureiro and Luca Pesce and Ludovic Stephan},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=iBDcaBLhz2}
} | We investigate theoretically how the features of a $2$-layer neural network adapt to the structure of the target function through a few large batch gradient descent steps, leading to improvement in the approximation capacity with respect to the initialization.
We compare the influence of batch size and that of multiple (but finitely many) steps. For a single gradient step, a batch of size $n =\mathcal{O}(d)$ is both necessary and sufficient to align with the target function, although only a single direction can be learned. In contrast, $n=\mathcal{O}(d^2)$ is essential for neurons to specialize to multiple relevant directions of the target with a single gradient step. Even in this case, we show there might exist ``hard'' directions requiring $n=\mathcal{O}(d^\ell)$ samples to be learned, where $\ell$ is known as the leap index of the target. The picture drastically improves over multiple gradient steps: we show that a batch-size of $n =\mathcal{O}(d)$ is indeed enough to learn multiple target directions satisfying a staircase property, where more and more directions can be learned over time. Finally, we discuss how these directions allow to drastically improve the approximation capacity and generalization error over the initialization, illustrating a separation of scale between the random features/lazy regime, and the feature learning regime. Our technical analysis leverages a combination of techniques related to concentration, projection-based conditioning, and Gaussian equivalence which we believe are of independent interest. By pinning down the conditions necessary for specialization and learning, our results highlight the interaction between batch size and number of iterations, and lead to a hierarchical depiction where learning performance exhibits a stairway to accuracy over time and batch size, shedding new light on how neural nets adapt to features of the data. | How Two-Layer Neural Networks Learn, One (Giant) Step at a Time | [
"Yatin Dandi",
"Florent Krzakala",
"Bruno Loureiro",
"Luca Pesce",
"Ludovic Stephan"
] | Workshop/M3L | poster | 2305.18270 | [
"https://github.com/lucpoisson/giantstep"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=hWDqKtIwSo | @inproceedings{
wang2023two,
title={Two Facets of {SDE} Under an Information-Theoretic Lens: Generalization of {SGD} via Training Trajectories and via Terminal States},
author={Ziqiao Wang and Yongyi Mao},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=hWDqKtIwSo}
} | Stochastic differential equations (SDEs) have recently been shown to characterize well the dynamics of training machine learning models with SGD. This provides two opportunities for better understanding the generalization behaviour of SGD through its SDE approximation. Firstly, viewing SGD as full-batch gradient descent with Gaussian gradient noise allows us to obtain a trajectories-based generalization bound using the information-theoretic bound. Secondly, assuming mild conditions, we estimate the steady-state weight distribution of the SDE and use the information-theoretic bound to establish terminal-state-based generalization bounds. | Two Facets of SDE Under an Information-Theoretic Lens: Generalization of SGD via Training Trajectories and via Terminal States | [
"Ziqiao Wang",
"Yongyi Mao"
] | Workshop/M3L | poster | 2211.10691 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=hVcGX9oOeE | @inproceedings{
gong2023unraveling,
title={Unraveling the Complexities of Simplicity Bias: Mitigating and Amplifying Factors},
author={Xuchen Gong and Tianwen Fu},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=hVcGX9oOeE}
} | The success of neural networks depends on their generalization ability, yet Shah et al. conclude that the inherent bias towards simplistic features, a phenomenon called *Simplicity Bias*, hurts generalization by preferring simple but noisy features over complex yet predictive ones. We aim to understand the scenarios in which simplicity bias occurs more severely and the factors that help mitigate its effects. We show that many traditional insights, such as increasing the training set size and increasing the number of informative feature dimensions, are not as effective as balancing the modes of our data distribution, distorting the simplistic features, or even searching for a good initialization. Our empirical results reveal intriguing factors of simplicity bias, and we call for future investigation towards a more thorough understanding of simplicity bias and its interplay with related fields. | Unraveling the Complexities of Simplicity Bias: Mitigating and Amplifying Factors | [
"Xuchen Gong",
"Tianwen Fu"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=gLwzzmh79K | @inproceedings{
tarzanagh2023transformers,
title={Transformers as Support Vector Machines},
author={Davoud Ataee Tarzanagh and Yingcong Li and Christos Thrampoulidis and Samet Oymak},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=gLwzzmh79K}
} | The transformer architecture has led to revolutionary advancements in NLP. The attention layer within the transformer admits a sequence of input tokens $X$ and makes them interact through pairwise similarities computed as $\texttt{softmax}(XQK^\top X^\top)$, where $(K,Q)$ are the trainable key-query parameters. In this work, we establish a formal equivalence between the optimization geometry of self-attention and a hard-margin SVM problem that separates optimal input tokens from non-optimal tokens using linear constraints on the outer-products of token pairs. This formalism allows us to characterize the implicit bias of 1-layer transformers optimized with gradient descent: (1) Optimizing the attention layer, parameterized by $(K,Q)$, with vanishing regularization, converges in direction to an SVM solution minimizing the nuclear norm of the combined parameter $W:=KQ^\top$. Instead, directly parameterizing by $W$ minimizes a Frobenius norm SVM objective. (2) Complementing this, for $W$-parameterization, we prove the local/global directional convergence of gradient descent under suitable geometric conditions, and propose a more general SVM equivalence that predicts the implicit bias of attention with nonlinear heads/MLPs. | Transformers as Support Vector Machines | [
"Davoud Ataee Tarzanagh",
"Yingcong Li",
"Christos Thrampoulidis",
"Samet Oymak"
] | Workshop/M3L | poster | 2308.16898 | [
"https://github.com/umich-sota/tf-as-svm"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=eHZhjP5QdR | @inproceedings{
kim2023symmetric,
title={Symmetric Mean-field Langevin Dynamics for Distributional Minimax Problems},
author={Juno Kim and Kakei Yamamoto and Kazusato Oko and Zhuoran Yang and Taiji Suzuki},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=eHZhjP5QdR}
} | In this paper, we extend mean-field Langevin dynamics to minimax optimization over probability distributions for the first time with symmetric and provably convergent updates. We propose \emph{mean-field Langevin averaged gradient} (MFL-AG), a single-loop algorithm that implements gradient descent ascent in the distribution spaces with a novel weighted averaging, and establish average-iterate convergence to the mixed Nash equilibrium. We also study both time and particle discretization regimes and prove a new uniform-in-time propagation of chaos result which accounts for the dependency of the particle interactions on all previous distributions. Furthermore, we propose \emph{mean-field Langevin anchored best response} (MFL-ABR), a symmetric double-loop algorithm based on best response dynamics with linear last-iterate convergence. Finally, we study applications to zero-sum Markov games and conduct simulations demonstrating long-term optimality. | Symmetric Mean-field Langevin Dynamics for Distributional Minimax Problems | [
"Juno Kim",
"Kakei Yamamoto",
"Kazusato Oko",
"Zhuoran Yang",
"Taiji Suzuki"
] | Workshop/M3L | poster | 2312.01127 | [
""
] | https://huggingface.co/papers/2312.01127 | 0 | 0 | 0 | 5 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=dq5QGXGxoJ | @inproceedings{
izzo2023a,
title={A Theoretical Study of Dataset Distillation},
author={Zachary Izzo and James Zou},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=dq5QGXGxoJ}
} | Modern machine learning models are often trained using massive amounts of data. Such large datasets come at a high cost in terms of both storage and computation, especially when the data will need to be used repeatedly (e.g., for neural architecture search or continual learning). _Dataset distillation_ (DD) describes the process of constructing a smaller ``distilled'' dataset (usually consisting of synthetic examples), such that models trained on the distilled dataset will be similar to models trained on the original dataset. In this paper, we study DD from a theoretical perspective. We show that for generalized linear models, it is possible to construct a distilled dataset with only a _single point_ which will exactly recover the model trained on the original dataset, regardless of the original number of points. We provide a specialized distillation for linear regression with size independent of the original number of points, but which perfectly reconstructs the model obtained from the original dataset with _any_ data-independent regularizer, or by combining the original dataset with any additional data. We also provide impossibility results showing that similar constructions are impossible for logistic regression, and that DD cannot be accomplished in general for kernel regression, even if the goal is only to recover a single model. | A Theoretical Study of Dataset Distillation | [
"Zachary Izzo",
"James Zou"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=dE5MEi9906 | @inproceedings{
fu2023transformers,
title={Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Models},
author={Deqing Fu and Tianqi Chen and Robin Jia and Vatsal Sharan},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=dE5MEi9906}
} | Transformers are remarkably good at *in-context learning* (ICL)---learning from demonstrations without parameter updates---but how they perform ICL remains a mystery. Recent work suggests that Transformers may learn in-context by internally running Gradient Descent, a first-order optimization method. In this paper, we instead demonstrate that Transformers learn to implement higher-order optimization methods to perform ICL. Focusing on in-context linear regression, we show that Transformers learn to implement an algorithm very similar to *Iterative Newton's Method*, a higher-order optimization method, rather than Gradient Descent. Empirically, we show that predictions from successive Transformer layers closely match different iterations of Newton's Method *linearly*, with each middle layer roughly computing 3 iterations. In contrast, *exponentially* more Gradient Descent steps are needed to match an additional Transformers layer;
this suggests that Transformers have a rate of convergence comparable to higher-order methods such as Iterative Newton, which are exponentially faster than Gradient Descent. We also show that Transformers can learn in-context on ill-conditioned data, a setting where Gradient Descent struggles but Iterative Newton succeeds. Finally, we show theoretical results which support our empirical findings and have a close correspondence with them: we prove that Transformers can implement $k$ iterations of Newton's method with $\mathcal{O}(k)$ layers. | Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Models | [
"Deqing Fu",
"Tianqi Chen",
"Robin Jia",
"Vatsal Sharan"
] | Workshop/M3L | poster | 2310.17086 | [
"https://github.com/deqingfu/transformers-icl-higher-order"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=c71B6zW70d | @inproceedings{
schweighofer2023introducing,
title={Introducing an Improved Information-Theoretic Measure of Predictive Uncertainty},
author={Kajetan Schweighofer and Lukas Aichberger and Mykyta Ielanskyi and Sepp Hochreiter},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=c71B6zW70d}
} | Applying a machine learning model for decision-making in the real world requires to distinguish what the model knows from what it does not. A critical factor in assessing the knowledge of a model is to quantify its predictive uncertainty. Predictive uncertainty is commonly measured by the entropy of the Bayesian model average (BMA) predictive distribution. Yet, the properness of this current measure of predictive uncertainty was recently questioned. We provide new insights regarding those limitations. Our analyses show that the current measure erroneously assumes that the BMA predictive distribution is equivalent to the predictive distribution of the true model that generated the dataset. Consequently, we introduce a theoretically grounded measure to overcome these limitations. We experimentally verify the benefits of our introduced measure of predictive uncertainty. We find that our introduced measure behaves more reasonably in controlled synthetic tasks. Moreover, our evaluations on ImageNet demonstrate that our introduced measure is advantageous in real-world applications utilizing predictive uncertainty. | Introducing an Improved Information-Theoretic Measure of Predictive Uncertainty | [
"Kajetan Schweighofer",
"Lukas Aichberger",
"Mykyta Ielanskyi",
"Sepp Hochreiter"
] | Workshop/M3L | poster | 2311.08309 | [
""
] | https://huggingface.co/papers/2311.08309 | 1 | 0 | 0 | 4 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=aBeZ3jid9i | @inproceedings{
wibisono2023on,
title={On the Role of Unstructured Training Data in Transformers' In-Context Learning Capabilities},
author={Kevin Christian Wibisono and Yixin Wang},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=aBeZ3jid9i}
} | Transformers have exhibited impressive in-context learning (ICL) capabilities: they can generate predictions for new query inputs based on sequences of inputs and outputs (i.e., prompts) without parameter updates. Efforts to provide theoretical explanations for the emergence of these abilities have primarily focused on the structured data setting, where input-output pairings in the training data are known. This scenario can enable simplified transformers (e.g., ones comprising a single attention layer without the softmax activation) to achieve notable ICL performance. However, transformers are primarily trained on unstructured data that rarely include such input-output pairings. To better understand how ICL emerges, we propose to study transformers that are trained on unstructured data, namely data that lack prior knowledge of input-output pairings. This new setting elucidates the pivotal role of softmax attention in the robust ICL abilities of transformers, particularly those with a single attention layer. We posit that the significance of the softmax activation partially stems from the equivalence of softmax-based attention models with mixtures of experts, facilitating the implicit inference of input-output pairings in the test prompts. Additionally, a probing analysis reveals where these pairings are learned within the model. While subsequent layers predictably encode more information about these pairings, we find that even the first attention layer contains a significant amount of pairing information. | On the Role of Unstructured Training Data in Transformers' In-Context Learning Capabilities | [
"Kevin Christian Wibisono",
"Yixin Wang"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=a1JCT4NPyP | @inproceedings{
huben2023attentiononly,
title={Attention-Only Transformers and Implementing {MLP}s with Attention Heads},
author={Robert Huben and Valerie Morris},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=a1JCT4NPyP}
} | The transformer architecture is widely used in machine learning models and consists of two alternating sublayers: attention heads and MLPs. We prove that an MLP neuron can be implemented by a masked attention head with internal dimension 1 so long as the MLP's activation function comes from a restricted class including SiLU and close approximations of ReLU and GeLU. This allows one to convert an MLP-and-attention transformer into an attention-only transformer at the cost of greatly increasing the number of attention heads. | Attention-Only Transformers and Implementing MLPs with Attention Heads | [
"Robert Huben",
"Valerie Morris"
] | Workshop/M3L | poster | 2309.08593 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ZBqB3XiN6M | @inproceedings{
bombari2023privacy,
title={Privacy at Interpolation: Precise Analysis for Random and {NTK} Features},
author={Simone Bombari and Marco Mondelli},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=ZBqB3XiN6M}
} | Deep learning models are able to
memorize the training set. This makes them vulnerable to recovery attacks, raising privacy concerns for users, and many widespread algorithms such as empirical risk minimization (ERM) do not directly enforce safety guarantees. In this paper, we study the safety of ERM models when the training samples are interpolated (i.e., *at interpolation*) against a family of powerful black-box information retrieval attacks. Our analysis quantifies this safety via two separate terms: *(i)* the model *stability* with respect to individual training samples, and *(ii)* the *feature alignment* between the attacker query and the original data. While the first term is well established in learning theory and it
is connected to the generalization error in classical work, the second one is, to the best of our knowledge, novel.
Our key technical result characterizes precisely the feature alignment for the two prototypical settings of random features (RF) and neural tangent kernel (NTK) regression.
This proves that privacy strengthens with an increase in generalization capability, unveiling the role of the model and of its activation function.
Numerical experiments show an agreement with our theory not only for RF/NTK models, but also for deep neural networks trained on standard datasets (MNIST, CIFAR-10). | Privacy at Interpolation: Precise Analysis for Random and NTK Features | [
"Simone Bombari",
"Marco Mondelli"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=Z7UaGFmg8O | @inproceedings{
kausik2023denoising,
title={Denoising Low-Rank Data Under Distribution Shift: Double Descent and Data Augmentation},
author={Chinmaya Kausik and Kashvi Srivastava and Rishi Sonthalia},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=Z7UaGFmg8O}
} | Despite the importance of denoising in modern machine learning and ample empirical work on supervised denoising, its theoretical understanding is still relatively scarce. One concern about studying supervised denoising is that one might not always have noiseless training data from the test distribution. It is more reasonable to have access to noiseless training data from a different dataset than the test dataset. Motivated by this, we study supervised denoising and noisy-input regression under distribution shift. We add three considerations to increase the applicability of our theoretical insights to real-life data and modern machine learning. First, while most past theoretical work assumes that the data covariance matrix is full-rank and well-conditioned, empirical studies have shown that real-life data is approximately low-rank. Thus, we assume that our data matrices are low-rank. Second, we drop independence assumptions on our data. Third, the rise in computational power and dimensionality of data have made it important to study non-classical regimes of learning. Thus, we work in the non-classical proportional regime, where data dimension $d$ and number of samples $N$ grow as $d/N = c + o(1)$.
For this setting, we derive general test error expressions for both denoising and noisy-input regression, and study when overfitting the noise is benign, tempered or catastrophic. We show that the test error exhibits double descent under general distribution shift, providing insights for data augmentation and the role of noise as an implicit regularizer. We also perform experiments using real-life data, where we match the theoretical predictions with under 1\% MSE error for low-rank data. | Denoising Low-Rank Data Under Distribution Shift: Double Descent and Data Augmentation | [
"Chinmaya Kausik",
"Kashvi Srivastava",
"Rishi Sonthalia"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=XMHpZIIOXk | @inproceedings{
moniri2023a,
title={A Theory of Non-Linear Feature Learning with One Gradient Step in Two-Layer Neural Networks},
author={Behrad Moniri and Donghwan Lee and Hamed Hassani and Edgar Dobriban},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=XMHpZIIOXk}
} | Feature learning is thought to be one of the fundamental reasons for the success of deep neural networks.
It is rigorously known that in two-layer fully-connected neural networks under certain conditions, one step of gradient descent on the first layer followed by ridge regression on the second layer can lead to feature learning; characterized by the appearance of a separated rank-one component---spike---in the spectrum of the feature matrix.
However, with a constant gradient descent step size, this spike only carries information from the linear component of the target function and therefore learning non-linear components is impossible.
We show that with a learning rate that grows with the sample size,
such training in fact introduces
multiple rank-one components,
each corresponding to a specific polynomial feature.
We further prove that the limiting large-dimensional and large sample training and test errors of the updated neural networks are fully characterized by these spikes.
By precisely analyzing the improvement in the loss, we demonstrate that these non-linear features can enhance learning. | A Theory of Non-Linear Feature Learning with One Gradient Step in Two-Layer Neural Networks | [
"Behrad Moniri",
"Donghwan Lee",
"Hamed Hassani",
"Edgar Dobriban"
] | Workshop/M3L | poster | 2310.07891 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=WvZV3JvmeR | @inproceedings{
xu2023benign,
title={Benign Overfitting and Grokking in Re{LU} Networks for {XOR} Cluster Data},
author={Zhiwei Xu and Yutong Wang and Spencer Frei and Gal Vardi and Wei Hu},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=WvZV3JvmeR}
} | Neural networks trained by gradient descent (GD) have exhibited a number of surprising generalization behaviors. First, they can achieve a perfect fit to noisy training data and still generalize near-optimally, showing that overfitting can sometimes be benign. Second, they can undergo a period of classical, harmful overfitting---achieving a perfect fit to training data with near-random performance on test data---before transitioning (''grokking'') to near-optimal generalization later in training. In this work, we show that both of these phenomena provably occur in two-layer ReLU networks trained by GD on XOR cluster data where a constant fraction of the training labels are flipped. In this setting, we show that after the first step of GD, the network achieves 100\% training accuracy, perfectly fitting the noisy labels in the training data, but achieves near-random test accuracy. At a later training step, the network achieves near-optimal test accuracy while still fitting the random labels in the training data, exhibiting a ''grokking'' phenomenon. This provides the first theoretical result of benign overfitting in neural network classification when the data distribution is not linearly separable. Our proofs rely on analyzing the feature learning process under GD, which reveals that the network implements a non-generalizable linear classifier after one step and gradually learns generalizable features in later steps. | Benign Overfitting and Grokking in ReLU Networks for XOR Cluster Data | [
"Zhiwei Xu",
"Yutong Wang",
"Spencer Frei",
"Gal Vardi",
"Wei Hu"
] | Workshop/M3L | oral | 2310.02541 | [
""
] | https://huggingface.co/papers/2310.02541 | 2 | 0 | 0 | 5 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=WooXHaAvKQ | @inproceedings{
zhou2023how,
title={How does Gradient Descent Learn Features --- A Local Analysis for Regularized Two-Layer Neural Networks},
author={Mo Zhou and Rong Ge},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=WooXHaAvKQ}
} | The ability to learn useful features is one of the major advantages of neural networks. Although recent works show that neural networks can operate in a neural tangent kernel (NTK) regime that does not allow feature learning, many works also demonstrate the potential for neural networks to go beyond the NTK regime and perform feature learning. Recently, a line of work highlighted the feature learning capabilities of the early stages of gradient-based training. In this paper we consider another mechanism for feature learning via gradient descent through a local convergence analysis. We show that once the loss is below a certain threshold, gradient descent with a carefully regularized objective will capture ground-truth directions. Our results demonstrate that feature learning not only happens at the initial gradient steps, but can also occur towards the end of training. | How does Gradient Descent Learn Features — A Local Analysis for Regularized Two-Layer Neural Networks | [
"Mo Zhou",
"Rong Ge"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=WGWM0MzWAg | @inproceedings{
chen2023understanding,
title={Understanding Transferable Representation Learning and Zero-shot Transfer in {CLIP}},
author={Zixiang Chen and Yihe Deng and Yuanzhi Li and Quanquan Gu},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=WGWM0MzWAg}
} | Multi-modal learning has become increasingly popular due to its ability to leverage information from different data sources. Recently, CLIP has emerged as an effective approach that employs vision-language contrastive pretraining to learn joint image and text representations and exhibits remarkable performance in zero-shot learning and text-guided natural image generation. Despite the huge practical success of CLIP, its theoretical understanding remains elusive. In this paper, we formally study transferrable representation learning underlying CLIP and demonstrate how features from different modalities get aligned. We also analyze its zero-shot transfer performance on the downstream tasks. Inspired by our analysis, we propose a new CLIP-type approach, which achieves better performance than CLIP and other state-of-the-art methods on benchmark datasets. | Understanding Transferable Representation Learning and Zero-shot Transfer in CLIP | [
"Zixiang Chen",
"Yihe Deng",
"Yuanzhi Li",
"Quanquan Gu"
] | Workshop/M3L | oral | 2310.00927 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=Vg6oMb7fbh | @inproceedings{
zhao2023provably,
title={Provably Efficient {CV}aR {RL} in Low-rank {MDP}s},
author={Yulai Zhao and Wenhao Zhan and Xiaoyan Hu and Ho-fung Leung and Farzan Farnia and Wen Sun and Jason Lee},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=Vg6oMb7fbh}
} | We study risk-sensitive Reinforcement Learning (RL), where we aim to maximize
the Conditional Value at Risk (CVaR) with a fixed risk tolerance $\tau$.
Prior theoretical work studying risk-sensitive RL focuses on the tabular Markov Decision Processes (MDPs) setting.
To extend CVaR RL to settings where the state space is large, function approximation must be deployed.
We study CVaR RL in low-rank MDPs with nonlinear function approximation. Low-rank MDPs assume the underlying transition kernel admits a low-rank decomposition, but unlike prior linear models, low-rank MDPs do not assume the feature or state-action representation is known.
We propose a novel Upper Confidence Bound (UCB) bonus-driven algorithm to carefully balance the interplay between exploration, exploitation, and representation learning in CVaR RL.
We prove that our algorithm achieves a sample complexity of $\tilde{O}\left(\frac{H^7 A^2 d^4}{\tau^2 \epsilon^2}\right)$ to yield an $\epsilon$-optimal CVaR, where $H$ is the length of each episode, $A$ is the capacity of the action space, and $d$ is the dimension of representations.
Computation-wise, we design a novel discretized Least-Squares Value Iteration (LSVI) algorithm for the CVaR objective as the planning oracle and show that we can find the near-optimal policy in polynomial running time with a Maximum Likelihood Estimation oracle.
To our knowledge, this is the first provably efficient CVaR RL algorithm in low-rank MDPs. | Provably Efficient CVaR RL in Low-rank MDPs | [
"Yulai Zhao",
"Wenhao Zhan",
"Xiaoyan Hu",
"Ho-fung Leung",
"Farzan Farnia",
"Wen Sun",
"Jason Lee"
] | Workshop/M3L | poster | 2311.11965 | [
""
] | https://huggingface.co/papers/2311.11965 | 1 | 0 | 0 | 7 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=StN285pphC | @inproceedings{
mehra2023analysis,
title={Analysis of Task Transferability in Large Pre-trained Classifiers},
author={Akshay Mehra and Yunbei Zhang and Jihun Hamm},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=StN285pphC}
} | Transfer learning is a cornerstone of modern machine learning, enabling models to transfer the knowledge acquired from a source task to downstream target tasks with minimal fine-tuning. However, the relationship between the source task performance and the downstream target task performance (i.e., transferability) is poorly understood. In this work, we rigorously analyze the transferability of large pre-trained models on downstream classification tasks after linear fine-tuning. We use a novel Task Transfer Analysis approach that transforms the distribution (and classifier) of the source task to produce a new distribution (and classifier) similar to that of the target task. Using this, we propose an upper bound on transferability composed of the Wasserstein distance between the transformed source and the target distributions, the conditional entropy between the label distributions of the two tasks, and the weighted loss of the source classifier on the source task. We propose an optimization problem that minimizes the proposed bound to estimate transferability. Using state-of-the-art pre-trained models, we show that the proposed upper bound accurately estimates transferability on various datasets and demonstrates the importance of high relatedness between the source and target tasks for achieving high transferability. | Analysis of Task Transferability in Large Pre-trained Classifiers | [
"Akshay Mehra",
"Yunbei Zhang",
"Jihun Hamm"
] | Workshop/M3L | poster | 2307.00823 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=Shqnglu4En | @inproceedings{
tahmasebi2023on,
title={On Scale-Invariant Sharpness Measures},
author={Behrooz Tahmasebi and Ashkan Soleymani and Stefanie Jegelka and Patrick Jaillet},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=Shqnglu4En}
} | Recently, there has been a substantial surge of interest in the development of optimization algorithms tailored for overparameterized models. This interest centers around the objective of minimizing a concept of sharpness in conjunction with the original loss function, e.g., the Sharpness-Aware Minimization (SAM) algorithm, which has proven effective in practice. Nevertheless, the majority of sharpness measures exhibit sensitivity to parameter scaling in neural networks, and they may even experience significant magnification when subjected to rescaling operations. Motivated by this issue, in this paper, we introduce a new class of scale-invariant sharpness measures that gives rise to a new class of scale-invariant sharpness-aware objective functions. Furthermore, we prove that the newly introduced objective functions are explicitly biased towards the minimization of our scale-invariant sharpness measures. | On Scale-Invariant Sharpness Measures | [
"Behrooz Tahmasebi",
"Ashkan Soleymani",
"Stefanie Jegelka",
"Patrick Jaillet"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=SU6KGZweUJ | @inproceedings{
chen2023gibbsbased,
title={Gibbs-Based Information Criteria and the Over-Parameterized Regime},
author={Haobo Chen and Yuheng Bu and Gregory Wornell},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=SU6KGZweUJ}
} | Double-descent refers to the unexpected drop in test loss of a learning algorithm beyond an interpolating threshold with over-parameterization, which is not predicted by information criteria in their classical forms due to the limitations in the standard asymptotic approach. We update these analyses using the information risk minimization framework and provide Bayesian Information Criterion (BIC) for models trained by the Gibbs algorithm. Notably, the BIC penalty term for the Gibbs algorithm corresponds to a specific information measure, i.e., KL divergence. We extend this information-theoretic analysis to over-parameterized models by characterizing the Gibbs-based BIC for the random feature model in the regime where the number of parameters $p$ and the number of samples $n$ tend to infinity, with $p/n$ fixed. Our experiments demonstrate that the Gibbs-based BIC can select the high-dimensional model and reveal the mismatch between marginal likelihood and population risk in the over-parameterized regime, providing new insights for understanding the double-descent phenomenon. | Gibbs-Based Information Criteria and the Over-Parameterized Regime | [
"Haobo Chen",
"Yuheng Bu",
"Gregory Wornell"
] | Workshop/M3L | poster | 2306.05583 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=QPMfCLnIqf | @inproceedings{
mohamadi2023grokking,
title={Grokking modular arithmetic can be explained by margin maximization},
author={Mohamad Amin Mohamadi and Zhiyuan Li and Lei Wu and Danica Sutherland},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=QPMfCLnIqf}
} | We present a margin-based generalization theory explaining the “grokking” phenomenon (Power et al., 2022), where the model generalizes long after overfitting to arithmetic datasets. Specifically, we study two-layer quadratic networks on mod-$p$ arithmetic problems, and show that solutions with maximal margin normalized by $\ell_\infty$ norm generalize with $\tilde O(p^{5/3})$ samples. To the best of our knowledge, this is the first sample complexity bound strictly better than a trivial $O(p^2)$ complexity for modular addition. Empirically, we find that GD on unregularized $\ell_2$ or cross entropy loss tends to maximize the margin. In contrast, we show that kernel-based models, such as networks that are well-approximated by their neural tangent kernel, need $\Omega(p^2)$ samples to achieve non-trivial $\ell_2$ loss. Our theory suggests that grokking might be caused by overfitting in the kernel regime of early training, followed by generalization as gradient descent eventually leaves the kernel regime and maximizes the normalized margin. | Grokking modular arithmetic can be explained by margin maximization | [
"Mohamad Amin Mohamadi",
"Zhiyuan Li",
"Lei Wu",
"Danica Sutherland"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=QBOV4DqFh6 | @inproceedings{
ayed2023overparameterised,
title={Over-parameterised Shallow Neural Networks with Asymmetrical Node Scaling: Global Convergence Guarantees and Feature Learning},
author={Fadhel Ayed and Francois Caron and Paul Jung and Juho Lee and Hoil Lee and Hongseok Yang},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=QBOV4DqFh6}
} | We consider gradient-based optimisation of wide, shallow neural networks with hidden-node outputs scaled by positive scale parameters. The scale parameters are non-identical, differing from classical Neural Tangent Kernel (NTK) parameterisation. We prove that, for large networks, with high probability, gradient flow converges to a global minimum AND can learn features, unlike in the NTK regime. | Over-parameterised Shallow Neural Networks with Asymmetrical Node Scaling: Global Convergence Guarantees and Feature Learning | [
"Fadhel Ayed",
"Francois Caron",
"Paul Jung",
"Juho Lee",
"Hoil Lee",
"Hongseok Yang"
] | Workshop/M3L | poster | [
"https://github.com/anomdoubleblind/asymmetrical_scaling"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=PYZ2lNVxgz | @inproceedings{
keles2023on,
title={On the Computational Complexity of Inverting Generative Models},
author={Feyza Duman Keles and Chinmay Hegde},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=PYZ2lNVxgz}
} | The objective of generative model inversion is to identify a size-$n$ latent vector that produces a generative model output that closely matches a given target. This operation is a core computational primitive in numerous modern applications involving computer vision and NLP. However, the problem is known to be computationally challenging and NP-hard in the worst case. This paper aims to provide a fine-grained view of the landscape of computational hardness for this problem. We establish several new hardness lower bounds for both exact and approximate model inversion. In exact inversion, the goal is to determine whether a target is contained within the range of a given generative model. Under the strong exponential time hypothesis (SETH), we demonstrate that the computational complexity of exact inversion is lower bounded by $\Omega(2^n)$ via a reduction from $k$-SAT; this is a strengthening of known results. For the more practically relevant problem of approximate inversion, the goal is to determine whether a point in the model range is close to a given target with respect to the $\ell_p$-norm. When $p$ is a positive odd integer, under SETH, we provide an $\Omega(2^n)$ complexity lower bound via a reduction from the closest vectors problem (CVP). Finally, when $p$ is even, under the exponential time hypothesis (ETH), we provide a lower bound of $2^{\Omega (n)}$ via a reduction from Half-Clique and Vertex-Cover. | On the Computational Complexity of Inverting Generative Models | [
"Feyza Duman Keles",
"Chinmay Hegde"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=M5SPJzYsWF | @inproceedings{
xu2023flowbased,
title={Flow-based Distributionally Robust Optimization},
author={Chen Xu and Jonghyeok Lee and Xiuyuan Cheng and Yao Xie},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=M5SPJzYsWF}
} | Flow-based models establish a continuous-time invertible transport map between a data distribution and a pre-specified target distribution, such as the standard Gaussian in normalizing flows. In this work, we go beyond the constraint of known target distributions. We specifically aim to find the worst-case distribution in distributionally robust optimization (DRO), which is an infinite-dimensional problem that becomes particularly challenging in high-dimensional settings. To this end, we introduce a computational tool called FlowDRO. Specifically, we reformulate the difficult task of identifying the worst-case distribution within a Wasserstein-2 uncertainty set into a more manageable form, i.e., training the parameters of a corresponding flow-based neural network. Notably, the proposed FlowDRO is applicable to general risk functions and data distributions in DRO. We demonstrate the effectiveness of the proposed approach in various high-dimensional problems that can be viewed as DRO, including adversarial attacks and differential privacy. | Flow-based Distributionally Robust Optimization | [
"Chen Xu",
"Jonghyeok Lee",
"Xiuyuan Cheng",
"Yao Xie"
] | Workshop/M3L | poster | 2310.19253 | [
"https://github.com/hamrel-cxu/flowdro"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=LyF1LmzXtU | @inproceedings{
lin2023transformers,
title={Transformers as Decision Makers: Provable In-Context Reinforcement Learning via Supervised Pretraining},
author={Licong Lin and Yu Bai and Song Mei},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=LyF1LmzXtU}
} | Large transformer models pretrained on offline reinforcement learning datasets have demonstrated remarkable in-context reinforcement learning (ICRL) capabilities, where they can make good decisions when prompted with interaction trajectories from unseen environments. However, when and how transformers can be trained to perform ICRL have not been theoretically well-understood. In particular, it is unclear which reinforcement-learning algorithms transformers can perform in context, and how distribution mismatch in offline training data affects the learned algorithms.
This paper provides a theoretical framework that analyzes supervised pretraining for ICRL. This includes two recently proposed training methods --- algorithm distillation and decision-pretrained transformers. First, assuming model realizability, we prove the supervised-pretrained transformer will imitate the conditional expectation of the expert algorithm given the observed trajectory. The generalization error will scale with model capacity and a distribution divergence factor between the expert and offline algorithms. Second, we show transformers with ReLU attention can efficiently approximate near-optimal online reinforcement learning algorithms like LinUCB and Thompson sampling for stochastic linear bandits, and UCB-VI for tabular Markov decision processes. This provides the first quantitative analysis of the ICRL capabilities of transformers pretrained from offline trajectories. | Transformers as Decision Makers: Provable In-Context Reinforcement Learning via Supervised Pretraining | [
"Licong Lin",
"Yu Bai",
"Song Mei"
] | Workshop/M3L | poster | 2310.08566 | [
"https://github.com/licong-lin/in-context-rl"
] | https://huggingface.co/papers/2310.08566 | 2 | 0 | 0 | 3 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=LaFMLwI3rM | @inproceedings{
guo2023how,
title={How Do Transformers Learn In-Context Beyond Simple Functions? A Case Study on Learning with Representations},
author={Tianyu Guo and Wei Hu and Song Mei and Huan Wang and Caiming Xiong and Silvio Savarese and Yu Bai},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=LaFMLwI3rM}
} | While large language models based on the transformer architecture have demonstrated remarkable in-context learning (ICL) capabilities, understanding of such capabilities is still at an early stage, where existing theory and mechanistic understanding focus mostly on simple scenarios such as learning simple function classes. This paper takes initial steps toward understanding ICL in more complex scenarios, by studying learning with \emph{representations}. Concretely, we construct synthetic in-context learning problems with a compositional structure, where the label depends on the input through a possibly complex but \emph{fixed} representation function, composed with a linear function that \emph{differs} in each instance. By construction, the optimal ICL algorithm first transforms the inputs by the representation function, and then performs linear ICL on top of the transformed dataset. We show theoretically the existence of transformers that approximately implement such algorithms with mild depth and size. Empirically, we find trained transformers consistently achieve near-optimal ICL performance in this setting, and exhibit the desired dissection where lower layers transform the dataset and upper layers perform linear ICL. Through extensive probing and a new pasting experiment, we further reveal several mechanisms within the trained transformers, such as concrete copying behaviors on both the inputs and the representations, linear ICL capability of the upper layers alone, and a post-ICL representation selection mechanism in a harder mixture setting. These observed mechanisms align well with our theory and may shed light on how transformers perform ICL in more realistic scenarios. | How Do Transformers Learn In-Context Beyond Simple Functions? A Case Study on Learning with Representations | [
"Tianyu Guo",
"Wei Hu",
"Song Mei",
"Huan Wang",
"Caiming Xiong",
"Silvio Savarese",
"Yu Bai"
] | Workshop/M3L | poster | 2310.10616 | [
""
] | https://huggingface.co/papers/2310.10616 | 0 | 1 | 0 | 7 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=KzR07JhgtW | @inproceedings{
laidlaw2023a,
title={A Theoretical Explanation of Deep {RL} Performance in Stochastic Environments},
author={Cassidy Laidlaw and Banghua Zhu and Stuart Russell and Anca Dragan},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=KzR07JhgtW}
} | Reinforcement learning (RL) theory has largely focused on proving minimax sample complexity bounds. These require _strategic_ exploration algorithms that use relatively limited function classes for representing the policy or value function. Our goal is to explain why deep RL algorithms often perform well in practice, despite using _random_ exploration and much more expressive function classes like neural networks. Our work arrives at an explanation by showing that many stochastic MDPs can be solved by performing only a few steps of value iteration on the random policy's Q function and then acting greedily. When this is true, we find that it is possible to separate the _exploration_ and _learning_ components of RL, making it much easier to analyze. We introduce a new RL algorithm, SQIRL, that iteratively learns a near-optimal policy by exploring randomly to collect rollouts and then performing a limited number of steps of fitted-Q iteration over those rollouts. We find that any regression algorithm that satisfies basic in-distribution generalization properties can be used in SQIRL to efficiently solve common MDPs. This can explain why deep RL works with complex function approximators like neural networks, since it is empirically established that neural networks generalize well in-distribution. Furthermore, SQIRL explains why random exploration works well in practice, since we show many environments can be solved by effectively estimating the random policy's Q-function and then applying zero or a few steps of value iteration. We leverage SQIRL to derive instance-dependent sample complexity bounds for RL that are exponential only in an "effective horizon" of lookahead—which is typically much smaller than the full horizon—and on the complexity of the class used for function approximation. Empirically, we also find that SQIRL performance strongly correlates with PPO and DQN performance in a variety of stochastic environments, supporting that our theoretical analysis is predictive of practical performance. | A Theoretical Explanation of Deep RL Performance in Stochastic Environments | [
"Cassidy Laidlaw",
"Banghua Zhu",
"Stuart Russell",
"Anca Dragan"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=Kx4gLWx2ze | @inproceedings{
mei2023deep,
title={Deep Networks as Denoising Algorithms: Sample-Efficient Learning of Diffusion Models in High-Dimensional Graphical Models},
author={Song Mei and Yuchen Wu},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=Kx4gLWx2ze}
} | We investigate the efficiency of deep neural networks for approximating score functions in diffusion-based generative modeling. While existing approximation theories leverage the smoothness of score functions, they suffer from the curse of dimensionality for intrinsically high-dimensional data. This limitation is pronounced in graphical models such as Markov random fields, where the approximation efficiency of score functions remains unestablished.
To address this, we note that score functions can often be well-approximated in graphical models through variational inference denoising algorithms. Furthermore, these algorithms can be efficiently represented by neural networks. We demonstrate this through examples, including Ising models, conditional Ising models, restricted Boltzmann machines, and sparse encoding models. Combined with off-the-shelf discretization error bounds for diffusion-based sampling, we provide an efficient sample complexity bound for diffusion-based generative modeling when the score function is learned by deep neural networks. | Deep Networks as Denoising Algorithms: Sample-Efficient Learning of Diffusion Models in High-Dimensional Graphical Models | [
"Song Mei",
"Yuchen Wu"
] | Workshop/M3L | oral | 2309.11420 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=KZ47dqKtGs | @inproceedings{
sonthalia2023underparameterized,
title={Under-Parameterized Double Descent for Ridge Regularized Least Squares Denoising of Data on a Line},
author={Rishi Sonthalia and Xinyue Li and Bochao Gu},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=KZ47dqKtGs}
} | In this paper, we present a simple example that provably exhibits double descent in the under-parameterized regime. For simplicity, we look at the ridge regularized least squares denoising problem with data on a line embedded in high-dimension space. By deriving an asymptotically accurate formula for the generalization error, we observe sample-wise and parameter-wise double descent with the peak in the under-parameterized regime rather than at the interpolation point or in the over-parameterized regime. Further, the peak of the sample-wise double descent curve corresponds to a peak in the curve for the norm of the estimator, and adjusting $\mu$, the strength of the ridge regularization, shifts the location of the peak. We observe that parameter-wise double descent occurs for this model for small $\mu$. For larger values of $\mu$, we observe that the curve for the norm of the estimator has a peak but that this no longer translates to a peak in the generalization error. | Under-Parameterized Double Descent for Ridge Regularized Least Squares Denoising of Data on a Line | [
"Rishi Sonthalia",
"Xinyue Li",
"Bochao Gu"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=IfyZSIxcoM | @inproceedings{
molahasani2023continual,
title={Continual Learning for Long-Tailed Recognition: Bridging the Gap in Theory and Practice},
author={Mahdiyar Molahasani and Ali Etemad and Michael Greenspan},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=IfyZSIxcoM}
} | The Long-Tailed Recognition (LTR) problem arises in imbalanced datasets. This paper bridges the theory-practice gap in this context, providing mathematical insights into the training dynamics of LTR scenarios by proposing a theorem stating that, under strong convexity, the learner's weights trained on the full dataset are bounded by those trained only on the Head. We extend this theorem for multiple subsets and introduce a novel perspective of using Continual Learning (CL) for LTR. We sequentially learn the Head and Tail by updating the learner's weights without forgetting the Head using CL methods. We prove that CL reduces loss compared to fine-tuning on the Tail. Our experiments on MNIST-LT and standard LTR benchmarks (CIFAR100-LT, CIFAR10-LT, and ImageNet-LT) validate our theory and demonstrate the effectiveness of CL solutions. We also show the efficacy of CL on real-world data, specifically the Caltech256 dataset, outperforming state-of-the-art classifiers. Our work unifies LTR and CL and paves the way for leveraging advances in CL to tackle the LTR challenge effectively. | Continual Learning for Long-Tailed Recognition: Bridging the Gap in Theory and Practice | [
"Mahdiyar Molahasani",
"Ali Etemad",
"Michael Greenspan"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=Gw77i2J8g5 | @inproceedings{
bizeul2023simvae,
title={Sim{VAE}: Narrowing the gap between Discriminative \& Generative Representation Learning},
author={Alice Bizeul and Carl Allen},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=Gw77i2J8g5}
} | Self-supervised representation learning is a powerful paradigm that leverages the relationship between semantically similar data, such as augmentations, extracts of an image or sound clip, or multiple views/modalities. Recent methods, e.g. SimCLR, CLIP and DINO, have made significant strides, yielding representations that achieve state-of-the-art results on multiple downstream tasks. A number of self-supervised discriminative approaches have been proposed, e.g. instance discrimination, latent clustering and contrastive methods.
Though often intuitive, a comprehensive theoretical understanding of their underlying mechanisms or *what* they learn remains elusive.
Meanwhile, generative approaches, such as variational autoencoders (VAEs), fit a specific latent variable model and have principled appeal, but lag significantly in terms of performance. We present a theoretical analysis of self-supervised discriminative methods and a graphical model that reflects the assumptions they implicitly make and unifies these methods. We show that fitting this model under an ELBO objective improves representations over previous VAE methods on several common benchmarks, narrowing the gap to discriminative methods, and can also preserve information lost by discriminative approaches. This work brings new theoretical insight to modern machine learning practice. | SimVAE: Narrowing the gap between Discriminative & Generative Representation Learning | [
"Alice Bizeul",
"Carl Allen"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=FyCkPgTlXO | @inproceedings{
kosson2023rotational,
title={Rotational Equilibrium: How Weight Decay Balances Learning Across Neural Networks},
author={Atli Kosson and Bettina Messmer and Martin Jaggi},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=FyCkPgTlXO}
} | Weight decay can significantly impact the optimization dynamics of deep neural networks. In certain situations the effects of weight decay and gradient updates on the magnitude of a parameter vector cancel out on average, forming a state known as equilibrium. This causes the expected rotation of the vector in each update to remain constant along with its magnitude. Importantly, equilibrium can arise independently for the weight vectors of different layers and neurons. These equilibria are highly homogeneous for some optimizer and normalization configurations, effectively balancing the average rotation—a proxy for the effective learning rate—across network components. In this work we explore the equilibrium states of multiple optimizers including AdamW and SGD with momentum, providing insights into interactions between the learning rate, weight decay, initialization, normalization and learning rate schedule. We show how rotational equilibrium can be enforced throughout training, eliminating the chaotic transient phase corresponding to the transition towards equilibrium, thus simplifying the training dynamics. Finally, we show that rotational behavior may play a key role in the effectiveness of AdamW compared to Adam with L2-regularization, the performance of different normalization layers, and the need for learning rate warmup. | Rotational Equilibrium: How Weight Decay Balances Learning Across Neural Networks | [
"Atli Kosson",
"Bettina Messmer",
"Martin Jaggi"
] | Workshop/M3L | poster | 2305.17212 | [
"https://github.com/epfml/rotational-optimizers"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=FM6MtmeRxZ | @inproceedings{
lu2023benign,
title={Benign Oscillation of Stochastic Gradient Descent with Large Learning Rate},
author={Miao Lu and Beining Wu and Xiaodong Yang and Difan Zou},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=FM6MtmeRxZ}
} | In this work, we theoretically investigate the generalization properties of neural networks (NN) trained by stochastic gradient descent (SGD) with \emph{large learning rates}. Under such a training regime, our finding is that the \emph{oscillation} of the NN weights caused by SGD with large learning rates turns out to be beneficial to the generalization of the NN, potentially improving over the same NN trained by SGD with small learning rates that converges more smoothly. In view of this finding, we call such a phenomenon ``\emph{benign oscillation}''. | Benign Oscillation of Stochastic Gradient Descent with Large Learning Rate | [
"Miao Lu",
"Beining Wu",
"Xiaodong Yang",
"Difan Zou"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=DYiER5LgUU | @inproceedings{
diamond2023on,
title={On Compositionality and Emergence in Physical Systems Generative Modeling},
author={Justin Diamond},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=DYiER5LgUU}
} | The principle of compositionality plays a pivotal role in both machine learning and physical sciences but remains under-explored, particularly in the context of synthetic data derived from physical energy potentials. This study aims to bridge this gap by examining the compositional nature of synthetic datasets generated using composite energy potentials. By combining established Lennard-Jones and Morse potentials into a composite potential, we generate synthetic datasets using Markov Chain Monte Carlo (MCMC) techniques. These datasets serve as training grounds for machine learning models, specifically Neural Ordinary Differential Equations (ODEs). Our primary focus is to investigate whether the properties of the composite datasets retain the characteristics of their individual components, effectively testing the principle of compositionality. The findings not only shed light on the compositional integrity of synthetic physical datasets but also lay the groundwork for more robust and interpretable machine learning models applied to complex physical systems by using the formalism of Category Theory. | On Compositionality and Emergence in Physical Systems Generative Modeling | [
"Justin Diamond"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=ClCrg213JS | @inproceedings{
sarnthein2023escaping,
title={Escaping Random Teacher Initialization Enhances Signal Propagation and Representation},
author={Felix Sarnthein and Sidak Pal Singh and Antonio Orvieto and Thomas Hofmann},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=ClCrg213JS}
} | Recent work shows that by mimicking a random teacher network, student networks learn to produce better feature representations, even if they are initialized at the teacher. In this paper, we characterize how students escape this global optimum and investigate how this process translates into concrete properties of the representations. To that end, we first describe a simplified setup and identify very large step sizes as the main driver of this phenomenon. Then, we investigate key signal propagation and representation separability properties during the escape. Our analysis reveals a two-stage process: the network first undergoes a form of representational collapse, then steers to a parameter region that not only allows for better propagation of input signals but also gives rise to well-conditioned representations. This might relate to the edge of stability and label-independent dynamics. | Escaping Random Teacher Initialization Enhances Signal Propagation and Representation | [
"Felix Sarnthein",
"Sidak Pal Singh",
"Antonio Orvieto",
"Thomas Hofmann"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=CDmerQ37Zs | @inproceedings{
merrill2023the,
title={The Expressive Power of Transformers with Chain of Thought},
author={William Merrill and Ashish Sabharwal},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=CDmerQ37Zs}
} | Recent theoretical work has identified surprisingly simple reasoning problems, such as checking if two nodes in a graph are connected or simulating finite-state machines, that are provably unsolvable by standard transformers that answer immediately after reading their input. However, in practice, transformers' reasoning can be improved by allowing them to use a "chain of thought" or "scratchpad", i.e., generate and condition on a sequence of intermediate tokens before answering. Motivated by this, we ask: *Does such intermediate generation fundamentally extend the computational power of a decoder-only transformer?* We show that the answer is *yes*, but the amount of increase depends crucially on the amount of intermediate generation. For instance, we find that transformer decoders with a logarithmic number of decoding steps (w.r.t. the input length) push the limits of standard transformers only slightly, while a linear number of decoding steps adds a clear new ability (under standard complexity conjectures): recognizing all regular languages. Our results also imply that linear steps keep transformer decoders within context-sensitive languages, and polynomial steps make them recognize exactly the class of polynomial-time solvable problems---the first exact characterization of a type of transformers in terms of standard complexity classes. Together, our results provide a nuanced framework for understanding how the length of a transformer’s chain of thought or scratchpad impacts its reasoning power. | The Expressive Power of Transformers with Chain of Thought | [
"William Merrill",
"Ashish Sabharwal"
] | Workshop/M3L | poster | 2310.07923 | [
""
] | https://huggingface.co/papers/2310.07923 | 0 | 0 | 0 | 2 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=BMQ4i2RVbE | @inproceedings{
li2023transformers,
title={Transformers as Multi-Task Feature Selectors: Generalization Analysis of In-Context Learning},
author={Hongkang Li and Meng Wang and Songtao Lu and Hui Wan and Xiaodong Cui and Pin-Yu Chen},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=BMQ4i2RVbE}
} | Transformer-based large language models have displayed impressive capabilities in the domain of in-context learning, wherein they use multiple input-output pairs to make predictions on unlabeled test data. To lay the theoretical groundwork for in-context learning, we delve into the optimization and generalization of a single-head, one-layer Transformer in the context of multi-task learning for classification. Our investigation uncovers that lower sample complexity is associated with increased training-relevant features and reduced noise in prompts, resulting in improved learning performance. The trained model exhibits the mechanism to first attend to demonstrations of training-relevant features and then decode the corresponding label embedding. Furthermore, we delineate the necessary conditions for successful out-of-domain generalization for in-context learning, specifically regarding the relationship between training and testing prompts. | Transformers as Multi-Task Feature Selectors: Generalization Analysis of In-Context Learning | [
"Hongkang Li",
"Meng Wang",
"Songtao Lu",
"Hui Wan",
"Xiaodong Cui",
"Pin-Yu Chen"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=AxoqhdHHH1 | @inproceedings{
qin2023fit,
title={Fit Like You Sample: Sample-Efficient Score Matching From Fast Mixing Diffusions},
author={Yilong Qin and Andrej Risteski},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=AxoqhdHHH1}
} | Score matching is an approach to learning probability distributions parametrized up to a constant of proportionality (e.g. Energy-Based Models). The idea is to fit the score of the distribution (i.e. $\nabla_x \log p(x)$), rather than the likelihood, thus avoiding the need to evaluate the constant of proportionality. While there's a clear algorithmic benefit, the statistical cost can be steep: recent work by (Koehler et al '22) showed that for distributions that have poor isoperimetric properties (a large Poincaré or log-Sobolev constant), score matching is substantially statistically less efficient than maximum likelihood. However, many natural realistic distributions, e.g. multimodal distributions as simple as a mixture of two Gaussians in one dimension, have a poor Poincaré constant.
In this paper, we show a close connection between the mixing time of a broad class of Markov processes with generator $\mathcal{L}$ and stationary distribution $p$, and an appropriately chosen generalized score matching loss that tries to fit $\frac{\mathcal{O} p}{p}$. In the special case of $\mathcal{O} = \nabla_x$, and $\mathcal{L}$ being the generator of Langevin diffusion, this generalizes and recovers the results from (Koehler et al '22). This allows us to adapt techniques to speed up Markov chains to construct better score-matching losses. In particular, "preconditioning" the diffusion can be translated to an appropriate "preconditioning" of the score loss. Lifting the chain by adding a temperature like in simulated tempering can be shown to result in a Gaussian-convolution annealed score matching loss, similar to (Song-Ermon '19). Moreover, we show that if the distribution being learned is a finite mixture of Gaussians in $d$ dimensions with a shared covariance, the sample complexity of annealed score matching is polynomial in the ambient dimension, the diameter of the means, and the smallest and largest eigenvalues of the covariance---obviating the Poincaré constant-based lower bounds of the basic score matching loss shown in (Koehler et al '22). | Fit Like You Sample: Sample-Efficient Score Matching From Fast Mixing Diffusions | [
"Yilong Qin",
"Andrej Risteski"
] | Workshop/M3L | oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=9qxoXqxa0N | @inproceedings{
zhao2023towards,
title={Towards the Fundamental Limits of Knowledge Transfer over Finite Domains},
author={Qingyue Zhao and Banghua Zhu},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=9qxoXqxa0N}
} | We characterize the statistical efficiency of knowledge transfer through $n$ samples from a teacher to a probabilistic student classifier with input space $\mathcal{S}$ over labels $\mathcal{A}$. We show that privileged information at three progressive levels accelerates the transfer. At the first level, only samples with hard labels are known, via which the maximum likelihood estimator attains the minimax rate $\sqrt{{|\mathcal{S}||\mathcal{A}|}/{n}}$. The second level has the teacher probabilities of sampled labels available in addition, which turns out to boost the convergence rate lower bound to ${{|\mathcal{S}||\mathcal{A}|}/{n}}$. However, under this second data acquisition protocol, minimizing a naive adaptation of the cross-entropy loss results in an asymptotically biased student. We overcome this limitation and achieve the fundamental limit by using a novel empirical variant of the squared error logit loss. The third level further equips the student with the soft labels (complete logits) on $\mathcal{A}$ given every sampled input, thereby provably enabling the student to enjoy a rate ${|\mathcal{S}|}/{n}$ free of $|\mathcal{A}|$. We find any Kullback-Leibler divergence minimizer to be optimal in the last case. Numerical simulations distinguish the four learners and corroborate our theory. | Towards the Fundamental Limits of Knowledge Transfer over Finite Domains | [
"Qingyue Zhao",
"Banghua Zhu"
] | Workshop/M3L | poster | 2310.07838 | [
""
] | https://huggingface.co/papers/2310.07838 | 1 | 0 | 0 | 2 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=9BYEBbhDRk | @inproceedings{
rosenfeld2023outliers,
title={Outliers with Opposing Signals Have an Outsized Effect on Neural Network Optimization},
author={Elan Rosenfeld and Andrej Risteski},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=9BYEBbhDRk}
} | We identify a new phenomenon in neural network optimization which arises from the interaction of depth and a particular heavy-tailed structure in natural data. Our result offers intuitive explanations for several previously reported observations about network training dynamics, including a conceptually new cause for progressive sharpening and the edge of stability. It also enables new predictions of training behavior which we confirm experimentally, plus a new lens through which to theoretically study and improve modern stochastic optimization on neural nets.
Experimentally, we demonstrate the significant influence of paired groups of outliers in the training data with strong *Opposing Signals*: consistent, large magnitude features which dominate the network output and occur in both groups with similar frequency. Due to these outliers, early optimization enters a narrow valley which carefully balances the opposing groups; subsequent sharpening causes their loss to rise rapidly, oscillating between high on one group and then the other, until the overall loss spikes. We complement these experiments with a theoretical analysis of a two-layer linear network on a simple model of opposing signals. | Outliers with Opposing Signals Have an Outsized Effect on Neural Network Optimization | [
"Elan Rosenfeld",
"Andrej Risteski"
] | Workshop/M3L | poster | 2311.04163 | [
""
] | https://huggingface.co/papers/2311.04163 | 1 | 1 | 0 | 2 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=8s8w7nwCuk | @inproceedings{
singh2023moxcohow,
title={Mo{XC}o: How I learned to stop exploring and love my local minima?},
author={Esha Singh and Shoham Sabach and Yu-Xiang Wang},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=8s8w7nwCuk}
} | Deep Neural Networks (DNNs) are well-known for their generalization capabilities despite overparameterization. This is commonly attributed to the optimizer’s ability to find “good” solutions within high-dimensional loss landscapes. However, widely employed adaptive optimizers, such as ADAM, may suffer from subpar generalization. This paper presents an innovative methodology, $\textit{MoXCo}$, to address these concerns by designing adaptive optimizers that not only expedite exploration with faster convergence speeds but also ensure the avoidance of over-exploitation in specific parameter regimes, ultimately leading to convergence to good solutions. | MoXCo: How I learned to stop exploring and love my local minima? | [
"Esha Singh",
"Shoham Sabach",
"Yu-Xiang Wang"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |