Dataset schema (one entry per column; string/sequence ranges are min-max lengths, int64 ranges are min-max values):

column                        dtype       values
bibtex_url                    null        (all values missing)
proceedings                   string      length 42-42
bibtext                       string      length 197-792
abstract                      string      length 303-3.45k
title                         string      length 10-159
authors                       sequence    length 1-28
id                            string      44 distinct values
type                          string      16 distinct values
arxiv_id                      string      length 0-10
GitHub                        sequence    length 1-1
paper_page                    string      444 distinct values
n_linked_authors              int64       -1 to 9
upvotes                       int64       -1 to 42
num_comments                  int64       -1 to 13
n_authors                     int64       -1 to 92
paper_page_exists_pre_conf    int64       0 to 1
Models                        sequence    length 0-100
Datasets                      sequence    length 0-11
Spaces                        sequence    length 0-100
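The table above is the column summary a dataset viewer shows for this collection of NeurIPS 2023 records; the rows that follow are the raw records, one field per line in the column order listed above. As a minimal sketch of how rows with this schema could be inspected locally, the snippet below builds a tiny in-memory dataset from two records transcribed from the preview and filters it by the `type` column. Loading the full published dataset would instead use `datasets.load_dataset` with the real dataset identifier, which is not given here.

```python
from datasets import Dataset

# Two records transcribed from the preview below, restricted to the fields used here.
# The -1 mirrors the sentinel used throughout the preview when no paper page is linked.
rows = [
    {"title": "Visual Instruction Tuning", "type": "oral",
     "arxiv_id": "", "upvotes": -1},
    {"title": "Restart Sampling for Improving Generative Processes", "type": "poster",
     "arxiv_id": "2306.14878", "upvotes": -1},
]

ds = Dataset.from_list(rows)

# Example query against the `type` column described in the schema above.
orals = ds.filter(lambda r: r["type"] == "oral")
print(orals["title"])  # ['Visual Instruction Tuning']
```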
null
https://openreview.net/forum?id=wLFXTAWa5V
@inproceedings{ wang2023an, title={An Efficient and Robust Framework for Approximate Nearest Neighbor Search with Attribute Constraint}, author={Mengzhao Wang and Lingwei Lv and Xiaoliang Xu and Yuxiang Wang and Qiang Yue and Jiongkang Ni}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wLFXTAWa5V} }
This paper introduces an efficient and robust framework for hybrid query (HQ) processing, which combines approximate nearest neighbor search (ANNS) with attribute constraint. HQ aims to find objects that are similar to a feature vector and match some structured attributes. Existing methods handle ANNS and attribute filtering separately, leading to inefficiency and inaccuracy. Our framework, called native hybrid query (NHQ), builds a composite index based on proximity graph (PG) and applies joint pruning for HQ. We can easily adapt existing PGs to this framework for efficient HQ processing. We also propose two new navigable PGs (NPGs) with optimized edge selection and routing, which improve the overall ANNS performance. We implement five HQ methods based on the proposed NPGs and existing PGs in NHQ, and show that they outperform the state-of-the-art methods on 10 real-world datasets (up to 315$\times$ faster with the same accuracy).
An Efficient and Robust Framework for Approximate Nearest Neighbor Search with Attribute Constraint
[ "Mengzhao Wang", "Lingwei Lv", "Xiaoliang Xu", "Yuxiang Wang", "Qiang Yue", "Jiongkang Ni" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
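As a point of reference for the hybrid-query setting described in the abstract above (approximate nearest neighbor search combined with an attribute constraint), here is a minimal NumPy sketch of the exact brute-force baseline: filter by attribute, then rank by Euclidean distance. It only illustrates the problem being solved; it is not the NHQ framework, its composite index, or its proximity-graph routing.

```python
import numpy as np

def hybrid_query_bruteforce(vectors, attributes, query_vec, query_attr, k=10):
    """Exact baseline for ANNS with an attribute constraint: filter, then rank by distance."""
    # Keep only objects whose structured attribute matches the query constraint.
    idx = np.flatnonzero(np.asarray(attributes) == query_attr)
    if idx.size == 0:
        return idx
    # Rank the surviving objects by Euclidean distance to the query vector.
    dists = np.linalg.norm(vectors[idx] - query_vec, axis=1)
    return idx[np.argsort(dists)[:k]]

# Toy usage with synthetic data.
rng = np.random.default_rng(0)
vecs = rng.normal(size=(1000, 32))
attrs = rng.choice(["red", "green", "blue"], size=1000)
print(hybrid_query_bruteforce(vecs, attrs, vecs[0], attrs[0], k=5))
```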
null
https://openreview.net/forum?id=wImYhdu4VF
@inproceedings{ jalal2023learning, title={Learning a 1-layer conditional generative model in total variation}, author={Ajil Jalal and Justin Kang and Ananya Uppal and Kannan Ramchandran and Eric Price}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wImYhdu4VF} }
A conditional generative model is a method for sampling from a conditional distribution $p(y \mid x)$. For example, one may want to sample an image of a cat given the label ``cat''. A feed-forward conditional generative model is a function $g(x, z)$ that takes the input $x$ and a random seed $z$, and outputs a sample $y$ from $p(y \mid x)$. Ideally the distribution of outputs $(x, g(x, z))$ would be close in total variation to the ideal distribution $(x, y)$. Generalization bounds for other learning models require assumptions on the distribution of $x$, even in simple settings like linear regression with Gaussian noise. We show these assumptions are unnecessary in our model, for both linear regression and single-layer ReLU networks. Given samples $(x, y)$, we show how to learn a 1-layer ReLU conditional generative model in total variation. As our result has no assumption on the distribution of inputs $x$, if we are given access to the internal activations of a deep generative model, we can compose our 1-layer guarantee to progressively learn the deep model using a near-linear number of samples.
Learning a 1-layer conditional generative model in total variation
[ "Ajil Jalal", "Justin Kang", "Ananya Uppal", "Kannan Ramchandran", "Eric Price" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wIlmx4bHrO
@inproceedings{ liu2023a, title={A Single-Loop Accelerated Extra-Gradient Difference Algorithm with Improved Complexity Bounds for Constrained Minimax Optimization}, author={Yuanyuan Liu and Fanhua Shang and Weixin An and Junhao Liu and Hongying Liu and Zhouchen Lin}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wIlmx4bHrO} }
In this paper, we propose a novel extra-gradient difference acceleration algorithm for solving constrained nonconvex-nonconcave (NC-NC) minimax problems. In particular, we design a new extra-gradient difference step to obtain an important quasi-cocoercivity property, which plays a key role in significantly improving the convergence rate in the constrained NC-NC setting without additional structural assumptions. We then introduce momentum acceleration into our dual accelerating update step. Moreover, we prove that, to find an $\epsilon$-stationary point of the function $f$, our algorithm attains the complexity $\mathcal{O}(\epsilon^{-2})$ in the constrained NC-NC setting, while the best-known complexity bound is $\widetilde{\mathcal{O}}(\epsilon^{-4})$, where $\widetilde{\mathcal{O}}(\cdot)$ hides logarithmic factors compared to $\mathcal{O}(\cdot)$. As special cases of the constrained NC-NC setting, our algorithm also obtains the same complexity $\mathcal{O}(\epsilon^{-2})$ for both the nonconvex-concave (NC-C) and convex-nonconcave (C-NC) cases, while the best-known complexity bounds are $\widetilde{\mathcal{O}}(\epsilon^{-2.5})$ for the NC-C case and $\widetilde{\mathcal{O}}(\epsilon^{-4})$ for the C-NC case. For a fair comparison with existing algorithms, we also analyze the complexity bound for finding an $\epsilon$-stationary point of the primal function $\phi$ for the constrained NC-C problem, which shows that our algorithm improves the complexity bound from $\widetilde{\mathcal{O}}(\epsilon^{-3})$ to $\mathcal{O}(\epsilon^{-2})$. To the best of our knowledge, this is the first algorithm to improve the best-known complexity bounds from $\mathcal{O}(\epsilon^{-4})$ and $\widetilde{\mathcal{O}}(\epsilon^{-3})$ to $\mathcal{O}(\epsilon^{-2})$ in both the NC-NC and NC-C settings.
A Single-Loop Accelerated Extra-Gradient Difference Algorithm with Improved Complexity Bounds for Constrained Minimax Optimization
[ "Yuanyuan Liu", "Fanhua Shang", "Weixin An", "Junhao Liu", "Hongying Liu", "Zhouchen Lin" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wHhPIv5G8Q
@inproceedings{ wang2023online, title={Online Corrupted User Detection and Regret Minimization}, author={Zhiyong Wang and Jize Xie and Tong Yu and Shuai Li and John C.S. Lui}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wHhPIv5G8Q} }
In real-world online web systems, multiple users usually arrive sequentially into the system. For applications like click fraud and fake reviews, some users can maliciously perform corrupted (disrupted) behaviors to trick the system. Therefore, it is crucial to design efficient online learning algorithms to robustly learn from potentially corrupted user behaviors and accurately identify the corrupted users in an online manner. Existing works propose bandit algorithms robust to adversarial corruption. However, these algorithms are designed for a single user, and cannot leverage the implicit social relations among multiple users for more efficient learning. Moreover, none of them consider how to detect corrupted users online in the multiple-user scenario. In this paper, we present an important online learning problem named LOCUD to learn and utilize unknown user relations from disrupted behaviors to speed up learning, and identify the corrupted users in an online setting. To robustly learn and utilize the unknown relations among potentially corrupted users, we propose a novel bandit algorithm RCLUB-WCU. To detect the corrupted users, we devise a novel online detection algorithm OCCUD based on RCLUB-WCU's inferred user relations. We prove a regret upper bound for RCLUB-WCU, which asymptotically matches the lower bound with respect to $T$ up to logarithmic factors, and matches the state-of-the-art results in degenerate cases. We also give a theoretical guarantee for the detection accuracy of OCCUD. With extensive experiments, our methods achieve superior performance over previous bandit algorithms and high corrupted user detection accuracy.
Online Corrupted User Detection and Regret Minimization
[ "Zhiyong Wang", "Jize Xie", "Tong Yu", "Shuai Li", "John C.S. Lui" ]
Conference
poster
2310.04768
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wFuemocyHZ
@inproceedings{ xu2023restart, title={Restart Sampling for Improving Generative Processes}, author={Yilun Xu and Mingyang Deng and Xiang Cheng and Yonglong Tian and Ziming Liu and Tommi S. Jaakkola}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wFuemocyHZ} }
Generative processes that involve solving differential equations, such as diffusion models, frequently necessitate balancing speed and quality. ODE-based samplers are fast but plateau in performance, while SDE-based samplers deliver higher sample quality at the cost of increased sampling time. We attribute this difference to sampling errors: ODE samplers involve smaller discretization errors, while stochasticity in SDEs contracts accumulated errors. Based on these findings, we propose a novel sampling algorithm called \textit{Restart} to better balance discretization errors and contraction. The sampling method alternates between adding substantial noise in additional forward steps and strictly following a backward ODE. Empirically, the Restart sampler surpasses previous SDE and ODE samplers in both speed and accuracy. Restart not only outperforms the previous best SDE results, but also accelerates the sampling speed by 10-fold / 2-fold on CIFAR-10 / ImageNet $64{\times} 64$. In addition, it attains significantly better sample quality than ODE samplers within comparable sampling times. Moreover, Restart better balances text-image alignment/visual quality versus diversity than previous samplers in the large-scale text-to-image Stable Diffusion model pre-trained on LAION $512{\times} 512$. Code is available at https://github.com/Newbeeer/diffusion_restart_sampling.
Restart Sampling for Improving Generative Processes
[ "Yilun Xu", "Mingyang Deng", "Xiang Cheng", "Yonglong Tian", "Ziming Liu", "Tommi S. Jaakkola" ]
Conference
poster
2306.14878
[ "https://github.com/newbeeer/diffusion_restart_sampling" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wFH5hZAwYz
@inproceedings{ capone2023sharp, title={Sharp Calibrated Gaussian Processes}, author={Alexandre Capone and Sandra Hirche and Geoff Pleiss}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wFH5hZAwYz} }
While Gaussian processes are a mainstay for various engineering and scientific applications, their uncertainty estimates do not satisfy frequentist guarantees and can be miscalibrated in practice. State-of-the-art approaches for designing calibrated models rely on inflating the Gaussian process posterior variance, which yields confidence intervals that are potentially too coarse. To remedy this, we present a calibration approach that generates predictive quantiles using a computation inspired by the vanilla Gaussian process posterior variance but using a different set of hyperparameters chosen to satisfy an empirical calibration constraint. This results in a calibration approach that is considerably more flexible than existing approaches, which we optimize to yield tight predictive quantiles. Our approach is shown to yield a calibrated model under reasonable assumptions. Furthermore, it outperforms existing approaches in sharpness when employed for calibrated regression.
Sharp Calibrated Gaussian Processes
[ "Alexandre Capone", "Sandra Hirche", "Geoff Pleiss" ]
Conference
poster
2302.11961
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wEiUGpcr0M
@inproceedings{ luo2023improving, title={Improving Self-supervised Molecular Representation Learning using Persistent Homology}, author={Yuankai Luo and Lei Shi and Veronika Thost}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wEiUGpcr0M} }
Self-supervised learning (SSL) has great potential for molecular representation learning given the complexity of molecular graphs, the large amounts of unlabelled data available, the considerable cost of obtaining labels experimentally, and the resulting, often small, training datasets. The importance of the topic is reflected in the variety of paradigms and architectures that have been investigated recently, most of which focus on designing views for contrastive learning. In this paper, we study SSL based on persistent homology (PH), a mathematical tool for modeling topological features of data that persist across multiple scales. It has several unique features which particularly suit SSL, naturally offering different views of the data, stability in terms of distance preservation, and the opportunity to flexibly incorporate domain knowledge. We (1) investigate an autoencoder, which shows the general representational power of PH, and (2) propose a contrastive loss that complements existing approaches. We rigorously evaluate our approach for molecular property prediction and demonstrate its particular features in improving the embedding space: after SSL, the representations are better and offer considerably more predictive power than the baselines over different probing tasks; our loss increases baseline performance, sometimes considerably; and we often obtain substantial improvements on very small datasets, a common scenario in practice.
Improving Self-supervised Molecular Representation Learning using Persistent Homology
[ "Yuankai Luo", "Lei Shi", "Veronika Thost" ]
Conference
poster
2311.17327
[ "https://github.com/luoyk1999/molecular-homology" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wBJBLy9kBY
@inproceedings{ daras2023ambient, title={Ambient Diffusion: Learning Clean Distributions from Corrupted Data}, author={Giannis Daras and Kulin Shah and Yuval Dagan and Aravind Gollakota and Alex Dimakis and Adam Klivans}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wBJBLy9kBY} }
We present the first diffusion-based framework that can learn an unknown distribution using only highly-corrupted samples. This problem arises in scientific applications where access to uncorrupted samples is impossible or expensive to acquire. Another benefit of our approach is the ability to train generative models that are less likely to memorize any individual training sample, since they never observe clean training data. Our main idea is to introduce additional measurement distortion during the diffusion process and require the model to predict the original corrupted image from the further corrupted image. We prove that our method leads to models that learn the conditional expectation of the full uncorrupted image given this additional measurement corruption. This holds for any corruption process that satisfies some technical conditions (and in particular includes inpainting and compressed sensing). We train models on standard benchmarks (CelebA, CIFAR-10 and AFHQ) and show that we can learn the distribution even when all the training samples have 90\% of their pixels missing. We also show that we can finetune foundation models on small corrupted datasets (e.g. MRI scans with block corruptions) and learn the clean distribution without memorizing the training set.
Ambient Diffusion: Learning Clean Distributions from Corrupted Data
[ "Giannis Daras", "Kulin Shah", "Yuval Dagan", "Aravind Gollakota", "Alex Dimakis", "Adam Klivans" ]
Conference
poster
2305.19256
[ "https://github.com/giannisdaras/ambient-diffusion" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=w7TyuWhGZP
@inproceedings{ zhang2023interpretable, title={Interpretable Reward Redistribution in Reinforcement Learning: A Causal Approach}, author={Yudi Zhang and Yali Du and Biwei Huang and Ziyan Wang and Jun Wang and Meng Fang and Mykola Pechenizkiy}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=w7TyuWhGZP} }
A major challenge in reinforcement learning is to determine which state-action pairs are responsible for future rewards that are delayed. Reward redistribution serves as a solution to re-assign credits for each time step from observed sequences. While the majority of current approaches construct the reward redistribution in an uninterpretable manner, we propose to explicitly model the contributions of state and action from a causal perspective, resulting in an interpretable reward redistribution and preserving policy invariance. In this paper, we start by studying the role of causal generative models in reward redistribution by characterizing the generation of Markovian rewards and trajectory-wise long-term return, and further propose a framework, called Generative Return Decomposition (GRD), for policy optimization in delayed reward scenarios. Specifically, GRD first identifies the unobservable Markovian rewards and causal relations in the generative process. Then, GRD makes use of the identified causal generative model to form a compact representation to train the policy over the most favorable subspace of the state space of the agent. Theoretically, we show that the unobservable Markovian reward function is identifiable, as well as the underlying causal structure and causal models. Experimental results show that our method outperforms state-of-the-art methods and the provided visualization further demonstrates the interpretability of our method. The project page is located at [https://reedzyd.github.io/GenerativeReturnDecomposition/](https://reedzyd.github.io/GenerativeReturnDecomposition/).
Interpretable Reward Redistribution in Reinforcement Learning: A Causal Approach
[ "Yudi Zhang", "Yali Du", "Biwei Huang", "Ziyan Wang", "Jun Wang", "Meng Fang", "Mykola Pechenizkiy" ]
Conference
poster
2305.18427
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=w7LxAZfDfv
@inproceedings{ lin2023infocd, title={Info{CD}: A Contrastive Chamfer Distance Loss for Point Cloud Completion}, author={Fangzhou Lin and Yun Yue and Ziming Zhang and Songlin Hou and Kazunori Yamada and Vijaya B Kolachalama and Venkatesh Saligrama}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=w7LxAZfDfv} }
A point cloud is a discrete set of data points sampled from a 3D geometric surface. Chamfer distance (CD) is a popular metric and training loss to measure the distances between point clouds, but is also well known to be sensitive to outliers. To address this issue, in this paper we propose InfoCD, a novel contrastive Chamfer distance loss that learns to spread the matched points for better distribution alignments between point clouds as well as to account for a surface similarity estimator. We show that minimizing InfoCD is equivalent to maximizing a lower bound of the mutual information between the underlying geometric surfaces represented by the point clouds, leading to a regularized CD metric which is robust and computationally efficient for deep learning. We conduct comprehensive experiments for point cloud completion using InfoCD and observe significant improvements consistently over all the popular baseline networks trained with CD-based losses, leading to new state-of-the-art results on several benchmark datasets. Demo code is available at https://github.com/Zhang-VISLab/NeurIPS2023-InfoCD.
InfoCD: A Contrastive Chamfer Distance Loss for Point Cloud Completion
[ "Fangzhou Lin", "Yun Yue", "Ziming Zhang", "Songlin Hou", "Kazunori Yamada", "Vijaya B Kolachalama", "Venkatesh Saligrama" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
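For context on the InfoCD entry above: the loss is built on the standard Chamfer distance (CD) between point clouds, which the abstract notes is sensitive to outliers. Below is a minimal NumPy sketch of that baseline CD (using the common squared-distance, symmetric-mean convention); it is not the InfoCD loss itself.

```python
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric squared Chamfer distance between point sets p (n, d) and q (m, d)."""
    # Pairwise squared Euclidean distances, shape (n, m).
    d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(axis=-1)
    # Average nearest-neighbor distance in both directions.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# A single distant outlier inflates the metric, illustrating the sensitivity noted above.
rng = np.random.default_rng(0)
a = rng.normal(size=(256, 3))
b = a + 0.01 * rng.normal(size=(256, 3))
b_outlier = np.vstack([b, [[50.0, 50.0, 50.0]]])
print(chamfer_distance(a, b), chamfer_distance(a, b_outlier))
```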
null
https://openreview.net/forum?id=w79RtqIyoM
@inproceedings{ garipov2023compositional, title={Compositional Sculpting of Iterative Generative Processes}, author={Timur Garipov and Sebastiaan De Peuter and Ge Yang and Vikas Garg and Samuel Kaski and Tommi S. Jaakkola}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=w79RtqIyoM} }
High training costs of generative models and the need to fine-tune them for specific tasks have created a strong interest in model reuse and composition. A key challenge in composing iterative generative processes, such as GFlowNets and diffusion models, is that to realize the desired target distribution, all steps of the generative process need to be coordinated, and satisfy delicate balance conditions. In this work, we propose Compositional Sculpting: a general approach for defining compositions of iterative generative processes. We then introduce a method for sampling from these compositions built on classifier guidance. We showcase ways to accomplish compositional sculpting in both GFlowNets and diffusion models. We highlight two binary operations, the $\textit{harmonic mean}$ $(p_1 \otimes p_2)$ and the $\textit{contrast}$ $(p_1 \,\unicode{x25D1}\, p_2)$ between pairs, and the generalization of these operations to multiple component distributions. We offer empirical results on image and molecular generation tasks. Project codebase: https://github.com/timgaripov/compositional-sculpting.
Compositional Sculpting of Iterative Generative Processes
[ "Timur Garipov", "Sebastiaan De Peuter", "Ge Yang", "Vikas Garg", "Samuel Kaski", "Tommi S. Jaakkola" ]
Conference
poster
2309.16115
[ "https://github.com/timgaripov/compositional-sculpting" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=w6krZiUa7t
@inproceedings{ lee2023hyperhmm, title={Hyper-{HMM}: aligning human brains and semantic features in a common latent event space}, author={Caroline Lee and Jane Han and Ma Feilong and Guo Jiahui and James Haxby and Christopher Baldassano}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=w6krZiUa7t} }
Naturalistic stimuli evoke complex neural responses with spatial and temporal properties that differ across individuals. Current alignment methods focus on either spatial hyperalignment (assuming exact temporal correspondence) or temporal alignment (assuming exact spatial correspondence). Here, we propose a hybrid model, the Hyper-HMM, that simultaneously aligns both temporal and spatial features across brains. The model learns to linearly project voxels to a reduced-dimension latent space, in which timecourses are segmented into corresponding temporal events. This approach allows tracking of each individual's mental trajectory through an event sequence, and also allows for alignment with other feature spaces such as stimulus content. Using an fMRI dataset in which students watch videos of class lectures, we demonstrate that the Hyper-HMM can be used to map all participants and the semantic content of the videos into a common low-dimensional space, and that these mappings generalize to held-out data. Our model provides a new window into individual cognitive dynamics evoked by complex naturalistic stimuli.
Hyper-HMM: aligning human brains and semantic features in a common latent event space
[ "Caroline Lee", "Jane Han", "Ma Feilong", "Guo Jiahui", "James Haxby", "Christopher Baldassano" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=w3ghbKBJg4
@inproceedings{ do2023minimax, title={Minimax Optimal Rate for Parameter Estimation in Multivariate Deviated Models}, author={Dat Do and Huy Nguyen and Khai Nguyen and Nhat Ho}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=w3ghbKBJg4} }
We study the maximum likelihood estimation (MLE) in the multivariate deviated model where the data are generated from the density function $(1-\lambda^{\ast})h_{0}(x)+\lambda^{\ast}f(x|\mu^{\ast}, \Sigma^{\ast})$ in which $h_{0}$ is a known function, $\lambda^{\ast} \in [0,1]$ and $(\mu^{\ast}, \Sigma^{\ast})$ are unknown parameters to estimate. The main challenges in deriving the convergence rate of the MLE mainly come from two issues: (1) The interaction between the function $h_{0}$ and the density function $f$; (2) The deviated proportion $\lambda^{\ast}$ can go to the extreme points of $[0,1]$ as the sample size tends to infinity. To address these challenges, we develop the \emph{distinguishability condition} to capture the linear independent relation between the function $h_{0}$ and the density function $f$. We then provide comprehensive convergence rates of the MLE via the vanishing rate of $\lambda^{\ast}$ to zero as well as the distinguishability of two functions $h_{0}$ and $f$.
Minimax Optimal Rate for Parameter Estimation in Multivariate Deviated Models
[ "Dat Do", "Huy Nguyen", "Khai Nguyen", "Nhat Ho" ]
Conference
poster
2301.11808
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
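To make the density in the abstract above concrete, the sketch below evaluates and samples the multivariate deviated model $(1-\lambda^{\ast})h_{0}(x)+\lambda^{\ast}f(x|\mu^{\ast}, \Sigma^{\ast})$, taking $h_{0}$ to be a standard bivariate Gaussian purely for illustration; the paper only assumes $h_{0}$ is a known function, and these parameter values are made up.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Illustrative (made-up) parameters: h0 is a standard bivariate normal, f is Gaussian.
lam = 0.2
mu = np.array([2.0, -1.0])
Sigma = np.array([[1.0, 0.3], [0.3, 0.5]])
h0 = multivariate_normal(mean=np.zeros(2), cov=np.eye(2))
f = multivariate_normal(mean=mu, cov=Sigma)

def deviated_pdf(x):
    """Density of the deviated model: (1 - lam) * h0(x) + lam * f(x | mu, Sigma)."""
    return (1.0 - lam) * h0.pdf(x) + lam * f.pdf(x)

# Sampling: pick the deviated component with probability lam, then draw from it.
rng = np.random.default_rng(0)
z = rng.random(5000) < lam
x = np.where(z[:, None], f.rvs(5000, random_state=rng), h0.rvs(5000, random_state=rng))
print(deviated_pdf(mu), x.mean(axis=0))  # density at mu, empirical mean of the sample
```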
null
https://openreview.net/forum?id=w2F8Fm6Sg3
@inproceedings{ wang2023balanced, title={Balanced Training for Sparse {GAN}s}, author={Yite Wang and Jing Wu and Naira Hovakimyan and Ruoyu Sun}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=w2F8Fm6Sg3} }
Over the past few years, there has been growing interest in developing larger and deeper neural networks, including deep generative models like generative adversarial networks (GANs). However, GANs typically come with high computational complexity, leading researchers to explore methods for reducing the training and inference costs. One such approach gaining popularity in supervised learning is dynamic sparse training (DST), which maintains good performance while enjoying excellent training efficiency. Despite its potential benefits, applying DST to GANs presents challenges due to the adversarial nature of the training process. In this paper, we propose a novel metric called the balance ratio (BR) to study the balance between the sparse generator and discriminator. We also introduce a new method called balanced dynamic sparse training (ADAPT), which seeks to control the BR during GAN training to achieve a good trade-off between performance and computational cost. Our proposed method shows promising results on multiple datasets, demonstrating its effectiveness.
Balanced Training for Sparse GANs
[ "Yite Wang", "Jing Wu", "Naira Hovakimyan", "Ruoyu Sun" ]
Conference
poster
2302.14670
[ "https://github.com/yitewang/adapt" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=w116w62fxH
@inproceedings{ attias2023optimal, title={Optimal Learners for Realizable Regression: {PAC} Learning and Online Learning}, author={Idan Attias and Steve Hanneke and Alkis Kalavasis and Amin Karbasi and Grigoris Velegkas}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=w116w62fxH} }
In this work, we aim to characterize the statistical complexity of realizable regression both in the PAC learning setting and the online learning setting. Previous work had established the sufficiency of finiteness of the fat shattering dimension for PAC learnability and the necessity of finiteness of the scaled Natarajan dimension, but little progress had been made towards a more complete characterization since the work of Simon 1997 (SICOMP '97). To this end, we first introduce a minimax instance optimal learner for realizable regression and propose a novel dimension that both qualitatively and quantitatively characterizes which classes of real-valued predictors are learnable. We then identify a combinatorial dimension related to the graph dimension that characterizes ERM learnability in the realizable setting. Finally, we establish a necessary condition for learnability based on a combinatorial dimension related to the DS dimension, and conjecture that it may also be sufficient in this context. Additionally, in the context of online learning we provide a dimension that characterizes the minimax instance optimal cumulative loss up to a constant factor and design an optimal online learner for realizable regression, thus resolving an open question raised by Daskalakis and Golowich in STOC '22.
Optimal Learners for Realizable Regression: PAC Learning and Online Learning
[ "Idan Attias", "Steve Hanneke", "Alkis Kalavasis", "Amin Karbasi", "Grigoris Velegkas" ]
Conference
oral
2307.03848
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=w0H2xGHlkw
@inproceedings{ liu2023visual, title={Visual Instruction Tuning}, author={Haotian Liu and Chunyuan Li and Qingyang Wu and Yong Jae Lee}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=w0H2xGHlkw} }
Instruction tuning large language models (LLMs) using machine-generated instruction-following data has been shown to improve zero-shot capabilities on new tasks, but the idea is less explored in the multimodal field. We present the first attempt to use language-only GPT-4 to generate multimodal language-image instruction-following data. By instruction tuning on such generated data, we introduce LLaVA: Large Language and Vision Assistant, an end-to-end trained large multimodal model that connects a vision encoder and an LLM for general-purpose visual and language understanding. To facilitate future research on visual instruction following, we construct two evaluation benchmarks with diverse and challenging application-oriented tasks. Our experiments show that LLaVA demonstrates impressive multimodal chat abilities, sometimes exhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and yields an 85.1% relative score compared with GPT-4 on a synthetic multimodal instruction-following dataset. When fine-tuned on Science QA, the synergy of LLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%. We make GPT-4 generated visual instruction tuning data, our model, and code publicly available.
Visual Instruction Tuning
[ "Haotian Liu", "Chunyuan Li", "Qingyang Wu", "Yong Jae Lee" ]
Conference
oral
[ "https://github.com/xiaoman-zhang/PMC-VQA" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vzrA6uqOis
@inproceedings{ griffiths2023gauche, title={{GAUCHE}: A Library for Gaussian Processes in Chemistry}, author={Ryan-Rhys Griffiths and Leo Klarner and Henry Moss and Aditya Ravuri and Sang T. Truong and Yuanqi Du and Samuel Don Stanton and Gary Tom and Bojana Rankovi{\'c} and Arian Rokkum Jamasb and Aryan Deshwal and Julius Schwartz and Austin Tripp and Gregory Kell and Simon Frieder and Anthony Bourached and Alex James Chan and Jacob Moss and Chengzhi Guo and Johannes P. D{\"u}rholt and Saudamini Chaurasia and Ji Won Park and Felix Strieth-Kalthoff and Alpha Lee and Bingqing Cheng and Alan Aspuru-Guzik and Philippe Schwaller and Jian Tang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vzrA6uqOis} }
We introduce GAUCHE, an open-source library for GAUssian processes in CHEmistry. Gaussian processes have long been a cornerstone of probabilistic machine learning, affording particular advantages for uncertainty quantification and Bayesian optimisation. Extending Gaussian processes to molecular representations, however, necessitates kernels defined over structured inputs such as graphs, strings and bit vectors. By providing such kernels in a modular, robust and easy-to-use framework, we seek to enable expert chemists and materials scientists to make use of state-of-the-art black-box optimization techniques. Motivated by scenarios frequently encountered in practice, we showcase applications for GAUCHE in molecular discovery, chemical reaction optimisation and protein design. The codebase is made available at https://github.com/leojklarner/gauche.
GAUCHE: A Library for Gaussian Processes in Chemistry
[ "Ryan-Rhys Griffiths", "Leo Klarner", "Henry Moss", "Aditya Ravuri", "Sang T. Truong", "Yuanqi Du", "Samuel Don Stanton", "Gary Tom", "Bojana Ranković", "Arian Rokkum Jamasb", "Aryan Deshwal", "Julius Schwartz", "Austin Tripp", "Gregory Kell", "Simon Frieder", "Anthony Bourached", "Alex James Chan", "Jacob Moss", "Chengzhi Guo", "Johannes P. Dürholt", "Saudamini Chaurasia", "Ji Won Park", "Felix Strieth-Kalthoff", "Alpha Lee", "Bingqing Cheng", "Alan Aspuru-Guzik", "Philippe Schwaller", "Jian Tang" ]
Conference
poster
2212.04450
[ "https://github.com/leojklarner/gauche" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vz7SdRqWGM
@inproceedings{ duong2023adaptive, title={Adaptive whitening with fast gain modulation and slow synaptic plasticity}, author={Lyndon Duong and Eero P Simoncelli and Dmitri Chklovskii and David Lipshutz}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vz7SdRqWGM} }
Neurons in early sensory areas rapidly adapt to changing sensory statistics, both by normalizing the variance of their individual responses and by reducing correlations between their responses. Together, these transformations may be viewed as an adaptive form of statistical whitening. Existing mechanistic models of adaptive whitening exclusively use either synaptic plasticity or gain modulation as the biological substrate for adaptation; however, on their own, each of these models has significant limitations. In this work, we unify these approaches in a normative multi-timescale mechanistic model that adaptively whitens its responses with complementary computational roles for synaptic plasticity and gain modulation. Gains are modified on a fast timescale to adapt to the current statistical context, whereas synapses are modified on a slow timescale to match structural properties of the input statistics that are invariant across contexts. Our model is derived from a novel multi-timescale whitening objective that factorizes the inverse whitening matrix into basis vectors, which correspond to synaptic weights, and a diagonal matrix, which corresponds to neuronal gains. We test our model on synthetic and natural datasets and find that the synapses learn optimal configurations over long timescales that enable adaptive whitening on short timescales using gain modulation.
Adaptive whitening with fast gain modulation and slow synaptic plasticity
[ "Lyndon Duong", "Eero P Simoncelli", "Dmitri Chklovskii", "David Lipshutz" ]
Conference
spotlight
2308.13633
[ "https://github.com/lyndond/multi_timescale_whitening" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vybQs1Gbuk
@inproceedings{ meng2023learning, title={Learning from Rich Semantics and Coarse Locations for Long-tailed Object Detection}, author={Lingchen Meng and Xiyang Dai and Jianwei Yang and Dongdong Chen and Yinpeng Chen and Mengchen Liu and Yi-Ling Chen and Zuxuan Wu and Lu Yuan and Yu-Gang Jiang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vybQs1Gbuk} }
Long-tailed object detection (LTOD) aims to handle the extreme data imbalance in real-world datasets, where many tail classes have scarce instances. One popular strategy is to explore extra data with image-level labels, yet it produces limited results due to (1) semantic ambiguity---an image-level label only captures a salient part of the image, ignoring the remaining rich semantics within the image; and (2) location sensitivity---the label highly depends on the locations and crops of the original image, which may change after data transformations like random cropping. To remedy this, we propose RichSem, a simple but effective method, which robustly learns rich semantics from coarse locations without the need for accurate bounding boxes. RichSem leverages rich semantics from images, which then serve as additional ``soft supervision'' for training detectors. Specifically, we add a semantic branch to our detector to learn these soft semantics and enhance feature representations for long-tailed object detection. The semantic branch is only used for training and is removed during inference. RichSem achieves consistent improvements on both the overall and rare categories of LVIS under different backbones and detectors. Our method achieves state-of-the-art performance without requiring complex training and testing procedures. Moreover, we show the effectiveness of our method on other long-tailed datasets with additional experiments.
Learning from Rich Semantics and Coarse Locations for Long-tailed Object Detection
[ "Lingchen Meng", "Xiyang Dai", "Jianwei Yang", "Dongdong Chen", "Yinpeng Chen", "Mengchen Liu", "Yi-Ling Chen", "Zuxuan Wu", "Lu Yuan", "Yu-Gang Jiang" ]
Conference
poster
2310.12152
[ "https://github.com/MengLcool/RichSem" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vwr4bHHsRT
@inproceedings{ huang2023optimal, title={Optimal Regret Is Achievable with Bounded Approximate Inference Error: An Enhanced Bayesian Upper Confidence Bound Framework}, author={Ziyi Huang and Henry Lam and Amirhossein Meisami and Haofeng Zhang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vwr4bHHsRT} }
Bayesian bandit algorithms with approximate Bayesian inference have been widely used in real-world applications. However, there is a large discrepancy between the superior practical performance of these approaches and their theoretical justification. Previous research only indicates a negative theoretical result: Thompson sampling could have a worst-case linear regret $\Omega(T)$ with a constant threshold on the inference error measured by one $\alpha$-divergence. To bridge this gap, we propose an Enhanced Bayesian Upper Confidence Bound (EBUCB) framework that can efficiently accommodate bandit problems in the presence of approximate inference. Our theoretical analysis demonstrates that for Bernoulli multi-armed bandits, EBUCB can achieve the optimal regret order $O(\log T)$ if the inference error measured by two different $\alpha$-divergences is less than a constant, regardless of how large this constant is. To our best knowledge, our study provides the first theoretical regret bound that is better than $o(T)$ in the setting of constant approximate inference error. Furthermore, in concordance with the negative results in previous studies, we show that only one bounded $\alpha$-divergence is insufficient to guarantee a sub-linear regret.
Optimal Regret Is Achievable with Bounded Approximate Inference Error: An Enhanced Bayesian Upper Confidence Bound Framework
[ "Ziyi Huang", "Henry Lam", "Amirhossein Meisami", "Haofeng Zhang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vvoWPYqZJA
@inproceedings{ dai2023instructblip, title={Instruct{BLIP}: Towards General-purpose Vision-Language Models with Instruction Tuning}, author={Wenliang Dai and Junnan Li and Dongxu Li and Anthony Tiong and Junqi Zhao and Weisheng Wang and Boyang Li and Pascale Fung and Steven Hoi}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vvoWPYqZJA} }
Large-scale pre-training and instruction tuning have been successful at creating general-purpose language models with broad competence. However, building general-purpose vision-language models is challenging due to the rich input distributions and task diversity resulting from the additional visual input. Although vision-language pretraining has been widely studied, vision-language instruction tuning remains under-explored. In this paper, we conduct a systematic and comprehensive study on vision-language instruction tuning based on the pretrained BLIP-2 models. We gather 26 publicly available datasets, covering a wide variety of tasks and capabilities, and transform them into instruction tuning format. Additionally, we introduce an instruction-aware Query Transformer, which extracts informative features tailored to the given instruction. Trained on 13 held-in datasets, InstructBLIP attains state-of-the-art zero-shot performance across all 13 held-out datasets, substantially outperforming BLIP-2 and larger Flamingo models. Our models also lead to state-of-the-art performance when finetuned on individual downstream tasks (e.g., 90.7% accuracy on ScienceQA questions with image contexts). Furthermore, we qualitatively demonstrate the advantages of InstructBLIP over concurrent multimodal models. All InstructBLIP models are open-source.
InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning
[ "Wenliang Dai", "Junnan Li", "Dongxu Li", "Anthony Tiong", "Junqi Zhao", "Weisheng Wang", "Boyang Li", "Pascale Fung", "Steven Hoi" ]
Conference
poster
2305.06500
[ "https://github.com/salesforce/lavis" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vtoY8qJjTR
@inproceedings{ wang2023train, title={Train Once, Get a Family: State-Adaptive Balances for Offline-to-Online Reinforcement Learning}, author={Shenzhi Wang and Qisen Yang and Jiawei Gao and Matthieu Gaetan Lin and HAO CHEN and Liwei Wu and Ning Jia and Shiji Song and Gao Huang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vtoY8qJjTR} }
Offline-to-online reinforcement learning (RL) is a training paradigm that combines pre-training on a pre-collected dataset with fine-tuning in an online environment. However, the incorporation of online fine-tuning can intensify the well-known distributional shift problem. Existing solutions tackle this problem by imposing a policy constraint on the policy improvement objective in both offline and online learning. They typically advocate a single balance between policy improvement and constraints across diverse data collections. This one-size-fits-all manner may not optimally leverage each collected sample due to the significant variation in data quality across different states. To this end, we introduce Family Offline-to-Online RL (FamO2O), a simple yet effective framework that empowers existing algorithms to determine state-adaptive improvement-constraint balances. FamO2O utilizes a universal model to train a family of policies with different improvement/constraint intensities, and a balance model to select a suitable policy for each state. Theoretically, we prove that state-adaptive balances are necessary for achieving a higher policy performance upper bound. Empirically, extensive experiments show that FamO2O offers a statistically significant improvement over various existing methods, achieving state-of-the-art performance on the D4RL benchmark. Codes are available at https://github.com/LeapLabTHU/FamO2O.
Train Once, Get a Family: State-Adaptive Balances for Offline-to-Online Reinforcement Learning
[ "Shenzhi Wang", "Qisen Yang", "Jiawei Gao", "Matthieu Gaetan Lin", "HAO CHEN", "Liwei Wu", "Ning Jia", "Shiji Song", "Gao Huang" ]
Conference
spotlight
2310.17966
[ "https://github.com/leaplabthu/famo2o" ]
https://huggingface.co/papers/2310.17966
1
0
0
9
1
[]
[]
[]
null
https://openreview.net/forum?id=vtLNwa6uX0
@inproceedings{ kristiadi2023the, title={The Geometry of Neural Nets' Parameter Spaces Under Reparametrization}, author={Agustinus Kristiadi and Felix Dangel and Philipp Hennig}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vtLNwa6uX0} }
Model reparametrization, which follows the change-of-variable rule of calculus, is a popular way to improve the training of neural nets. But it can also be problematic since it can induce inconsistencies in, e.g., Hessian-based flatness measures, optimization trajectories, and modes of probability densities. This complicates downstream analyses: e.g., one cannot definitively relate flatness with generalization since arbitrary reparametrization changes their relationship. In this work, we study the invariance of neural nets under reparametrization from the perspective of Riemannian geometry. From this point of view, invariance is an inherent property of any neural net if one explicitly represents the metric and uses the correct associated transformation rules. This is important because, although the metric is always present, it is often implicitly assumed to be the identity and thus dropped from the notation, and it is then lost under reparametrization. We discuss implications for measuring the flatness of minima, for optimization, and for probability-density maximization. Finally, we explore some interesting directions where invariance is useful.
The Geometry of Neural Nets' Parameter Spaces Under Reparametrization
[ "Agustinus Kristiadi", "Felix Dangel", "Philipp Hennig" ]
Conference
spotlight
2302.07384
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vqGWslLeEw
@inproceedings{ tarasov2023revisiting, title={Revisiting the Minimalist Approach to Offline Reinforcement Learning}, author={Denis Tarasov and Vladislav Kurenkov and Alexander Nikulin and Sergey Kolesnikov}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vqGWslLeEw} }
Recent years have witnessed significant advancements in offline reinforcement learning (RL), resulting in the development of numerous algorithms with varying degrees of complexity. While these algorithms have led to noteworthy improvements, many incorporate seemingly minor design choices that impact their effectiveness beyond core algorithmic advances. However, the effect of these design choices on established baselines remains understudied. In this work, we aim to bridge this gap by conducting a retrospective analysis of recent works in offline RL and propose ReBRAC, a minimalistic algorithm that integrates such design elements built on top of the TD3+BC method. We evaluate ReBRAC on 51 datasets with both proprioceptive and visual state spaces using D4RL and V-D4RL benchmarks, demonstrating its state-of-the-art performance among ensemble-free methods in both offline and offline-to-online settings. To further illustrate the efficacy of these design choices, we perform a large-scale ablation study and hyperparameter sensitivity analysis on the scale of thousands of experiments.
Revisiting the Minimalist Approach to Offline Reinforcement Learning
[ "Denis Tarasov", "Vladislav Kurenkov", "Alexander Nikulin", "Sergey Kolesnikov" ]
Conference
poster
2305.09836
[ "https://github.com/dt6a/rebrac" ]
https://huggingface.co/papers/2305.09836
2
3
0
4
1
[]
[]
[]
null
https://openreview.net/forum?id=vq11gurmUY
@inproceedings{ li2023online, title={Online {PCA} in Converging Self-consistent Field Equations}, author={Xihan Li and Xiang Chen and Rasul Tutunov and Haitham Bou Ammar and Lei Wang and Jun Wang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vq11gurmUY} }
The self-consistent field (SCF) equation is a type of nonlinear eigenvalue problem in which the matrix to be eigen-decomposed is a function of its own eigenvectors. It is of great significance in computational science for its connection to the Schrödinger equation. Traditional fixed-point iteration methods for solving such equations suffer from non-convergence issues. In this work, we present a novel perspective on such SCF equations as a principal component analysis (PCA) for non-stationary time series, in which a distribution and its own top principal components are mutually updated over time, and the equilibrium state of the model corresponds to the solution of the SCF equations. Under this new perspective, online PCA techniques can be brought to bear to enhance the convergence of the model towards the equilibrium state, acting as a new set of tools for converging the SCF equations. With several numerical adaptations, we then develop a new algorithm for converging the SCF equation, and demonstrate its strong convergence behavior with experiments on both synthesized and real electronic structure scenarios.
Online PCA in Converging Self-consistent Field Equations
[ "Xihan Li", "Xiang Chen", "Rasul Tutunov", "Haitham Bou Ammar", "Lei Wang", "Jun Wang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vpQuCsZXz2
@inproceedings{ wang2023transhp, title={Trans{HP}: Image Classification with Hierarchical Prompting}, author={Wenhao Wang and Yifan Sun and Wei Li and Yi Yang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vpQuCsZXz2} }
This paper explores a hierarchical prompting mechanism for the hierarchical image classification (HIC) task. Different from prior HIC methods, our hierarchical prompting is the first to explicitly inject ancestor-class information as a tokenized hint that benefits the descendant-class discrimination. We believe it closely imitates human visual recognition, i.e., humans may use the ancestor class as a prompt to focus on the subtle differences among descendant classes. We model this prompting mechanism into a Transformer with Hierarchical Prompting (TransHP). TransHP consists of three steps: 1) learning a set of prompt tokens to represent the coarse (ancestor) classes, 2) on-the-fly predicting the coarse class of the input image at an intermediate block, and 3) injecting the prompt token of the predicted coarse class into the intermediate feature. Though the parameters of TransHP remain the same for all input images, the injected coarse-class prompt conditions (modifies) the subsequent feature extraction and encourages a dynamic focus on relatively subtle differences among the descendant classes. Extensive experiments show that TransHP improves image classification on accuracy (e.g., improving ViT-B/16 by +2.83% ImageNet classification accuracy), training data efficiency (e.g., +12.69% improvement under 10% ImageNet training data), and model explainability. Moreover, TransHP also performs favorably against prior HIC methods, showing that TransHP well exploits the hierarchical information. The code is available at: https://github.com/WangWenhao0716/TransHP.
TransHP: Image Classification with Hierarchical Prompting
[ "Wenhao Wang", "Yifan Sun", "Wei Li", "Yi Yang" ]
Conference
poster
2304.06385
[ "https://github.com/wangwenhao0716/transhp" ]
https://huggingface.co/papers/2304.06385
1
0
0
4
1
[]
[]
[]
null
https://openreview.net/forum?id=vpMBqdt9Hl
@inproceedings{ chalumeau2023combinatorial, title={Combinatorial Optimization with Policy Adaptation using Latent Space Search}, author={Felix Chalumeau and Shikha Surana and Cl{\'e}ment Bonnet and Nathan Grinsztajn and Arnu Pretorius and Alexandre Laterre and Thomas D Barrett}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vpMBqdt9Hl} }
Combinatorial Optimization underpins many real-world applications and yet, designing performant algorithms to solve these complex, typically NP-hard, problems remains a significant research challenge. Reinforcement Learning (RL) provides a versatile framework for designing heuristics across a broad spectrum of problem domains. However, despite notable progress, RL has not yet supplanted industrial solvers as the go-to solution. Current approaches emphasize pre-training heuristics that construct solutions, but often rely on search procedures with limited variance, such as stochastically sampling numerous solutions from a single policy, or employing computationally expensive fine-tuning of the policy on individual problem instances. Building on the intuition that performant search at inference time should be anticipated during pre-training, we propose COMPASS, a novel RL approach that parameterizes a distribution of diverse and specialized policies conditioned on a continuous latent space. We evaluate COMPASS across three canonical problems - Travelling Salesman, Capacitated Vehicle Routing, and Job-Shop Scheduling - and demonstrate that our search strategy (i) outperforms state-of-the-art approaches in 9 out of 11 standard benchmarking tasks and (ii) generalizes better, surpassing all other approaches on a set of 18 procedurally transformed instance distributions.
Combinatorial Optimization with Policy Adaptation using Latent Space Search
[ "Felix Chalumeau", "Shikha Surana", "Clément Bonnet", "Nathan Grinsztajn", "Arnu Pretorius", "Alexandre Laterre", "Thomas D Barrett" ]
Conference
poster
2311.13569
[ "https://github.com/instadeepai/compass" ]
https://huggingface.co/papers/2311.13569
5
0
0
7
1
[]
[]
[]
null
https://openreview.net/forum?id=voG6nEW9BV
@inproceedings{ baldassari2023conditional, title={Conditional score-based diffusion models for Bayesian inference in infinite dimensions}, author={Lorenzo Baldassari and Ali Siahkoohi and Josselin Garnier and Knut Solna and Maarten V. de Hoop}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=voG6nEW9BV} }
Since their initial introduction, score-based diffusion models (SDMs) have been successfully applied to solve a variety of linear inverse problems in finite-dimensional vector spaces due to their ability to efficiently approximate the posterior distribution. However, using SDMs for inverse problems in infinite-dimensional function spaces has only been addressed recently, primarily through methods that learn the unconditional score. While this approach is advantageous for some inverse problems, it is mostly heuristic and involves numerous computationally costly forward operator evaluations during posterior sampling. To address these limitations, we propose a theoretically grounded method for sampling from the posterior of infinite-dimensional Bayesian linear inverse problems based on amortized conditional SDMs. In particular, we prove that one of the most successful approaches for estimating the conditional score in finite dimensions—the conditional denoising estimator—can also be applied in infinite dimensions. A significant part of our analysis is dedicated to demonstrating that extending infinite-dimensional SDMs to the conditional setting requires careful consideration, as the conditional score typically blows up for small times, contrary to the unconditional score. We conclude by presenting stylized and large-scale numerical examples that validate our approach, offer additional insights, and demonstrate that our method enables large-scale, discretization-invariant Bayesian inference.
Conditional score-based diffusion models for Bayesian inference in infinite dimensions
[ "Lorenzo Baldassari", "Ali Siahkoohi", "Josselin Garnier", "Knut Solna", "Maarten V. de Hoop" ]
Conference
spotlight
2305.19147
[ "https://github.com/alisiahkoohi/csgm" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vnTUuecp2v
@inproceedings{ toonsi2023higherorder, title={Higher-Order Uncoupled Dynamics Do Not Lead to Nash Equilibrium - Except When They Do}, author={Sarah Asad Toonsi and Jeff S Shamma}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vnTUuecp2v} }
The framework of multi-agent learning explores the dynamics of how an agent's strategies evolve in response to the evolving strategies of other agents. Of particular interest is whether or not agent strategies converge to well known solution concepts such as Nash Equilibrium (NE). In ``higher-order'' learning, agent dynamics include auxiliary states that can capture phenomena such as path dependencies. We introduce higher-order gradient play dynamics that resemble projected gradient ascent with auxiliary states. The dynamics are ``payoff based'' and ``uncoupled'' in that each agent's dynamics depend on its own evolving payoff and have no explicit dependence on the utilities of other agents. We first show that for any specific game with an isolated completely mixed-strategy NE, there exist higher-order gradient play dynamics that lead (locally) to that NE, both for the specific game and nearby games with perturbed utility functions. Conversely, we show that for any higher-order gradient play dynamics, there exists a game with a unique isolated completely mixed-strategy NE for which the dynamics do not lead to NE. Finally, we show that convergence to the mixed-strategy equilibrium in coordination games comes at the expense of the dynamics being inherently internally unstable.
Higher-Order Uncoupled Dynamics Do Not Lead to Nash Equilibrium - Except When They Do
[ "Sarah Asad Toonsi", "Jeff S Shamma" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vnGcubtzR1
@inproceedings{ xie2023on, title={On the Overlooked Pitfalls of Weight Decay and How to Mitigate Them: A Gradient-Norm Perspective}, author={Zeke Xie and zhiqiang xu and Jingzhao Zhang and Issei Sato and Masashi Sugiyama}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vnGcubtzR1} }
Weight decay is a simple yet powerful regularization technique that is widely used in training deep neural networks (DNNs). While weight decay has attracted much attention, previous studies have overlooked a pitfall concerning the large gradient norms it can produce. In this paper, we show that weight decay can unfortunately lead to large gradient norms at the final phase of training (i.e., at the terminated solution), which often indicates bad convergence and poor generalization. To mitigate these gradient-norm-centered pitfalls, we present the first practical scheduler for weight decay, called Scheduled Weight Decay (SWD), which dynamically adjusts the weight decay strength according to the gradient norm and significantly penalizes large gradient norms during training. Our experiments confirm that SWD indeed mitigates large gradient norms and often significantly outperforms the conventional constant weight decay strategy for Adaptive Moment Estimation (Adam).
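For readers who want to experiment with the idea described in this abstract, the snippet below is a minimal, hedged sketch of a gradient-norm-aware weight decay schedule: the decay strength is rescaled each step from the observed global gradient norm. The specific scaling rule (base decay times the ratio of the current gradient norm to its running average) is an illustrative assumption, not the SWD formula from the paper.

```python
# Illustrative sketch only (not the paper's SWD schedule): decoupled weight decay
# whose strength is rescaled each step from the observed global gradient norm,
# so steps with unusually large gradient norms receive a stronger decay penalty.
import torch


@torch.no_grad()
def global_grad_norm(params):
    """L2 norm of all parameter gradients, treated as one long vector."""
    total = torch.zeros(())
    for p in params:
        if p.grad is not None:
            total += (p.grad ** 2).sum()
    return total.sqrt()


class ScheduledWDSGD:
    """Plain SGD with decoupled weight decay whose strength tracks the gradient norm."""

    def __init__(self, params, lr=0.1, base_wd=5e-4, ema_beta=0.9):
        self.params = list(params)
        self.lr, self.base_wd, self.beta = lr, base_wd, ema_beta
        self.ema = None  # running average of the global gradient norm

    @torch.no_grad()
    def step(self):
        g = global_grad_norm(self.params)
        self.ema = g if self.ema is None else self.beta * self.ema + (1 - self.beta) * g
        # Assumed rule: larger-than-average gradient norm -> stronger decay.
        wd = self.base_wd * float(g / (self.ema + 1e-12))
        for p in self.params:
            if p.grad is None:
                continue
            p.mul_(1.0 - self.lr * wd)      # decoupled weight decay
            p.add_(p.grad, alpha=-self.lr)  # vanilla SGD update


# Tiny usage example on a throwaway linear model.
model = torch.nn.Linear(10, 1)
opt = ScheduledWDSGD(model.parameters())
x, y = torch.randn(32, 10), torch.randn(32, 1)
loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()
opt.step()
```

In practice this logic would be wrapped in a proper torch.optim.Optimizer subclass; it is kept as a bare class here only to keep the sketch short.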
On the Overlooked Pitfalls of Weight Decay and How to Mitigate Them: A Gradient-Norm Perspective
[ "Zeke Xie", "zhiqiang xu", "Jingzhao Zhang", "Issei Sato", "Masashi Sugiyama" ]
Conference
poster
2011.11152
[ "https://github.com/zeke-xie/stable-weight-decay-regularization" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vlDbqzwczj
@inproceedings{ cui2023a, title={A Novel Approach for Effective Multi-View Clustering with Information-Theoretic Perspective}, author={Chenhang Cui and Yazhou Ren and Jingyu Pu and Jiawei Li and Xiaorong Pu and Tianyi Wu and Yutao Shi and Lifang He}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vlDbqzwczj} }
Multi-view clustering (MVC) is a popular technique for improving clustering performance using various data sources. However, existing methods primarily focus on acquiring consistent information while often neglecting the issue of redundancy across multiple views. This study presents a new approach called Sufficient Multi-View Clustering (SUMVC) that examines the multi-view clustering framework from an information-theoretic standpoint. Our proposed method consists of two parts. Firstly, we develop a simple and reliable multi-view clustering method SCMVC (simple consistent multi-view clustering) that employs variational analysis to generate consistent information. Secondly, we propose a sufficient representation lower bound to enhance consistent information and minimise unnecessary information among views. The proposed SUMVC method offers a promising solution to the problem of multi-view clustering and provides a new perspective for analyzing multi-view data. To verify the effectiveness of our model, we conducted a theoretical analysis based on the Bayes Error Rate, and experiments on multiple multi-view datasets demonstrate the superior performance of SUMVC.
A Novel Approach for Effective Multi-View Clustering with Information-Theoretic Perspective
[ "Chenhang Cui", "Yazhou Ren", "Jingyu Pu", "Jiawei Li", "Xiaorong Pu", "Tianyi Wu", "Yutao Shi", "Lifang He" ]
Conference
poster
2309.13989
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vf77fTbgG3
@inproceedings{ amini2023structured, title={Structured Voronoi Sampling}, author={Afra Amini and Li Du and Ryan Cotterell}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vf77fTbgG3} }
Gradient-based sampling algorithms have demonstrated their effectiveness in text generation, especially in the context of controlled text generation. However, there exists a lack of theoretically grounded and principled approaches for this task. In this paper, we take an important step toward building a principled approach for sampling from language models with gradient-based methods. We use discrete distributions given by language models to define densities and develop an algorithm based on Hamiltonian Monte Carlo to sample from them. We name our gradient-based technique Structured Voronoi Sampling (SVS). In an experimental setup where the reference distribution is known, we show that the empirical distribution of SVS samples is closer to the reference distribution compared to alternative sampling schemes. Furthermore, in a controlled generation task, SVS is able to generate fluent and diverse samples while following the control targets significantly better than other methods.
Structured Voronoi Sampling
[ "Afra Amini", "Li Du", "Ryan Cotterell" ]
Conference
poster
2306.03061
[ "https://github.com/afraamini/svs" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vZRiMjo826
@inproceedings{ gazdieva2023extremal, title={Extremal Domain Translation with Neural Optimal Transport}, author={Milena Gazdieva and Alexander Korotin and Daniil Selikhanovych and Evgeny Burnaev}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vZRiMjo826} }
In many unpaired image domain translation problems, e.g., style transfer or super-resolution, it is important to keep the translated image similar to its respective input image. We propose the extremal transport (ET), which is a mathematical formalization of the theoretically best possible unpaired translation between a pair of domains w.r.t. the given similarity function. Inspired by the recent advances in neural optimal transport (OT), we propose a scalable algorithm to approximate ET maps as a limit of partial OT maps. We test our algorithm on toy examples and on the unpaired image-to-image translation task. The code is publicly available at https://github.com/milenagazdieva/ExtremalNeuralOptimalTransport
Extremal Domain Translation with Neural Optimal Transport
[ "Milena Gazdieva", "Alexander Korotin", "Daniil Selikhanovych", "Evgeny Burnaev" ]
Conference
poster
2301.12874
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vZHk1QlBQW
@inproceedings{ jiang2023forkmerge, title={ForkMerge: Mitigating Negative Transfer in Auxiliary-Task Learning}, author={Junguang Jiang and Baixu Chen and Junwei Pan and Ximei Wang and Dapeng Liu and jie jiang and Mingsheng Long}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vZHk1QlBQW} }
Auxiliary-Task Learning (ATL) aims to improve the performance of the target task by leveraging the knowledge obtained from related tasks. Occasionally, learning multiple tasks simultaneously results in lower accuracy than learning only the target task, which is known as negative transfer. This problem is often attributed to the gradient conflicts among tasks, and is frequently tackled by coordinating the task gradients in previous works. However, these optimization-based methods largely overlook the auxiliary-target generalization capability. To better understand the root cause of negative transfer, we experimentally investigate it from both optimization and generalization perspectives. Based on our findings, we introduce ForkMerge, a novel approach that periodically forks the model into multiple branches, automatically searches the varying task weights by minimizing target validation errors, and dynamically merges all branches to filter out detrimental task-parameter updates. On a series of auxiliary-task learning benchmarks, ForkMerge outperforms existing methods and effectively mitigates negative transfer.
ForkMerge: Mitigating Negative Transfer in Auxiliary-Task Learning
[ "Junguang Jiang", "Baixu Chen", "Junwei Pan", "Ximei Wang", "Dapeng Liu", "jie jiang", "Mingsheng Long" ]
Conference
poster
2301.12618
[ "https://github.com/thuml/forkmerge" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vUXNNLatFv
@inproceedings{ chen2023a, title={A Unified Framework for Uniform Signal Recovery in Nonlinear Generative Compressed Sensing}, author={Junren Chen and Jonathan Scarlett and Michael Ng and Zhaoqiang Liu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vUXNNLatFv} }
In generative compressed sensing (GCS), we want to recover a signal $\mathbf{x^*}\in\mathbb{R}^n$ from $m$ measurements ($m\ll n$) using a generative prior $\mathbf{x^*}\in G(\mathbb{B}_2^k(r))$, where $G$ is typically an $L$-Lipschitz continuous generative model and $\mathbb{B}_2^k(r)$ represents the radius-$r$ $\ell_2$-ball in $\mathbb{R}^k$. Under nonlinear measurements, most prior results are non-uniform, i.e., they hold with high probability for a fixed $\mathbf{x^*}$ rather than for all $\mathbf{x^*}$ simultaneously. In this paper, we build a unified framework to derive uniform recovery guarantees for nonlinear GCS where the observation model is nonlinear and possibly discontinuous or unknown. Our framework accommodates GCS with 1-bit/uniformly quantized observations and the single index model as canonical examples. Specifically, using a single realization of the sensing ensemble and generalized Lasso, all $\mathbf{x^*}\in G(\mathbb{B}_2^k(r))$ can be recovered up to an $\ell_2$-error of at most $\epsilon$ using roughly $\tilde{O}({k}/{\epsilon^2})$ samples, with omitted logarithmic factors typically being dominated by $\log L$. Notably, this almost coincides with existing non-uniform guarantees up to logarithmic factors, hence the uniformity costs very little. As part of our technical contributions, we introduce Lipschitz approximation to handle discontinuous observation models. We also develop a concentration inequality that produces a tighter bound for product processes whose index sets have low metric entropy. Experimental results are presented to corroborate our theory.
A Unified Framework for Uniform Signal Recovery in Nonlinear Generative Compressed Sensing
[ "Junren Chen", "Jonathan Scarlett", "Michael Ng", "Zhaoqiang Liu" ]
Conference
poster
2310.03758
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vTug54Uunq
@inproceedings{ wang2023faster, title={Faster Margin Maximization Rates for Generic Optimization Methods}, author={Guanghui Wang and Zihao Hu and Vidya Muthukumar and Jacob Abernethy}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vTug54Uunq} }
First-order optimization methods tend to inherently favor certain solutions over others when minimizing a given training objective with multiple local optima. This phenomenon, known as \emph{implicit bias}, plays a critical role in understanding the generalization capabilities of optimization algorithms. Recent research has revealed that gradient-descent-based methods exhibit an implicit bias for the $\ell_2$-maximal margin classifier in the context of separable binary classification. In contrast, generic optimization methods, such as mirror descent and steepest descent, have been shown to converge to maximal margin classifiers defined by alternative geometries. However, while gradient-descent-based algorithms demonstrate fast implicit bias rates, the implicit bias rates of generic optimization methods have been relatively slow. To address this limitation, in this paper, we present a series of state-of-the-art implicit bias rates for mirror descent and steepest descent algorithms. Our primary technique involves transforming a generic optimization algorithm into an online learning dynamic that solves a regularized bilinear game, providing a unified framework for analyzing the implicit bias of various optimization methods. The accelerated rates are derived leveraging the regret bounds of online learning algorithms within this game framework.
Faster Margin Maximization Rates for Generic Optimization Methods
[ "Guanghui Wang", "Zihao Hu", "Vidya Muthukumar", "Jacob Abernethy" ]
Conference
spotlight
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vORUHrVEnH
@inproceedings{ zhou2023going, title={Going Beyond Linear Mode Connectivity: The Layerwise Linear Feature Connectivity}, author={Zhanpeng Zhou and Yongyi Yang and Xiaojiang Yang and Junchi Yan and Wei Hu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vORUHrVEnH} }
Recent work has revealed many intriguing empirical phenomena in neural network training, despite the poorly understood and highly complex loss landscapes and training dynamics. One of these phenomena, Linear Mode Connectivity (LMC), has gained considerable attention due to the intriguing observation that different solutions can be connected by a linear path in the parameter space while maintaining near-constant training and test losses. In this work, we introduce a stronger notion of linear connectivity, Layerwise Linear Feature Connectivity (LLFC), which says that the feature maps of every layer in different trained networks are also linearly connected. We provide comprehensive empirical evidence for LLFC across a wide range of settings, demonstrating that whenever two trained networks satisfy LMC (via either spawning or permutation methods), they also satisfy LLFC in nearly all the layers. Furthermore, we delve deeper into the underlying factors contributing to LLFC, which reveal new insights into the permutation approaches. The study of LLFC transcends and advances our understanding of LMC by adopting a feature-learning perspective.
Going Beyond Linear Mode Connectivity: The Layerwise Linear Feature Connectivity
[ "Zhanpeng Zhou", "Yongyi Yang", "Xiaojiang Yang", "Junchi Yan", "Wei Hu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vO6ZdPWaHc
@inproceedings{ tan2023data, title={Data Pruning via Moving-one-Sample-out}, author={Haoru Tan and Sitong Wu and Fei Du and Yukang Chen and Zhibin Wang and Fan Wang and XIAOJUAN QI}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vO6ZdPWaHc} }
In this paper, we propose a novel data-pruning approach called moving-one-sample-out (MoSo), which aims to identify and remove the least informative samples from the training set. The core insight behind MoSo is to determine the importance of each sample by assessing its impact on the optimal empirical risk. This is achieved by measuring the extent to which the empirical risk changes when a particular sample is excluded from the training set. Instead of using the computationally expensive leave-one-out retraining procedure, we propose an efficient first-order approximator that only requires gradient information from different training stages. The key idea behind our approximation is that samples with gradients that are consistently aligned with the average gradient of the training set are more informative and should receive higher scores, which could be intuitively understood as follows: if the gradient from a specific sample is consistent with the average gradient vector, it implies that optimizing the network using the sample will yield a similar effect on all remaining samples. Experimental results demonstrate that MoSo effectively mitigates severe performance degradation at high pruning ratios and outperforms state-of-the-art methods by a large margin across various settings.
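As a rough illustration of the gradient-alignment scoring described above, the sketch below computes per-sample gradients of a logistic-regression loss in closed form and scores each sample by its inner product with the average gradient; the lowest-scoring samples are pruned. MoSo itself aggregates such scores across different training stages of a neural network, which this toy example does not attempt; the data, model, and 20% pruning ratio are arbitrary choices for the demo.

```python
# Toy gradient-alignment scoring in the spirit of MoSo: a sample scores high when its
# per-sample gradient points in the same direction as the average training gradient.
import numpy as np


def moso_like_scores(X, y, w):
    """Alignment of each per-sample gradient with the mean gradient (higher = keep)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))            # predicted probabilities, shape (n,)
    per_sample_grads = (p - y)[:, None] * X     # per-sample log-loss gradients, shape (n, d)
    mean_grad = per_sample_grads.mean(axis=0)   # average gradient over the training set
    return per_sample_grads @ mean_grad         # one alignment score per sample


rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.1 * rng.normal(size=1000) > 0).astype(float)
w = 0.01 * rng.normal(size=20)                  # weights of a partially trained model

scores = moso_like_scores(X, y, w)
keep = np.argsort(scores)[int(0.2 * len(scores)):]  # drop the 20% least-informative samples
print(f"kept {keep.size} of {scores.size} samples")
```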
Data Pruning via Moving-one-Sample-out
[ "Haoru Tan", "Sitong Wu", "Fei Du", "Yukang Chen", "Zhibin Wang", "Fan Wang", "XIAOJUAN QI" ]
Conference
poster
2310.14664
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vO04AzsB49
@inproceedings{ li2023imitation, title={Imitation Learning from Imperfection: Theoretical Justifications and Algorithms}, author={Ziniu Li and Tian Xu and Zeyu Qin and Yang Yu and Zhi-Quan Luo}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vO04AzsB49} }
Imitation learning (IL) algorithms excel in acquiring high-quality policies from expert data for sequential decision-making tasks. But, their effectiveness is hampered when faced with limited expert data. To tackle this challenge, a novel framework called (offline) IL with supplementary data has been proposed, which enhances learning by incorporating an additional yet imperfect dataset obtained inexpensively from sub-optimal policies. Nonetheless, learning becomes challenging due to the potential inclusion of out-of-expert-distribution samples. In this work, we propose a mathematical formalization of this framework, uncovering its limitations. Our theoretical analysis reveals that a naive approach—applying the behavioral cloning (BC) algorithm concept to the combined set of expert and supplementary data—may fall short of vanilla BC, which solely relies on expert data. This deficiency arises due to the distribution shift between the two data sources. To address this issue, we propose a new importance-sampling-based technique for selecting data within the expert distribution. We prove that the proposed method eliminates the gap of the naive approach, highlighting its efficacy when handling imperfect data. Empirical studies demonstrate that our method outperforms previous state-of-the-art methods in tasks including robotic locomotion control, Atari video games, and image classification. Overall, our work underscores the potential of improving IL by leveraging diverse data sources through effective data selection.
Imitation Learning from Imperfection: Theoretical Justifications and Algorithms
[ "Ziniu Li", "Tian Xu", "Zeyu Qin", "Yang Yu", "Zhi-Quan Luo" ]
Conference
spotlight
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vNsdFwjPtL
@inproceedings{ jia2023suggesting, title={Suggesting Variable Order for Cylindrical Algebraic Decomposition via Reinforcement Learning}, author={Fuqi Jia and Yuhang Dong and Minghao Liu and Pei Huang and Feifei Ma and Jian Zhang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vNsdFwjPtL} }
Cylindrical Algebraic Decomposition (CAD) is one of the pillar algorithms of symbolic computation, and its worst-case complexity is doubly exponential in the number of variables. Researchers found that variable order dramatically affects efficiency and proposed various heuristics. The existing learning-based methods are all supervised learning methods that cannot cope with diverse polynomial sets. This paper proposes two Reinforcement Learning (RL) approaches combined with Graph Neural Networks (GNN) for Suggesting Variable Order (SVO). One is GRL-SVO(UP), a branching heuristic integrated with CAD. The other is GRL-SVO(NUP), a fast heuristic providing a total order directly. We generate a random dataset and collect a real-world dataset from SMT-LIB. The experiments show that our approaches outperform state-of-the-art learning-based heuristics and are competitive with the best expert-based heuristics. Interestingly, our models show a strong generalization ability, working well on various datasets even if they are only trained on a 3-var random dataset. The source code and data are available at https://github.com/dongyuhang22/GRL-SVO.
Suggesting Variable Order for Cylindrical Algebraic Decomposition via Reinforcement Learning
[ "Fuqi Jia", "Yuhang Dong", "Minghao Liu", "Pei Huang", "Feifei Ma", "Jian Zhang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vM5VnNQ4n7
@inproceedings{ verma2023exploiting, title={Exploiting Correlated Auxiliary Feedback in Parameterized Bandits}, author={Arun Verma and Zhongxiang Dai and Yao Shu and Bryan Kian Hsiang Low}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vM5VnNQ4n7} }
We study a novel variant of the parameterized bandits problem in which the learner can observe additional auxiliary feedback that is correlated with the observed reward. The auxiliary feedback is readily available in many real-life applications, e.g., an online platform that wants to recommend the best-rated services to its users can observe the user's rating of service (rewards) and collect additional information like service delivery time (auxiliary feedback). In this paper, we first develop a method that exploits auxiliary feedback to build a reward estimator with tight confidence bounds, leading to a smaller regret. We then characterize the regret reduction in terms of the correlation coefficient between reward and its auxiliary feedback. Experimental results in different settings also verify the performance gain achieved by our proposed method.
Exploiting Correlated Auxiliary Feedback in Parameterized Bandits
[ "Arun Verma", "Zhongxiang Dai", "Yao Shu", "Bryan Kian Hsiang Low" ]
Conference
poster
2311.02715
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vKpVJxplmB
@inproceedings{ kim2023transformer, title={Transformer as a hippocampal memory consolidation model based on {NMDAR}-inspired nonlinearity}, author={Dong-Kyum Kim and Jea Kwon and Meeyoung Cha and C. Justin Lee}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vKpVJxplmB} }
The hippocampus plays a critical role in learning, memory, and spatial representation, processes that depend on the NMDA receptor (NMDAR). Inspired by recent findings that compare deep learning models to the hippocampus, we propose a new nonlinear activation function that mimics NMDAR dynamics. NMDAR-like nonlinearity shifts short-term working memory into long-term reference memory in transformers, thus enhancing a process that is similar to memory consolidation in the mammalian brain. We design a navigation task assessing these two memory functions and show that manipulating the activation function (i.e., mimicking the Mg$^{2+}$-gating of NMDAR) disrupts long-term memory processes. Our experiments suggest that place cell-like functions and reference memory reside in the feed-forward network layer of transformers and that nonlinearity drives these processes. We discuss the role of NMDAR-like nonlinearity in establishing this striking resemblance between transformer architecture and hippocampal spatial representation.
Transformer as a hippocampal memory consolidation model based on NMDAR-inspired nonlinearity
[ "Dong-Kyum Kim", "Jea Kwon", "Meeyoung Cha", "C. Justin Lee" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vIGNYQ4Alv
@inproceedings{ jiang2023accelerated, title={Accelerated Quasi-Newton Proximal Extragradient: Faster Rate for Smooth Convex Optimization}, author={Ruichen Jiang and Aryan Mokhtari}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vIGNYQ4Alv} }
In this paper, we propose an accelerated quasi-Newton proximal extragradient method for solving unconstrained smooth convex optimization problems. With access only to the gradients of the objective, we prove that our method can achieve a convergence rate of $\mathcal{O}\bigl(\min\{\frac{1}{k^2}, \frac{\sqrt{d\log k}}{k^{2.5}}\}\bigr)$, where $d$ is the problem dimension and $k$ is the number of iterations. In particular, in the regime where $k = \mathcal{O}(d)$, our method matches the _optimal rate_ of $\mathcal{O}(\frac{1}{k^2})$ by Nesterov's accelerated gradient (NAG). Moreover, in the regime where $k = \Omega(d \log d)$, it outperforms NAG and converges at a _faster rate_ of $\mathcal{O}\bigl(\frac{\sqrt{d\log k}}{k^{2.5}}\bigr)$. To the best of our knowledge, this result is the first to demonstrate a provable gain for a quasi-Newton-type method over NAG in the convex setting. To achieve such results, we build our method on a recent variant of the Monteiro-Svaiter acceleration framework and adopt an online learning perspective to update the Hessian approximation matrices, in which we relate the convergence rate of our method to the dynamic regret of a specific online convex optimization problem in the space of matrices.
Accelerated Quasi-Newton Proximal Extragradient: Faster Rate for Smooth Convex Optimization
[ "Ruichen Jiang", "Aryan Mokhtari" ]
Conference
spotlight
2306.02212
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vHSQTEIFkp
@inproceedings{ kornilov2023accelerated, title={Accelerated Zeroth-order Method for Non-Smooth Stochastic Convex Optimization Problem with Infinite Variance}, author={Nikita Kornilov and Ohad Shamir and Aleksandr Lobanov and Darina Dvinskikh and Alexander Gasnikov and Innokentiy Andreevich Shibaev and Eduard Gorbunov and Samuel Horv{\'a}th}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vHSQTEIFkp} }
In this paper, we consider non-smooth stochastic convex optimization with two function evaluations per round under infinite noise variance. In the classical setting when noise has finite variance, an optimal algorithm, built upon the batched accelerated gradient method, was proposed in (Gasnikov et al., 2022). This optimality is defined in terms of iteration and oracle complexity, as well as the maximal admissible level of adversarial noise. However, the assumption of finite variance is burdensome and it might not hold in many practical scenarios. To address this, we demonstrate how to adapt a refined clipped version of the accelerated gradient (Stochastic Similar Triangles) method from (Sadiev et al., 2023) for a two-point zero-order oracle. This adaptation entails extending the batching technique to accommodate infinite variance — a non-trivial task that stands as a distinct contribution of this paper.
Accelerated Zeroth-order Method for Non-Smooth Stochastic Convex Optimization Problem with Infinite Variance
[ "Nikita Kornilov", "Ohad Shamir", "Aleksandr Lobanov", "Darina Dvinskikh", "Alexander Gasnikov", "Innokentiy Andreevich Shibaev", "Eduard Gorbunov", "Samuel Horváth" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vHRLS8HhK1
@inproceedings{ zhao2023generalized, title={Generalized Weighted Path Consistency for Mastering Atari Games}, author={Dengwei Zhao and Shikui Tu and Lei Xu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vHRLS8HhK1} }
Reinforcement learning with the help of neural-guided search consumes huge computational resources to achieve remarkable performance. Path consistency (PC), i.e., that $f$ values on one optimal path should be identical, was previously imposed on MCTS by PCZero to improve the learning efficiency of AlphaZero. However, PCZero not only lacks theoretical support but also considers merely board games. In this paper, PCZero is generalized into GW-PCZero for real applications with non-zero immediate reward. A weighting mechanism is introduced to reduce the variance caused by scouting's uncertainty in the $f$ value estimation. For the first time, it is theoretically proved that neural-guided MCTS is guaranteed to find the optimal solution under the constraint of PC. Experiments are conducted on the Atari $100$k benchmark with $26$ games, and GW-PCZero achieves $198\%$ mean human performance, higher than the state-of-the-art EfficientZero's $194\%$, while consuming only $25\%$ of the computational resources consumed by EfficientZero.
Generalized Weighted Path Consistency for Mastering Atari Games
[ "Dengwei Zhao", "Shikui Tu", "Lei Xu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vF8ukt5l1R
@inproceedings{ parthasarathy2023selfsupervised, title={Self-supervised video pretraining yields robust and more human-aligned visual representations}, author={Nikhil Parthasarathy and S. M. Ali Eslami and Joao Carreira and Olivier J Henaff}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vF8ukt5l1R} }
Humans learn powerful representations of objects and scenes by observing how they evolve over time. Yet, outside of specific tasks that require explicit temporal understanding, static image pretraining remains the dominant paradigm for learning visual foundation models. We question this mismatch, and ask whether video pretraining can yield visual representations that bear the hallmarks of human perception: generalisation across tasks, robustness to perturbations, and consistency with human judgements. To that end we propose a novel procedure for curating videos, and develop a contrastive framework which learns from the complex transformations therein. This simple paradigm for distilling knowledge from videos, called VITO, yields general representations that far outperform prior video pretraining methods on image understanding tasks, and image pretraining methods on video understanding tasks. Moreover, VITO representations are significantly more robust to natural and synthetic deformations than image-, video-, and adversarially-trained ones. Finally, VITO’s predictions are strongly aligned with human judgements, surpassing models that were specifically trained for that purpose. Together, these results suggest that video pretraining could be a simple way of learning unified, robust, and human-aligned representations of the visual world.
Self-supervised video pretraining yields robust and more human-aligned visual representations
[ "Nikhil Parthasarathy", "S. M. Ali Eslami", "Joao Carreira", "Olivier J Henaff" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vEzcRdiTkP
@inproceedings{ errica2023on, title={On Class Distributions Induced by Nearest Neighbor Graphs for Node Classification of Tabular Data}, author={Federico Errica}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vEzcRdiTkP} }
Researchers have used nearest neighbor graphs to transform classical machine learning problems on tabular data into node classification tasks to solve with graph representation learning methods. Such artificial structures often reflect the homophily assumption, believed to be a key factor in the performances of deep graph networks. In light of recent results demystifying these beliefs, we introduce a theoretical framework to understand the benefits of Nearest Neighbor (NN) graphs when a graph structure is missing. We formally analyze the Cross-Class Neighborhood Similarity (CCNS), used to empirically evaluate the usefulness of structures, in the context of nearest neighbor graphs. Moreover, we study the class separability induced by deep graph networks on a k-NN graph. Motivated by the theory, our quantitative experiments demonstrate that, under full supervision, employing a k-NN graph offers no benefits compared to a structure-agnostic baseline. Qualitative analyses suggest that our framework is good at estimating the CCNS and hint at k-NN graphs never being useful for such classification tasks under full supervision, thus advocating for the study of alternative graph construction techniques in combination with deep graph networks.
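The setting studied in this abstract is easy to reproduce in miniature: build a k-NN graph over tabular features and check how often edges link nodes of the same class. The sketch below uses scikit-learn's kneighbors_graph on the Wine dataset and reports a plain edge-homophily ratio; this is only a crude stand-in for the Cross-Class Neighborhood Similarity analyzed in the paper, and the choice of dataset and k=5 is arbitrary.

```python
# Turn a tabular dataset into a k-NN graph and measure how often k-NN edges connect
# nodes of the same class (edge homophily) -- a rough proxy for structural usefulness.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.neighbors import kneighbors_graph
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
X = StandardScaler().fit_transform(X)

A = kneighbors_graph(X, n_neighbors=5, mode="connectivity")  # sparse (n, n) adjacency
rows, cols = A.nonzero()
edge_homophily = np.mean(y[rows] == y[cols])
print(f"{len(rows)} directed k-NN edges, edge homophily = {edge_homophily:.3f}")
```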
On Class Distributions Induced by Nearest Neighbor Graphs for Node Classification of Tabular Data
[ "Federico Errica" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vBwSACOB3x
@inproceedings{ rodionov2023neural, title={Neural Algorithmic Reasoning Without Intermediate Supervision}, author={Gleb Rodionov and Liudmila Prokhorenkova}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vBwSACOB3x} }
Neural algorithmic reasoning is an emerging area of machine learning focusing on building models that can imitate the execution of classic algorithms, such as sorting, shortest paths, etc. One of the main challenges is to learn algorithms that are able to generalize to out-of-distribution data, in particular with significantly larger input sizes. Recent work on this problem has demonstrated the advantages of learning algorithms step-by-step, giving models access to all intermediate steps of the original algorithm. In this work, we instead focus on learning neural algorithmic reasoning only from the input-output pairs without appealing to the intermediate supervision. We propose simple but effective architectural improvements and also build a self-supervised objective that can regularise intermediate computations of the model without access to the algorithm trajectory. We demonstrate that our approach is competitive to its trajectory-supervised counterpart on tasks from the CLRS Algorithmic Reasoning Benchmark and achieves new state-of-the-art results for several problems, including sorting, where we obtain significant improvements. Thus, learning without intermediate supervision is a promising direction for further research on neural reasoners.
Neural Algorithmic Reasoning Without Intermediate Supervision
[ "Gleb Rodionov", "Liudmila Prokhorenkova" ]
Conference
poster
2306.13411
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vBHKSTgcYQ
@inproceedings{ ito2023an, title={An Exploration-by-Optimization Approach to Best of Both Worlds in Linear Bandits}, author={Shinji Ito and Kei Takemura}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vBHKSTgcYQ} }
In this paper, we consider how to construct best-of-both-worlds linear bandit algorithms that achieve nearly optimal performance for both stochastic and adversarial environments. For this purpose, we show that a natural approach referred to as exploration by optimization [Lattimore and Szepesvári, 2020] works well. Specifically, an algorithm constructed using this approach achieves $O(d \sqrt{ T \log{T}})$-regret in adversarial environments and $O(\frac{d^2 \log T}{\Delta_{\min}} )$-regret in stochastic environments. Symbols $d$, $T$ and $\Delta_{\min}$ here represent the dimensionality of the action set, the time horizon, and the minimum sub-optimality gap, respectively. We also show that this algorithm has even better theoretical guarantees for important special cases including the multi-armed bandit problem and multitask bandits.
An Exploration-by-Optimization Approach to Best of Both Worlds in Linear Bandits
[ "Shinji Ito", "Kei Takemura" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vAElhFcKW6
@inproceedings{ shinn2023reflexion, title={Reflexion: language agents with verbal reinforcement learning}, author={Noah Shinn and Federico Cassano and Ashwin Gopinath and Karthik R Narasimhan and Shunyu Yao}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vAElhFcKW6} }
Large language models (LLMs) have been increasingly used to interact with external environments (e.g., games, compilers, APIs) as goal-driven agents. However, it remains challenging for these language agents to quickly and efficiently learn from trial-and-error as traditional reinforcement learning methods require extensive training samples and expensive model fine-tuning. We propose \emph{Reflexion}, a novel framework to reinforce language agents not by updating weights, but instead through linguistic feedback. Concretely, Reflexion agents verbally reflect on task feedback signals, then maintain their own reflective text in an episodic memory buffer to induce better decision-making in subsequent trials. Reflexion is flexible enough to incorporate various types (scalar values or free-form language) and sources (external or internally simulated) of feedback signals, and obtains significant improvements over a baseline agent across diverse tasks (sequential decision-making, coding, language reasoning). For example, Reflexion achieves a 91\% pass@1 accuracy on the HumanEval coding benchmark, surpassing the previous state-of-the-art GPT-4 that achieves 80\%. We also conduct ablation and analysis studies using different feedback signals, feedback incorporation methods, and agent types, and provide insights into how they affect performance. We release all code, demos, and datasets at \url{https://github.com/noahshinn024/reflexion}.
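The verbal-reinforcement loop described above can be summarized in a few lines of schematic Python. In the sketch below, `llm` and `run_task` are hypothetical placeholders for a language-model call and a task/environment evaluator, and the prompts are invented for illustration; the released Reflexion code linked in the abstract is the authoritative implementation.

```python
# Schematic Reflexion-style loop: attempt a task, turn feedback into a self-reflection,
# store it in an episodic memory buffer, and retry with the accumulated reflections.
def reflexion_loop(task, llm, run_task, max_trials=4):
    memory = []  # episodic buffer of free-form self-reflections
    for trial in range(max_trials):
        context = "\n".join(memory)
        attempt = llm(f"Task: {task}\nPast reflections:\n{context}\nYour attempt:")
        success, feedback = run_task(attempt)  # scalar or free-form feedback signal
        if success:
            return attempt
        reflection = llm(
            f"Task: {task}\nAttempt: {attempt}\nFeedback: {feedback}\n"
            "Reflect on what went wrong and how to improve next time:"
        )
        memory.append(reflection)
    return None  # no successful attempt within the trial budget
```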
Reflexion: language agents with verbal reinforcement learning
[ "Noah Shinn", "Federico Cassano", "Ashwin Gopinath", "Karthik R Narasimhan", "Shunyu Yao" ]
Conference
poster
2303.11366
[ "https://github.com/noahshinn024/reflexion" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vA0vj1mY77
@inproceedings{ tang2023mvdiffusion, title={{MVD}iffusion: Enabling Holistic Multi-view Image Generation with Correspondence-Aware Diffusion}, author={Shitao Tang and Fuyang Zhang and Jiacheng Chen and Peng Wang and Yasutaka Furukawa}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=vA0vj1mY77} }
This paper introduces MVDiffusion, a simple yet effective method for generating consistent multi-view images from text prompts given pixel-to-pixel correspondences (e.g., perspective crops from a panorama or multi-view images given depth maps and poses). Unlike prior methods that rely on iterative image warping and inpainting, MVDiffusion simultaneously generates all images with a global awareness, effectively addressing the prevalent error accumulation issue. At its core, MVDiffusion processes perspective images in parallel with a pre-trained text-to-image diffusion model, while integrating novel correspondence-aware attention layers to facilitate cross-view interactions. For panorama generation, while only trained with 10k panoramas, MVDiffusion is able to generate high-resolution photorealistic images for arbitrary texts or extrapolate one perspective image to a 360-degree view. For multi-view depth-to-image generation, MVDiffusion demonstrates state-of-the-art performance for texturing a scene mesh. The project page is at https://mvdiffusion.github.io/.
MVDiffusion: Enabling Holistic Multi-view Image Generation with Correspondence-Aware Diffusion
[ "Shitao Tang", "Fuyang Zhang", "Jiacheng Chen", "Peng Wang", "Yasutaka Furukawa" ]
Conference
spotlight
2307.01097
[ "https://github.com/Tangshitao/MVDiffusion" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=v9yC7sSXf3
@inproceedings{ s{\'u}ken{\'\i}k2023deep, title={Deep Neural Collapse Is Provably Optimal for the Deep Unconstrained Features Model}, author={Peter S{\'u}ken{\'\i}k and Marco Mondelli and Christoph H Lampert}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=v9yC7sSXf3} }
Neural collapse (NC) refers to the surprising structure of the last layer of deep neural networks in the terminal phase of gradient descent training. Recently, an increasing amount of experimental evidence has pointed to the propagation of NC to earlier layers of neural networks. However, while the NC in the last layer is well studied theoretically, much less is known about its multi-layered counterpart - deep neural collapse (DNC). In particular, existing work focuses either on linear layers or only on the last two layers at the price of an extra assumption. Our work fills this gap by generalizing the established analytical framework for NC - the unconstrained features model - to multiple non-linear layers. Our key technical contribution is to show that, in a deep unconstrained features model, the unique global optimum for binary classification exhibits all the properties typical of DNC. This explains the existing experimental evidence of DNC. We also empirically show that (i) by optimizing deep unconstrained features models via gradient descent, the resulting solution agrees well with our theory, and (ii) trained networks recover the unconstrained features suitable for the occurrence of DNC, thus supporting the validity of this modeling principle.
Deep Neural Collapse Is Provably Optimal for the Deep Unconstrained Features Model
[ "Peter Súkeník", "Marco Mondelli", "Christoph H Lampert" ]
Conference
spotlight
2305.13165
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=v8u3EFAyW9
@inproceedings{ cho2023pitfall, title={Pitfall of Optimism: Distributional Reinforcement Learning by Randomizing Risk Criterion}, author={Taehyun Cho and Seungyub Han and Heesoo Lee and Kyungjae Lee and Jungwoo Lee}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=v8u3EFAyW9} }
Distributional reinforcement learning algorithms have attempted to utilize estimated uncertainty for exploration, such as optimism in the face of uncertainty. However, using the estimated variance for optimistic exploration may cause biased data collection and hinder convergence or performance. In this paper, we present a novel distributional reinforcement learning algorithm that selects actions by randomizing the risk criterion without losing the risk-neutral objective. We provide a perturbed distributional Bellman optimality operator by distorting the risk measure. Also, we prove the convergence and optimality of the proposed method with the weaker contraction property. Our theoretical results support that the proposed method does not fall into biased exploration and is guaranteed to converge to an optimal return. Finally, we empirically show that our method outperforms other existing distribution-based algorithms in various environments, including 55 Atari games.
Pitfall of Optimism: Distributional Reinforcement Learning by Randomizing Risk Criterion
[ "Taehyun Cho", "Seungyub Han", "Heesoo Lee", "Kyungjae Lee", "Jungwoo Lee" ]
Conference
poster
2310.16546
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=v7WWesSiOu
@inproceedings{ shmakov2023endtoend, title={End-To-End Latent Variational Diffusion Models for Inverse Problems in High Energy Physics}, author={Alexander Shmakov and Kevin Greif and Michael James Fenton and Aishik Ghosh and Pierre Baldi and Daniel Whiteson}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=v7WWesSiOu} }
High-energy collisions at the Large Hadron Collider (LHC) provide valuable insights into open questions in particle physics. However, detector effects must be corrected before measurements can be compared to certain theoretical predictions or measurements from other detectors. Methods to solve this inverse problem of mapping detector observations to theoretical quantities of the underlying collision are essential parts of many physics analyses at the LHC. We investigate and compare various generative deep learning methods to approximate this inverse mapping. We introduce a novel unified architecture, termed latent variational diffusion models, which combines the latent learning of cutting-edge generative art approaches with an end-to-end variational framework. We demonstrate the effectiveness of this approach for reconstructing global distributions of theoretical kinematic quantities, as well as for ensuring the adherence of the learned posterior distributions to known physics constraints. Our unified approach achieves a distribution-free distance to the truth that is more than 20 times smaller than that of the non-latent state-of-the-art baseline and 3 times smaller than that of traditional latent diffusion models.
End-To-End Latent Variational Diffusion Models for Inverse Problems in High Energy Physics
[ "Alexander Shmakov", "Kevin Greif", "Michael James Fenton", "Aishik Ghosh", "Pierre Baldi", "Daniel Whiteson" ]
Conference
poster
2305.10399
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=v6jIxRRDyD
@inproceedings{ kone2023adaptive, title={Adaptive Algorithms for Relaxed Pareto Set Identification}, author={Cyrille Kone and Emilie Kaufmann and Laura Richert}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=v6jIxRRDyD} }
In this paper we revisit the fixed-confidence identification of the Pareto optimal set in a multi-objective multi-armed bandit model. As the sample complexity to identify the exact Pareto set can be very large, a relaxation allowing to output some additional near-optimal arms has been studied. In this work we also tackle alternative relaxations that allow instead to identify a relevant \emph{subset} of the Pareto set. Notably, we propose a single sampling strategy, called Adaptive Pareto Exploration, that can be used in conjunction with different stopping rules to take into account different relaxations of the Pareto Set Identification problem. We analyze the sample complexity of these different combinations, quantifying in particular the reduction in sample complexity that occurs when one seeks to identify at most $k$ Pareto optimal arms. We showcase the good practical performance of Adaptive Pareto Exploration on a real-world scenario, in which we adaptively explore several vaccination strategies against Covid-19 in order to find the optimal ones when multiple immunogenicity criteria are taken into account.
Adaptive Algorithms for Relaxed Pareto Set Identification
[ "Cyrille Kone", "Emilie Kaufmann", "Laura Richert" ]
Conference
poster
2307.00424
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=v6YzxwJlQn
@inproceedings{ marwah2023deep, title={Deep Equilibrium Based Neural Operators for Steady-State {PDE}s}, author={Tanya Marwah and Ashwini Pokle and J Zico Kolter and Zachary Chase Lipton and Jianfeng Lu and Andrej Risteski}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=v6YzxwJlQn} }
Data-driven machine learning approaches are being increasingly used to solve partial differential equations (PDEs). They have shown particularly striking successes when training an operator, which takes as input a PDE in some family, and outputs its solution. However, the architectural design space, especially given structural knowledge of the PDE family of interest, is still poorly understood. We seek to remedy this gap by studying the benefits of weight-tied neural network architectures for steady-state PDEs. To achieve this, we first demonstrate that the solution of most steady-state PDEs can be expressed as a fixed point of a non-linear operator. Motivated by this observation, we propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE as the infinite-depth fixed point of an implicit operator layer using a black-box root solver and differentiates analytically through this fixed point resulting in $\mathcal{O}(1)$ training memory. Our experiments indicate that FNO-DEQ-based architectures outperform FNO-based baselines with $4\times$ the number of parameters in predicting the solution to steady-state PDEs such as Darcy Flow and steady-state incompressible Navier-Stokes. Finally, we show FNO-DEQ is more robust when trained with datasets with more noisy observations than the FNO-based baselines, demonstrating the benefits of using appropriate inductive biases in architectural design for different neural network based PDE solvers. Further, we show a universal approximation result that demonstrates that FNO-DEQ can approximate the solution to any steady-state PDE that can be written as a fixed point equation.
Deep Equilibrium Based Neural Operators for Steady-State PDEs
[ "Tanya Marwah", "Ashwini Pokle", "J Zico Kolter", "Zachary Chase Lipton", "Jianfeng Lu", "Andrej Risteski" ]
Conference
poster
2312.00234
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=v6VpqGcGAR
@inproceedings{ grinsztajn2023winner, title={Winner Takes It All: Training Performant {RL} Populations for Combinatorial Optimization}, author={Nathan Grinsztajn and Daniel Furelos-Blanco and Shikha Surana and Cl{\'e}ment Bonnet and Thomas D Barrett}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=v6VpqGcGAR} }
Applying reinforcement learning (RL) to combinatorial optimization problems is attractive as it removes the need for expert knowledge or pre-solved instances. However, it is unrealistic to expect an agent to solve these (often NP-)hard problems in a single shot at inference due to their inherent complexity. Thus, leading approaches often implement additional search strategies, from stochastic sampling and beam-search to explicit fine-tuning. In this paper, we argue for the benefits of learning a population of complementary policies, which can be simultaneously rolled out at inference. To this end, we introduce Poppy, a simple training procedure for populations. Instead of relying on a predefined or hand-crafted notion of diversity, Poppy induces an unsupervised specialization targeted solely at maximizing the performance of the population. We show that Poppy produces a set of complementary policies, and obtains state-of-the-art RL results on three popular NP-hard problems: traveling salesman, capacitated vehicle routing, and job-shop scheduling.
Winner Takes It All: Training Performant RL Populations for Combinatorial Optimization
[ "Nathan Grinsztajn", "Daniel Furelos-Blanco", "Shikha Surana", "Clément Bonnet", "Thomas D Barrett" ]
Conference
poster
2210.03475
[ "" ]
https://huggingface.co/papers/2210.03475
3
1
0
5
1
[]
[]
[]
null
https://openreview.net/forum?id=v5Aaxk4sSy
@inproceedings{ kuang2023improving, title={Improving Adversarial Robustness via Information Bottleneck Distillation}, author={Huafeng Kuang and Hong Liu and YONGJIAN WU and Shin'ichi Satoh and Rongrong Ji}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=v5Aaxk4sSy} }
Previous studies have shown that optimizing the information bottleneck can significantly improve the robustness of deep neural networks. Our study closely examines the information bottleneck principle and proposes an Information Bottleneck Distillation approach. This specially designed, robust distillation technique utilizes prior knowledge obtained from a robust pre-trained model to boost information bottlenecks. Specifically, we propose two distillation strategies that align with the two optimization processes of the information bottleneck. Firstly, we use a robust soft-label distillation method to increase the mutual information between latent features and output prediction. Secondly, we introduce an adaptive feature distillation method that automatically transfers relevant knowledge from the teacher model to the student model, thereby reducing the mutual information between the input and latent features. We conduct extensive experiments to evaluate our approach's robustness against state-of-the-art adversarial attackers such as PGD-attack and AutoAttack. Our experimental results demonstrate the effectiveness of our approach in significantly improving adversarial robustness. Our code is available at https://github.com/SkyKuang/IBD.
Improving Adversarial Robustness via Information Bottleneck Distillation
[ "Huafeng Kuang", "Hong Liu", "YONGJIAN WU", "Shin'ichi Satoh", "Rongrong Ji" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=v54eUIayFh
@inproceedings{ qin2023unicontrol, title={UniControl: A Unified Diffusion Model for Controllable Visual Generation In the Wild}, author={Can Qin and Shu Zhang and Ning Yu and Yihao Feng and Xinyi Yang and Yingbo Zhou and Huan Wang and Juan Carlos Niebles and Caiming Xiong and Silvio Savarese and Stefano Ermon and Yun Fu and Ran Xu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=v54eUIayFh} }
Achieving machine autonomy and human control often represent divergent objectives in the design of interactive AI systems. Visual generative foundation models such as Stable Diffusion show promise in navigating these goals, especially when prompted with arbitrary languages. However, they often fall short in generating images with spatial, structural, or geometric controls. The integration of such controls, which can accommodate various visual conditions in a single unified model, remains an unaddressed challenge. In response, we introduce UniControl, a new generative foundation model that consolidates a wide array of controllable condition-to-image (C2I) tasks within a singular framework, while still allowing for arbitrary language prompts. UniControl enables pixel-level-precise image generation, where visual conditions primarily influence the generated structures and language prompts guide the style and context. To equip UniControl with the capacity to handle diverse visual conditions, we augment pretrained text-to-image diffusion models and introduce a task-aware HyperNet to modulate the diffusion models, enabling the adaptation to different C2I tasks simultaneously. Trained on nine unique C2I tasks, UniControl demonstrates impressive zero-shot generation abilities with unseen visual conditions. Experimental results show that UniControl often surpasses the performance of single-task-controlled methods of comparable model sizes. This control versatility positions UniControl as a significant advancement in the realm of controllable visual generation.
UniControl: A Unified Diffusion Model for Controllable Visual Generation In the Wild
[ "Can Qin", "Shu Zhang", "Ning Yu", "Yihao Feng", "Xinyi Yang", "Yingbo Zhou", "Huan Wang", "Juan Carlos Niebles", "Caiming Xiong", "Silvio Savarese", "Stefano Ermon", "Yun Fu", "Ran Xu" ]
Conference
poster
2305.11147
[ "https://github.com/salesforce/unicontrol" ]
https://huggingface.co/papers/2305.11147
5
3
1
13
1
[ "ModelsLab/unicontrol-v1.1" ]
[]
[ "Robert001/UniControl-Demo" ]
null
https://openreview.net/forum?id=v2oGdhbKxi
@inproceedings{ jia2023monouni, title={Mono{UNI}: A Unified Vehicle and Infrastructure-side Monocular 3D Object Detection Network with Sufficient Depth Clues}, author={Jinrang Jia and Zhenjia Li and Yifeng Shi}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=v2oGdhbKxi} }
Vehicle-side and infrastructure-side monocular 3D object detection are two important topics in autonomous driving. Due to diverse sensor installations and focal lengths, researchers are faced with the challenge of constructing algorithms for the two topics based on different prior knowledge. In this paper, by taking into account the diversity of pitch angles and focal lengths, we propose a unified optimization target named normalized depth, which realizes the unification of 3D detection problems for the two sides. Furthermore, to enhance the accuracy of monocular 3D detection, the 3D normalized cube depth of the obstacle is developed to promote the learning of depth information. We posit that the richness of depth clues is a pivotal factor impacting the detection performance on both the vehicle and infrastructure sides. A richer set of depth clues facilitates the model to learn better spatial knowledge, and the 3D normalized cube depth offers sufficient depth clues. Extensive experiments demonstrate the effectiveness of our approach. Without introducing any extra information, our method, named MonoUNI, achieves state-of-the-art performance on five widely used monocular 3D detection benchmarks, including Rope3D and DAIR-V2X-I for the infrastructure side, KITTI and Waymo for the vehicle side, and nuScenes for the cross-dataset evaluation.
MonoUNI: A Unified Vehicle and Infrastructure-side Monocular 3D Object Detection Network with Sufficient Depth Clues
[ "Jinrang Jia", "Zhenjia Li", "Yifeng Shi" ]
Conference
poster
[ "https://github.com/Traffic-X/MonoUNI" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=v1VVKaMYbk
@inproceedings{ yu2023hrboxv, title={H2{RB}ox-v2: Incorporating Symmetry for Boosting Horizontal Box Supervised Oriented Object Detection}, author={Yi Yu and Xue Yang and Qingyun Li and Yue Zhou and Feipeng Da and Junchi Yan}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=v1VVKaMYbk} }
With the rapidly increasing demand for oriented object detection, e.g. in autonomous driving and remote sensing, the recently proposed paradigm involving weakly-supervised detector H2RBox for learning rotated box (RBox) from the more readily-available horizontal box (HBox) has shown promise. This paper presents H2RBox-v2, to further bridge the gap between HBox-supervised and RBox-supervised oriented object detection. Specifically, we propose to leverage the reflection symmetry via flip and rotate consistencies, using a weakly-supervised network branch similar to H2RBox, together with a novel self-supervised branch that learns orientations from the symmetry inherent in visual objects. The detector is further stabilized and enhanced by practical techniques to cope with peripheral issues e.g. angular periodicity. To our best knowledge, H2RBox-v2 is the first symmetry-aware self-supervised paradigm for oriented object detection. In particular, our method shows less susceptibility to low-quality annotation and insufficient training data compared to H2RBox. Specifically, H2RBox-v2 achieves very close performance to a rotation annotation trained counterpart -- Rotated FCOS: 1) DOTA-v1.0/1.5/2.0: 72.31%/64.76%/50.33% vs. 72.44%/64.53%/51.77%; 2) HRSC: 89.66% vs. 88.99%; 3) FAIR1M: 42.27% vs. 41.25%.
H2RBox-v2: Incorporating Symmetry for Boosting Horizontal Box Supervised Oriented Object Detection
[ "Yi Yu", "Xue Yang", "Qingyun Li", "Yue Zhou", "Feipeng Da", "Junchi Yan" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=v0lkbp66Uw
@inproceedings{ liu2023egocentric, title={Egocentric Planning for Scalable Embodied Task Achievement}, author={Xiaotian Liu and Hector Palacios and Christian Muise}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=v0lkbp66Uw} }
Embodied agents face significant challenges when tasked with performing actions in diverse environments, particularly in generalizing across object types and executing suitable actions to accomplish tasks. Furthermore, agents should exhibit robustness, minimizing the execution of illegal actions. In this work, we present Egocentric Planning, an innovative approach that combines symbolic planning and Object-oriented POMDPs to solve tasks in complex environments, harnessing existing models for visual perception and natural language processing. We evaluated our approach in ALFRED, a simulated environment designed for domestic tasks, and demonstrated its high scalability, achieving an impressive 36.07\% unseen success rate in the ALFRED benchmark and winning the ALFRED challenge at the CVPR Embodied AI workshop. Our method requires reliable perception and the specification or learning of a symbolic description of the preconditions and effects of the agent's actions, as well as what object types reveal information about others. It can naturally scale to solve new tasks beyond ALFRED, as long as they can be solved using the available skills. This work offers a solid baseline for studying end-to-end and hybrid methods that aim to generalize to new tasks, including recent approaches relying on LLMs, which often struggle to scale to long sequences of actions or to produce robust plans for novel tasks.
Egocentric Planning for Scalable Embodied Task Achievement
[ "Xiaotian Liu", "Hector Palacios", "Christian Muise" ]
Conference
poster
2306.01295
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=v0GzRLvVp3
@inproceedings{ tang2023temporal, title={Temporal Continual Learning with Prior Compensation for Human Motion Prediction}, author={Jianwei Tang and Jiangxin Sun and Xiaotong Lin and lifang zhang and Wei-Shi Zheng and Jian-Fang Hu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=v0GzRLvVp3} }
Human Motion Prediction (HMP) aims to predict future poses at different moments according to past motion sequences. Previous approaches have treated the prediction of various moments equally, resulting in two main limitations: the learning of short-term predictions is hindered by the focus on long-term predictions, and the incorporation of prior information from past predictions into subsequent predictions is limited. In this paper, we introduce a novel multi-stage training framework called Temporal Continual Learning (TCL) to address the above challenges. To better preserve prior information, we introduce the Prior Compensation Factor (PCF). We incorporate it into the model training to compensate for the lost prior information. Furthermore, we derive a more reasonable optimization objective through theoretical derivation. It is important to note that our TCL framework can be easily integrated with different HMP backbone models and adapted to various datasets and applications. Extensive experiments on four HMP benchmark datasets demonstrate the effectiveness and flexibility of TCL. The code is available at https://github.com/hyqlat/TCL.
Temporal Continual Learning with Prior Compensation for Human Motion Prediction
[ "Jianwei Tang", "Jiangxin Sun", "Xiaotong Lin", "lifang zhang", "Wei-Shi Zheng", "Jian-Fang Hu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uzOBDerK1j
@inproceedings{ sankararaman2023online, title={Online robust non-stationary estimation}, author={Abishek Sankararaman and Murali Balakrishnan}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uzOBDerK1j} }
The real-time estimation of time-varying parameters from high-dimensional, heavy-tailed and corrupted data-streams is a common sub-routine in systems ranging from those for network monitoring and anomaly detection to those for traffic scheduling in data-centers. For estimation tasks that can be cast as minimizing a strongly convex loss function, we prove that an appropriately tuned version of the {\ttfamily clipped Stochastic Gradient Descent} (SGD) is simultaneously {\em(i)} adaptive to drift, {\em (ii)} robust to heavy-tailed inliers and arbitrary corruptions, {\em(iii)} requires no distributional knowledge and {\em (iv)} can be implemented in an online streaming fashion. All prior estimation algorithms have only been proven to possess a subset of these practical desiderata. An observation we make is that neither the $\mathcal{O}\left(\frac{1}{t}\right)$ learning rate for {\ttfamily clipped SGD} known to be optimal for strongly convex loss functions of a \emph{stationary} data-stream, nor the $\mathcal{O}(1)$ learning rate known to be optimal for being adaptive to drift in a \emph{noiseless} environment can be used. Instead, a learning rate of $T^{-\alpha}$ for $ \alpha < 1$ where $T$ is the stream-length is needed to balance adaptivity to potential drift and to combat noise. We develop a new inductive argument and combine it with a martingale concentration result to derive high-probability bounds under \emph{any learning rate} on data-streams exhibiting \emph{arbitrary distribution shift} - a proof strategy that may be of independent interest. Further, using the classical doubling-trick, we relax the knowledge of the stream length $T$. Ours is the first online estimation algorithm that is provably robust to heavy-tails, corruptions and distribution shift simultaneously. We complement our theoretical results empirically on synthetic and real data.
Online robust non-stationary estimation
[ "Abishek Sankararaman", "Murali Balakrishnan" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
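The online robust estimation entry above argues that clipped SGD with a $T^{-\alpha}$ ($\alpha < 1$) learning rate balances adaptivity to drift against heavy-tailed noise. Below is a minimal sketch of that schedule for streaming mean estimation; the squared loss, clipping level, and value of $\alpha$ are illustrative choices, not the paper's tuned algorithm.

```python
import numpy as np

def clipped_sgd_stream(stream, clip=5.0, alpha=0.5, T=None):
    """Track a drifting parameter from a (possibly heavy-tailed) stream with clipped SGD.

    Uses the strongly convex squared loss 0.5 * (theta - x)^2, whose gradient is
    (theta - x), clipped to [-clip, clip], with a per-horizon step size T**(-alpha).
    """
    T = T if T is not None else len(stream)
    eta = T ** (-alpha)              # learning rate balancing drift vs. noise
    theta = 0.0
    estimates = []
    for x in stream:
        g = theta - x                # gradient of the squared loss
        g = np.clip(g, -clip, clip)  # robustness to heavy tails / corruptions
        theta -= eta * g
        estimates.append(theta)
    return np.array(estimates)

# toy drifting, heavy-tailed stream: mean moves from 0 to 3, Student-t noise
rng = np.random.default_rng(0)
T = 2000
means = np.linspace(0.0, 3.0, T)
stream = means + rng.standard_t(df=2.5, size=T)
est = clipped_sgd_stream(stream, T=T)
print("final estimate:", est[-1], "true final mean:", means[-1])
```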
null
https://openreview.net/forum?id=uvdJgFFzby
@inproceedings{ anagnostidis2023dynamic, title={Dynamic Context Pruning for Efficient and Interpretable Autoregressive Transformers}, author={Sotiris Anagnostidis and Dario Pavllo and Luca Biggio and Lorenzo Noci and Aurelien Lucchi and Thomas Hofmann}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uvdJgFFzby} }
Autoregressive Transformers adopted in Large Language Models (LLMs) are hard to scale to long sequences. Despite several works trying to reduce their computational cost, most of LLMs still adopt attention layers between all pairs of tokens in the sequence, thus incurring a quadratic cost. In this study, we present a novel approach that dynamically prunes contextual information while preserving the model's expressiveness, resulting in reduced memory and computational requirements during inference. Our method employs a learnable mechanism that determines which uninformative tokens can be dropped from the context at any point across the generation process. By doing so, our approach not only addresses performance concerns but also enhances interpretability, providing valuable insight into the model's decision-making process. Our technique can be applied to existing pre-trained models through a straightforward fine-tuning process, and the pruning strength can be specified by a sparsity parameter. Notably, our empirical findings demonstrate that we can effectively prune up to 80\% of the context without significant performance degradation on downstream tasks, offering a valuable tool for mitigating inference costs. Our reference implementation achieves up to $2\times$ increase in inference throughput and even greater memory savings.
Dynamic Context Pruning for Efficient and Interpretable Autoregressive Transformers
[ "Sotiris Anagnostidis", "Dario Pavllo", "Luca Biggio", "Lorenzo Noci", "Aurelien Lucchi", "Thomas Hofmann" ]
Conference
spotlight
2305.15805
[ "" ]
https://huggingface.co/papers/2305.15805
0
1
0
6
1
[]
[]
[]
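The dynamic context pruning entry above describes learning which context tokens can be dropped during generation. The sketch below only illustrates the general pattern of scoring tokens and pruning low-scoring ones before attention; the scoring function, threshold, and shapes are placeholders rather than the paper's learned mechanism.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attend_with_pruning(h, Wq, Wk, Wv, w_score, threshold=0.5):
    """Single-head attention for the last token, after dropping context tokens
    whose keep-score (here a random-weight sigmoid) falls below `threshold`."""
    n, d = h.shape
    keep_prob = 1.0 / (1.0 + np.exp(-(h @ w_score)))  # per-token keep score in (0, 1)
    keep = keep_prob >= threshold
    keep[-1] = True                                    # never drop the query token itself
    ctx = h[keep]                                      # pruned context
    q = h[-1] @ Wq
    K, V = ctx @ Wk, ctx @ Wv
    attn = softmax(q @ K.T / np.sqrt(K.shape[-1]))
    return attn @ V, keep.mean()

rng = np.random.default_rng(0)
n, d = 64, 32
h = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) * d**-0.5 for _ in range(3))
out, kept = attend_with_pruning(h, Wq, Wk, Wv, w_score=rng.normal(size=d))
print(f"kept {kept:.0%} of the context, output shape {out.shape}")
```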
null
https://openreview.net/forum?id=uv3ge0goPa
@inproceedings{ zhou2023training, title={Training Your Image Restoration Network Better with Random Weight Network as Optimization Function}, author={Man Zhou and Naishan Zheng and Yuan Xu and Chun-Le Guo and Chongyi Li}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uv3ge0goPa} }
The blooming progress made in deep learning-based image restoration has been largely attributed to the availability of high-quality, large-scale datasets and advanced network structures. However, optimization functions such as L_1 and L_2 are still the de facto choices. In this study, we propose to investigate new optimization functions to improve image restoration performance. Our key insight is that ``a random weight network can act as a constraint for training better image restoration networks''. However, not all random weight networks are suitable as constraints. We draw inspiration from Functional theory and show that alternative random weight networks should be represented in the form of a strict mathematical manifold. We explore the potential of our random weight network prototypes that satisfy this requirement: Taylor's unfolding network, invertible neural network, central difference convolution, and zero-order filtering. We investigate these prototypes from four aspects: 1) random weight strategies, 2) network architectures, 3) network depths, and 4) combinations of random weight networks. Furthermore, we devise the random weight in two variants: the weights are randomly initialized only once during the entire training procedure, and the weights are randomly initialized in each training epoch. Our approach can be directly integrated into existing networks without incurring additional training and testing computational costs. We perform extensive experiments across multiple image restoration tasks, including image denoising, low-light image enhancement, and guided image super-resolution to demonstrate the consistent performance gains achieved by our method. Upon acceptance of this paper, we will release the code.
Training Your Image Restoration Network Better with Random Weight Network as Optimization Function
[ "Man Zhou", "Naishan Zheng", "Yuan Xu", "Chun-Le Guo", "Chongyi Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=utreNaM1VY
@inproceedings{ tifrea2023can, title={Can semi-supervised learning use all the data effectively? A lower bound perspective}, author={Alexandru Tifrea and Gizem Y{\"u}ce and Amartya Sanyal and Fanny Yang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=utreNaM1VY} }
Prior theoretical and empirical works have established that semi-supervised learning algorithms can leverage the unlabeled data to improve over the labeled sample complexity of supervised learning (SL) algorithms. However, existing theoretical work focuses on regimes where the unlabeled data is sufficient to learn a good decision boundary using unsupervised learning (UL) alone. This begs the question: Can SSL algorithms simultaneously improve upon both UL and SL? To this end, we derive a tight lower bound for 2-Gaussian mixture models that explicitly depends on the labeled and the unlabeled dataset size as well as the signal-to-noise ratio of the mixture distribution. Surprisingly, our result implies that no SSL algorithm improves upon the minimax-optimal statistical error rates of SL or UL algorithms for these distributions. Nevertheless, in our real-world experiments, SSL algorithms can often outperform UL and SL algorithms. In summary, our work suggests that while it is possible to prove the performance gains of SSL algorithms, this would require careful tracking of constants in the theoretical analysis.
Can semi-supervised learning use all the data effectively? A lower bound perspective
[ "Alexandru Tifrea", "Gizem Yüce", "Amartya Sanyal", "Fanny Yang" ]
Conference
spotlight
2311.18557
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=utQms7PPx5
@inproceedings{ tang2023all, title={All Points Matter: Entropy-Regularized Distribution Alignment for Weakly-supervised 3D Segmentation}, author={Liyao Tang and Zhe Chen and Shanshan Zhao and Chaoyue Wang and Dacheng Tao}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=utQms7PPx5} }
Pseudo-labels are widely employed in weakly supervised 3D segmentation tasks where only sparse ground-truth labels are available for learning. Existing methods often rely on empirical label selection strategies, such as confidence thresholding, to generate beneficial pseudo-labels for model training. This approach may, however, hinder the comprehensive exploitation of unlabeled data points. We hypothesize that this selective usage arises from the noise in pseudo-labels generated on unlabeled data. The noise in pseudo-labels may result in significant discrepancies between pseudo-labels and model predictions, thus confusing and affecting the model training greatly. To address this issue, we propose a novel learning strategy to regularize the generated pseudo-labels and effectively narrow the gaps between pseudo-labels and model predictions. More specifically, our method introduces an Entropy Regularization loss and a Distribution Alignment loss for weakly supervised learning in 3D segmentation tasks, resulting in an ERDA learning strategy. Interestingly, by using KL distance to formulate the distribution alignment loss, it reduces to a deceptively simple cross-entropy-based loss which optimizes both the pseudo-label generation network and the 3D segmentation network simultaneously. Despite the simplicity, our method promisingly improves the performance. We validate the effectiveness through extensive experiments on various baselines and large-scale datasets. Results show that ERDA effectively enables the effective usage of all unlabeled data points for learning and achieves state-of-the-art performance under different settings. Remarkably, our method can outperform fully-supervised baselines using only 1\% of true annotations. Code and model will be made publicly available at https://github.com/LiyaoTang/ERDA.
All Points Matter: Entropy-Regularized Distribution Alignment for Weakly-supervised 3D Segmentation
[ "Liyao Tang", "Zhe Chen", "Shanshan Zhao", "Chaoyue Wang", "Dacheng Tao" ]
Conference
poster
2305.15832
[ "https://github.com/LiyaoTang/ERDA" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uqkUguNu40
@inproceedings{ ma2023fused, title={Fused Gromov-Wasserstein Graph Mixup for Graph-level Classifications}, author={Xinyu Ma and Xu Chu and Yasha Wang and Yang Lin and Junfeng Zhao and Liantao Ma and Wenwu Zhu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uqkUguNu40} }
Graph data augmentation has shown superiority in enhancing generalizability and robustness of GNNs in graph-level classifications. However, existing methods primarily focus on the augmentation in the graph signal space and the graph structure space independently, neglecting the joint interaction between them. In this paper, we address this limitation by formulating the problem as an optimal transport problem that aims to find an optimal inter-graph node matching strategy considering the interactions between graph structures and signals. To solve this problem, we propose a novel graph mixup algorithm called FGWMixup, which seeks a "midpoint" of source graphs in the Fused Gromov-Wasserstein (FGW) metric space. To enhance the scalability of our method, we introduce a relaxed FGW solver that accelerates FGWMixup by improving the convergence rate from $\mathcal{O}(t^{-1})$ to $\mathcal{O}(t^{-2})$. Extensive experiments conducted on five datasets using both classic (MPNNs) and advanced (Graphormers) GNN backbones demonstrate that FGWMixup effectively improves the generalizability and robustness of GNNs. Codes are available at https://github.com/ArthurLeoM/FGWMixup.
Fused Gromov-Wasserstein Graph Mixup for Graph-level Classifications
[ "Xinyu Ma", "Xu Chu", "Yasha Wang", "Yang Lin", "Junfeng Zhao", "Liantao Ma", "Wenwu Zhu" ]
Conference
poster
2306.15963
[ "https://github.com/arthurleom/fgwmixup" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uotGmrcooz
@inproceedings{ geuchen2023optimal, title={Optimal approximation using complex-valued neural networks}, author={Paul Geuchen and Felix Voigtlaender}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uotGmrcooz} }
Complex-valued neural networks (CVNNs) have recently shown promising empirical success, for instance for increasing the stability of recurrent neural networks and for improving the performance in tasks with complex-valued inputs, such as MRI fingerprinting. While the overwhelming success of Deep Learning in the real-valued case is supported by a growing mathematical foundation, such a foundation is still largely lacking in the complex-valued case. We thus analyze the expressivity of CVNNs by studying their approximation properties. Our results yield the first quantitative approximation bounds for CVNNs that apply to a wide class of activation functions including the popular modReLU and complex cardioid activation functions. Precisely, our results apply to any activation function that is smooth but not polyharmonic on some non-empty open set; this is the natural generalization of the class of smooth and non-polynomial activation functions to the complex setting. Our main result shows that the approximation error scales as $m^{-k/(2n)}$ for $m \to \infty$ where $m$ is the number of neurons, $k$ the smoothness of the target function and $n$ is the (complex) input dimension. Under a natural continuity assumption, we show that this rate is optimal; we further discuss the optimality when dropping this assumption. Moreover, we prove that the problem of approximating $C^k$-functions using continuous approximation methods unavoidably suffers from the curse of dimensionality.
Optimal approximation using complex-valued neural networks
[ "Paul Geuchen", "Felix Voigtlaender" ]
Conference
poster
2303.16813
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
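The complex-valued networks entry above covers activations such as modReLU and the complex cardioid. For concreteness, here are their commonly used definitions implemented in NumPy; this is illustrative code following the standard formulas, not material from the paper.

```python
import numpy as np

def modrelu(z, b=-0.5, eps=1e-8):
    """modReLU: relu(|z| + b) * z / |z|; zero where |z| + b <= 0."""
    r = np.abs(z)
    scale = np.maximum(r + b, 0.0) / (r + eps)
    return scale * z

def cardioid(z):
    """Complex cardioid: 0.5 * (1 + cos(arg z)) * z."""
    return 0.5 * (1.0 + np.cos(np.angle(z))) * z

z = np.array([1 + 1j, -2 + 0.5j, 0.1 - 0.1j])
print(modrelu(z))
print(cardioid(z))
```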
null
https://openreview.net/forum?id=uoiwugtpCH
@inproceedings{ mallik2023priorband, title={PriorBand: Practical Hyperparameter Optimization in the Age of Deep Learning}, author={Neeratyoy Mallik and Eddie Bergman and Carl Hvarfner and Danny Stoll and Maciej Janowski and Marius Lindauer and Luigi Nardi and Frank Hutter}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uoiwugtpCH} }
Hyperparameters of Deep Learning (DL) pipelines are crucial for their downstream performance. While a large number of methods for Hyperparameter Optimization (HPO) have been developed, their incurred costs are often untenable for modern DL. Consequently, manual experimentation is still the most prevalent approach to optimize hyperparameters, relying on the researcher's intuition, domain knowledge, and cheap preliminary explorations. To resolve this misalignment between HPO algorithms and DL researchers, we propose PriorBand, an HPO algorithm tailored to DL, able to utilize both expert beliefs and cheap proxy tasks. Empirically, we demonstrate PriorBand's efficiency across a range of DL benchmarks and show its gains under informative expert input and robustness against poor expert beliefs.
PriorBand: Practical Hyperparameter Optimization in the Age of Deep Learning
[ "Neeratyoy Mallik", "Eddie Bergman", "Carl Hvarfner", "Danny Stoll", "Maciej Janowski", "Marius Lindauer", "Luigi Nardi", "Frank Hutter" ]
Conference
poster
2306.12370
[ "https://github.com/automl/mf-prior-exp" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uoRiO855Sj
@inproceedings{ alvarez2023minimax, title={Minimax Forward and Backward Learning of Evolving Tasks with Performance Guarantees}, author={Veronica Alvarez and Santiago Mazuelas and Jose A. Lozano}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uoRiO855Sj} }
For a sequence of classification tasks that arrive over time, it is common that tasks are evolving in the sense that consecutive tasks often have a higher similarity. The incremental learning of a growing sequence of tasks holds promise to enable accurate classification even with few samples per task by leveraging information from all the tasks in the sequence (forward and backward learning). However, existing techniques developed for continual learning and concept drift adaptation are either designed for tasks with time-independent similarities or only aim to learn the last task in the sequence. This paper presents incremental minimax risk classifiers (IMRCs) that effectively exploit forward and backward learning and account for evolving tasks. In addition, we analytically characterize the performance improvement provided by forward and backward learning in terms of the tasks’ expected quadratic change and the number of tasks. The experimental evaluation shows that IMRCs can result in a significant performance improvement, especially for reduced sample sizes.
Minimax Forward and Backward Learning of Evolving Tasks with Performance Guarantees
[ "Veronica Alvarez", "Santiago Mazuelas", "Jose A. Lozano" ]
Conference
poster
2310.15974
[ "https://github.com/machinelearningbcam/imrcs-for-incremental-learning-neurips-2023" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uoG1fLIK2s
@inproceedings{ zhu2023sampleefficient, title={Sample-efficient Multi-objective Molecular Optimization with {GF}lowNets}, author={Yiheng Zhu and Jialu Wu and Chaowen Hu and Jiahuan Yan and Chang-Yu Hsieh and Tingjun Hou and Jian Wu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uoG1fLIK2s} }
Many crucial scientific problems involve designing novel molecules with desired properties, which can be formulated as a black-box optimization problem over the *discrete* chemical space. In practice, multiple conflicting objectives and costly evaluations (e.g., wet-lab experiments) make the *diversity* of candidates paramount. Computational methods have achieved initial success but still struggle with considering diversity in both objective and search space. To fill this gap, we propose a multi-objective Bayesian optimization (MOBO) algorithm leveraging the hypernetwork-based GFlowNets (HN-GFN) as an acquisition function optimizer, with the purpose of sampling a diverse batch of candidate molecular graphs from an approximate Pareto front. Using a single preference-conditioned hypernetwork, HN-GFN learns to explore various trade-offs between objectives. We further propose a hindsight-like off-policy strategy to share high-performing molecules among different preferences in order to speed up learning for HN-GFN. We empirically illustrate that HN-GFN has adequate capacity to generalize over preferences. Moreover, experiments in various real-world MOBO settings demonstrate that our framework predominantly outperforms existing methods in terms of candidate quality and sample efficiency. The code is available at https://github.com/violet-sto/HN-GFN.
Sample-efficient Multi-objective Molecular Optimization with GFlowNets
[ "Yiheng Zhu", "Jialu Wu", "Chaowen Hu", "Jiahuan Yan", "Chang-Yu Hsieh", "Tingjun Hou", "Jian Wu" ]
Conference
poster
2302.04040
[ "https://github.com/violet-sto/hn-gfn" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=umvV3yvo4N
@inproceedings{ nguyen2023energybased, title={Energy-Based Sliced Wasserstein Distance}, author={Khai Nguyen and Nhat Ho}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=umvV3yvo4N} }
The sliced Wasserstein (SW) distance has been widely recognized as a statistically effective and computationally efficient metric between two probability measures. A key component of the SW distance is the slicing distribution. There are two existing approaches for choosing this distribution. The first approach is using a fixed prior distribution. The second approach is optimizing for the best distribution which belongs to a parametric family of distributions and can maximize the expected distance. However, both approaches have their limitations. A fixed prior distribution is non-informative in terms of highlighting projecting directions that can discriminate two general probability measures. Doing optimization for the best distribution is often expensive and unstable. Moreover, designing the parametric family of the candidate distribution could be easily misspecified. To address the issues, we propose to design the slicing distribution as an energy-based distribution that is parameter-free and has the density proportional to an energy function of the projected one-dimensional Wasserstein distance. We then derive a novel sliced Wasserstein variant, energy-based sliced Wasserstein (EBSW) distance, and investigate its topological, statistical, and computational properties via importance sampling, sampling importance resampling, and Markov Chain methods. Finally, we conduct experiments on point-cloud gradient flow, color transfer, and point-cloud reconstruction to show the favorable performance of the EBSW.
Energy-Based Sliced Wasserstein Distance
[ "Khai Nguyen", "Nhat Ho" ]
Conference
poster
2304.13586
[ "https://github.com/khainb/ebsw" ]
-1
-1
-1
-1
0
[]
[]
[]
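The EBSW entry above defines the slicing distribution with density proportional to an energy of the projected one-dimensional Wasserstein distance and estimates the distance via, e.g., importance sampling. A small sketch of that idea with uniform proposal directions and an exponential energy follows; the sample sizes and energy function are illustrative assumptions.

```python
import numpy as np

def wasserstein_1d_p(x, y, p=2):
    """W_p^p between two equal-size 1D empirical measures (sort-and-match)."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)) ** p)

def ebsw_importance_sampling(X, Y, n_proj=256, p=2, rng=None):
    """Importance-sampling estimate of an energy-based sliced Wasserstein distance.

    Directions are proposed uniformly on the sphere and reweighted by an
    exponential energy of the projected distance, so informative directions
    contribute more than under plain uniform slicing.
    """
    rng = rng or np.random.default_rng(0)
    d = X.shape[1]
    thetas = rng.normal(size=(n_proj, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    w_p = np.array([wasserstein_1d_p(X @ t, Y @ t, p) for t in thetas])
    weights = np.exp(w_p - w_p.max())   # energy f(w) = exp(w), normalized stably
    weights /= weights.sum()
    return (weights @ w_p) ** (1.0 / p)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
Y = rng.normal(size=(500, 3)) + np.array([2.0, 0.0, 0.0])
print("EBSW estimate:", ebsw_importance_sampling(X, Y, rng=rng))
print("true mean shift for reference:", 2.0)
```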
null
https://openreview.net/forum?id=uj9PxVTVqq
@inproceedings{ gao2023enhancing, title={Enhancing Knowledge Transfer for Task Incremental Learning with Data-free Subnetwork}, author={Qiang Gao and Xiaojun Shan and Yuchen Zhang and Fan Zhou}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uj9PxVTVqq} }
As there exist competitive subnetworks within a dense network in concert with the Lottery Ticket Hypothesis, we introduce a novel neuron-wise task incremental learning method, namely Data-free Subnetworks (DSN), which attempts to enhance the elastic knowledge transfer across the tasks that sequentially arrive. Specifically, DSN primarily seeks to transfer knowledge to the new coming task from the learned tasks by selecting the affiliated weights of a small set of neurons to be activated, including the reused neurons from prior tasks via neuron-wise masks. It also transfers possibly valuable knowledge to the earlier tasks via data-free replay. Especially, DSN inherently relieves the catastrophic forgetting and the unavailability of past data or possible privacy concerns. The comprehensive experiments conducted on four benchmark datasets demonstrate the effectiveness of the proposed DSN in the context of task-incremental learning by comparing it to several state-of-the-art baselines. In particular, DSN enables the knowledge transfer to the earlier tasks, which is often overlooked by prior efforts.
Enhancing Knowledge Transfer for Task Incremental Learning with Data-free Subnetwork
[ "Qiang Gao", "Xiaojun Shan", "Yuchen Zhang", "Fan Zhou" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uiiVSVADDc
@inproceedings{ xie2023annotator, title={Annotator: A Generic Active Learning Baseline for Li{DAR} Semantic Segmentation}, author={Binhui Xie and Shuang Li and qingju guo and Chi Harold Liu and Xinjing Cheng}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uiiVSVADDc} }
Active learning, a label-efficient paradigm, empowers models to interactively query an oracle for labeling new data. In the realm of LiDAR semantic segmentation, the challenges stem from the sheer volume of point clouds, rendering annotation labor-intensive and cost-prohibitive. This paper presents Annotator, a general and efficient active learning baseline, in which a voxel-centric online selection strategy is tailored to efficiently probe and annotate the salient and exemplar voxel grids within each LiDAR scan, even under distribution shift. Concretely, we first execute an in-depth analysis of several common selection strategies such as Random, Entropy, Margin, and then develop voxel confusion degree (VCD) to exploit the local topology relations and structures of point clouds. Annotator excels in diverse settings, with a particular focus on active learning (AL), active source-free domain adaptation (ASFDA), and active domain adaptation (ADA). It consistently delivers exceptional performance across LiDAR semantic segmentation benchmarks, spanning both simulation-to-real and real-to-real scenarios. Surprisingly, Annotator exhibits remarkable efficiency, requiring significantly fewer annotations, e.g., just labeling five voxels per scan in the SynLiDAR → SemanticKITTI task. This results in impressive performance, achieving 87.8% fully-supervised performance under AL, 88.5% under ASFDA, and 94.4% under ADA. We envision that Annotator will offer a simple, general, and efficient solution for label-efficient 3D applications.
Annotator: A Generic Active Learning Baseline for LiDAR Semantic Segmentation
[ "Binhui Xie", "Shuang Li", "qingju guo", "Chi Harold Liu", "Xinjing Cheng" ]
Conference
poster
2310.20293
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uhKtQMn21D
@inproceedings{ cutkosky2023mechanic, title={Mechanic: A Learning Rate Tuner}, author={Ashok Cutkosky and Aaron Defazio and Harsh Mehta}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uhKtQMn21D} }
We introduce a technique for tuning the learning rate scale factor of any base optimization algorithm and schedule automatically, which we call Mechanic. Our method provides a practical realization of recent theoretical reductions for accomplishing a similar goal in online convex optimization. We rigorously evaluate Mechanic on a range of large scale deep learning tasks with varying batch sizes, schedules, and base optimization algorithms. These experiments demonstrate that depending on the problem, Mechanic either comes very close to, matches or even improves upon manual tuning of learning rates.
Mechanic: A Learning Rate Tuner
[ "Ashok Cutkosky", "Aaron Defazio", "Harsh Mehta" ]
Conference
poster
2306.00144
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ubzNoJjOKj
@inproceedings{ nguyen2023hyenadna, title={Hyena{DNA}: Long-Range Genomic Sequence Modeling at Single Nucleotide Resolution}, author={Eric Nguyen and Michael Poli and Marjan Faizi and Armin W Thomas and Michael Wornow and Callum Birch-Sykes and Stefano Massaroli and Aman Patel and Clayton M. Rabideau and Yoshua Bengio and Stefano Ermon and Christopher Re and Stephen Baccus}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=ubzNoJjOKj} }
Genomic (DNA) sequences encode an enormous amount of information for gene regulation and protein synthesis. Similar to natural language models, researchers have proposed foundation models in genomics to learn generalizable features from unlabeled genome data that can then be fine-tuned for downstream tasks such as identifying regulatory elements. Due to the quadratic scaling of attention, previous Transformer-based genomic models have used 512 to 4k tokens as context (<0.001% of the human genome), significantly limiting the modeling of long-range interactions in DNA. In addition, these methods rely on tokenizers or fixed k-mers to aggregate meaningful DNA units, losing single nucleotide resolution (i.e. DNA "characters") where subtle genetic variations can completely alter protein function via single nucleotide polymorphisms (SNPs). Recently, Hyena, a large language model based on implicit convolutions was shown to match attention in quality while allowing longer context lengths and lower time complexity. Leveraging Hyena’s new long-range capabilities, we present HyenaDNA, a genomic foundation model pretrained on the human reference genome with context lengths of up to 1 million tokens at the single nucleotide-level – an up to 500x increase over previous dense attention-based models. HyenaDNA scales sub-quadratically in sequence length (training up to 160x faster than Transformer), uses single nucleotide tokens, and has full global context at each layer. We explore what longer context enables - including the first use of in-context learning in genomics for simple adaptation to novel tasks without updating pretrained model weights. On fine-tuned benchmarks from the Nucleotide Transformer, HyenaDNA reaches state-of-the-art (SotA) on 12 of 18 datasets using a model with orders of magnitude fewer parameters and less pretraining data. On the GenomicBenchmarks, HyenaDNA surpasses SotA on 7 of 8 datasets on average by +10 accuracy points. Code at https://github.com/HazyResearch/hyena-dna.
HyenaDNA: Long-Range Genomic Sequence Modeling at Single Nucleotide Resolution
[ "Eric Nguyen", "Michael Poli", "Marjan Faizi", "Armin W Thomas", "Michael Wornow", "Callum Birch-Sykes", "Stefano Massaroli", "Aman Patel", "Clayton M. Rabideau", "Yoshua Bengio", "Stefano Ermon", "Christopher Re", "Stephen Baccus" ]
Conference
spotlight
2306.15794
[ "https://github.com/HazyResearch/hyena-dna" ]
https://huggingface.co/papers/2306.15794
3
17
2
13
1
[ "LongSafari/hyenadna-large-1m-seqlen", "LongSafari/hyenadna-large-1m-seqlen-hf", "LongSafari/hyenadna-medium-450k-seqlen", "LongSafari/hyenadna-tiny-1k-seqlen", "LongSafari/hyenadna-medium-160k-seqlen", "LongSafari/hyenadna-medium-160k-seqlen-hf", "LongSafari/hyenadna-medium-450k-seqlen-hf", "LongSafari/hyenadna-tiny-1k-seqlen-d256", "LongSafari/hyenadna-small-32k-seqlen-hf", "LongSafari/hyenadna-tiny-1k-seqlen-hf", "LongSafari/hyenadna-small-32k-seqlen", "LongSafari/hyenadna-tiny-16k-seqlen-d128-hf", "LongSafari/hyenadna-tiny-1k-seqlen-d256-hf", "LongSafari/hyenadna-tiny-16k-seqlen-d128" ]
[]
[]
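Several Hugging Face checkpoints are linked in the HyenaDNA entry above. A minimal usage sketch follows, assuming the `-hf` variants expose the standard transformers remote-code interface described on their model cards; the exact model class and tokenizer behavior are assumptions, not verified here.

```python
# Assumes: `pip install transformers torch`, and that the "-hf" HyenaDNA
# checkpoints load through the standard remote-code AutoModel interface,
# as described on their Hugging Face model cards (an assumption).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "LongSafari/hyenadna-tiny-1k-seqlen-hf"
tokenizer = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(name, trust_remote_code=True)

seq = "ACTGACTGACTGACTG"            # single-nucleotide tokens, no k-mers
inputs = tokenizer(seq, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)
print(out.logits.shape)              # (1, sequence length, vocab size)
```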
null
https://openreview.net/forum?id=ubp5s2tgXq
@inproceedings{ jiang2023uncovering, title={Uncovering Meanings of Embeddings via Partial Orthogonality}, author={Yibo Jiang and Bryon Aragam and Victor Veitch}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=ubp5s2tgXq} }
Machine learning tools often rely on embedding text as vectors of real numbers. In this paper, we study how the semantic structure of language is encoded in the algebraic structure of such embeddings. Specifically, we look at a notion of "semantic independence" capturing the idea that, e.g., "eggplant" and "tomato" are independent given "vegetable". Although such examples are intuitive, it is difficult to formalize such a notion of semantic independence. The key observation here is that any sensible formalization should obey a set of so-called independence axioms, and thus any algebraic encoding of this structure should also obey these axioms. This leads us naturally to use partial orthogonality as the relevant algebraic structure. We develop theory and methods that allow us to demonstrate that partial orthogonality does indeed capture semantic independence. Complementary to this, we also introduce the concept of independence preserving embeddings where embeddings preserve the conditional independence structures of a distribution, and we prove the existence of such embeddings and approximations to them.
Uncovering Meanings of Embeddings via Partial Orthogonality
[ "Yibo Jiang", "Bryon Aragam", "Victor Veitch" ]
Conference
poster
2310.17611
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
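The entry above uses partial orthogonality of embedding vectors as an algebraic stand-in for semantic independence. One natural formalization, analogous to partial correlation, checks whether two vectors are orthogonal after projecting out the span of the conditioning vectors; whether this matches the paper's exact definition is an assumption, so treat the sketch as illustrative only.

```python
import numpy as np

def residual(v, Z):
    """Component of v orthogonal to the span of the columns of Z."""
    coef, *_ = np.linalg.lstsq(Z, v, rcond=None)
    return v - Z @ coef

def partial_orthogonality(x, y, Z):
    """Inner product of x and y after projecting out span(Z); values near zero
    indicate (approximate) partial orthogonality of x and y given Z."""
    return float(residual(x, Z) @ residual(y, Z))

rng = np.random.default_rng(0)
d = 50
vegetable = rng.normal(size=d)
# toy embeddings that share structure only through "vegetable"
eggplant = 0.8 * vegetable + 0.6 * rng.normal(size=d)
tomato   = 0.8 * vegetable + 0.6 * rng.normal(size=d)
print("raw inner product:  ", float(eggplant @ tomato))
print("given 'vegetable':  ", partial_orthogonality(eggplant, tomato, vegetable[:, None]))
```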
null
https://openreview.net/forum?id=ubgdInLSF9
@inproceedings{ li2023snapfusion, title={SnapFusion: Text-to-Image Diffusion Model on Mobile Devices within Two Seconds}, author={Yanyu Li and Huan Wang and Qing Jin and Ju Hu and Pavlo Chemerys and Yun Fu and Yanzhi Wang and Sergey Tulyakov and Jian Ren}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=ubgdInLSF9} }
Text-to-image diffusion models can create stunning images from natural language descriptions that rival the work of professional artists and photographers. However, these models are large, with complex network architectures and tens of denoising iterations, making them computationally expensive and slow to run. As a result, high-end GPUs and cloud-based inference are required to run diffusion models at scale. This is costly and has privacy implications, especially when user data is sent to a third party. To overcome these challenges, we present a generic approach that, for the first time, unlocks running text-to-image diffusion models on mobile devices in **less than 2 seconds**. We achieve this by introducing an efficient network architecture and improving step distillation. Specifically, we propose an efficient UNet by identifying the redundancy of the original model and reducing the computation of the image decoder via data distillation. Further, we enhance the step distillation by exploring training strategies and introducing regularization from classifier-free guidance. Our extensive experiments on MS-COCO show that our model with $8$ denoising steps achieves better FID and CLIP scores than Stable Diffusion v$1.5$ with $50$ steps. Our work democratizes content creation by bringing powerful text-to-image diffusion models to the hands of users.
SnapFusion: Text-to-Image Diffusion Model on Mobile Devices within Two Seconds
[ "Yanyu Li", "Huan Wang", "Qing Jin", "Ju Hu", "Pavlo Chemerys", "Yun Fu", "Yanzhi Wang", "Sergey Tulyakov", "Jian Ren" ]
Conference
poster
2306.00980
[ "" ]
https://huggingface.co/papers/2306.00980
2
15
13
9
1
[]
[]
[]
null
https://openreview.net/forum?id=ubap5FKbJs
@inproceedings{ liu2023domain, title={Domain Agnostic Fourier Neural Operators}, author={Ning Liu and Siavash Jafarzadeh and Yue Yu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=ubap5FKbJs} }
Fourier neural operators (FNOs) can learn highly nonlinear mappings between function spaces, and have recently become a popular tool for learning responses of complex physical systems. However, to achieve good accuracy and efficiency, FNOs rely on the Fast Fourier transform (FFT), which is restricted to modeling problems on rectangular domains. To lift such a restriction and permit FFT on irregular geometries as well as topology changes, we introduce domain agnostic Fourier neural operator (DAFNO), a novel neural operator architecture for learning surrogates with irregular geometries and evolving domains. The key idea is to incorporate a smoothed characteristic function in the integral layer architecture of FNOs, and leverage FFT to achieve rapid computations, in such a way that the geometric information is explicitly encoded in the architecture. In our empirical evaluation, DAFNO has achieved state-of-the-art accuracy as compared to baseline neural operator models on two benchmark datasets of material modeling and airfoil simulation. To further demonstrate the capability and generalizability of DAFNO in handling complex domains with topology changes, we consider a brittle material fracture evolution problem. With only one training crack simulation sample, DAFNO has achieved generalizability to unseen loading scenarios and substantially different crack patterns from the trained scenario. Our code and data accompanying this paper are available at https://github.com/ningliu-iga/DAFNO.
Domain Agnostic Fourier Neural Operators
[ "Ning Liu", "Siavash Jafarzadeh", "Yue Yu" ]
Conference
poster
2305.00478
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uZvG0HLkOB
@inproceedings{ chzhen2023small, title={Small Total-Cost Constraints in Contextual Bandits with Knapsacks, with Application to Fairness}, author={Evgenii E Chzhen and Christophe Giraud and Zhen LI and Gilles Stoltz}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uZvG0HLkOB} }
We consider contextual bandit problems with knapsacks [CBwK], a problem where at each round, a scalar reward is obtained and vector-valued costs are suffered. The learner aims to maximize the cumulative rewards while ensuring that the cumulative costs are lower than some predetermined cost constraints. We assume that contexts come from a continuous set, that costs can be signed, and that the expected reward and cost functions, while unknown, may be uniformly estimated---a typical assumption in the literature. In this setting, total cost constraints had so far to be at least of order $T^{3/4}$, where $T$ is the number of rounds, and were even typically assumed to depend linearly on $T$. We are however motivated to use CBwK to impose a fairness constraint of equalized average costs between groups: the budget associated with the corresponding cost constraints should be as close as possible to the natural deviations, of order $\sqrt{T}$. To that end, we introduce a dual strategy based on projected-gradient-descent updates, that is able to deal with total-cost constraints of the order of $\sqrt{T}$ up to poly-logarithmic terms. This strategy is more direct and simpler than existing strategies in the literature. It relies on a careful, adaptive, tuning of the step size.
Small Total-Cost Constraints in Contextual Bandits with Knapsacks, with Application to Fairness
[ "Evgenii E Chzhen", "Christophe Giraud", "Zhen LI", "Gilles Stoltz" ]
Conference
poster
2305.15807
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
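The CBwK entry above describes a dual strategy with projected-gradient-descent updates on the cost constraints. The toy loop below shows that general primal-dual pattern with oracle reward/cost estimates; the step size, dual clipping, and budget are illustrative, not the paper's carefully tuned schedule.

```python
import numpy as np

def primal_dual_cbwk(contexts, reward_fn, cost_fn, budget_per_round, eta=0.05, lam_max=10.0):
    """Toy contextual-bandits-with-knapsacks loop: pick the action maximizing
    estimated reward minus lambda-weighted estimated cost, then update the dual
    variable lambda by projected gradient ascent on the budget violation."""
    lam = 0.0
    total_reward, total_cost = 0.0, 0.0
    for x in contexts:
        rewards = reward_fn(x)                     # estimated reward per action
        costs = cost_fn(x)                         # estimated (scalar) cost per action
        a = int(np.argmax(rewards - lam * costs))  # Lagrangian action selection
        total_reward += rewards[a]
        total_cost += costs[a]
        # dual ascent on the constraint "cost <= budget_per_round on average"
        lam = float(np.clip(lam + eta * (costs[a] - budget_per_round), 0.0, lam_max))
    return total_reward, total_cost, lam

rng = np.random.default_rng(0)
W_r, W_c = rng.normal(size=(3, 4)), rng.uniform(size=(3, 4))
contexts = rng.normal(size=(5000, 4))
r, c, lam = primal_dual_cbwk(
    contexts,
    reward_fn=lambda x: W_r @ x,
    cost_fn=lambda x: np.abs(W_c @ x),
    budget_per_round=0.5,
)
print(f"reward={r:.1f}  cost={c:.1f}  budget={0.5 * len(contexts):.1f}  lambda={lam:.2f}")
```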
null
https://openreview.net/forum?id=uZjpSBTPik
@inproceedings{ wu2023clnerf, title={{CL}-Ne{RF}: Continual Learning of Neural Radiance Fields for Evolving Scene Representation}, author={Xiuzhe Wu and Peng Dai and Weipeng DENG and Handi Chen and Yang Wu and Yan-Pei Cao and Ying Shan and XIAOJUAN QI}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uZjpSBTPik} }
Existing methods for adapting Neural Radiance Fields (NeRFs) to scene changes require extensive data capture and model retraining, which is both time-consuming and labor-intensive. In this paper, we tackle the challenge of efficiently adapting NeRFs to real-world scene changes over time using a few new images while retaining the memory of unaltered areas, focusing on the continual learning aspect of NeRFs. To this end, we propose CL-NeRF, which consists of two key components: a lightweight expert adaptor for adapting to new changes and evolving scene representations and a conflict-aware knowledge distillation learning objective for memorizing unchanged parts. We also present a new benchmark for evaluating Continual Learning of NeRFs with comprehensive metrics. Our extensive experiments demonstrate that CL-NeRF can synthesize high-quality novel views of both changed and unchanged regions with high training efficiency, surpassing existing methods in terms of reducing forgetting and adapting to changes. Code and benchmark will be made available.
CL-NeRF: Continual Learning of Neural Radiance Fields for Evolving Scene Representation
[ "Xiuzhe Wu", "Peng Dai", "Weipeng DENG", "Handi Chen", "Yang Wu", "Yan-Pei Cao", "Ying Shan", "XIAOJUAN QI" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uWNqy09dFW
@inproceedings{ hu2023learning, title={Learning Neural Implicit through Volume Rendering with Attentive Depth Fusion Priors}, author={Pengchong Hu and Zhizhong Han}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uWNqy09dFW} }
Learning neural implicit representations has achieved remarkable performance in 3D reconstruction from multi-view images. Current methods use volume rendering to render implicit representations into either RGB or depth images that are supervised by the multi-view ground truth. However, rendering a view each time suffers from incomplete depth at holes and unawareness of occluded structures from the depth supervision, which severely affects the accuracy of geometry inference via volume rendering. To resolve this issue, we propose to learn neural implicit representations from multi-view RGBD images through volume rendering with an attentive depth fusion prior. Our prior allows neural networks to sense coarse 3D structures from the Truncated Signed Distance Function (TSDF) fused from all available depth images for rendering. The TSDF enables accessing the missing depth at holes on one depth image and the occluded parts that are invisible from the current view. By introducing a novel attention mechanism, we allow neural networks to directly use the depth fusion prior with the inferred occupancy as the learned implicit function. Our attention mechanism works with either a one-time fused TSDF that represents a whole scene or an incrementally fused TSDF that represents a partial scene in the context of Simultaneous Localization and Mapping (SLAM). Our evaluations on widely used benchmarks including synthetic and real-world scans show our superiority over the latest neural implicit methods.
Learning Neural Implicit through Volume Rendering with Attentive Depth Fusion Priors
[ "Pengchong Hu", "Zhizhong Han" ]
Conference
poster
2310.11598
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uWGH6jDTVv
@inproceedings{ garg2023complementary, title={Complementary Benefits of Contrastive Learning and Self-Training Under Distribution Shift}, author={Saurabh Garg and Amrith Setlur and Zachary Chase Lipton and Sivaraman Balakrishnan and Virginia Smith and Aditi Raghunathan}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uWGH6jDTVv} }
Self-training and contrastive learning have emerged as leading techniques for incorporating unlabeled data, both under distribution shift (unsupervised domain adaptation) and when it is absent (semi-supervised learning). However, despite the popularity and compatibility of these techniques, their efficacy in combination remains surprisingly unexplored. In this paper, we first undertake a systematic empirical investigation of this combination, finding (i) that in domain adaptation settings, self-training and contrastive learning offer significant complementary gains; and (ii) that in semi-supervised learning settings, surprisingly, the benefits are not synergistic. Across eight distribution shift datasets (e.g., BREEDs, WILDS), we demonstrate that the combined method obtains 3--8\% higher accuracy than either approach independently. Finally, we theoretically analyze these techniques in a simplified model of distribution shift demonstrating scenarios under which the features produced by contrastive learning can yield a good initialization for self-training to further amplify gains and achieve optimal performance, even when either method alone would fail.
Complementary Benefits of Contrastive Learning and Self-Training Under Distribution Shift
[ "Saurabh Garg", "Amrith Setlur", "Zachary Chase Lipton", "Sivaraman Balakrishnan", "Virginia Smith", "Aditi Raghunathan" ]
Conference
poster
2312.03318
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uTlKUAm68H
@inproceedings{ akbari2023alternating, title={Alternating Gradient Descent and Mixture-of-Experts for Integrated Multimodal Perception}, author={Hassan Akbari and Dan Kondratyuk and Yin Cui and Rachel Hornung and Huisheng Wang and Hartwig Adam}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uTlKUAm68H} }
We present Integrated Multimodal Perception (IMP), a simple and scalable multimodal multi-task training and modeling approach. IMP integrates multimodal inputs including image, video, text, and audio into a single Transformer encoder with minimal modality-specific components. IMP makes use of a novel design that combines Alternating Gradient Descent (AGD) and Mixture-of-Experts (MoE) for efficient model & task scaling. We conduct extensive empirical studies and reveal the following key insights: 1) performing gradient descent updates by alternating on diverse modalities, loss functions, and tasks, with varying input resolutions, efficiently improves the model. 2) sparsification with MoE on a single modality-agnostic encoder substantially improves the performance, outperforming dense models that use modality-specific encoders or additional fusion layers and greatly mitigating the conflicts between modalities. IMP achieves competitive performance on a wide range of downstream tasks including video classification, image classification, image-text, and video-text retrieval. Most notably, we train a sparse IMP-MoE-L focusing on video tasks that achieves new state-of-the-art in zero-shot video classification: 77.0% on Kinetics-400, 76.8% on Kinetics-600, and 68.3% on Kinetics-700, improving the previous state-of-the-art by +5%, +6.7%, and +5.8%, respectively, while using only 15% of their total training computational cost.
Alternating Gradient Descent and Mixture-of-Experts for Integrated Multimodal Perception
[ "Hassan Akbari", "Dan Kondratyuk", "Yin Cui", "Rachel Hornung", "Huisheng Wang", "Hartwig Adam" ]
Conference
poster
2305.06324
[ "" ]
https://huggingface.co/papers/2305.06324
0
1
0
6
1
[]
[]
[]
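The IMP entry above attributes part of its efficiency to Alternating Gradient Descent, i.e. updating on one task or modality per optimizer step instead of summing all losses. A minimal PyTorch illustration of that pattern with placeholder tasks, shapes, and losses (not the IMP architecture) is sketched below.

```python
# Minimal illustration of alternating gradient descent (AGD) across tasks:
# each optimizer step uses one task's batch and loss only, while all tasks
# share the same encoder. Shapes, losses, and task names are placeholders.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
heads = nn.ModuleDict({
    "classify": nn.Linear(64, 10),   # stand-in for a classification task
    "contrast": nn.Linear(64, 16),   # stand-in for a retrieval-style task
})
opt = torch.optim.AdamW(list(encoder.parameters()) + list(heads.parameters()), lr=1e-3)

def task_loss(task, x, y):
    z = heads[task](encoder(x))
    if task == "classify":
        return nn.functional.cross_entropy(z, y)
    return ((z - y) ** 2).mean()     # toy regression stand-in for the second task

tasks = ["classify", "contrast"]
for step in range(200):
    task = tasks[step % len(tasks)]  # alternate tasks instead of summing losses
    x = torch.randn(8, 32)
    y = torch.randint(0, 10, (8,)) if task == "classify" else torch.randn(8, 16)
    opt.zero_grad()
    loss = task_loss(task, x, y)
    loss.backward()
    opt.step()
print("done; last task:", task, "loss:", float(loss))
```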
null
https://openreview.net/forum?id=uRewSnLJAa
@inproceedings{ chen2023selfsupervised, title={Self-Supervised Reinforcement Learning that Transfers using Random Features}, author={Boyuan Chen and Chuning Zhu and Pulkit Agrawal and Kaiqing Zhang and Abhishek Gupta}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uRewSnLJAa} }
Model-free reinforcement learning algorithms have exhibited great potential in solving single-task sequential decision-making problems with high-dimensional observations and long horizons, but are known to be hard to generalize across tasks. Model-based RL, on the other hand, learns task-agnostic models of the world that naturally enables transfer across different reward functions, but struggles to scale to complex environments due to the compounding error. To get the best of both worlds, we propose a self-supervised reinforcement learning method that enables the transfer of behaviors across tasks with different rewards, while circumventing the challenges of model-based RL. In particular, we show self-supervised pre-training of model-free reinforcement learning with a number of random features as rewards allows implicit modeling of long-horizon environment dynamics. Then, planning techniques like model-predictive control using these implicit models enable fast adaptation to problems with new reward functions. Our method is self-supervised in that it can be trained on offline datasets without reward labels, but can then be quickly deployed on new tasks. We validate that our proposed method enables transfer across tasks on a variety of manipulation and locomotion domains in simulation, opening the door to generalist decision-making agents.
Self-Supervised Reinforcement Learning that Transfers using Random Features
[ "Boyuan Chen", "Chuning Zhu", "Pulkit Agrawal", "Kaiqing Zhang", "Abhishek Gupta" ]
Conference
poster
2305.17250
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uRHpgo6TMR
@inproceedings{ bolager2023sampling, title={Sampling weights of deep neural networks}, author={Erik Lien Bolager and Iryna Burak and Chinmay Datar and Qing Sun and Felix Dietrich}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uRHpgo6TMR} }
We introduce a probability distribution, combined with an efficient sampling algorithm, for weights and biases of fully-connected neural networks. In a supervised learning context, no iterative optimization or gradient computations of internal network parameters are needed to obtain a trained network. The sampling is based on the idea of random feature models. However, instead of a data-agnostic distribution, e.g., a normal distribution, we use both the input and the output training data to sample shallow and deep networks. We prove that sampled networks are universal approximators. For Barron functions, we show that the $L^2$-approximation error of sampled shallow networks decreases with the square root of the number of neurons. Our sampling scheme is invariant to rigid body transformations and scaling of the input data, which implies many popular pre-processing techniques are not required. In numerical experiments, we demonstrate that sampled networks achieve accuracy comparable to iteratively trained ones, but can be constructed orders of magnitude faster. Our test cases involve a classification benchmark from OpenML, sampling of neural operators to represent maps in function spaces, and transfer learning using well-known architectures.
Sampling weights of deep neural networks
[ "Erik Lien Bolager", "Iryna Burak", "Chinmay Datar", "Qing Sun", "Felix Dietrich" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
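The sampled-networks entry above removes iterative training by sampling hidden weights and fitting only the readout. The sketch below shows that surrounding recipe with a plain Gaussian draw standing in for the paper's data-driven weight distribution, so it is a generic random-feature baseline rather than the proposed sampler.

```python
import numpy as np

def sampled_network_fit(X, y, width=512, rng=None):
    """Fit a one-hidden-layer network without gradient descent: hidden weights
    are sampled and frozen (here from a plain Gaussian, *not* the paper's
    data-driven distribution), and only the linear readout is solved for."""
    rng = rng or np.random.default_rng(0)
    d = X.shape[1]
    W = rng.normal(size=(d, width))
    b = rng.normal(size=width)
    H = np.tanh(X @ W + b)                          # frozen random features
    coef, *_ = np.linalg.lstsq(H, y, rcond=None)    # closed-form readout
    return lambda Xnew: np.tanh(Xnew @ W + b) @ coef

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 2))
y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])
predict = sampled_network_fit(X, y, rng=rng)
Xtest = rng.uniform(-1, 1, size=(500, 2))
ytest = np.sin(3 * Xtest[:, 0]) * np.cos(2 * Xtest[:, 1])
print("test RMSE:", np.sqrt(np.mean((predict(Xtest) - ytest) ** 2)))
```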
null
https://openreview.net/forum?id=uR8TtWCIsr
@inproceedings{ merrill2023a, title={A Logic for Expressing Log-Precision Transformers}, author={William Merrill and Ashish Sabharwal}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uR8TtWCIsr} }
One way to interpret the reasoning power of transformer-based language models is to describe the types of logical rules they can resolve over some input text. Recently, Chiang et al. (2023) showed that finite-precision transformer classifiers can be equivalently expressed in a generalization of first-order logic. However, finite-precision transformers are a weak transformer variant because, as we show, a single head can only attend to a constant number of tokens and, in particular, cannot represent uniform attention. Since attending broadly is a core capability for transformers, we ask whether a minimally more expressive model that can attend universally can also be characterized in logic. To this end, we analyze transformers whose forward pass is computed in $\log n$ precision on contexts of length $n$. We prove any log-precision transformer classifier can be equivalently expressed as a first-order logic sentence that, in addition to standard universal and existential quantifiers, may also contain majority-vote quantifiers. This is the tightest known upper bound and first logical characterization of log-precision transformers.
A Logic for Expressing Log-Precision Transformers
[ "William Merrill", "Ashish Sabharwal" ]
Conference
poster
2210.02671
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uPSQv0leAu
@inproceedings{ xie2023data, title={Data Selection for Language Models via Importance Resampling}, author={Sang Michael Xie and Shibani Santurkar and Tengyu Ma and Percy Liang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uPSQv0leAu} }
Selecting a suitable pretraining dataset is crucial for both general-domain (e.g., GPT-3) and domain-specific (e.g., Codex) language models (LMs). We formalize this problem as selecting a subset of a large raw unlabeled dataset to match a desired target distribution given unlabeled target samples. Due to the scale and dimensionality of the raw text data, existing methods use simple heuristics or require human experts to manually curate data. Instead, we extend the classic importance resampling approach used in low-dimensions for LM data selection. We propose Data Selection with Importance Resampling (DSIR), an efficient and scalable framework that estimates importance weights in a reduced feature space for tractability and selects data with importance resampling according to these weights. We instantiate the DSIR framework with hashed n-gram features for efficiency, enabling the selection of 100M documents from the full Pile dataset in 4.5 hours. To measure whether hashed n-gram features preserve the aspects of the data that are relevant to the target, we define KL reduction, a data metric that measures the proximity between the selected pretraining data and the target on some feature space. Across 8 data selection methods (including expert selection), KL reduction on hashed n-gram features highly correlates with average downstream accuracy (r=0.82). When selecting data for continued pretraining on a specific domain, DSIR performs comparably to expert curation across 8 target distributions. When pretraining general-domain models (target is Wikipedia and books), DSIR improves over random selection and heuristic filtering baselines by 2--2.5% on the GLUE benchmark.
Data Selection for Language Models via Importance Resampling
[ "Sang Michael Xie", "Shibani Santurkar", "Tengyu Ma", "Percy Liang" ]
Conference
poster
2302.03169
[ "https://github.com/p-lambda/dsir" ]
https://huggingface.co/papers/2302.03169
1
0
0
4
1
[ "globis-university/deberta-v3-japanese-xsmall", "globis-university/deberta-v3-japanese-large", "globis-university/deberta-v3-japanese-base" ]
[ "togethercomputer/RedPajama-Data-V2", "stanford-crfm/DSIR-filtered-pile-50M", "stanford-crfm/heuristic_classification-filtered-pile-50M", "ShivamPR21/RedPajama-Data-V2" ]
[]
null
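As a rough illustration of the importance-resampling recipe in the DSIR abstract above, here is a minimal Python sketch. The bucket count, add-one smoothing, bag-of-hashed-bigrams model, and Gumbel top-k selection are assumed simplifications for exposition, not the authors' exact implementation.

import hashlib
import numpy as np

def hashed_ngram_counts(text, n=2, buckets=4096):
    # Map a document to a bag of hashed word n-grams (a crude stand-in
    # for the hashed n-gram feature space mentioned in the abstract).
    toks = text.lower().split()
    vec = np.zeros(buckets)
    for i in range(len(toks) - n + 1):
        h = int(hashlib.md5(" ".join(toks[i:i + n]).encode()).hexdigest(), 16)
        vec[h % buckets] += 1
    return vec

def log_importance_weights(raw_docs, target_docs, buckets=4096, alpha=1.0):
    # Fit smoothed bucket-frequency models for target and raw text,
    # then score each raw document by its log-likelihood ratio.
    p_t = alpha + sum(hashed_ngram_counts(d, buckets=buckets) for d in target_docs)
    p_r = alpha + sum(hashed_ngram_counts(d, buckets=buckets) for d in raw_docs)
    log_ratio = np.log(p_t / p_t.sum()) - np.log(p_r / p_r.sum())
    return np.array([hashed_ngram_counts(d, buckets=buckets) @ log_ratio for d in raw_docs])

def gumbel_topk_select(log_w, k, seed=0):
    # Importance resampling without replacement via the Gumbel top-k trick.
    g = np.random.default_rng(seed).gumbel(size=len(log_w))
    return np.argsort(-(log_w + g))[:k]

# Example usage: selected = gumbel_topk_select(log_importance_weights(raw, target), k=1000)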
https://openreview.net/forum?id=uOEeui0rL7
@inproceedings{ villasevil2023breadcrumbs, title={Breadcrumbs to the Goal: Supervised Goal Selection from Human-in-the-Loop Feedback}, author={Marcel Torne Villasevil and Max Balsells I Pamies and Zihan Wang and Samedh Desai and Tao Chen and Pulkit Agrawal and Abhishek Gupta}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uOEeui0rL7} }
Exploration and reward specification are fundamental and intertwined challenges for reinforcement learning. Solving sequential decision making tasks with a non-trivial element of exploration requires either specifying carefully designed reward functions or relying on indiscriminate, novelty seeking exploration bonuses. Human supervisors can provide effective guidance in the loop to direct the exploration process, but prior methods to leverage this guidance require constant synchronous high-quality human feedback, which is expensive and impractical to obtain. In this work, we propose a technique - Human Guided Exploration (HUGE), that is able to leverage low-quality feedback from non-expert users, which is infrequent, asynchronous and noisy, to guide exploration for reinforcement learning, without requiring careful reward specification. The key idea is to separate the challenges of directed exploration and policy learning - human feedback is used to direct exploration, while self-supervised policy learning is used to independently learn unbiased behaviors from the collected data. We show that this procedure can leverage noisy, asynchronous human feedback to learn tasks with no hand-crafted reward design or exploration bonuses. We show that HUGE is able to learn a variety of challenging multi-stage robotic navigation and manipulation tasks in simulation using crowdsourced feedback from non-expert users. Moreover, this paradigm can be scaled to learning directly on real-world robots.
Breadcrumbs to the Goal: Goal-Conditioned Exploration from Human-in-the-Loop Feedback
[ "Marcel Torne Villasevil", "Max Balsells I Pamies", "Zihan Wang", "Samedh Desai", "Tao Chen", "Pulkit Agrawal", "Abhishek Gupta" ]
Conference
poster
2307.11049
[ "https://github.com/improbable-ai/human-guided-exploration" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uNnPWR66b8
@inproceedings{ zhu2023sample, title={Sample Complexity Bounds for Score-Matching: Causal Discovery and Generative Modeling}, author={Zhenyu Zhu and Francesco Locatello and Volkan Cevher}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uNnPWR66b8} }
This paper provides statistical sample complexity bounds for score-matching and its applications in causal discovery. We demonstrate that accurate estimation of the score function is achievable by training a standard deep ReLU neural network using stochastic gradient descent. We establish bounds on the error rate of recovering causal relationships using the score-matching-based causal discovery method of Rolland et al. [2022], assuming a sufficiently good estimation of the score function. Finally, we analyze the upper bound of score-matching estimation within the score-based generative modeling, which has been applied for causal discovery but is also of independent interest within the domain of generative models.
Sample Complexity Bounds for Score-Matching: Causal Discovery and Generative Modeling
[ "Zhenyu Zhu", "Francesco Locatello", "Volkan Cevher" ]
Conference
poster
2310.18123
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uNmKBZrRZC
@inproceedings{ ying2023adaptive, title={Adaptive Linear Estimating Equations}, author={Mufang Ying and Koulik Khamaru and Cun-Hui Zhang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uNmKBZrRZC} }
Sequential data collection has emerged as a widely adopted technique for enhancing the efficiency of data gathering processes. Despite its advantages, such a data collection mechanism often introduces complexities to the statistical inference procedure. For instance, the ordinary least squares (OLS) estimator in an adaptive linear regression model can exhibit non-normal asymptotic behavior, posing challenges for accurate inference and interpretation. In this paper, we propose a general method for constructing a debiased estimator that remedies this issue. It makes use of the idea of adaptive linear estimating equations, and we establish theoretical guarantees of asymptotic normality, supplemented by discussions on achieving near-optimal asymptotic variance. A salient feature of our estimator is that in the context of multi-armed bandits, our estimator retains the non-asymptotic performance of the least squares estimator while obtaining the asymptotic normality property. Consequently, this work helps connect two fruitful paradigms of adaptive inference: a) non-asymptotic inference using concentration inequalities and b) asymptotic inference via asymptotic normality.
Adaptive Linear Estimating Equations
[ "Mufang Ying", "Koulik Khamaru", "Cun-Hui Zhang" ]
Conference
poster
2307.07320
[ "https://github.com/mufangying/alee" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uN71BdBEG8
@inproceedings{ gunderson2023the, title={The Graph Pencil Method: Mapping Subgraph Densities to Stochastic Block Models}, author={Lee M. Gunderson and Gecia Bravo-Hermsdorff and Peter Orbanz}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uN71BdBEG8} }
In this work, we describe a method that determines an exact map from a finite set of subgraph densities to the parameters of a stochastic block model (SBM) matching these densities. Given a number K of blocks, the subgraph densities of a finite number of stars and bistars uniquely determine a single element of the class of all degree-separated stochastic block models with K blocks. Our method makes it possible to translate estimates of these subgraph densities into model parameters, and hence to use subgraph densities directly for inference. The computational overhead is negligible; computing the translation map is polynomial in K, but independent of the graph size once the subgraph densities are given.
The Graph Pencil Method: Mapping Subgraph Densities to Stochastic Block Models
[ "Lee M. Gunderson", "Gecia Bravo-Hermsdorff", "Peter Orbanz" ]
Conference
poster
2402.00188
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uJmsYZiu3E
@inproceedings{ li2023fair, title={Fair Allocation of Indivisible Chores: Beyond Additive Costs}, author={Bo Li and Fangxiao Wang and Yu Zhou}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uJmsYZiu3E} }
We study the maximin share (MMS) fair allocation of $m$ indivisible tasks to $n$ agents who have costs for completing the assigned tasks. It is known that exact MMS fairness cannot be guaranteed, and so far the best-known approximation for additive cost functions is $\frac{13}{11}$ by Huang and Segal-Halevi [EC, 2023]; however, beyond additivity, very little is known. In this work, we first prove that no algorithm can ensure better than $\min\{n,\frac{\log m}{\log \log m}\}$-approximation if the cost functions are submodular. This result also shows a sharp contrast with the allocation of goods where constant approximations exist as shown by Barman and Krishnamurthy [TEAC, 2020] and Ghodsi et al. [AIJ, 2022]. We then prove that for subadditive costs, there always exists an allocation that is $\min\{n,\lceil\log m\rceil\}$-approximation, and thus the approximation ratio is asymptotically tight. Besides multiplicative approximation, we also consider the ordinal relaxation, 1-out-of-$d$ MMS, which was recently proposed by Hosseini et al. [JAIR and AAMAS, 2022]. Our impossibility result implies that for any $d\ge 2$, a 1-out-of-$d$ MMS allocation may not exist. Due to these hardness results for general subadditive costs, we turn to studying two specific subadditive costs, namely, bin packing and job scheduling. For both settings, we show that constant approximate allocations exist for both multiplicative and ordinal relaxations of MMS.
Fair Allocation of Indivisible Chores: Beyond Additive Costs
[ "Bo Li", "Fangxiao Wang", "Yu Zhou" ]
Conference
poster
2205.10520
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uJ3qNIsDGF
@inproceedings{ balasubramanian2023exploring, title={Exploring Geometry of Blind Spots in Vision models}, author={Sriram Balasubramanian and Gaurang Sriramanan and Vinu Sankar Sadasivan and Soheil Feizi}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uJ3qNIsDGF} }
Despite the remarkable success of deep neural networks in a myriad of settings, several works have demonstrated their overwhelming sensitivity to near-imperceptible perturbations, known as adversarial attacks. On the other hand, prior works have also observed that deep networks can be under-sensitive, wherein large-magnitude perturbations in input space do not induce appreciable changes to network activations. In this work, we study in detail the phenomenon of under-sensitivity in vision models such as CNNs and Transformers, and present techniques to study the geometry and extent of “equi-confidence” level sets of such networks. We propose a Level Set Traversal algorithm that iteratively explores regions of high confidence with respect to the input space using orthogonal components of the local gradients. Given a source image, we use this algorithm to identify inputs that lie in the same equi-confidence level set as the source image despite being perceptually similar to arbitrary images from other classes. We further observe that the source image is linearly connected by a high-confidence path to these inputs, uncovering a star-like structure for level sets of deep networks. Furthermore, we attempt to identify and estimate the extent of these connected higher-dimensional regions over which the model maintains a high degree of confidence.
Exploring Geometry of Blind Spots in Vision models
[ "Sriram Balasubramanian", "Gaurang Sriramanan", "Vinu Sankar Sadasivan", "Soheil Feizi" ]
Conference
spotlight
2310.19889
[ "https://github.com/sriramb-98/blindspots-neurips-sub" ]
-1
-1
-1
-1
0
[]
[]
[]
null
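The level-set exploration described in the abstract above can be illustrated with a short, hedged sketch: step toward a target image while projecting out the component along the local confidence gradient, so the model's confidence stays roughly constant. The step sizes, the small gradient correction, and the function name are illustrative assumptions, not the paper's exact Level Set Traversal algorithm.

import torch

def level_set_traversal(model, x_src, x_tgt, cls, steps=200, eta=1e-2, delta=1e-3):
    # Walk from x_src toward x_tgt while projecting each step onto the subspace
    # orthogonal to the local gradient of the class-`cls` confidence, keeping the
    # model's confidence roughly flat along the path.
    x = x_src.detach().clone()
    for _ in range(steps):
        x.requires_grad_(True)
        conf = torch.softmax(model(x.unsqueeze(0)), dim=-1)[0, cls]
        (g,) = torch.autograd.grad(conf, x)
        with torch.no_grad():
            pull = x_tgt - x                                    # direction toward the target image
            g_hat = g / (g.norm() + 1e-12)
            pull_perp = pull - (pull.flatten() @ g_hat.flatten()) * g_hat
            x = (x + eta * pull_perp + delta * g_hat).detach()  # flat move plus a tiny confidence boost
    return x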
https://openreview.net/forum?id=uFpjPJMkv6
@inproceedings{ zhang2023fairlisa, title={Fair{LISA}: Fair User Modeling with Limited Sensitive Attributes Information}, author={Zheng Zhang and Qi Liu and Hao Jiang and Fei Wang and Yan Zhuang and Le Wu and Weibo Gao and Enhong Chen}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uFpjPJMkv6} }
User modeling techniques profile users' latent characteristics (e.g., preferences) from their observed behaviors, and play a crucial role in decision-making. Unfortunately, traditional user models may unconsciously capture biases related to sensitive attributes (e.g., gender) from behavior data, even when this sensitive information is not explicitly provided. This can lead to unfairness and discrimination against certain groups based on these sensitive attributes. Recent studies have been proposed to improve fairness by explicitly decorrelating user modeling results and sensitive attributes. However, most existing approaches assume that sensitive attribute labels are fully available in the training set, which is unrealistic due to collection limitations like privacy concerns, and hence suffer from limited performance. In this paper, we focus on a practical situation with limited sensitive data and propose a novel FairLISA framework, which can efficiently utilize data with known and unknown sensitive attributes to facilitate fair model training. We first propose a novel theoretical perspective that relates data with both known and unknown sensitive attributes to the fairness objective. Then, based on this, we provide a general adversarial framework to effectively leverage all of the user data for fair user modeling. We conduct experiments on representative user modeling tasks including recommender systems and cognitive diagnosis. The results demonstrate that our FairLISA can effectively improve fairness while retaining high accuracy in scenarios with different ratios of missing sensitive attributes.
FairLISA: Fair User Modeling with Limited Sensitive Attributes Information
[ "Zheng Zhang", "Qi Liu", "Hao Jiang", "Fei Wang", "Yan Zhuang", "Le Wu", "Weibo Gao", "Enhong Chen" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uFlE0qgtRO
@inproceedings{ wang2023focus, title={Focus Your Attention when Few-Shot Classification}, author={Haoqing Wang and Shibo Jie and Zhi-Hong Deng}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uFlE0qgtRO} }
Since many pre-trained vision transformers have emerged and provide strong representations for various downstream tasks, we aim to adapt them to few-shot image classification in this work. The input images typically contain multiple entities. The model may not focus on the class-related entities for the current few-shot task, even with fine-tuning on support samples, and noisy information from the class-independent ones harms performance. To this end, we first propose a method that uses the attention and gradient information to automatically locate the positions of key entities, denoted as position prompts, in the support images. Then we employ the cross-entropy loss between their many-hot representation and the attention logits to optimize the model to focus its attention on the key entities during fine-tuning. This ability can then generalize to the query samples. Our method is applicable to different vision transformers (e.g., columnar or pyramidal ones), and also to different pre-training schemes (e.g., single-modal or vision-language pre-training). Extensive experiments show that our method can improve the performance of full or parameter-efficient fine-tuning methods on few-shot tasks. Code is available at https://github.com/Haoqing-Wang/FORT.
Focus Your Attention when Few-Shot Classification
[ "Haoqing Wang", "Shibo Jie", "Zhi-Hong Deng" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
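For the attention-focusing objective sketched in the abstract above, a minimal hedged example is given below: the many-hot position prompt is normalized into a target distribution over patch tokens and matched against the attention logits with a cross-entropy. Shapes, the normalization choice, and the lambda_focus weight are assumptions for illustration, not the paper's exact loss.

import torch
import torch.nn.functional as F

def attention_focus_loss(attn_logits, key_positions):
    # attn_logits: (B, N) pre-softmax attention scores of a query token (e.g., [CLS])
    # over N patch tokens; key_positions: (B, N) many-hot mask of patches that are
    # judged to contain class-related entities.
    key_positions = key_positions.float()
    target = key_positions / key_positions.sum(dim=-1, keepdim=True).clamp(min=1)
    return -(target * F.log_softmax(attn_logits, dim=-1)).sum(dim=-1).mean()

# Usage with hypothetical tensors:
# loss = task_loss + lambda_focus * attention_focus_loss(cls_to_patch_logits, position_prompts)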
https://openreview.net/forum?id=uEJfW3OtUm
@inproceedings{ zhao2023static, title={Static and Sequential Malicious Attacks in the Context of Selective Forgetting}, author={CHENXU ZHAO and Wei Qian and Zhitao Ying and Mengdi Huai}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uEJfW3OtUm} }
With the growing demand for the right to be forgotten, there is an increasing need for machine learning models to forget sensitive data and its impact. To address this, the paradigm of selective forgetting (a.k.a. machine unlearning) has been extensively studied, which aims to remove the impact of requested data from a well-trained model without retraining from scratch. Despite its significant success, limited attention has been given to the security vulnerabilities of the unlearning system concerning malicious data update requests. Motivated by this, in this paper, we explore the possibility and feasibility of malicious data update requests during the unlearning process. Specifically, we first propose a new class of malicious selective forgetting attacks, which involves a static scenario where all the malicious data update requests are provided by the adversary at once. Additionally, considering the sequential setting where the data update requests arrive sequentially, we also design a novel framework for sequential forgetting attacks, which is formulated as a stochastic optimal control problem. We also propose novel optimization algorithms that can find effective malicious data update requests. We perform theoretical analyses for the proposed selective forgetting attacks, and extensive experimental results validate the effectiveness of our proposed selective forgetting attacks. The source code is available in the supplementary material.
Static and Sequential Malicious Attacks in the Context of Selective Forgetting
[ "CHENXU ZHAO", "Wei Qian", "Zhitao Ying", "Mengdi Huai" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uDV4lA0gZ6
@inproceedings{ yang2023efficient, title={Efficient Robust Bayesian Optimization for Arbitrary Uncertain inputs}, author={Lin Yang and Junlong Lyu and Wenlong Lyu and Zhitang Chen}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=uDV4lA0gZ6} }
Bayesian Optimization (BO) is a sample-efficient optimization algorithm widely employed across various applications. In some challenging BO tasks, input uncertainty arises due to the inevitable randomness in the optimization process, such as machining errors, execution noise, or contextual variability. This uncertainty causes the input to deviate from the intended value before evaluation, resulting in significant performance fluctuations in the final result. In this paper, we introduce a novel robust Bayesian Optimization algorithm, AIRBO, which can effectively identify a robust optimum that performs consistently well under arbitrary input uncertainty. Our method directly models the uncertain inputs of arbitrary distributions by empowering the Gaussian Process with the Maximum Mean Discrepancy (MMD) and further accelerates the posterior inference via Nyström approximation. A rigorous theoretical regret bound is established under MMD estimation error, and extensive experiments on synthetic functions and real problems demonstrate that our approach can handle various input uncertainties and achieve state-of-the-art performance.
Efficient Robust Bayesian Optimization for Arbitrary Uncertain inputs
[ "Lin Yang", "Junlong Lyu", "Wenlong Lyu", "Zhitang Chen" ]
Conference
poster
2310.20145
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]