| Column | Type / value summary |
|---|---|
| bibtex_url | null |
| proceedings | string (lengths 42–42) |
| bibtext | string (lengths 197–848) |
| abstract | string (lengths 303–3.45k) |
| title | string (lengths 10–159) |
| authors | sequence (lengths 1–34, contains nulls ⌀) |
| id | string (44 classes) |
| arxiv_id | string (lengths 0–10) |
| GitHub | sequence (lengths 1–1) |
| paper_page | string (899 classes) |
| n_linked_authors | int64 (-1 to 13) |
| upvotes | int64 (-1 to 109) |
| num_comments | int64 (-1 to 13) |
| n_authors | int64 (-1 to 92) |
| Models | sequence (lengths 0–100) |
| Datasets | sequence (lengths 0–19) |
| Spaces | sequence (lengths 0–100) |
| old_Models | sequence (lengths 0–100) |
| old_Datasets | sequence (lengths 0–19) |
| old_Spaces | sequence (lengths 0–100) |
| paper_page_exists_pre_conf | int64 (0 to 1) |
| type | string (2 classes) |

Each record below is one NeurIPS 2024 paper, with `|`-separated cells in the column order listed above.
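As a quick-start aid, here is a minimal sketch of loading and querying a table with this schema via the Hugging Face `datasets` library. The repository id and split name are placeholders, not the actual Hub path of this dataset:

```python
from collections import Counter

from datasets import load_dataset

# Placeholder repo id and split -- substitute the actual Hub path of this dataset.
ds = load_dataset("your-org/neurips-2024-papers", split="train")

# The features should match the column schema listed above.
print(ds.features)

# Example: keep records that carry both an arXiv id and a non-empty GitHub link.
linked = ds.filter(
    lambda row: (row["arxiv_id"] or "") != ""
    and any(url for url in (row["GitHub"] or []))
)
print(f"{len(linked)} of {len(ds)} records link an arXiv id and a GitHub repo.")

# Example: tally the `type` column (2 classes, e.g. poster/oral in the rows below).
print(Counter(ds["type"]))
```

The defensive `or ""` / `or []` guards cover the nullable cells (⌀) visible in the schema summary above.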
null | https://openreview.net/forum?id=yQL5tutdaH | @inproceedings{
wei2024toward,
title={Toward a Stable, Fair, and Comprehensive Evaluation of Object Hallucination in Large Vision-Language Models},
author={Hongliang Wei and Xingtao Wang and Xianqi Zhang and Xiaopeng Fan and Debin Zhao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=yQL5tutdaH}
} | Given different instructions, large vision-language models (LVLMs) exhibit different degrees of object hallucinations, posing a significant challenge to the evaluation of object hallucinations. To overcome this challenge, existing object hallucination evaluation methods average the results obtained from a set of instructions. However, these methods fail to provide consistent evaluation across instruction sets that generate image descriptions of significantly different lengths. In this paper, we present the first systematic investigation of the effect of instructions on object hallucinations in LVLMs, with a specific focus on the role played by image description lengths. A valuable finding is that instructions indirectly affect hallucinations through the length of image descriptions. The longer the image description, the higher the object hallucination degree. Accordingly, we fit an informative length-hallucination curve, upon which a fine-grained evaluation framework named LeHaCE is introduced for evaluating object hallucinations at any given image description length. LeHaCE evaluates the object hallucination degree at a uniform image description length to mitigate the effect of description lengths, promoting stability and fairness. Moreover, LeHaCE incorporates the curve slope as an innovative hallucination evaluation metric, reflecting the extent to which the object hallucination degree is affected by the image description length, achieving a more comprehensive evaluation. Experimental results demonstrate that LeHaCE provides a more stable, fair, and comprehensive evaluation of object hallucinations in LVLMs compared to existing methods. | Toward a Stable, Fair, and Comprehensive Evaluation of Object Hallucination in Large Vision-Language Models | [
"Hongliang Wei",
"Xingtao Wang",
"Xianqi Zhang",
"Xiaopeng Fan",
"Debin Zhao"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=yPPNi7vc7n | @inproceedings{
osada2024local,
title={Local Curvature Smoothing with Stein's Identity for Efficient Score Matching},
author={GENKI OSADA and Makoto Shing and Takashi Nishide},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=yPPNi7vc7n}
} | The training of score-based diffusion models (SDMs) is based on score matching. The challenge of score matching is that it includes a computationally expensive Jacobian trace. While several methods have been proposed to avoid this computation, each has drawbacks, such as instability during training or approximating the objective as learning a denoising vector field rather than the true score.
We propose a novel score matching variant, local curvature smoothing with Stein's identity (LCSS). LCSS bypasses the Jacobian trace by applying Stein's identity, enabling both effective regularization and efficient computation. We show that LCSS surpasses existing methods in sample generation performance and matches the performance of denoising score matching, widely adopted by most SDMs, in evaluations such as FID, Inception score, and bits per dimension. Furthermore, we show that LCSS enables realistic image generation even at a high resolution of $1024 \times 1024$. | Local Curvature Smoothing with Stein's Identity for Efficient Score Matching | [
"GENKI OSADA",
"Makoto Shing",
"Takashi Nishide"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=yOe6ajdslI | @inproceedings{
kumagai2024auc,
title={{AUC} Maximization under Positive Distribution Shift},
author={Atsutoshi Kumagai and Tomoharu Iwata and Hiroshi Takahashi and Taishi Nishiyama and Yasuhiro Fujiwara},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=yOe6ajdslI}
} | Maximizing the area under the receiver operating characteristic curve (AUC) is a popular approach to imbalanced binary classification problems. Existing AUC maximization methods usually assume that training and test distributions are identical. However, this assumption is often violated in practice due to {\it a positive distribution shift}, where the negative-conditional density does not change but the positive-conditional density can vary. This shift often occurs in imbalanced classification since positive data are often more diverse and time-varying than negative data. To deal with this shift, we theoretically show that the AUC on the test distribution can be expressed by using the positive and marginal training densities and the marginal test density. Based on this result, we can maximize the AUC on the test distribution by using positive and unlabeled data in the training distribution and unlabeled data in the test distribution. The proposed method requires only positive labels in the training distribution as supervision. Moreover, the derived AUC has a simple form and thus is easy to implement. The effectiveness of the proposed method is shown with four real-world datasets. | AUC Maximization under Positive Distribution Shift | [
"Atsutoshi Kumagai",
"Tomoharu Iwata",
"Hiroshi Takahashi",
"Taishi Nishiyama",
"Yasuhiro Fujiwara"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=yO5DVyCHZR | @inproceedings{
yan2024a,
title={A Simple and Optimal Approach for Universal Online Learning with Gradient Variations},
author={Yu-Hu Yan and Peng Zhao and Zhi-Hua Zhou},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=yO5DVyCHZR}
} | We investigate the problem of universal online learning with gradient-variation regret. Universal online learning aims to achieve regret guarantees without prior knowledge of the curvature of the online functions. Moreover, we study the problem-dependent gradient-variation regret as it plays a crucial role in bridging stochastic and adversarial optimization as well as game theory. In this work, we design a universal approach with the *optimal* gradient-variation regret simultaneously for strongly convex, exp-concave, and convex functions, thus addressing an open problem highlighted by [Yan et al. [2023]](https://openreview.net/forum?id=AA1xrgAP5z). Our approach is *simple* since it is algorithmically efficient-to-implement with a two-layer online ensemble structure and only $1$ gradient query per round, and theoretically easy-to-analyze with a novel and alternative analysis to the gradient-variation regret. Concretely, previous works on gradient variations require controlling the algorithmic stability, which is challenging and leads to sub-optimal regret and less efficient algorithm design. Our analysis overcomes this issue by using a Bregman divergence negative term from linearization and a useful smoothness property. | A Simple and Optimal Approach for Universal Online Learning with Gradient Variations | [
"Yu-Hu Yan",
"Peng Zhao",
"Zhi-Hua Zhou"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=yMS7ansbr6 | @inproceedings{
liu2024lips,
title={Lips Are Lying: Spotting the Temporal Inconsistency between Audio and Visual in Lip-Syncing DeepFakes},
author={Weifeng Liu and Tianyi She and Jiawei Liu and Boheng Li and Dongyu Yao and Ziyou Liang and Run Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=yMS7ansbr6}
} | In recent years, DeepFake technology has achieved unprecedented success in high-quality video synthesis, but these methods also pose potential and severe security threats to humanity. DeepFake can be bifurcated into entertainment applications like face swapping and illicit uses such as lip-syncing fraud. However, lip-forgery videos, which neither change identity nor have discernible visual artifacts, present a formidable challenge to existing DeepFake detection methods. Our preliminary experiments have shown that the effectiveness of existing methods often decreases drastically, or they fail outright, when tackling lip-syncing videos.
In this paper, for the first time, we propose a novel approach dedicated to lip-forgery identification that exploits the inconsistency between lip movements and audio signals. We also mimic natural human cognition by capturing subtle biological links between lips and head regions to boost accuracy. To better illustrate the effectiveness and advances of our proposed method, we create a high-quality LipSync dataset, AVLips, by employing state-of-the-art lip generators. We hope this high-quality and diverse dataset will serve further research in this challenging and interesting field. Experimental results show that our approach achieves an average accuracy of more than 95.3% in spotting lip-syncing videos, significantly outperforming the baselines. Extensive experiments demonstrate the capability to tackle deepfakes and the robustness in surviving diverse input transformations. Our method achieves an accuracy of up to 90.2% in real-world scenarios (e.g., WeChat video calls) and shows its powerful capabilities in real-world deployment.
To facilitate the progress of this research community, we release all resources at https://github.com/AaronComo/LipFD. | Lips Are Lying: Spotting the Temporal Inconsistency between Audio and Visual in Lip-Syncing DeepFakes | [
"Weifeng Liu",
"Tianyi She",
"Jiawei Liu",
"Boheng Li",
"Dongyu Yao",
"Ziyou Liang",
"Run Wang"
] | NeurIPS.cc/2024/Conference | 2401.15668 | [
"https://github.com/aaroncomo/lipfd"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=yKvHJJE9le | @inproceedings{
li2024safe,
title={Safe Time-Varying Optimization based on Gaussian Processes with Spatio-Temporal Kernel},
author={Jialin Li and Marta Zagorowska and Giulia De Pasquale and Alisa Rupenyan and John Lygeros},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=yKvHJJE9le}
} | Ensuring safety is a key aspect in sequential decision making problems, such as robotics or process control. The complexity of the underlying systems often makes finding the optimal decision challenging, especially when the safety-critical system is time-varying. Overcoming the problem of optimizing an unknown time-varying reward subject to unknown time-varying safety constraints, we propose TVSAFEOPT, a new algorithm built on Bayesian optimization with a spatio-temporal kernel. The algorithm is capable of safely tracking a time-varying safe region without the need for explicit change detection. Optimality guarantees are also provided for the algorithm when the optimization problem becomes stationary. We show that TVSAFEOPT compares favorably against SAFEOPT on synthetic data, both regarding safety and optimality. Evaluation on a realistic case study with gas compressors confirms that TVSAFEOPT ensures safety when solving time-varying optimization problems with unknown reward and safety functions. | Safe Time-Varying Optimization based on Gaussian Processes with Spatio-Temporal Kernel | [
"Jialin Li",
"Marta Zagorowska",
"Giulia De Pasquale",
"Alisa Rupenyan",
"John Lygeros"
] | NeurIPS.cc/2024/Conference | 2409.18000 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=yDo1ynArjj | @inproceedings{
chen2024diffusion,
title={Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion},
author={Boyuan Chen and Diego Mart{\'\i} Mons{\'o} and Yilun Du and Max Simchowitz and Russ Tedrake and Vincent Sitzmann},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=yDo1ynArjj}
} | This paper presents Diffusion Forcing, a new training paradigm where a diffusion model is trained to denoise a set of tokens with independent per-token noise levels. We apply Diffusion Forcing to sequence generative modeling by training a causal next-token prediction model to generate one or several future tokens without fully diffusing past ones. Our approach is shown to combine the strengths of next-token prediction models, such as variable-length generation, with the strengths of full-sequence diffusion models, such as the ability to guide sampling to desirable trajectories. Our method offers a range of additional capabilities, such as (1) rolling out sequences of continuous tokens, such as video, with lengths past the training horizon, where baselines diverge, and (2) new sampling and guiding schemes that uniquely profit from Diffusion Forcing's variable-horizon and causal architecture, and which lead to marked performance gains in decision-making and planning tasks. In addition to its empirical success, our method is proven to optimize a variational lower bound on the likelihoods of all subsequences of tokens drawn from the true joint distribution. Project website: https://boyuan.space/diffusion-forcing/ | Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion | [
"Boyuan Chen",
"Diego Martí Monsó",
"Yilun Du",
"Max Simchowitz",
"Russ Tedrake",
"Vincent Sitzmann"
] | NeurIPS.cc/2024/Conference | 2407.01392 | [
"https://github.com/buoyancy99/diffusion-forcing"
] | https://huggingface.co/papers/2407.01392 | 5 | 39 | 1 | 6 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=yDjojeIWO9 | @inproceedings{
xia2024transferable,
title={Transferable Adversarial Attacks on {SAM} and Its Downstream Models},
author={Song Xia and Wenhan Yang and Yi Yu and Xun Lin and Henghui Ding and LINGYU DUAN and Xudong Jiang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=yDjojeIWO9}
} | The utilization of large foundation models presents a dilemma: while fine-tuning downstream tasks from them holds promise for making use of their well-generalized knowledge in practical applications, their open accessibility also poses threats of adverse usage.
This paper, for the first time, explores the feasibility of adversarially attacking various downstream models fine-tuned from the segment anything model (SAM), by solely utilizing the information from the open-sourced SAM.
In contrast to prevailing transfer-based adversarial attacks, we demonstrate the existence of adversarial dangers even without accessing the downstream task and dataset to train a similar surrogate model.
To enhance the effectiveness of the adversarial attack towards models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm to extract the intrinsic vulnerability inherent in the foundation model, which is then utilized as the prior knowledge to guide the generation of adversarial perturbations.
Moreover, by formulating the gradient difference in the attacking process between the open-sourced SAM and its fine-tuned downstream models, we theoretically demonstrate that a deviation occurs in the adversarial update direction by directly maximizing the distance of encoded feature embeddings in the open-sourced SAM.
Consequently, we propose a gradient robust loss that simulates the associated uncertainty with gradient-based noise augmentation to enhance the robustness of generated adversarial examples (AEs) towards this deviation, thus improving the transferability.
Extensive experiments demonstrate the effectiveness of the proposed universal meta-initialized and gradient robust adversarial attack (UMI-GRAT) toward SAMs and their downstream models.
Code is available at https://github.com/xiasong0501/GRAT. | Transferable Adversarial Attacks on SAM and Its Downstream Models | [
"Song Xia",
"Wenhan Yang",
"Yi Yu",
"Xun Lin",
"Henghui Ding",
"LINGYU DUAN",
"Xudong Jiang"
] | NeurIPS.cc/2024/Conference | 2410.20197 | [
"https://github.com/xiasong0501/grat"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=yCh1z6Dcto | @inproceedings{
feng2024stepping,
title={Stepping Forward on the Last Mile},
author={Chen Feng and Shaojie Zhuo and Xiaopeng Zhang and Ramchalam Kinattinkara Ramakrishnan and Zhaocong Yuan and Andrew Zou Li},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=yCh1z6Dcto}
} | Continuously adapting pre-trained models to local data on resource-constrained edge devices is the \emph{last mile} for model deployment. However, as models increase in size and depth, backpropagation requires a large amount of memory, which becomes prohibitive for edge devices. In addition, most existing low-power neural processing engines (e.g., NPUs, DSPs, MCUs, etc.) are designed as fixed-point inference accelerators, without training capabilities. Forward gradients, solely based on directional derivatives computed from two forward calls, have been recently used for model training, with substantial savings in computation and memory. However, the performance of quantized training with fixed-point forward gradients remains unclear. In this paper, we investigate the feasibility of on-device training using fixed-point forward gradients, by conducting comprehensive experiments across a variety of deep learning benchmark tasks in both vision and audio domains. We propose a series of algorithm enhancements that further reduce the memory footprint and the accuracy gap compared to backpropagation. We further present an empirical study of how training with forward gradients navigates the loss landscape. Our results demonstrate that on the last mile of model customization on edge devices, training with fixed-point forward gradients is a feasible and practical approach. | Stepping Forward on the Last Mile | [
"Chen Feng",
"Shaojie Zhuo",
"Xiaopeng Zhang",
"Ramchalam Kinattinkara Ramakrishnan",
"Zhaocong Yuan",
"Andrew Zou Li"
] | NeurIPS.cc/2024/Conference | 2411.04036 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=yBrxziByeG | @inproceedings{
zhang2024textdifuse,
title={Text-DiFuse: An Interactive Multi-Modal Image Fusion Framework based on Text-modulated Diffusion Model},
author={Hao Zhang and Lei Cao and Jiayi Ma},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=yBrxziByeG}
} | Existing multi-modal image fusion methods fail to address the compound degradations present in source images, resulting in fusion images plagued by noise, color bias, improper exposure, etc. Additionally, these methods often overlook the specificity of foreground objects, weakening the salience of the objects of interest within the fused images. To address these challenges, this study proposes a novel interactive multi-modal image fusion framework based on the text-modulated diffusion model, called Text-DiFuse. First, this framework integrates feature-level information integration into the diffusion process, allowing adaptive degradation removal and multi-modal information fusion. This is the first attempt to deeply and explicitly embed information fusion within the diffusion process, effectively addressing compound degradation in image fusion. Second, by embedding the combination of the text and zero-shot location model into the diffusion fusion process, a text-controlled fusion re-modulation strategy is developed. This enables user-customized text control to improve fusion performance and highlight foreground objects in the fused images. Extensive experiments on diverse public datasets show that our Text-DiFuse achieves state-of-the-art fusion performance across various scenarios with complex degradation. Moreover, the semantic segmentation experiment validates the significant enhancement in semantic performance achieved by our text-controlled fusion re-modulation strategy. The code is publicly available at https://github.com/Leiii-Cao/Text-DiFuse. | Text-DiFuse: An Interactive Multi-Modal Image Fusion Framework based on Text-modulated Diffusion Model | [
"Hao Zhang",
"Lei Cao",
"Jiayi Ma"
] | NeurIPS.cc/2024/Conference | 2410.23905 | [
"https://github.com/leiii-cao/text-difuse"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=yBHbeSpwYS | @inproceedings{
chen2024in,
title={In Pursuit of Causal Label Correlations for Multi-label Image Recognition},
author={Zhao-Min Chen and Xin Jin and YisuGe and Sixian Chan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=yBHbeSpwYS}
} | Multi-label image recognition aims to predict all objects present in an input image. A common belief is that modeling the correlations between objects is beneficial for multi-label recognition. However, this belief has been recently challenged as label correlations may mislead the classifier in testing, due to the possible contextual bias in training. Accordingly, a few recent works not only discarded label correlation modeling but also advocated removing contextual information for multi-label image recognition. This work explicitly explores label correlations for multi-label image recognition based on a principled causal intervention approach. With causal intervention, we pursue causal label correlations and suppress spurious label correlations, as the former tend to convey useful contextual cues while the latter may mislead the classifier. Specifically, we decouple label-specific features with a Transformer decoder attached to the backbone network, and model the confounders which may give rise to spurious correlations by clustering spatial features of all training images. Based on label-specific features and confounders, we employ a cross-attention module to implement causal intervention, quantifying the causal correlations from all object categories to each predicted object category. Finally, we obtain image labels by combining the predictions from decoupled features and causal label correlations. Extensive experiments clearly validate the effectiveness of our approach for multi-label image recognition in both common and cross-dataset settings. | In Pursuit of Causal Label Correlations for Multi-label Image Recognition | [
"Zhao-Min Chen",
"Xin Jin",
"YisuGe",
"Sixian Chan"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=yAa5l92TtQ | @inproceedings{
wang2024proving,
title={Proving Theorems Recursively},
author={Haiming Wang and Huajian Xin and Zhengying Liu and Wenda Li and Yinya Huang and Jianqiao Lu and Zhicheng YANG and Jing Tang and Jian Yin and Zhenguo Li and Xiaodan Liang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=yAa5l92TtQ}
} | Recent advances in automated theorem proving leverage language models to explore expanded search spaces through step-by-step proof generation. However, such approaches are usually based on short-sighted heuristics (e.g., log probability or value function scores) that potentially lead to suboptimal or even distracting subgoals, preventing us from finding longer proofs. To address this challenge, we propose POETRY (PrOvE Theorems RecursivelY), which proves theorems in a recursive, level-by-level manner in the Isabelle theorem prover. Unlike previous step-by-step methods, POETRY searches for a verifiable sketch of the proof at each level and focuses on solving the current level's theorem or conjecture. Detailed proofs of intermediate conjectures within the sketch are temporarily replaced by a placeholder tactic called sorry, deferring their proofs to subsequent levels. This approach allows the theorem to be tackled incrementally by outlining the overall theorem at the first level and then solving the intermediate conjectures at deeper levels. Experiments are conducted on the miniF2F and PISA datasets and significant performance gains are observed in our POETRY approach over state-of-the-art methods. POETRY on miniF2F achieves an average proving success rate improvement of 5.1%. Moreover, we observe a substantial increase in the maximum proof length found by POETRY, from 10 to 26. | Proving Theorems Recursively | [
"Haiming Wang",
"Huajian Xin",
"Zhengying Liu",
"Wenda Li",
"Yinya Huang",
"Jianqiao Lu",
"Zhicheng YANG",
"Jing Tang",
"Jian Yin",
"Zhenguo Li",
"Xiaodan Liang"
] | NeurIPS.cc/2024/Conference | 2405.14414 | [
"https://github.com/wiio12/poetry"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=yAAQWBMGiT | @inproceedings{
dong2024sketchy,
title={Sketchy Moment Matching: Toward Fast and Provable Data Selection for Finetuning},
author={Yijun Dong and Hoang Phan and Xiang Pan and Qi Lei},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=yAAQWBMGiT}
} | We revisit data selection in a modern context of finetuning from a fundamental perspective. Extending the classical wisdom of variance minimization in low dimensions to high-dimensional finetuning, our generalization analysis unveils the importance of additionally reducing bias induced by low-rank approximation. Inspired by the variance-bias tradeoff in high dimensions from the theory, we introduce Sketchy Moment Matching (SkMM), a scalable data selection scheme with two stages. (i) First, the bias is controlled using gradient sketching that explores the finetuning parameter space for an informative low-dimensional subspace $\mathcal{S}$; (ii) then the variance is reduced over $\mathcal{S}$ via moment matching between the original and selected datasets. Theoretically, we show that gradient sketching is fast and provably accurate: selecting $n$ samples by reducing variance over $\mathcal{S}$ preserves the fast-rate generalization $O(\dim(\mathcal{S})/n)$, independent of the parameter dimension. Empirically, we concretize the variance-bias balance via synthetic experiments and demonstrate the effectiveness of SkMM for finetuning in real vision tasks. | Sketchy Moment Matching: Toward Fast and Provable Data Selection for Finetuning | [
"Yijun Dong",
"Hoang Phan",
"Xiang Pan",
"Qi Lei"
] | NeurIPS.cc/2024/Conference | 2407.06120 | [
"https://github.com/xiang-pan/sketchy_moment_matching"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=y9zIRxshzj | @inproceedings{
c{\"u}ppers2024causal,
title={Causal Discovery from Event Sequences by Local Cause-Effect Attribution},
author={Joscha C{\"u}ppers and Sascha Xu and Ahmed Musa and Jilles Vreeken},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=y9zIRxshzj}
} | Sequences of events, such as crashes in the stock market or outages in a network, contain strong temporal dependencies; understanding these dependencies is crucial for reacting to and influencing future events. In this paper, we study the problem of discovering the underlying causal structure from event sequences. To this end, we introduce a new causal model, where individual events of the cause trigger events of the effect with dynamic delays. We show that in contrast to existing methods based on Granger causality, our model is identifiable for both instant and delayed effects.
We base our approach on the Algorithmic Markov Condition, by which we identify the true causal network as the one that minimizes the Kolmogorov complexity. As the Kolmogorov complexity is not computable, we instantiate our model using Minimum Description Length and show that the resulting score identifies the causal direction. To discover causal graphs, we introduce the Cascade algorithm, which adds edges in topological order. Extensive evaluation shows that Cascade outperforms existing methods in settings with instantaneous effects, noise, and multiple colliders, and discovers insightful causal graphs on real-world data. | Causal Discovery from Event Sequences by Local Cause-Effect Attribution | [
"Joscha Cüppers",
"Sascha Xu",
"Ahmed Musa",
"Jilles Vreeken"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=y9sHKrdnRt | @inproceedings{
zheng2024mcdit,
title={{MC}-DiT: Contextual Enhancement via Clean-to-Clean Reconstruction for Masked Diffusion Models},
author={Guanghao Zheng and Yuchen Liu and Wenrui Dai and Chenglin Li and Junni Zou and Hongkai Xiong},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=y9sHKrdnRt}
} | Diffusion Transformer (DiT) is emerging as a cutting-edge trend in the landscape of generative diffusion models for image generation. Recently, masked-reconstruction strategies have been considered to improve the efficiency and semantic consistency in training DiT but suffer from deficiency in contextual information extraction. In this paper, we provide a new insight to reveal that noisy-to-noisy masked-reconstruction harms sufficient utilization of contextual information. We further demonstrate the insight with theoretical analysis and empirical study on the mutual information between unmasked and masked patches. Guided by such insight, we propose a novel training paradigm named MC-DiT for fully learning contextual information via diffusion denoising at different noise variances with clean-to-clean mask-reconstruction. Moreover, to avoid model collapse, we design two complementary branches of DiT decoders for enhancing the use of noisy patches and mitigating excessive reliance on clean patches in reconstruction. Extensive experimental results on 256$\times$256 and 512$\times$512 image generation on the ImageNet dataset demonstrate that the proposed MC-DiT achieves state-of-the-art performance in unconditional and conditional image generation with enhanced convergence speed. | MC-DiT: Contextual Enhancement via Clean-to-Clean Reconstruction for Masked Diffusion Models | [
"Guanghao Zheng",
"Yuchen Liu",
"Wenrui Dai",
"Chenglin Li",
"Junni Zou",
"Hongkai Xiong"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=y9huwsnGRJ | @inproceedings{
mei2024continuously,
title={Continuously Learning, Adapting, and Improving: A Dual-Process Approach to Autonomous Driving},
author={Jianbiao Mei and Yukai Ma and Xuemeng Yang and Licheng Wen and Xinyu Cai and Xin Li and Daocheng Fu and Bo Zhang and Pinlong Cai and Min Dou and Botian Shi and Liang He and Yong Liu and Yu Qiao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=y9huwsnGRJ}
} | Autonomous driving has advanced significantly due to improvements in sensors, machine learning, and artificial intelligence. However, prevailing methods struggle with intricate scenarios and causal relationships, hindering adaptability and interpretability in varied environments. To address the above problems, we introduce LeapAD, a novel paradigm for autonomous driving inspired by the human cognitive process. Specifically, LeapAD emulates human attention by selecting critical objects relevant to driving decisions, simplifying environmental interpretation, and mitigating decision-making complexities. Additionally, LeapAD incorporates an innovative dual-process decision-making module, which consists of an Analytic Process (System-II) for thorough analysis and reasoning, along with a Heuristic Process (System-I) for swift and empirical processing. The Analytic Process leverages its logical reasoning to accumulate linguistic driving experience, which is then transferred to the Heuristic Process by supervised fine-tuning. Through reflection mechanisms and a growing memory bank, LeapAD continuously improves itself from past mistakes in a closed-loop environment. Closed-loop testing in CARLA shows that LeapAD outperforms all methods relying solely on camera input, requiring 1-2 orders of magnitude less labeled data. Experiments also demonstrate that as the memory bank expands, the Heuristic Process with only 1.8B parameters can inherit the knowledge from a GPT-4 powered Analytic Process and achieve continuous performance improvement. Project page: https://pjlab-adg.github.io/LeapAD | Continuously Learning, Adapting, and Improving: A Dual-Process Approach to Autonomous Driving | [
"Jianbiao Mei",
"Yukai Ma",
"Xuemeng Yang",
"Licheng Wen",
"Xinyu Cai",
"Xin Li",
"Daocheng Fu",
"Bo Zhang",
"Pinlong Cai",
"Min Dou",
"Botian Shi",
"Liang He",
"Yong Liu",
"Yu Qiao"
] | NeurIPS.cc/2024/Conference | 2405.15324 | [
"https://github.com/pjlab-adg/leapad"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=y929esCZNJ | @inproceedings{
teo2024momentumsmoe,
title={Momentum{SM}oE: Integrating Momentum into Sparse Mixture of Experts},
author={Rachel Teo and Tan Minh Nguyen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=y929esCZNJ}
} | Sparse Mixture of Experts (SMoE) has become the key to unlocking unparalleled scalability in deep learning. SMoE has the potential to exponentially increase in parameter count while maintaining the efficiency of the model by only activating a small subset of these parameters for a given sample. However, it has been observed that SMoE suffers from unstable training and has difficulty adapting to new distributions, leading to the model's lack of robustness to data contamination. To overcome these limitations, we first establish a connection between the dynamics of the expert representations in SMoEs and gradient descent on a multi-objective optimization problem. Leveraging our framework, we then integrate momentum into SMoE and propose a new family of SMoEs, named MomentumSMoE. We theoretically prove and numerically validate that MomentumSMoE is more stable and robust than SMoE. In particular, we verify the advantages of MomentumSMoE over SMoE on a variety of practical tasks including ImageNet-1K object recognition and WikiText-103 language modeling. We demonstrate the applicability of MomentumSMoE to many types of SMoE models, including those in the Sparse MoE model for vision (V-MoE) and the Generalist Language Model (GLaM). We also show that other advanced momentum-based optimization methods, such as Adam, can be easily incorporated into the MomentumSMoE framework for designing new SMoE models with even better performance, almost negligible additional computation cost, and simple implementations. | MomentumSMoE: Integrating Momentum into Sparse Mixture of Experts | [
"Rachel Teo",
"Tan Minh Nguyen"
] | NeurIPS.cc/2024/Conference | 2410.14574 | [
"https://github.com/rachtsy/momentumsmoe"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=y8Rm4VNRPH | @inproceedings{
yang2024parallelizing,
title={Parallelizing Linear Transformers with the Delta Rule over Sequence Length},
author={Songlin Yang and Bailin Wang and Yu Zhang and Yikang Shen and Yoon Kim},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=y8Rm4VNRPH}
} | Transformers with linear attention (i.e., linear transformers) and state-space models have recently been suggested as a viable linear-time alternative to transformers with softmax attention. However, these models still underperform transformers especially on tasks that require in-context retrieval. While more expressive variants of linear transformers which replace the additive update in linear transformers with the delta rule (DeltaNet) have been found to be more effective at associative recall, existing algorithms for training such models do not parallelize over sequence length and are thus inefficient to train on modern hardware. This work describes a hardware-efficient algorithm for training linear transformers with the delta rule, which exploits a memory-efficient representation for computing products of Householder matrices. This algorithm allows us to scale up DeltaNet to standard language modeling settings. We train a 1.3B model for 100B tokens and find that it outperforms recent linear-time baselines such as Mamba and GLA in terms of perplexity and zero-shot performance on downstream tasks. We also experiment with two hybrid models which combine DeltaNet layers with (1) sliding-window attention layers every other layer or (2) two global attention layers, and find that these hybrids outperform strong transformer baselines. | Parallelizing Linear Transformers with the Delta Rule over Sequence Length | [
"Songlin Yang",
"Bailin Wang",
"Yu Zhang",
"Yikang Shen",
"Yoon Kim"
] | NeurIPS.cc/2024/Conference | 2406.06484 | [
"https://github.com/sustcsonglin/flash-linear-attention"
] | https://huggingface.co/papers/2406.06484 | 3 | 3 | 1 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=y8P633E5HQ | @inproceedings{
lin2024equivariant,
title={Equivariant Machine Learning on Graphs with Nonlinear Spectral Filters},
author={Ya-Wei Eileen Lin and Ronen Talmon and Ron Levie},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=y8P633E5HQ}
} | Equivariant machine learning is an approach for designing deep learning models that respect the symmetries of the problem, with the aim of reducing model complexity and improving generalization.
In this paper, we focus on an extension of shift equivariance, which is the basis of convolution networks on images, to general graphs. Unlike images, graphs do not have a natural notion of domain translation.
Therefore, we consider the graph functional shifts as the symmetry group: the unitary operators that commute with the graph shift operator.
Notably, such symmetries operate in the signal space rather than directly in the spatial space.
We remark that each linear filter layer of a standard spectral graph neural network (GNN) commutes with graph functional shifts, but the activation function breaks this symmetry. Instead, we propose nonlinear spectral filters (NLSFs) that are fully equivariant to graph functional shifts and show that they have universal approximation properties.
The proposed NLSFs are based on a new form of spectral domain that is transferable between graphs.
We demonstrate the superior performance of NLSFs over existing spectral GNNs in node and graph classification benchmarks. | Equivariant Machine Learning on Graphs with Nonlinear Spectral Filters | [
"Ya-Wei Eileen Lin",
"Ronen Talmon",
"Ron Levie"
] | NeurIPS.cc/2024/Conference | 2406.01249 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=y8HUXkwAOg | @inproceedings{
vareille2024chronoepilogi,
title={ChronoEpilogi: Scalable Time Series Selection with Multiple Solutions},
author={Etienne Vareille and Michele Linardi and Ioannis Tsamardinos and Vassilis Christophides},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=y8HUXkwAOg}
} | We consider the problem of selecting all the minimal-size subsets of multivariate time-series (TS) variables whose past leads to an optimal predictive model for the future (forecasting) of a given target variable (the multiple feature selection problem for time-series). Identifying these subsets leads to gaining insights, domain intuition, and a better understanding of the data-generating mechanism; it is often the first step in causal modeling. While identifying a single solution to the feature selection problem suffices for forecasting purposes, identifying all such minimal-size, optimally predictive subsets is necessary for knowledge discovery and important to avoid misleading a practitioner. We develop the theory of multiple feature selection for time-series data, propose the ChronoEpilogi algorithm, and prove its soundness and completeness under two mild, broad, non-parametric distributional assumptions, namely Compositionality of the distribution and Interchangeability of time-series variables in solutions. Experiments on synthetic and real datasets demonstrate the scalability of ChronoEpilogi to hundreds of TS variables and its efficacy in identifying multiple solutions. In the real datasets, ChronoEpilogi is shown to reduce the number of TS variables by 96% (on average) while conserving or even improving forecasting performance. Furthermore, it is on par with GroupLasso performance, with the added benefit of providing multiple solutions. | ChronoEpilogi: Scalable Time Series Selection with Multiple Solutions | [
"Etienne Vareille",
"Michele Linardi",
"Ioannis Tsamardinos",
"Vassilis Christophides"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=y7oxY5pq4j | @inproceedings{
yang2024robir,
title={Rob{IR}: Robust Inverse Rendering for High-Illumination Scenes},
author={Ziyi Yang and Chenyanzhen and Xinyu Gao and YazhenYuan and Wu Yu and Xiaowei Zhou and Xiaogang Jin},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=y7oxY5pq4j}
} | Implicit representation has opened up new possibilities for inverse rendering. However, existing implicit neural inverse rendering methods struggle to handle strongly illuminated scenes with significant shadows and slight reflections. The existence of shadows and reflections can lead to an inaccurate understanding of the scene, making precise factorization difficult. To this end, we present RobIR, an implicit inverse rendering approach that uses ACES tone mapping and regularized visibility estimation to reconstruct accurate BRDF of the object. By accurately modeling the indirect radiance field, normal, visibility, and direct light simultaneously, we are able to accurately decouple environment lighting and the object's PBR materials without imposing strict constraints on the scene. Even in high-illumination scenes with shadows and specular reflections, our method can recover high-quality albedo and roughness with no shadow interference. RobIR outperforms existing methods in both quantitative and qualitative evaluations. | RobIR: Robust Inverse Rendering for High-Illumination Scenes | [
"Ziyi Yang",
"Chenyanzhen",
"Xinyu Gao",
"YazhenYuan",
"Wu Yu",
"Xiaowei Zhou",
"Xiaogang Jin"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=y6qhVtFG77 | @inproceedings{
li2024neurobolt,
title={Neuro{BOLT}: Resting-state {EEG}-to-f{MRI} Synthesis with Multi-dimensional Feature Mapping},
author={Yamin Li and Ange Lou and Ziyuan Xu and SHENGCHAO ZHANG and Shiyu Wang and Dario J. Englot and Soheil Kolouri and Daniel Moyer and Roza G Bayrak and Catie Chang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=y6qhVtFG77}
} | Functional magnetic resonance imaging (fMRI) is an indispensable tool in modern neuroscience, providing a non-invasive window into whole-brain dynamics at millimeter-scale spatial resolution. However, fMRI is constrained by issues such as high operation costs and immobility. With the rapid advancements in cross-modality synthesis and brain decoding, the use of deep neural networks has emerged as a promising solution for inferring whole-brain, high-resolution fMRI features directly from electroencephalography (EEG), a more widely accessible and portable neuroimaging modality. Nonetheless, the complex projection from neural activity to fMRI hemodynamic responses and the spatial ambiguity of EEG pose substantial challenges both in modeling and interpretability. Relatively few studies to date have developed approaches for EEG-fMRI translation, and although they have made significant strides, the inference of fMRI signals in a given study has been limited to a small set of brain areas and to a single condition (i.e., either resting-state or a specific task). The capability to predict fMRI signals in other brain areas, as well as to generalize across conditions, remain critical gaps in the field. To tackle these challenges, we introduce a novel and generalizable framework: NeuroBOLT, i.e., Neuro-to-BOLD Transformer, which leverages multi-dimensional representation learning from temporal, spatial, and spectral domains to translate raw EEG data to the corresponding fMRI activity signals across the brain. Our experiments demonstrate that NeuroBOLT effectively reconstructs unseen resting-state fMRI signals from primary sensory, high-level cognitive areas, and deep subcortical brain regions, achieving state-of-the-art accuracy with the potential to generalize across varying conditions and sites, which significantly advances the integration of these two modalities. | NeuroBOLT: Resting-state EEG-to-fMRI Synthesis with Multi-dimensional Feature Mapping | [
"Yamin Li",
"Ange Lou",
"Ziyuan Xu",
"SHENGCHAO ZHANG",
"Shiyu Wang",
"Dario J. Englot",
"Soheil Kolouri",
"Daniel Moyer",
"Roza G Bayrak",
"Catie Chang"
] | NeurIPS.cc/2024/Conference | 2410.05341 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=y6JotynERr | @inproceedings{
morafah2024towards,
title={Towards Diverse Device Heterogeneous Federated Learning via Task Arithmetic Knowledge Integration},
author={Mahdi Morafah and Vyacheslav Kungurtsev and Hojin Matthew Chang and Chen Chen and Bill Lin},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=y6JotynERr}
} | Federated Learning (FL) has emerged as a promising paradigm for collaborative machine learning, while preserving user data privacy. Despite its potential, standard FL algorithms lack support for diverse heterogeneous device prototypes, which vary significantly in model and dataset sizes---from small IoT devices to large workstations. This limitation is only partially addressed by existing knowledge distillation (KD) techniques, which often fail to transfer knowledge effectively across a broad spectrum of device prototypes with varied capabilities. This failure primarily stems from two issues: the dilution of informative logits from more capable devices by those from less capable ones, and the use of single integrated logits as the distillation target across all devices, which neglects their individual learning capacities and the unique contributions of each device. To address these challenges, we introduce TAKFL, a novel KD-based framework that treats the knowledge transfer from each device prototype's ensemble as a separate task, independently distilling each to preserve its unique contributions and avoid dilution. TAKFL also incorporates a KD-based self-regularization technique to mitigate the issues related to the noisy and unsupervised ensemble distillation process. To integrate the separately distilled knowledge, we introduce an adaptive task arithmetic knowledge integration process, allowing each student model to customize the knowledge integration for optimal performance. Additionally, we present theoretical results demonstrating the effectiveness of task arithmetic in transferring knowledge across heterogeneous device prototypes with varying capacities. Comprehensive evaluations of our method across both computer vision (CV) and natural language processing (NLP) tasks demonstrate that TAKFL achieves state-of-the-art results in a variety of datasets and settings, significantly outperforming existing KD-based methods. Our code is released at https://github.com/MMorafah/TAKFL and the project website is available at https://mmorafah.github.io/takflpage . | Towards Diverse Device Heterogeneous Federated Learning via Task Arithmetic Knowledge Integration | [
"Mahdi Morafah",
"Vyacheslav Kungurtsev",
"Hojin Matthew Chang",
"Chen Chen",
"Bill Lin"
] | NeurIPS.cc/2024/Conference | 2409.18461 | [
"https://github.com/mmorafah/takfl"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=y2fAmldTIf | @inproceedings{
zhang2024heprune,
title={{HEP}rune: Fast Private Training of Deep Neural Networks With Encrypted Data Pruning},
author={Yancheng Zhang and Mengxin Zheng and Yuzhang Shang and Xun Chen and Qian Lou},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=y2fAmldTIf}
} | Fully Homomorphic Encryption (FHE), a form of non-interactive cryptographic computing, provides a promising solution for private neural network training on encrypted data. One challenge of FHE-based private training is its large computational overhead, especially the multiple rounds of forward and backward execution on each encrypted data sample. Given the existence of largely redundant data samples, pruning them will significantly speed up the training, as proven in plain (non-FHE) training.
Executing data pruning on encrypted data on the server side is not trivial, since the knowledge computation required for data pruning involves complex and expensive executions on encrypted data. There is a lack of FHE-based data-pruning protocols for efficient, private training. In this paper, we propose \textit{HEPrune}, which constructs an FHE data-pruning protocol and then designs an FHE-friendly data-pruning algorithm under client-aided and non-client-aided settings, respectively. We also observed that data sample pruning may not always remove ciphertexts, leaving large empty slots and limiting the effects of data pruning. Thus, in HEPrune, we further propose ciphertext-wise pruning to reduce the number of ciphertext computations without hurting accuracy. Experimental results show that our work can achieve a $16\times$ speedup with only a $0.6\%$ accuracy drop over prior work.
The code is publicly available at https://github.com/UCF-Lou-Lab-PET/Private-Data-Prune. | HEPrune: Fast Private Training of Deep Neural Networks With Encrypted Data Pruning | [
"Yancheng Zhang",
"Mengxin Zheng",
"Yuzhang Shang",
"Xun Chen",
"Qian Lou"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=y10avdRFNK | @inproceedings{
terpin2024learning,
title={Learning diffusion at lightspeed},
author={Antonio Terpin and Nicolas Lanzetti and Mart{\'\i}n Gadea and Florian Dorfler},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=y10avdRFNK}
} | Diffusion regulates numerous natural processes and the dynamics of many successful generative models. Existing models to learn the diffusion terms from observational data rely on complex bilevel optimization problems and model only the drift of the system.
We propose a new simple model, JKOnet*, which bypasses the complexity of existing architectures while presenting significantly enhanced representational capabilities: JKOnet* recovers the potential, interaction, and internal energy components of the underlying diffusion process. JKOnet* minimizes a simple quadratic loss and outperforms other baselines in terms of sample efficiency, computational complexity, and accuracy. Additionally, JKOnet* provides a closed-form optimal solution for linearly parametrized functionals, and, when applied to predict the evolution of cellular processes from real-world data, it achieves state-of-the-art accuracy at a fraction of the computational cost of all existing methods.
Our methodology is based on the interpretation of diffusion processes as energy-minimizing trajectories in the probability space via the so-called JKO scheme, which we study via its first-order optimality conditions. | Learning diffusion at lightspeed | [
"Antonio Terpin",
"Nicolas Lanzetti",
"Martín Gadea",
"Florian Dorfler"
] | NeurIPS.cc/2024/Conference | 2406.12616 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=xzCuBjHQbS | @inproceedings{
benning2024random,
title={Random Function Descent},
author={Felix Benning and Leif D{\"o}ring},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xzCuBjHQbS}
} | Classical worst-case optimization theory neither explains the success of optimization in machine learning, nor does it help with step size selection. In this paper we demonstrate the viability and advantages of replacing the classical 'convex function' framework with a 'random function' framework. With complexity $\mathcal{O}(n^3d^3)$, where $n$ is the number of steps and $d$ the number of dimensions, Bayesian optimization with gradients has not been viable in large dimension so far. By bridging the gap between Bayesian optimization (i.e. random function optimization theory) and classical optimization we establish viability. Specifically, we use a 'stochastic Taylor approximation' to rediscover gradient descent, which is scalable in high dimension due to $\mathcal{O}(nd)$ complexity. This rediscovery yields a specific step size schedule we call Random Function Descent (RFD). The advantage of this random function framework is that RFD is scale invariant and that it provides a theoretical foundation for common step size heuristics such as gradient clipping and gradual learning rate warmup. | Random Function Descent | [
"Felix Benning",
"Leif Döring"
] | NeurIPS.cc/2024/Conference | 2305.01377 | [
"https://github.com/FelixBenning/pyrfd"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=xymhWyiZOp | @inproceedings{
narayanaswamy2024on,
title={On the Use of Anchoring for Training Vision Models},
author={Vivek Narayanaswamy and Kowshik Thopalli and Rushil Anirudh and Yamen Mubarka and Wesam A. Sakla and Jayaraman J. Thiagarajan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xymhWyiZOp}
} | Anchoring is a recent, architecture-agnostic principle for training deep neural networks that has been shown to significantly improve uncertainty estimation, calibration, and extrapolation capabilities. In this paper, we systematically explore anchoring as a general protocol for training vision models, providing fundamental insights into its training and inference processes and their implications for generalization and safety. Despite its promise, we identify a critical problem in anchored training that can lead to an increased risk of learning undesirable shortcuts, thereby limiting its generalization capabilities. To address this, we introduce a new anchored training protocol that employs a simple regularizer to mitigate this issue and significantly enhances generalization. We empirically evaluate our proposed approach across datasets and architectures of varying scales and complexities, demonstrating substantial performance gains in generalization and safety metrics compared to the standard training protocol. The open-source code is available at https://software.llnl.gov/anchoring. | On the Use of Anchoring for Training Vision Models | [
"Vivek Narayanaswamy",
"Kowshik Thopalli",
"Rushil Anirudh",
"Yamen Mubarka",
"Wesam A. Sakla",
"Jayaraman J. Thiagarajan"
] | NeurIPS.cc/2024/Conference | 2406.00529 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
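A minimal PyTorch-style sketch of the (anchor, residual) reparameterization, a common formulation from the anchoring literature that this paper builds on; the paper's new regularizer against residual shortcuts is deliberately omitted, and the channel-wise concatenation is an assumption about the model's input layer:

```python
import torch

def anchored_batch(x, anchor_pool):
    """Reparameterize inputs as (anchor, input - anchor) pairs."""
    idx = torch.randint(0, anchor_pool.size(0), (x.size(0),))
    r = anchor_pool[idx]                  # one random anchor per sample
    return torch.cat([r, x - r], dim=1)   # first layer must accept 2x channels

def predict_with_anchors(model, x, anchor_pool, k=10):
    """Marginalize over k anchors at test time; the spread across anchors
    is commonly used as an uncertainty estimate in anchored training."""
    preds = torch.stack([model(anchored_batch(x, anchor_pool)).softmax(-1)
                         for _ in range(k)])
    return preds.mean(0), preds.std(0)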
null | https://openreview.net/forum?id=xxY8d4rnSb | @inproceedings{
rommel2024manipose,
title={ManiPose: Manifold-Constrained Multi-Hypothesis 3D Human Pose Estimation},
author={C{\'e}dric Rommel and Victor Letzelter and Nermin Samet and Renaud Marlet and Matthieu Cord and Patrick Perez and Eduardo Valle},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xxY8d4rnSb}
} | We propose ManiPose, a manifold-constrained multi-hypothesis model for human-pose 2D-to-3D lifting. We provide theoretical and empirical evidence that, due to the depth ambiguity inherent to monocular 3D human pose estimation, traditional regression models suffer from pose-topology consistency issues, which standard evaluation metrics (MPJPE, P-MPJPE and PCK) fail to assess. ManiPose addresses depth ambiguity by proposing multiple candidate 3D poses for each 2D input, each with its estimated plausibility. Unlike previous multi-hypothesis approaches, ManiPose forgoes generative models, greatly facilitating its training and usage. By constraining the outputs to lie on the human pose manifold, ManiPose guarantees the consistency of all hypothetical poses, in contrast to previous works. We showcase the performance of ManiPose on real-world datasets, where it outperforms state-of-the-art models in pose consistency by a large margin while being very competitive on the MPJPE metric. | ManiPose: Manifold-Constrained Multi-Hypothesis 3D Human Pose Estimation | [
"Cédric Rommel",
"Victor Letzelter",
"Nermin Samet",
"Renaud Marlet",
"Matthieu Cord",
"Patrick Perez",
"Eduardo Valle"
] | NeurIPS.cc/2024/Conference | 2312.06386 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=xvYI7TCiU6 | @inproceedings{
dou2024measuring,
title={Measuring Mutual Policy Divergence for Multi-Agent Sequential Exploration},
author={Haowen Dou and Lujuan Dang and Zhirong Luan and Badong Chen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xvYI7TCiU6}
} | Despite the success of Multi-Agent Reinforcement Learning (MARL) algorithms in cooperative tasks, previous works, unfortunately, face challenges in heterogeneous scenarios since they simply disable parameter sharing for agent specialization. A sequential updating scheme was thus proposed, which naturally diversifies agents by encouraging them to learn from preceding ones. However, the exploration strategy in the sequential scheme has not been investigated. Benefiting from updating one-by-one, agents have access to information from preceding agents. Thus, in this work, we propose to exploit the preceding information to enhance exploration and heterogeneity sequentially. We present Multi-Agent Divergence Policy Optimization (MADPO), equipped with a mutual policy divergence maximization framework. We quantify the policy discrepancies between episodes to enhance exploration and between agents to heterogenize agents, termed intra-agent and inter-agent policy divergence. To address the issue that traditional divergence measurements lack stability and directionality, we propose to employ the conditional Cauchy-Schwarz divergence to provide entropy-guided exploration incentives. Extensive experiments show that the proposed method outperforms state-of-the-art sequential updating approaches in two challenging multi-agent tasks with various heterogeneous scenarios. | Measuring Mutual Policy Divergence for Multi-Agent Sequential Exploration | [
"Haowen Dou",
"Lujuan Dang",
"Zhirong Luan",
"Badong Chen"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
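The Cauchy-Schwarz divergence that MADPO builds on has a closed form, D_CS(p, q) = -log((∫pq)² / (∫p² ∫q²)), which is non-negative by the Cauchy-Schwarz inequality and zero iff p = q. A sample-based sketch with Gaussian kernel density estimates follows (unconditional version only; MADPO's conditional variant and its entropy-guided incentives are not reproduced here):

```python
import numpy as np

def _kde_overlap(a, b, sigma):
    # Mean pairwise Gaussian kernel; proportional to the integral of the
    # product of the two KDEs, with a constant shared by all three terms.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (4 * sigma**2)).mean()

def cs_divergence(x, y, sigma=1.0):
    """Empirical Cauchy-Schwarz divergence between two sample sets."""
    pq = _kde_overlap(x, y, sigma)
    return -np.log(pq**2 / (_kde_overlap(x, x, sigma) * _kde_overlap(y, y, sigma)))
```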
null | https://openreview.net/forum?id=xvVeSZoVJO | @inproceedings{
wang2024rcdn,
title={{RCDN}: Towards Robust Camera-Insensitivity Collaborative Perception via Dynamic Feature-based 3D Neural Modeling},
author={Tianhang Wang and Fan Lu and Zehan Zheng and Zhijun Li and Guang Chen and changjun jiang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xvVeSZoVJO}
} | Collaborative perception is dedicated to tackling the constraints of single-agent perception, such as occlusions, based on the multiple agents' multi-view sensor inputs. However, most existing works assume an ideal condition that all agents' multi-view cameras are continuously available. In reality, cameras may be highly noisy, obscured, or even fail during collaboration. In this work, we introduce a new robust camera-insensitivity problem: how to overcome the issues caused by failed camera perspectives while maintaining high collaborative performance at low calibration cost? To address the above problems, we propose RCDN, a Robust Camera-insensitivity collaborative perception framework with a novel Dynamic feature-based 3D Neural modeling mechanism. The key intuition of RCDN is to construct collaborative neural rendering field representations to recover failed perceptual messages sent by multiple agents. To better model the collaborative neural rendering field, RCDN first establishes a time-invariant static field based on geometry BEV features with other agents via fast hash grid modeling. Based on the static background field, the proposed time-varying dynamic field can model the corresponding motion vectors for foregrounds with appropriate positions. To validate RCDN, we create OPV2V-N, a new large-scale dataset with manual labelling under different camera failure scenarios. Extensive experiments conducted on OPV2V-N show that RCDN can be ported to other baselines and improve their robustness in extreme camera-insensitivity settings. Our code and datasets will be available soon. | RCDN: Towards Robust Camera-Insensitivity Collaborative Perception via Dynamic Feature-based 3D Neural Modeling | [
"Tianhang Wang",
"Fan Lu",
"Zehan Zheng",
"Zhijun Li",
"Guang Chen",
"changjun jiang"
] | NeurIPS.cc/2024/Conference | 2405.16868 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=xvTMc9Ovx3 | @inproceedings{
nan2024onroad,
title={On-Road Object Importance Estimation: A New Dataset and A Model with Multi-Fold Top-Down Guidance},
author={Zhixiong Nan and Yilong Chen and Tianfei Zhou and Tao Xiang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xvTMc9Ovx3}
} | This paper addresses the problem of on-road object importance estimation, which utilizes video sequences captured from the driver's perspective as the input. Although this problem is significant for safer and smarter driving systems, the exploration of this problem remains limited. On one hand, publicly-available large-scale datasets are scarce in the community. To address this dilemma, this paper contributes a new large-scale dataset named Traffic Object Importance (TOI). On the other hand, existing methods often only consider either bottom-up feature or single-fold guidance, leading to limitations in handling highly dynamic and diverse traffic scenarios. Different from existing methods, this paper proposes a model that integrates multi-fold top-down guidance with the bottom-up feature. Specifically, three kinds of top-down guidance factors (i.e., driver intention, semantic context, and traffic rule) are integrated into our model. These factors are important for object importance estimation, but none of the existing methods simultaneously consider them. To our knowledge, this paper proposes the first on-road object importance estimation model that fuses multi-fold top-down guidance factors with bottom-up feature. Extensive experiments demonstrate that our model outperforms state-of-the-art methods by large margins, achieving 23.1% Average Precision (AP) improvement compared with the recently proposed model (i.e., Goal). | On-Road Object Importance Estimation: A New Dataset and A Model with Multi-Fold Top-Down Guidance | [
"Zhixiong Nan",
"Yilong Chen",
"Tianfei Zhou",
"Tao Xiang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=xutrKezbPF | @inproceedings{
saidutta2024cifd,
title={{CIFD}: Controlled Information Flow to Enhance Knowledge Distillation},
author={Yashas Malur Saidutta and Rakshith Sharma Srinivasa and Jaejin Cho and Ching-Hua Lee and Chouchang Yang and Yilin Shen and Hongxia Jin},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xutrKezbPF}
} | Knowledge Distillation is the mechanism by which the insights gained from a larger teacher model are transferred to a smaller student model. However, the transfer suffers when the teacher model is significantly larger than the student. To overcome this, prior works have proposed training intermediate-sized models, Teacher Assistants (TAs), to help the transfer process. However, training TAs is expensive, as training these models is a knowledge transfer task in itself. Further, these TAs are larger than the student model, and training them, especially in large data settings, can be computationally intensive. In this paper, we propose a novel framework called Controlled Information Flow for Knowledge Distillation (CIFD) consisting of two components. First, we propose a significantly smaller alternative to TAs, the Rate-Distortion Module (RDM), which uses the teacher's penultimate-layer embedding and an information-rate-constrained bottleneck layer to replace the Teacher Assistant model. RDMs are smaller and easier to train than TAs, especially in large data regimes, since they operate on the teacher embeddings and do not need to relearn low-level input feature extractors. Also, by varying the information rate across the bottleneck, RDMs can replace TAs of different sizes. Secondly, we propose the use of an Information Bottleneck Module in the student model, which is crucial for regularization in the presence of a large number of RDMs. We show comprehensive state-of-the-art results for the proposed method on large datasets like ImageNet. Further, we show significant improvements in distilling CLIP-like models on a large 12M image-text dataset. It outperforms CLIP-specialized distillation methods across five zero-shot classification datasets and two zero-shot image-text retrieval datasets. | CIFD: Controlled Information Flow to Enhance Knowledge Distillation | [
"Yashas Malur Saidutta",
"Rakshith Sharma Srinivasa",
"Jaejin Cho",
"Ching-Hua Lee",
"Chouchang Yang",
"Yilin Shen",
"Hongxia Jin"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=xtK3gZjQDC | @inproceedings{
toni2024towards,
title={Towards Human-{AI} Complementarity with Prediction Sets},
author={Giovanni De Toni and Nastaran Okati and Suhas Thejaswi and Eleni Straitouri and Manuel Gomez Rodriguez},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xtK3gZjQDC}
} | Decision support systems based on prediction sets have proven to be effective at helping human experts solve classification tasks. Rather than providing single-label predictions, these systems provide sets of label predictions constructed using conformal prediction, namely prediction sets, and ask human experts to predict label values from these sets. In this paper, we first show that the prediction sets constructed using conformal prediction are, in general, suboptimal in terms of average accuracy. Then, we show that the problem of finding the optimal prediction sets under which the human experts achieve the highest average accuracy is NP-hard. More strongly, unless P = NP, we show that the problem is hard to approximate to any factor less than the size of the label set. However, we introduce a simple and efficient greedy algorithm that, for a large class of expert models and non-conformity scores, is guaranteed to find prediction sets that provably offer equal or greater performance than those constructed using conformal prediction. Further, using a simulation study with both synthetic and real expert predictions, we demonstrate that, in practice, our greedy algorithm finds near-optimal prediction sets offering greater performance than conformal prediction. | Towards Human-AI Complementarity with Prediction Sets | [
"Giovanni De Toni",
"Nastaran Okati",
"Suhas Thejaswi",
"Eleni Straitouri",
"Manuel Gomez Rodriguez"
] | NeurIPS.cc/2024/Conference | 2405.17544 | [
"https://github.com/Networks-Learning/towards-human-ai-complementarity-predictions-sets"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
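For reference, the split conformal baseline that the paper improves on constructs its prediction sets as below (a standard sketch with the 1 - softmax non-conformity score; the paper's greedy set construction itself is not reproduced):

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction sets with coverage target 1 - alpha."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]    # non-conformity
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)  # finite-sample correction
    q = np.quantile(scores, level, method="higher")
    return [np.where(1.0 - p <= q)[0] for p in test_probs]
```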
null | https://openreview.net/forum?id=xse8QMGnyM | @inproceedings{
kim2024toward,
title={Toward Approaches to Scalability in 3D Human Pose Estimation},
author={Jun-Hee Kim and Seong-Whan Lee},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xse8QMGnyM}
} | In the field of 3D Human Pose Estimation (HPE), scalability and generalization across diverse real-world scenarios remain significant challenges. This paper addresses two key bottlenecks to scalability: limited data diversity caused by 'popularity bias' and increased 'one-to-many' depth ambiguity arising from greater pose diversity. We introduce the Biomechanical Pose Generator (BPG), which leverages biomechanical principles, specifically the normal range of motion, to autonomously generate a wide array of plausible 3D poses without relying on a source dataset, thus overcoming the restrictions of popularity bias. To address depth ambiguity, we propose the Binary Depth Coordinates (BDC), which simplifies depth estimation into a binary classification of joint positions (front or back). This method decomposes a 3D pose into three core elements—2D pose, bone length, and binary depth decision—substantially reducing depth ambiguity and enhancing model robustness and accuracy, particularly in complex poses. Our results demonstrate that these approaches increase the diversity and volume of pose data while consistently achieving performance gains, even amid the complexities introduced by increased pose diversity. | Toward Approaches to Scalability in 3D Human Pose Estimation | [
"Jun-Hee Kim",
"Seong-Whan Lee"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=xrbgXJomJp | @inproceedings{
bui2024inverse,
title={Inverse Factorized Soft Q-Learning for Cooperative Multi-agent Imitation Learning},
author={The Viet Bui and Tien Anh Mai and Thanh Hong Nguyen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xrbgXJomJp}
} | This paper concerns imitation learning (IL) in cooperative multi-agent systems.
The learning problem under consideration poses several challenges, characterized by high-dimensional state and action spaces and intricate inter-agent dependencies. In the single-agent setting, IL can be performed efficiently via an inverse soft-Q learning process. However, extending this framework to a multi-agent context introduces the need to simultaneously learn both local value functions, to capture local observations and individual actions, and a joint value function, for exploiting centralized learning.
In this work, we introduce a new multi-agent IL algorithm designed to address these challenges. Our approach enables
centralized learning by leveraging mixing networks to aggregate decentralized Q functions.
We further establish conditions for the mixing networks under which the multi-agent IL objective function exhibits convexity within the Q function space.
We present extensive experiments conducted on several challenging multi-agent game environments, including an advanced version of the StarCraft multi-agent challenge (SMACv2), which demonstrate the effectiveness of our algorithm. | Inverse Factorized Soft Q-Learning for Cooperative Multi-agent Imitation Learning | [
"The Viet Bui",
"Tien Anh Mai",
"Thanh Hong Nguyen"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=xqrlhsbcwN | @inproceedings{
wang2024approximated,
title={Approximated Orthogonal Projection Unit: Stabilizing Regression Network Training Using Natural Gradient},
author={ShaoQi Wang and Chunjie Yang and Siwei Lou},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xqrlhsbcwN}
} | Neural networks (NN) are extensively studied in cutting-edge soft sensor models due to their feature extraction and function approximation capabilities. Current research into network-based methods primarily focuses on models' offline accuracy. Notably, in the industrial soft sensor context, online optimization stability and interpretability are prioritized, followed by accuracy. This requires a clearer understanding of the network's training process. To bridge this gap, we propose a novel NN named the Approximated Orthogonal Projection Unit (AOPU), which has a solid mathematical basis and presents superior training stability. AOPU truncates the gradient backpropagation at dual parameters, optimizes the trackable parameter updates, and enhances the robustness of training. We further prove that AOPU attains minimum variance estimation in NN, wherein the truncated gradient approximates the natural gradient. Empirical results on two chemical process datasets clearly show that AOPU outperforms other models in achieving stable convergence, marking a significant advancement in the soft sensor field. | Approximated Orthogonal Projection Unit: Stabilizing Regression Network Training Using Natural Gradient | [
"ShaoQi Wang",
"Chunjie Yang",
"Siwei Lou"
] | NeurIPS.cc/2024/Conference | 2409.15393 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=xqc8yyhScL | @inproceedings{
li2024is,
title={Is Programming by Example solved by {LLM}s?},
author={Wen-Ding Li and Kevin Ellis},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xqc8yyhScL}
} | Programming-by-Examples (PBE) aims to generate an algorithm from input-output examples.
Such systems are practically and theoretically important:
from an end-user perspective, they are deployed to millions of people, and from an AI perspective, PBE corresponds to a very general form of few-shot inductive inference.
Given the success of Large Language Models (LLMs) in code-generation tasks, we investigate here the extent to which LLMs can be said to have "solved" PBE.
We experiment on classic domains such as lists and strings, and an uncommon graphics programming domain not well represented in typical pretraining data.
We find that pretrained models are not effective at PBE, but that they can be fine-tuned for much higher performance, provided the test problems are in-distribution.
We analyze empirically what causes these models to succeed and fail, and take steps toward understanding how to achieve better out-of-distribution generalization.
Collectively these results suggest that LLMs make strong progress toward solving the typical suite of PBE tasks, potentially increasing the flexibility and applicability of PBE systems, while also identifying ways in which LLMs still fall short. | Is Programming by Example solved by LLMs? | [
"Wen-Ding Li",
"Kevin Ellis"
] | NeurIPS.cc/2024/Conference | 2406.08316 | [
""
] | https://huggingface.co/papers/2406.08316 | 1 | 11 | 1 | 2 | [] | [] | [
"xu3kev/llm_visual_program_sythensis"
] | [] | [] | [
"xu3kev/llm_visual_program_sythensis"
] | 1 | poster |
null | https://openreview.net/forum?id=xpRUi8amtC | @inproceedings{
chen2024scene,
title={Scene Graph Generation with Role-Playing Large Language Models},
author={Guikun Chen and Jin Li and Wenguan Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xpRUi8amtC}
} | Current approaches for open-vocabulary scene graph generation (OVSGG) use vision-language models such as CLIP and follow a standard zero-shot pipeline – computing similarity between the query image and the text embeddings for each category (i.e., text classifiers). In this work, we argue that the text classifiers adopted by existing OVSGG methods, i.e., category-/part-level prompts, are scene-agnostic as they remain unchanged across contexts. Using such fixed text classifiers not only struggles to model visual relations with high variance, but also falls short in adapting to distinct contexts. To plug these intrinsic shortcomings, we devise SDSGG, a scene-specific description based OVSGG framework where the weights of text classifiers are adaptively adjusted according to the visual content. In particular, to generate comprehensive and diverse descriptions oriented to the scene, an LLM is asked to play different roles (e.g., biologist and engineer) to analyze and discuss the descriptive features of a given scene from different views. Unlike previous efforts simply treating the generated descriptions as mutually equivalent text classifiers, SDSGG is equipped with an advanced renormalization mechanism to adjust the influence of each text classifier based on its relevance to the presented scene (this is what the term “specific” means). Furthermore, to capture the complicated interplay between subjects and objects, we propose a new lightweight module called mutual visual adapter. It refines CLIP’s ability to recognize relations by learning an interaction-aware semantic space. Extensive experiments on prevalent benchmarks show that SDSGG significantly outperforms top-leading methods. | Scene Graph Generation with Role-Playing Large Language Models | [
"Guikun Chen",
"Jin Li",
"Wenguan Wang"
] | NeurIPS.cc/2024/Conference | 2410.15364 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=xojbzSYIVS | @inproceedings{
liu2024llmesr,
title={{LLM}-{ESR}: Large Language Models Enhancement for Long-tailed Sequential Recommendation},
author={Qidong Liu and Xian Wu and Yejing Wang and Zijian Zhang and Feng Tian and Yefeng Zheng and Xiangyu Zhao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xojbzSYIVS}
} | Sequential recommender systems (SRS) aim to predict users' subsequent choices based on their historical interactions and have found applications in diverse fields such as e-commerce and social media. However, in real-world systems, most users interact with only a handful of items, while the majority of items are seldom consumed. These two issues, known as the long-tail user and long-tail item challenges, often pose difficulties for existing SRS. These challenges can adversely affect user experience and seller benefits, making them crucial to address. Though a few works have addressed the challenges, they still struggle with the seesaw or noisy issues due to the intrinsic scarcity of interactions. The advancements in large language models (LLMs) present a promising solution to these problems from a semantic perspective. As one of the pioneers in this field, we propose the Large Language Models Enhancement framework for Sequential Recommendation (LLM-ESR). This framework utilizes semantic embeddings derived from LLMs to enhance SRS without adding extra inference load. To address the long-tail item challenge, we design a dual-view modeling framework that combines semantics from LLMs and collaborative signals from conventional SRS. For the long-tail user challenge, we propose a retrieval augmented self-distillation method to enhance user preference representation using more informative interactions from similar users. To verify the effectiveness and versatility of our proposed enhancement framework, we conduct extensive experiments on three real-world datasets using three popular SRS models. The results consistently show that our method surpasses existing baselines. The implementation code is available in Supplementary Material. | LLM-ESR: Large Language Models Enhancement for Long-tailed Sequential Recommendation | [
"Qidong Liu",
"Xian Wu",
"Yejing Wang",
"Zijian Zhang",
"Feng Tian",
"Yefeng Zheng",
"Xiangyu Zhao"
] | NeurIPS.cc/2024/Conference | 2405.20646 | [
"https://github.com/liuqidong07/LLM-ESR"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=xoc4QOvbDs | @inproceedings{
wang2024evaluate,
title={Evaluate then Cooperate: Shapley-based View Cooperation Enhancement for Multi-view Clustering},
author={Fangdi Wang and Jiaqi Jin and Jingtao Hu and Suyuan Liu and Xihong Yang and Siwei Wang and Xinwang Liu and En Zhu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xoc4QOvbDs}
} | The fundamental goal of deep multi-view clustering is to achieve preferable task performance through inter-view cooperation. Although numerous DMVC approaches have been proposed, the collaboration role of individual views has not been well investigated in the existing literature. Moreover, how to further enhance view cooperation for better fusion still needs to be explored. In this paper, we first consider DMVC as an unsupervised cooperative game where each view can be regarded as a participant. Then, we introduce the Shapley value and propose a novel MVC framework termed Shapley-based Cooperation Enhancing Multi-view Clustering (SCE-MVC), which evaluates view cooperation with game theory. Specifically, we employ the optimal transport distance between fused cluster distributions and single-view components as the utility function for computing Shapley values. Afterwards, we apply Shapley values to assess the contribution of each view and utilize these contributions to promote view cooperation. Comprehensive experimental results well support the effectiveness of our framework when adapted to existing DMVC frameworks, demonstrating the importance and necessity of enhancing the cooperation among views. | Evaluate then Cooperate: Shapley-based View Cooperation Enhancement for Multi-view Clustering | [
"Fangdi Wang",
"Jiaqi Jin",
"Jingtao Hu",
"Suyuan Liu",
"Xihong Yang",
"Siwei Wang",
"Xinwang Liu",
"En Zhu"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
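The Shapley value at the core of SCE-MVC admits a direct exact computation when the number of views is small; a sketch follows (the `utility` callback stands in for the paper's optimal-transport utility and must also be defined on the empty coalition):

```python
from itertools import combinations
from math import factorial

def shapley_values(n_views, utility):
    """Exact Shapley value of each view in the cooperative game.

    utility: maps a tuple of view indices (possibly empty) to a payoff;
    SCE-MVC instantiates it with an optimal-transport distance between
    fused and single-view cluster distributions.
    """
    phi = [0.0] * n_views
    for i in range(n_views):
        others = [v for v in range(n_views) if v != i]
        for k in range(n_views):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n_views - k - 1) / factorial(n_views)
                phi[i] += w * (utility(tuple(sorted(S + (i,)))) - utility(S))
    return phi

# Usage sketch: phi = shapley_values(3, lambda S: len(S) ** 0.5)
```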
null | https://openreview.net/forum?id=xoCFd1WKpf | @inproceedings{
li2024unified,
title={Unified Lexical Representation for Interpretable Visual-Language Alignment},
author={Yifan Li and Yikai Wang and Yanwei Fu and Dongyu Ru and Zheng Zhang and Tong He},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xoCFd1WKpf}
} | Visual-Language Alignment (VLA) has gained a lot of attention since CLIP's groundbreaking work.
Although CLIP performs well, the typical direct latent feature alignment lacks clarity in its representation and similarity scores.
On the other hand, a lexical representation, a vector whose elements represent the similarity between the sample and words from the vocabulary, is naturally sparse and interpretable, providing exact matches for individual words.
However, lexical representations are difficult to learn due to the lack of ground-truth supervision and false-discovery issues, and thus require careful design to train effectively.
In this paper, we introduce LexVLA, a more interpretable VLA framework by learning a unified lexical representation for both modalities without complex design.
We use DINOv2 as our visual model for its local-inclined features and Llama 2, a generative language model, to leverage its in-context lexical prediction ability.
To avoid false discovery, we propose an overuse penalty that keeps the lexical representation from falsely and frequently activating meaningless words.
We demonstrate that these two pre-trained uni-modal models can be well-aligned by fine-tuning on the modest multi-modal dataset and avoid intricate training configurations.
On cross-modal retrieval benchmarks, LexVLA, trained on the CC-12M multi-modal dataset, outperforms baselines fine-tuned on larger datasets (e.g., YFCC15M) and those trained from scratch on even bigger datasets (e.g., 1.1B data, including CC-12M).
We conduct extensive experiments to analyze LexVLA.
Codes are available at https://github.com/Clementine24/LexVLA. | Unified Lexical Representation for Interpretable Visual-Language Alignment | [
"Yifan Li",
"Yikai Wang",
"Yanwei Fu",
"Dongyu Ru",
"Zheng Zhang",
"Tong He"
] | NeurIPS.cc/2024/Conference | 2407.17827 | [
"https://github.com/clementine24/lexvla"
] | https://huggingface.co/papers/2407.17827 | 0 | 0 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 | poster |
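The overuse penalty is described only at a high level above; one plausible instantiation (an assumption, not LexVLA's exact loss) penalizes vocabulary entries whose activation is high on average across a batch, so that no word can cheaply fire for every image or caption:

```python
import torch

def overuse_penalty(lexical):
    """lexical: (batch, vocab) non-negative lexical representations.

    Hypothetical form of the penalty: squared mean activation per word,
    which grows when a word activates for most samples in the batch.
    """
    return (lexical.mean(dim=0) ** 2).sum()
```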
null | https://openreview.net/forum?id=xnmm1jThkv | @inproceedings{
hiremath2024hybrid,
title={Hybrid Top-Down Global Causal Discovery with Local Search for Linear and Nonlinear Additive Noise Models},
author={Sujai Hiremath and Jacqueline R. M. A. Maasch and Mengxiao Gao and Promit Ghosal and Kyra Gan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xnmm1jThkv}
} | Learning the unique directed acyclic graph corresponding to an unknown causal model is a challenging task. Methods based on functional causal models can identify a unique graph, but either suffer from the curse of dimensionality or impose strong parametric assumptions. To address these challenges, we propose a novel hybrid approach for global causal discovery in observational data that leverages local causal substructures. We first present a topological sorting algorithm that leverages ancestral relationships in linear structural equation models to establish a compact top-down hierarchical ordering, encoding more causal information than linear orderings produced by existing methods. We demonstrate that this approach generalizes to nonlinear settings with arbitrary noise. We then introduce a nonparametric constraint-based algorithm that prunes spurious edges by searching for local conditioning sets, achieving greater accuracy than current methods. We provide theoretical guarantees for correctness and worst-case polynomial time complexities, with empirical validation on synthetic data. | Hybrid Top-Down Global Causal Discovery with Local Search for Linear and Nonlinear Additive Noise Models | [
"Sujai Hiremath",
"Jacqueline R. M. A. Maasch",
"Mengxiao Gao",
"Promit Ghosal",
"Kyra Gan"
] | NeurIPS.cc/2024/Conference | 2405.14496 | [
"https://github.com/sujai1/hybrid-discovery"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=xjyU6zmZD7 | @inproceedings{
guo2024take,
title={Take A Shortcut Back: Mitigating the Gradient Vanishing for Training Spiking Neural Networks},
author={Yufei Guo and Yuanpei Chen and Zecheng Hao and Weihang Peng and Zhou Jie and Yuhan Zhang and Xiaode Liu and Zhe Ma},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xjyU6zmZD7}
} | The Spiking Neural Network (SNN) is a biologically inspired neural network infrastructure that has recently garnered significant attention. It utilizes binary spike activations to transmit information, thereby replacing multiplications with additions and resulting in high energy efficiency. However, training an SNN directly poses a challenge due to the undefined gradient of the firing spike process. Although prior works have employed various surrogate gradient training methods that use an alternative function to replace the firing process during back-propagation, these approaches ignore an intrinsic problem: gradient vanishing. To address this issue, we propose a shortcut back-propagation method in this paper, which transmits the gradient directly from the loss to the shallow layers, thereby significantly mitigating the gradient vanishing problem. Additionally, this method does not introduce any burden during the inference phase.
To strike a balance between final accuracy and ease of training, we also propose an evolutionary training framework and implement it by introducing a balance coefficient that dynamically changes with the training epoch, which further improves the network's performance. Extensive experiments conducted on static and dynamic datasets using several popular network structures reveal that our method consistently outperforms state-of-the-art methods. | Take A Shortcut Back: Mitigating the Gradient Vanishing for Training Spiking Neural Networks | [
"Yufei Guo",
"Yuanpei Chen",
"Zecheng Hao",
"Weihang Peng",
"Zhou Jie",
"Yuhan Zhang",
"Xiaode Liu",
"Zhe Ma"
] | NeurIPS.cc/2024/Conference | 2401.04486 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
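A minimal sketch of shortcut back-propagation as described above: lightweight auxiliary heads feed the loss gradient directly to shallow layers during training and are dropped at inference; the linear decay of the balance coefficient is an assumed schedule, not necessarily the paper's exact one:

```python
import torch.nn.functional as F

def shortcut_loss(logits, shallow_feats, aux_heads, target, epoch, total_epochs):
    """Main loss plus epoch-weighted shortcut losses at shallow layers."""
    loss = F.cross_entropy(logits, target)
    lam = 1.0 - epoch / total_epochs  # shift weight toward the main loss over time
    for feat, head in zip(shallow_feats, aux_heads):
        loss = loss + lam * F.cross_entropy(head(feat.flatten(1)), target)
    return loss
```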
null | https://openreview.net/forum?id=xjXYgdFM5M | @inproceedings{
huang2024reasons,
title={Reasons and Solutions for the Decline in Model Performance after Editing},
author={Xiusheng Huang and Jiaxiang Liu and Yequan Wang and Kang Liu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xjXYgdFM5M}
} | Knowledge editing technology has received widespread attention for low-cost updates of incorrect or outdated knowledge in large-scale language models. However, recent research has found that edited models often exhibit varying degrees of performance degradation. The reasons behind this phenomenon and potential solutions have not yet been provided. In order to investigate the reasons for the performance decline of the edited model and optimize the editing method, this work explores the underlying reasons from both data and model perspectives. Specifically, 1) from a data perspective, to clarify the impact of data on the performance of editing models, this paper first constructs a **M**ulti-**Q**uestion **D**ataset (**MQD**) to evaluate the impact of different types of editing data on model performance. The performance of the editing model is mainly affected by the diversity of editing targets and sequence length, as determined through experiments. 2) From a model perspective, this article explores the factors that affect the performance of editing models. The results indicate a strong correlation between the L1-norm of the editing model layer and the editing accuracy, and clarify that this is an important factor leading to the bottleneck of editing performance. Finally, in order to improve the performance of the editing model, this paper further proposes a **D**ump **for** **S**equence (**D4S**) method, which successfully overcomes the previous editing bottleneck by reducing the L1-norm of the editing layer, allowing users to perform multiple effective edits and minimizing model damage. Our code is available at https://github.com/nlpkeg/D4S. | Reasons and Solutions for the Decline in Model Performance after Editing | [
"Xiusheng Huang",
"Jiaxiang Liu",
"Yequan Wang",
"Kang Liu"
] | NeurIPS.cc/2024/Conference | 2410.23843 | [
"https://github.com/nlpkeg/D4S"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=xgiurUq0ss | @inproceedings{
liu2024ddk,
title={{DDK}: Distilling Domain Knowledge for Efficient Large Language Models},
author={Jiaheng Liu and Chenchen Zhang and Jinyang Guo and Yuanxing Zhang and Haoran Que and Ken Deng and ZhiqiBai and Jie Liu and Ge Zhang and JiakaiWang and Yanan Wu and Congnan Liu and Jiamang Wang and Lin Qu and Wenbo Su and Bo Zheng},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xgiurUq0ss}
} | Despite the advanced intelligence abilities of large language models (LLMs) in various applications, they still face significant computational and storage demands. Knowledge Distillation (KD) has emerged as an effective strategy to improve the performance of a smaller LLM (i.e., the student model) by transferring knowledge from a high-performing LLM (i.e., the teacher model). Prevailing techniques in LLM distillation typically use a black-box model API to generate high-quality pretrained and aligned datasets, or utilize white-box distillation by altering the loss function to better transfer knowledge from the teacher LLM. However, these methods ignore the knowledge differences between the student and teacher LLMs across domains. This results in excessive focus on domains with minimal performance gaps and insufficient attention to domains with large gaps, reducing overall performance. In this paper, we introduce a new LLM distillation framework called DDK, which dynamically adjusts the composition of the distillation dataset in a smooth manner according to the domain performance differences between the teacher and student models, making the distillation process more stable and effective. Extensive evaluations show that DDK significantly improves the performance of student models, outperforming both continuously pretrained baselines and existing knowledge distillation methods by a large margin. | DDK: Distilling Domain Knowledge for Efficient Large Language Models | [
"Jiaheng Liu",
"Chenchen Zhang",
"Jinyang Guo",
"Yuanxing Zhang",
"Haoran Que",
"Ken Deng",
"ZhiqiBai",
"Jie Liu",
"Ge Zhang",
"JiakaiWang",
"Yanan Wu",
"Congnan Liu",
"Jiamang Wang",
"Lin Qu",
"Wenbo Su",
"Bo Zheng"
] | NeurIPS.cc/2024/Conference | 2407.16154 | [
""
] | https://huggingface.co/papers/2407.16154 | 6 | 21 | 2 | 16 | [] | [] | [] | [] | [] | [] | 1 | poster |
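The core idea, resampling distillation data toward domains where the student lags the teacher most, can be sketched as follows (the softmax-with-temperature smoothing is an assumption; DDK's exact domain discrepancy factor may differ):

```python
import numpy as np

def domain_mixture(teacher_loss, student_loss, temperature=1.0):
    """Per-domain sampling probabilities from teacher-student gaps."""
    gaps = np.maximum(np.asarray(student_loss) - np.asarray(teacher_loss), 0.0)
    w = np.exp(gaps / temperature)  # larger gap -> sampled more often
    return w / w.sum()
```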
null | https://openreview.net/forum?id=xgP5ynlZWf | @inproceedings{
chen2024restoreagent,
title={RestoreAgent: Autonomous Image Restoration Agent via Multimodal Large Language Models},
author={Haoyu Chen and Wenbo Li and Jinjin Gu and Jingjing Ren and Sixiang Chen and Tian Ye and Renjing Pei and Kaiwen Zhou and Fenglong Song and Lei Zhu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xgP5ynlZWf}
} | Natural images captured by mobile devices often suffer from multiple types of degradation, such as noise, blur, and low light. Traditional image restoration methods require manual selection of specific tasks, algorithms, and execution sequences, which is time-consuming and may yield suboptimal results. All-in-one models, though capable of handling multiple tasks, typically support only a limited range and often produce overly smooth, low-fidelity outcomes due to their broad data distribution fitting. To address these challenges, we first define a new pipeline for restoring images with multiple degradations, and then introduce RestoreAgent, an intelligent image restoration system leveraging multimodal large language models. RestoreAgent autonomously assesses the type and extent of degradation in input images and performs restoration through (1) determining the appropriate restoration tasks, (2) optimizing the task sequence, (3) selecting the most suitable models, and (4) executing the restoration. Experimental results demonstrate the superior performance of RestoreAgent in handling complex degradation, surpassing human experts. Furthermore, the system’s modular design facilitates the fast integration of new tasks and models. | RestoreAgent: Autonomous Image Restoration Agent via Multimodal Large Language Models | [
"Haoyu Chen",
"Wenbo Li",
"Jinjin Gu",
"Jingjing Ren",
"Sixiang Chen",
"Tian Ye",
"Renjing Pei",
"Kaiwen Zhou",
"Fenglong Song",
"Lei Zhu"
] | NeurIPS.cc/2024/Conference | 2407.18035 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=xeviQPXTMU | @inproceedings{
yang2024fedgmark,
title={Fed{GM}ark: Certifiably Robust Watermarking for Federated Graph Learning},
author={Yuxin Yang and Qiang Li and Yuan Hong and Binghui Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xeviQPXTMU}
} | Federated graph learning (FedGL) is an emerging learning paradigm to collaboratively train graph data from various clients. However, during the development and deployment of FedGL models, they are susceptible to illegal copying and model theft. Backdoor-based watermarking is a well-known method for mitigating these attacks, as it offers ownership verification to the model owner. We take the first step to protect the ownership of FedGL models via backdoor-based watermarking. Existing techniques have challenges in achieving the goal: 1) they either cannot be directly applied or yield unsatisfactory performance; 2) they are vulnerable to watermark removal attacks; and 3) they lack formal guarantees. To address all the challenges, we propose FedGMark, the first certified robust backdoor-based watermarking for FedGL. FedGMark leverages the unique graph structure and client information in FedGL to learn customized and diverse watermarks. It also designs a novel GL architecture that facilitates defending against both empirical and theoretical worst-case watermark removal attacks. Extensive experiments validate the promising empirical and provable watermarking performance of FedGMark. Source code is available at: https://github.com/Yuxin104/FedGMark. | FedGMark: Certifiably Robust Watermarking for Federated Graph Learning | [
"Yuxin Yang",
"Qiang Li",
"Yuan Hong",
"Binghui Wang"
] | NeurIPS.cc/2024/Conference | 2410.17533 | [
"https://github.com/yuxin104/fedgmark"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=xeXRhTUmcf | @inproceedings{
nguyen2024combining,
title={Combining Statistical Depth and Fermat Distance for Uncertainty Quantification},
author={Hai-Vy Nguyen and Fabrice Gamboa and Reda Chhaibi and Sixin Zhang and Serge Gratton and Thierry Giaccone},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xeXRhTUmcf}
} | We measure the out-of-domain uncertainty in the predictions of Neural Networks using a statistical notion called "Lens Depth" (LD) combined with Fermat Distance, which is able to capture precisely the "depth" of a point with respect to a distribution in feature space, without any distributional assumption. Our method also has no trainable parameters. The method is applied directly in the feature space at test time and does not intervene in the training process. As such, it does not impact the performance of the original model. The proposed method gives excellent qualitative results on toy datasets and can give competitive or better uncertainty estimation on standard deep learning datasets compared to strong baseline methods. | Combining Statistical Depth and Fermat Distance for Uncertainty Quantification | [
"Hai-Vy Nguyen",
"Fabrice Gamboa",
"Reda Chhaibi",
"Sixin Zhang",
"Serge Gratton",
"Thierry Giaccone"
] | NeurIPS.cc/2024/Conference | 2404.08476 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
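Lens depth has a simple empirical form: a query point's depth is the fraction of training pairs (x_i, x_j) whose "lens" (the intersection of the two balls of radius d(x_i, x_j) centered at x_i and x_j) contains it. A sketch with a plug-in metric follows; the paper's Fermat distance is abstracted behind `dist`, and plain Euclidean distance is used in the toy usage:

```python
import numpy as np
from itertools import combinations

def lens_depth(q, train, dist):
    """Fraction of training pairs whose lens contains the query q."""
    pairs = list(combinations(range(len(train)), 2))
    inside = sum(
        max(dist(q, train[i]), dist(q, train[j])) <= dist(train[i], train[j])
        for i, j in pairs
    )
    return inside / len(pairs)

# Toy usage: in-distribution points get high depth, far points near zero.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
d = lambda a, b: float(np.linalg.norm(a - b))
print(lens_depth(np.zeros(2), X, d), lens_depth(np.full(2, 5.0), X, d))
```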
null | https://openreview.net/forum?id=xcqSOfHt4g | @inproceedings{
shi2024simplified,
title={Simplified and Generalized Masked Diffusion for Discrete Data},
author={Jiaxin Shi and Kehang Han and Zhe Wang and Arnaud Doucet and Michalis Titsias},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xcqSOfHt4g}
} | Masked (or absorbing) diffusion is actively explored as an alternative to autoregressive models for generative modeling of discrete data. However, existing work in this area has been hindered by unnecessarily complex model formulations and unclear relationships between different perspectives, leading to suboptimal parameterization, training objectives, and ad hoc adjustments to counteract these issues. In this work, we aim to provide a simple and general framework that unlocks the full potential of masked diffusion models. We show that the continuous-time variational objective of masked diffusion models is a simple weighted integral of cross-entropy losses. Our framework also enables training generalized masked diffusion models with state-dependent masking schedules. When evaluated by perplexity, our models trained on OpenWebText surpass prior diffusion language models at GPT-2 scale and demonstrate superior performance on 4 out of 5 zero-shot language modeling tasks. Furthermore, our models vastly outperform previous discrete diffusion models on pixel-level image modeling, achieving 2.75 (CIFAR-10) and 3.40 (ImageNet 64x64) bits per dimension that are better than autoregressive models of similar sizes. | Simplified and Generalized Masked Diffusion for Discrete Data | [
"Jiaxin Shi",
"Kehang Han",
"Zhe Wang",
"Arnaud Doucet",
"Michalis Titsias"
] | NeurIPS.cc/2024/Conference | 2406.04329 | [
""
] | https://huggingface.co/papers/2406.04329 | 3 | 4 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
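The "weighted integral of cross-entropy losses" view of the objective is easy to state concretely. A sketch with the linear schedule alpha_t = 1 - t, so tokens are masked independently with probability t and the per-sample weight is 1/t (the general weight is -alpha'_t / (1 - alpha_t)); the reserved mask-token id is an assumption:

```python
import torch
import torch.nn.functional as F

MASK = 0  # assumed reserved mask-token id

def masked_diffusion_loss(model, x):
    """Monte Carlo estimate of the weighted cross-entropy objective.

    x: (batch, seq) token ids; model(z) returns (batch, seq, vocab) logits.
    """
    b, n = x.shape
    t = torch.rand(b, 1).clamp_min(1e-3)   # avoid the 1/t singularity at t -> 0
    mask = torch.rand(b, n) < t             # mask each token with probability t
    z = torch.where(mask, torch.full_like(x, MASK), x)
    ce = F.cross_entropy(model(z).transpose(1, 2), x, reduction="none")
    return ((mask.float() / t) * ce).sum(1).mean()
```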
null | https://openreview.net/forum?id=xcF2VbyZts | @inproceedings{
li2024socialgpt,
title={Social{GPT}: Prompting {LLM}s for Social Relation Reasoning via Greedy Segment Optimization},
author={Wanhua Li and Zibin Meng and Jiawei Zhou and Donglai Wei and Chuang Gan and Hanspeter Pfister},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xcF2VbyZts}
} | Social relation reasoning aims to identify relation categories such as friends, spouses, and colleagues from images. While current methods adopt the paradigm of training a dedicated network end-to-end using labeled image data, they are limited in terms of generalizability and interpretability. To address these issues, we first present a simple yet well-crafted framework named SocialGPT, which combines the perception capability of Vision Foundation Models (VFMs) and the reasoning capability of Large Language Models (LLMs) within a modular framework, providing a strong baseline for social relation recognition. Specifically, we instruct VFMs to translate image content into a textual social story, and then utilize LLMs for text-based reasoning. SocialGPT introduces systematic design principles to adapt VFMs and LLMs separately and bridge their gaps. Without additional model training, it achieves competitive zero-shot results on two databases while offering interpretable answers, as LLMs can generate language-based explanations for the decisions. The manual prompt design process for LLMs at the reasoning phase is tedious and an automated prompt optimization method is desired. As we essentially convert a visual classification task into a generative task of LLMs, automatic prompt optimization encounters a unique long prompt optimization issue. To address this issue, we further propose the Greedy Segment Prompt Optimization (GSPO), which performs a greedy search by utilizing gradient information at the segment level. Experimental results show that GSPO significantly improves performance, and our method also generalizes to different image styles. The code is available at https://github.com/Mengzibin/SocialGPT. | SocialGPT: Prompting LLMs for Social Relation Reasoning via Greedy Segment Optimization | [
"Wanhua Li",
"Zibin Meng",
"Jiawei Zhou",
"Donglai Wei",
"Chuang Gan",
"Hanspeter Pfister"
] | NeurIPS.cc/2024/Conference | 2410.21411 | [
"https://github.com/mengzibin/socialgpt"
] | https://huggingface.co/papers/2410.21411 | 2 | 19 | 3 | 6 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=xbuaSTqAEz | @inproceedings{
yao2024customized,
title={Customized Multiple Clustering via Multi-Modal Subspace Proxy Learning},
author={Jiawei Yao and Qi Qian and Juhua Hu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xbuaSTqAEz}
} | Multiple clustering aims to discover various latent structures of data from different aspects. Deep multiple clustering methods have achieved remarkable performance by exploiting complex patterns and relationships in data. However, existing works struggle to flexibly adapt to diverse user-specific needs in data grouping, which may require manual understanding of each clustering. To address these limitations, we introduce Multi-Sub, a novel end-to-end multiple clustering approach that incorporates a multi-modal subspace proxy learning framework in this work. Utilizing the synergistic capabilities of CLIP and GPT-4, Multi-Sub aligns textual prompts expressing user preferences with their corresponding visual representations. This is achieved by automatically generating proxy words from large language models that act as subspace bases, thus allowing for the customized representation of data in terms specific to the user’s interests. Our method consistently outperforms existing baselines across a broad set of datasets in visual multiple clustering tasks. Our code is available at https://github.com/Alexander-Yao/Multi-Sub. | Customized Multiple Clustering via Multi-Modal Subspace Proxy Learning | [
"Jiawei Yao",
"Qi Qian",
"Juhua Hu"
] | NeurIPS.cc/2024/Conference | 2411.03978 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=xavWvnJTST | @inproceedings{
kaleb2024feedback,
title={Feedback control guides credit assignment in recurrent neural networks},
author={Klara Kaleb and Barbara Feulner and Juan A. Gallego and Claudia Clopath},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xavWvnJTST}
} | How do brain circuits learn to generate behaviour?
While significant strides have been made in understanding learning in artificial neural networks, applying this knowledge to biological networks remains challenging.
For instance, while backpropagation is known to perform accurate credit assignment of error in artificial neural networks, how a similarly powerful process can be realized within the constraints of biological circuits remains largely unclear.
One of the major challenges is that the brain's extensive recurrent connectivity requires the propagation of error through both space and time, a problem that is notoriously difficult to solve in vanilla recurrent neural networks.
Moreover, the extensive feedback connections in the brain are known to influence forward network activity, but the interaction between feedback-driven activity changes and local, synaptic plasticity-based learning is not fully understood.
Building on our previous work modelling motor learning, this work investigates the mechanistic properties of pre-trained networks with feedback control on a standard motor task.
We show that feedback control of the ongoing recurrent network dynamics approximates the optimal first-order gradient with respect to the network activities, allowing for rapid, ongoing movement correction.
Moreover, we show that trial-by-trial adaptation to a persistent perturbation using a local, biologically plausible learning rule that integrates recent activity and error feedback is both more accurate and more efficient with feedback control during learning, due to the decoupling of the recurrent network dynamics and the injection of an adaptive, second-order gradient into the network dynamics.
Thus, our results suggest that feedback control may guide credit assignment in biological recurrent neural networks, enabling both rapid and efficient learning in the brain. | Feedback control guides credit assignment in recurrent neural networks | [
"Klara Kaleb",
"Barbara Feulner",
"Juan A. Gallego",
"Claudia Clopath"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=xaqPAkJnAS | @inproceedings{
shen2024beyond,
title={Beyond Redundancy: Information-aware Unsupervised Multiplex Graph Structure Learning},
author={Zhixiang Shen and Shuo Wang and zhao kang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xaqPAkJnAS}
} | Unsupervised Multiplex Graph Learning (UMGL) aims to learn node representations on various edge types without manual labeling. However, existing research overlooks a key factor: the reliability of the graph structure. Real-world data often exhibit a complex nature and contain abundant task-irrelevant noise, severely compromising UMGL's performance. Moreover, existing methods primarily rely on contrastive learning to maximize mutual information across different graphs, limiting them to redundant multiplex graph scenarios and failing to capture view-unique task-relevant information. In this paper, we focus on a more realistic and challenging task: to learn, without supervision, a fused graph from multiple graphs that preserves sufficient task-relevant information while removing task-irrelevant noise. Specifically, our proposed Information-aware Unsupervised Multiplex Graph Fusion framework (InfoMGF) uses graph structure refinement to eliminate irrelevant noise and simultaneously maximizes view-shared and view-unique task-relevant information, thereby tackling the frontier of non-redundant multiplex graphs. Theoretical analyses further guarantee the effectiveness of InfoMGF. Comprehensive experiments against various baselines on different downstream tasks demonstrate its superior performance and robustness. Surprisingly, our unsupervised method even beats sophisticated supervised approaches. The source code and datasets are available at https://github.com/zxlearningdeep/InfoMGF. | Beyond Redundancy: Information-aware Unsupervised Multiplex Graph Structure Learning | [
"Zhixiang Shen",
"Shuo Wang",
"zhao kang"
] | NeurIPS.cc/2024/Conference | 2409.17386 | [
"https://github.com/zxlearningdeep/infomgf"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=xabStWAUtr | @inproceedings{
zhang2024cooccurrence,
title={Co-occurrence is not Factual Association in Language Models},
author={Xiao Zhang and Miao Li and Ji Wu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xabStWAUtr}
} | Pretrained language models can encode a large amount of knowledge and utilize it for various reasoning tasks, yet they can still struggle to learn novel factual knowledge effectively from finetuning on limited textual demonstrations. In this work, we show that the reason for this deficiency is that language models are biased to learn word co-occurrence statistics instead of true factual associations. We identify the differences between two forms of knowledge representation in language models: knowledge in the form of co-occurrence statistics is encoded in the middle layers of the transformer model and does not generalize well to reasoning scenarios beyond simple question answering, while true factual associations are encoded in the lower layers and can be freely utilized in various reasoning tasks. Based on these observations, we propose two strategies to improve the learning of factual associations in language models. We show that training on text with implicit rather than explicit factual associations can force the model to learn factual associations instead of co-occurrence statistics, significantly improving the generalization of newly learned knowledge. We also propose a simple training method to actively forget the learned co-occurrence statistics, which unblocks and enhances the learning of factual associations when training on plain narrative text. On both synthetic and real-world corpora, the two proposed strategies improve the generalization of the knowledge learned during finetuning to reasoning scenarios such as indirect and multi-hop question answering. | Co-occurrence is not Factual Association in Language Models | [
"Xiao Zhang",
"Miao Li",
"Ji Wu"
] | NeurIPS.cc/2024/Conference | 2409.14057 | [
"https://github.com/amounts-tidings/fact_learning"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=xZxXNhndXU | @inproceedings{
fischer2024dynamic,
title={Dynamic 3D Gaussian Fields for Urban Areas},
author={Tobias Fischer and Jonas Kulhanek and Samuel Rota Bul{\`o} and Lorenzo Porzi and Marc Pollefeys and Peter Kontschieder},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xZxXNhndXU}
} | We present an efficient neural 3D scene representation for novel-view synthesis (NVS) in large-scale, dynamic urban areas. Existing works are not well suited for applications like mixed-reality or closed-loop simulation due to their limited visual quality and non-interactive rendering speeds. Recently, rasterization-based approaches have achieved high-quality NVS at impressive speeds. However, these methods are limited to small-scale, homogeneous data, i.e. they cannot handle severe appearance and geometry variations due to weather, season, and lighting and do not scale to larger, dynamic areas with thousands of images. We propose 4DGF, a neural scene representation that scales to large-scale dynamic urban areas, handles heterogeneous input data, and substantially improves rendering speeds. We use 3D Gaussians as an efficient geometry scaffold while relying on neural fields as a compact and flexible appearance model. We integrate scene dynamics via a scene graph at global scale while modeling articulated motions on a local level via deformations. This decomposed approach enables flexible scene composition suitable for real-world applications. In experiments, we surpass the state-of-the-art by over 3 dB in PSNR and more than 200x in rendering speed. | Dynamic 3D Gaussian Fields for Urban Areas | [
"Tobias Fischer",
"Jonas Kulhanek",
"Samuel Rota Bulò",
"Lorenzo Porzi",
"Marc Pollefeys",
"Peter Kontschieder"
] | NeurIPS.cc/2024/Conference | 2406.03175 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=xZKXGvLB0c | @inproceedings{
mejia2024causal,
title={Causal vs. Anticausal merging of predictors},
author={Sergio Hernan Garrido Mejia and Patrick Bl{\"o}baum and Bernhard Sch{\"o}lkopf and Dominik Janzing},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xZKXGvLB0c}
} | We study the differences arising from merging predictors in the causal and anticausal directions using the same data.
In particular, we study the asymmetries that arise in a simple model where we merge the predictors using one binary variable as target and two continuous variables as predictors.
We use Causal Maximum Entropy (CMAXENT) as inductive bias to merge the predictors, however, we expect similar differences to hold also when we use other merging methods that take into account asymmetries between cause and effect.
We show that if we observe all bivariate distributions, the CMAXENT solution reduces to a logistic regression in the causal direction and Linear Discriminant Analysis (LDA) in the anticausal direction.
Furthermore, we study how the decision boundaries of these two solutions differ whenever we observe only some of the bivariate distributions, and discuss the implications for Out-Of-Variable (OOV) generalisation. | Causal vs. Anticausal merging of predictors | [
"Sergio Hernan Garrido Mejia",
"Patrick Blöbaum",
"Bernhard Schölkopf",
"Dominik Janzing"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
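A quick numerical companion to the abstract above: when all bivariate distributions are observed, the CMAXENT merge reduces to logistic regression in the causal direction and to LDA in the anticausal direction, and both can be fit on the same data to compare their decision boundaries. The sketch below does this with scikit-learn; the synthetic data-generating process and variable names are illustrative assumptions, not the paper's setup.

```python
# Contrast the two CMAXENT reductions on the same data: logistic regression
# (causal direction) vs. LDA (anticausal direction). Illustrative sketch only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, size=n)                     # one binary target
X = rng.normal(loc=y[:, None] * 1.5, size=(n, 2))  # two continuous predictors

logreg = LogisticRegression().fit(X, y)
lda = LinearDiscriminantAnalysis().fit(X, y)

# Both induce linear decision boundaries; the fitted boundaries coincide only
# when LDA's class-conditional Gaussian assumptions actually hold.
print("logistic coef:", logreg.coef_[0], "intercept:", logreg.intercept_)
print("LDA coef:     ", lda.coef_[0], "intercept:", lda.intercept_)
```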
null | https://openreview.net/forum?id=xXRnUU7xTL | @inproceedings{
wei2024selfcodealign,
title={SelfCodeAlign: Self-Alignment for Code Generation},
author={Yuxiang Wei and Federico Cassano and Jiawei Liu and Yifeng Ding and Naman Jain and Zachary Mueller and Harm de Vries and Leandro Von Werra and Arjun Guha and LINGMING ZHANG},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xXRnUU7xTL}
} | Instruction tuning is a supervised fine-tuning approach that significantly improves the ability of large language models (LLMs) to follow human instructions. For programming tasks, most models are finetuned with costly human-annotated instruction-response pairs or those generated by large, proprietary LLMs, which may not be permitted. We propose SelfCodeAlign, the first fully transparent and permissive pipeline for self-aligning code LLMs without extensive human annotations or distillation. SelfCodeAlign employs the same base model for inference throughout the data generation process. It first extracts diverse coding concepts from high-quality seed snippets to generate new tasks. It then samples multiple responses per task, pairs each with test cases, and validates them in a sandbox environment. Finally, passing examples are selected for instruction tuning. In our primary experiments, we use SelfCodeAlign with CodeQwen1.5-7B to generate a dataset of 74k instruction-response pairs. Finetuning on this dataset leads to a model that achieves a 67.1 pass@1 on HumanEval+, surpassing CodeLlama-70B-Instruct despite being ten times smaller. Across all benchmarks, this finetuned model consistently outperforms the original version trained with OctoPack, the previous state-of-the-art method for instruction tuning without human annotations or distillation. Additionally, we show that SelfCodeAlign is effective across LLMs of various sizes, from 3B to 33B, and that the base models can benefit more from alignment with their own data distribution. We further validate each component’s effectiveness in our pipeline, showing that SelfCodeAlign outperforms both direct distillation from GPT-4o and leading GPT-3.5-based distillation methods, such as OSS-Instruct and Evol-Instruct. SelfCodeAlign has also led to the creation of StarCoder2-Instruct, the first fully transparent, permissively licensed, and self-aligned code LLM that achieves state-of-the-art coding performance. Overall, SelfCodeAlign shows for the first time that a strong instruction-tuned code LLM can result from self-alignment rather than distillation. | SelfCodeAlign: Self-Alignment for Code Generation | [
"Yuxiang Wei",
"Federico Cassano",
"Jiawei Liu",
"Yifeng Ding",
"Naman Jain",
"Zachary Mueller",
"Harm de Vries",
"Leandro Von Werra",
"Arjun Guha",
"LINGMING ZHANG"
] | NeurIPS.cc/2024/Conference | 2410.24198 | [
"https://github.com/bigcode-project/selfcodealign"
] | https://huggingface.co/papers/2410.24198 | 4 | 20 | 2 | 10 | [
"bigcode/starcoder2-15b-instruct-v0.1"
] | [] | [
"NiansuhAI/Main",
"srinuksv/Main"
] | [
"bigcode/starcoder2-15b-instruct-v0.1"
] | [] | [
"NiansuhAI/Main",
"srinuksv/Main"
] | 1 | poster |
null | https://openreview.net/forum?id=xW6ga9i4eA | @inproceedings{
wang2024pfedclub,
title={pFedClub: Controllable Heterogeneous Model Aggregation for Personalized Federated Learning},
author={Jiaqi Wang and Qi Li and Lingjuan Lyu and Fenglong Ma},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xW6ga9i4eA}
} | Federated learning, a pioneering paradigm, enables collaborative model training without exposing users’ data to central servers. Most existing federated learning systems necessitate uniform model structures across all clients, restricting their practicality. Several methods have emerged to aggregate diverse client models; however, they either lack the ability to personalize, raise privacy and security concerns, need prior knowledge, or ignore the capability and functionality of personalized models. In this paper, we present an innovative approach, named pFedClub, which addresses these challenges. pFedClub introduces personalized federated learning through the substitution of controllable neural network blocks/layers. Initially, pFedClub dissects heterogeneous client models into blocks and organizes them into functional groups on the server. Utilizing the designed CMSR (Controllable Model Searching and Reproduction) algorithm, pFedClub generates a range of personalized candidate models for each client. A model matching technique is then applied to select the optimal personalized model, serving as a teacher model to guide each client’s training process. We conducted extensive experiments across three datasets, examining both IID and non-IID settings. The results demonstrate that pFedClub outperforms baseline approaches, achieving state-of-the-art performance. Moreover, our model insight analysis reveals that pFedClub generates personalized models of reasonable size in a controllable manner, significantly reducing computational costs. | pFedClub: Controllable Heterogeneous Model Aggregation for Personalized Federated Learning | [
"Jiaqi Wang",
"Qi Li",
"Lingjuan Lyu",
"Fenglong Ma"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=xUoNgR1Byy | @inproceedings{
marks2024interpreting,
title={Interpreting Learned Feedback Patterns in Large Language Models},
author={Luke Marks and Amir Abdullah and Clement Neo and Rauno Arike and David Krueger and Philip Torr and Fazl Barez},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xUoNgR1Byy}
} | Reinforcement learning from human feedback (RLHF) is widely used to train large language models (LLMs). However, it is unclear whether LLMs accurately learn the underlying preferences in human feedback data. We coin the term **Learned Feedback Pattern** (LFP) for patterns in an LLM's activations learned during RLHF that improve its performance on the fine-tuning task. We hypothesize that LLMs with LFPs accurately aligned to the fine-tuning feedback exhibit consistent activation patterns for outputs that would have received similar feedback during RLHF. To test this, we train probes to estimate the feedback signal implicit in the activations of a fine-tuned LLM. We then compare these estimates to the true feedback, measuring how accurate the LFPs are to the fine-tuning feedback. Our probes are trained on a condensed, sparse and interpretable representation of LLM activations, making it easier to correlate features of the input with our probe's predictions. We validate our probes by comparing the neural features they correlate with positive feedback inputs against the features GPT-4 describes and classifies as related to LFPs. Understanding LFPs can help minimize discrepancies between LLM behavior and training objectives, which is essential for the **safety** and **alignment** of LLMs. | Interpreting Learned Feedback Patterns in Large Language Models | [
"Luke Marks",
"Amir Abdullah",
"Clement Neo",
"Rauno Arike",
"David Krueger",
"Philip Torr",
"Fazl Barez"
] | NeurIPS.cc/2024/Conference | 2310.08164 | [
"https://github.com/apartresearch/interpreting-reward-models"
] | https://huggingface.co/papers/2310.08164 | 1 | 4 | 0 | 6 | [
"amirabdullah19852020/pythia-70m_utility_reward",
"amirabdullah19852020/pythia-160m_utility_reward",
"amirabdullah19852020/pythia-70m_sentiment_reward",
"amirabdullah19852020/pythia-160m_sentiment_reward",
"amirabdullah19852020/gpt-neo-125m_sentiment_reward",
"amirabdullah19852020/gpt-neo-125m_utility_reward"
] | [] | [] | [
"amirabdullah19852020/pythia-70m_utility_reward",
"amirabdullah19852020/pythia-160m_utility_reward",
"amirabdullah19852020/pythia-70m_sentiment_reward",
"amirabdullah19852020/pythia-160m_sentiment_reward",
"amirabdullah19852020/gpt-neo-125m_sentiment_reward",
"amirabdullah19852020/gpt-neo-125m_utility_reward"
] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=xUjBZR6b1T | @inproceedings{
mou2024revideo,
title={ReVideo: Remake a Video with Motion and Content Control},
author={Chong Mou and Mingdeng Cao and Xintao Wang and Zhaoyang Zhang and Ying Shan and Jian Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xUjBZR6b1T}
} | Despite significant advancements in video generation and editing using diffusion models, achieving accurate and localized video editing remains a substantial challenge. Additionally, most existing video editing methods primarily focus on altering visual content, with limited research dedicated to motion editing. In this paper, we present a novel attempt to Remake a Video (ReVideo) which stands out from existing methods by allowing precise video editing in specific areas through the specification of both content and motion. Content editing is facilitated by modifying the first frame, while the trajectory-based motion control offers an intuitive user interaction experience. ReVideo addresses a new task involving the coupling and training imbalance between content and motion control. To tackle this, we develop a three-stage training strategy that progressively decouples these two aspects from coarse to fine. Furthermore, we propose a spatiotemporal adaptive fusion module to integrate content and motion control across various sampling steps and spatial locations. Extensive experiments demonstrate that our ReVideo has promising performance on several accurate video editing applications, i.e., (1) locally changing video content while keeping the motion constant, (2) keeping content unchanged and customizing new motion trajectories, (3) modifying both content and motion trajectories. Our method can also seamlessly extend these applications to multi-area editing without specific training, demonstrating its flexibility and robustness. | ReVideo: Remake a Video with Motion and Content Control | [
"Chong Mou",
"Mingdeng Cao",
"Xintao Wang",
"Zhaoyang Zhang",
"Ying Shan",
"Jian Zhang"
] | NeurIPS.cc/2024/Conference | 2405.13865 | [
""
] | https://huggingface.co/papers/2405.13865 | 4 | 23 | 3 | 6 | [
"Adapter/ReVideo"
] | [] | [] | [
"Adapter/ReVideo"
] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=xSziO6gQgG | @inproceedings{
thrampoulidis2024implicit,
title={Implicit Optimization Bias of Next-token Prediction in Linear Models},
author={Christos Thrampoulidis},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xSziO6gQgG}
} | We initiate an investigation into the optimization properties of next-token prediction (NTP), the dominant training paradigm for modern language models. Specifically, we study the structural properties of the solutions selected by gradient-based optimizers among the many possible minimizers of the NTP objective. By framing NTP as cross-entropy minimization across \emph{distinct} contexts, each tied with a \emph{sparse} conditional probability distribution across a finite vocabulary of tokens, we introduce ``NTP-separability conditions'' that enable reaching the data-entropy lower bound. With this setup, and focusing on linear models with fixed context embeddings, we characterize the optimization bias of gradient descent (GD): Within the data subspace defined by the sparsity patterns of distinct contexts, GD selects parameters that equate the logits' differences of in-support tokens to their log-odds. In the orthogonal subspace, the GD parameters diverge in norm and select the direction that maximizes a margin specific to NTP. These findings extend previous research on implicit bias in one-hot classification to the NTP setting, highlighting key differences and prompting further research into the optimization and generalization properties of NTP, irrespective of the specific architecture used to generate the context embeddings. | Implicit Optimization Bias of Next-token Prediction in Linear Models | [
"Christos Thrampoulidis"
] | NeurIPS.cc/2024/Conference | 2402.18551 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=xSU27DgWEr | @inproceedings{
wang2024on,
title={On $f$-Divergence Principled Domain Adaptation: An Improved Framework},
author={Ziqiao Wang and Yongyi Mao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xSU27DgWEr}
} | Unsupervised domain adaptation (UDA) plays a crucial role in addressing distribution shifts in machine learning. In this work, we improve the theoretical foundations of UDA proposed in Acuna et al. (2021) by refining their $f$-divergence-based discrepancy and additionally introducing a new measure, $f$-domain discrepancy ($f$-DD). By removing the absolute value function and incorporating a scaling parameter, $f$-DD obtains novel target error and sample complexity bounds, allowing us to recover previous KL-based results and to bridge the gap between the algorithms and theory presented in Acuna et al. (2021). Using a localization technique, we also develop a fast-rate generalization bound. Empirical results demonstrate the superior performance of $f$-DD-based learning algorithms over previous works in popular UDA benchmarks. | On f-Divergence Principled Domain Adaptation: An Improved Framework | [
"Ziqiao Wang",
"Yongyi Mao"
] | NeurIPS.cc/2024/Conference | [
"https://github.com/ziqiaowanggeothe/f-dd"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
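To make the kind of quantity in the abstract above concrete, the sketch below estimates a KL divergence between source and target feature distributions with the Donsker-Varadhan variational bound. This is a generic divergence estimator, not the paper's exact $f$-DD discrepancy; the feature distributions, critic architecture, and hyperparameters are all assumptions.

```python
# Variational (Donsker-Varadhan) estimate of KL between two feature
# distributions -- a generic stand-in for a divergence-based domain
# discrepancy; illustrative sketch only.
import torch
import torch.nn as nn

torch.manual_seed(0)
src = torch.randn(512, 8)          # "source"-domain features
tgt = torch.randn(512, 8) + 0.5    # shifted "target"-domain features

critic = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

for _ in range(500):
    opt.zero_grad()
    # DV bound: KL(P || Q) >= E_P[T] - log E_Q[exp(T)]
    log_mean_exp = torch.logsumexp(critic(tgt), dim=0).squeeze() - torch.log(
        torch.tensor(float(len(tgt))))
    dv = critic(src).mean() - log_mean_exp
    (-dv).backward()               # maximize the bound
    opt.step()

print("variational lower bound on KL(source || target):", dv.item())
```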
null | https://openreview.net/forum?id=xRdpCOdghl | @inproceedings{
shao2024enhancing,
title={Enhancing Semi-Supervised Learning via Representative and Diverse Sample Selection},
author={Qian Shao and Jiangrui Kang and Qiyuan Chen and Zepeng Li and Hongxia Xu and Yiwen Cao and JIAJUAN LIANG and Jian Wu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xRdpCOdghl}
} | Semi-Supervised Learning (SSL) has become a preferred paradigm in many deep learning tasks, which reduces the need for human labor. Previous studies primarily focus on effectively utilising the labelled and unlabeled data to improve performance. However, we observe that how to select samples for labelling also significantly impacts performance, particularly under extremely low-budget settings. The sample selection task in SSL has been under-explored for a long time. To fill in this gap, we propose a Representative and Diverse Sample Selection approach (RDSS). By adopting a modified Frank-Wolfe algorithm to minimise a novel criterion $\alpha$-Maximum Mean Discrepancy ($\alpha$-MMD), RDSS samples a representative and diverse subset for annotation from the unlabeled data. We demonstrate that minimizing $\alpha$-MMD enhances the generalization ability of low-budget learning. Experimental results show that RDSS consistently improves the performance of several popular SSL frameworks and outperforms the state-of-the-art sample selection approaches used in Active Learning (AL) and Semi-Supervised Active Learning (SSAL), even with constrained annotation budgets. | Enhancing Semi-Supervised Learning via Representative and Diverse Sample Selection | [
"Qian Shao",
"Jiangrui Kang",
"Qiyuan Chen",
"Zepeng Li",
"Hongxia Xu",
"Yiwen Cao",
"JIAJUAN LIANG",
"Jian Wu"
] | NeurIPS.cc/2024/Conference | 2409.11653 | [
"https://github.com/yanhuiailab/rdss"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
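As a rough illustration of the selection idea above, the sketch below greedily picks a subset whose kernel mean embedding tracks the full unlabeled pool (plain kernel herding, an MMD-minimizing heuristic). RDSS itself uses a modified Frank-Wolfe algorithm on the $\alpha$-MMD criterion; this simpler variant, with made-up data and names, only conveys the flavor.

```python
# Greedy MMD-style subset selection (kernel herding); not the paper's RDSS.
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
pool = rng.normal(size=(300, 4))     # unlabeled feature pool
K = rbf_kernel(pool, pool)
mean_embed = K.mean(axis=1)          # k(x_i, .) averaged over the pool

budget, selected = 10, []
for _ in range(budget):
    # Herding step: pick the point that best shrinks the embedding gap.
    penalty = K[:, selected].sum(axis=1) if selected else 0.0
    scores = mean_embed - penalty / (len(selected) + 1)
    scores[selected] = -np.inf       # no repeats
    selected.append(int(scores.argmax()))

print("indices to annotate:", selected)
```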
null | https://openreview.net/forum?id=xRQxan3WkM | @inproceedings{
zhang2024the,
title={The Implicit Bias of Adam on Separable Data},
author={Chenyang Zhang and Difan Zou and Yuan Cao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xRQxan3WkM}
} | Adam has become one of the most favored optimizers in deep learning problems. Despite its success in practice, numerous mysteries persist regarding its theoretical understanding. In this paper, we study the implicit bias of Adam in linear logistic regression. Specifically, we show that when the training data are linearly separable, the iterates of Adam converge in direction towards a linear classifier that achieves the maximum $\ell_\infty$-margin. Notably, for a general class of diminishing learning rates, this convergence occurs within polynomial time. Our results shed light on the difference between Adam and (stochastic) gradient descent from a theoretical perspective. | The Implicit Bias of Adam on Separable Data | [
"Chenyang Zhang",
"Difan Zou",
"Yuan Cao"
] | NeurIPS.cc/2024/Conference | 2406.10650 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
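The directional claim in the abstract above is easy to probe on a toy dataset where the $\ell_2$ and $\ell_\infty$ max-margin directions differ. In the sketch below (hyperparameters and step counts are assumptions), GD's normalized iterate stays on the $\ell_2$ max-margin direction $(2,1)/\sqrt{5}$, while Adam's should approach the $\ell_\infty$ max-margin direction $(1,1)/\sqrt{2}$.

```python
# Toy probe of Adam's l_infinity max-margin bias vs. GD's l_2 bias on two
# separable points; illustrative, not the paper's experiments.
import torch

def train(opt_name, steps=20000, lr=0.01):
    X = torch.tensor([[2.0, 1.0], [-2.0, -1.0]])
    y = torch.tensor([1.0, -1.0])
    w = torch.zeros(2, requires_grad=True)
    opt = (torch.optim.Adam([w], lr=lr) if opt_name == "adam"
           else torch.optim.SGD([w], lr=lr))
    for _ in range(steps):
        loss = torch.nn.functional.softplus(-y * (X @ w)).mean()  # logistic loss
        opt.zero_grad(); loss.backward(); opt.step()
    return (w / w.norm()).detach()

print("gd direction:  ", train("sgd"))    # expected near (0.894, 0.447)
print("adam direction:", train("adam"))   # expected near (0.707, 0.707)
```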
null | https://openreview.net/forum?id=xQWJBeK5rh | @inproceedings{
wang2024structural,
title={Structural Inference of Dynamical Systems with Conjoined State Space Models},
author={Aoran Wang and Jun Pang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xQWJBeK5rh}
} | This paper introduces SICSM, a novel structural inference framework that integrates Selective State Space Models (selective SSMs) with Generative Flow Networks (GFNs) to handle the challenges posed by dynamical systems with irregularly sampled trajectories and partial observations.
By utilizing the robust temporal modeling capabilities of selective SSMs, our approach learns input-dependent transition functions that adapt to non-uniform time intervals, thereby enhancing the accuracy of structural inference.
By aggregating dynamics across diverse temporal dependencies and channeling them into the GFN, the SICSM adeptly approximates the posterior distribution of the system's structure.
This process not only enables precise inference of complex interactions within partially observed systems but also ensures the seamless integration of prior knowledge, enhancing the model’s accuracy and robustness.
Extensive evaluations on sixteen diverse datasets demonstrate that SICSM outperforms existing methods, particularly in scenarios characterized by irregular sampling and incomplete observations, highlighting its potential as a reliable tool for scientific discovery and system diagnostics in disciplines that demand precise modeling of complex interactions. | Structural Inference of Dynamical Systems with Conjoined State Space Models | [
"Aoran Wang",
"Jun Pang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=xOCAURlVM9 | @inproceedings{
xu2024assembly,
title={Assembly Fuzzy Representation on Hypergraph for Open-Set 3D Object Retrieval},
author={Yang Xu and Yifan Feng and Jun Zhang and Jun-Hai Yong and Yue Gao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xOCAURlVM9}
} | The lack of object-level labels presents a significant challenge for 3D object retrieval in the open-set environment. However, part-level shapes of objects often share commonalities across categories but remain underexploited in existing retrieval methods. In this paper, we introduce the Hypergraph-Based Assembly Fuzzy Representation (HARF) framework, which navigates the intricacies of open-set 3D object retrieval through a bottom-up lens of Part Assembly. To tackle the challenge of assembly isomorphism and unification, we propose the Hypergraph Isomorphism Convolution (HIConv) for smoothing and adopt the Isomorphic Assembly Embedding (IAE) module to generate assembly embeddings with geometric-semantic consistency. To address the challenge of open-set category generalization, our method employs high-order correlations and fuzzy representation to mitigate distribution skew through the Structure Fuzzy Reconstruction (SFR) module, by constructing a leveraged hypergraph based on local certainty and global uncertainty correlations. We construct three open-set retrieval datasets for 3D objects with part-level annotations: OP-SHNP, OP-INTRA, and OP-COSEG. Extensive experiments and ablation studies on these three benchmarks show our method outperforms current state-of-the-art methods. | Assembly Fuzzy Representation on Hypergraph for Open-Set 3D Object Retrieval | [
"Yang Xu",
"Yifan Feng",
"Jun Zhang",
"Jun-Hai Yong",
"Yue Gao"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=xO9GHdmK76 | @inproceedings{
xu2024infinitedimensional,
title={Infinite-Dimensional Feature Interaction},
author={Chenhui Xu and Fuxun Yu and Maoliang Li and Zihao Zheng and Zirui Xu and Jinjun Xiong and Xiang Chen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xO9GHdmK76}
} | Past neural network design has largely focused on the dimension of the feature \textit{representation space} and its capacity scaling (e.g., width, depth), but has overlooked the scaling of the feature \textit{interaction space}.
Recent advances have shifted focus towards element-wise multiplication, which facilitates a higher-dimensional feature interaction space for better information transformation. Despite this progress, multiplications predominantly capture low-order interactions, thus remaining confined to a finite-dimensional interaction space. To transcend this limitation, classic kernel methods emerge as a promising solution to engage features in an infinite-dimensional space. We introduce InfiNet, a model architecture that enables feature interaction within an infinite-dimensional space created by the RBF kernel. Our experiments reveal that InfiNet achieves a new state of the art, owing to its capability to leverage infinite-dimensional interactions, significantly enhancing model performance. | Infinite-Dimensional Feature Interaction | [
"Chenhui Xu",
"Fuxun Yu",
"Maoliang Li",
"Zihao Zheng",
"Zirui Xu",
"Jinjun Xiong",
"Xiang Chen"
] | NeurIPS.cc/2024/Conference | 2405.13972 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=xNncVKbwwS | @inproceedings{
yang2024universal,
title={Universal Online Convex Optimization with $1$ Projection per Round},
author={Wenhao Yang and Yibo Wang and Peng Zhao and Lijun Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xNncVKbwwS}
} | To address the uncertainty in function types, recent progress in online convex optimization (OCO) has spurred the development of universal algorithms that simultaneously attain minimax rates for multiple types of convex functions. However, for a $T$-round online problem, state-of-the-art methods typically conduct $O(\log T)$ projections onto the domain in each round, a process potentially time-consuming with complicated feasible sets. In this paper, inspired by the black-box reduction of Cutkosky and Orabona [2018], we employ a surrogate loss defined over simpler domains to develop universal OCO algorithms that only require $1$ projection. Embracing the framework of prediction with expert advice, we maintain a set of experts for each type of functions and aggregate their predictions via a meta-algorithm. The crux of our approach lies in a uniquely designed expert-loss for strongly convex functions, stemming from an innovative decomposition of the regret into the meta-regret and the expert-regret. Our analysis sheds new light on the surrogate loss, facilitating a rigorous examination of the discrepancy between the regret of the original loss and that of the surrogate loss, and carefully controlling meta-regret under the strong convexity condition. With only $1$ projection per round, we establish optimal regret bounds for general convex, exponentially concave, and strongly convex functions simultaneously. Furthermore, we enhance the expert-loss to exploit the smoothness property, and demonstrate that our algorithm can attain small-loss regret for multiple types of convex and smooth functions. | Universal Online Convex Optimization with 1 Projection per Round | [
"Wenhao Yang",
"Yibo Wang",
"Peng Zhao",
"Lijun Zhang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
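The pattern described above can be sketched in a few lines: keep the iterate in a simple enclosing space, and touch the true feasible set with a single projection per round when producing the played point, in the spirit of the Cutkosky-Orabona reduction the paper builds on. The domain (an $\ell_2$ ball), losses, and step sizes below are assumptions; this is a simplified single-expert illustration, not the paper's universal algorithm.

```python
# One projection per round: update unconstrained, project only to play.
import numpy as np

def project_l2_ball(x, radius=1.0):
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

rng = np.random.default_rng(0)
d, T = 5, 1000
z = np.zeros(d)                  # iterate kept in the simple enclosing space
total_loss = 0.0
for t in range(1, T + 1):
    x = project_l2_ball(z)       # the single projection this round
    g = rng.normal(size=d)       # gradient of this round's (linear) loss
    total_loss += g @ x
    z -= g / np.sqrt(t)          # unconstrained gradient step

print("average per-round loss:", total_loss / T)
```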
null | https://openreview.net/forum?id=xNlQjS0dtO | @inproceedings{
lyu2024keeping,
title={Keeping {LLM}s Aligned After Fine-tuning: The Crucial Role of Prompt Templates},
author={Kaifeng Lyu and Haoyu Zhao and Xinran Gu and Dingli Yu and Anirudh Goyal and Sanjeev Arora},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xNlQjS0dtO}
} | Public LLMs such as the Llama 2-Chat underwent alignment training and were considered safe. Recently Qi et al. (2024) reported that even benign fine-tuning on seemingly safe datasets can give rise to unsafe behaviors in the models. The current paper is about methods and best practices to mitigate such loss of alignment. We focus on the setting where a public model is fine-tuned before serving users for specific usage, where the model should improve on the downstream task while maintaining alignment. Through extensive experiments on several chat models (Meta's Llama 2-Chat, Mistral AI's Mistral 7B Instruct v0.2, and OpenAI's GPT-3.5 Turbo), this paper uncovers that the prompt templates used during fine-tuning and inference play a crucial role in preserving safety alignment, and proposes the “Pure Tuning, Safe Testing” (PTST) strategy --- fine-tune models without a safety prompt, but include it at test time. This seemingly counterintuitive strategy incorporates an intended distribution shift to encourage alignment preservation. Fine-tuning experiments on GSM8K, ChatDoctor, and OpenOrca show that PTST significantly reduces the rise of unsafe behaviors. | Keeping LLMs Aligned After Fine-tuning: The Crucial Role of Prompt Templates | [
"Kaifeng Lyu",
"Haoyu Zhao",
"Xinran Gu",
"Dingli Yu",
"Anirudh Goyal",
"Sanjeev Arora"
] | NeurIPS.cc/2024/Conference | 2402.18540 | [
"https://github.com/vfleaking/ptst"
] | https://huggingface.co/papers/2402.18540 | 0 | 0 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 | poster |
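The PTST recipe above amounts to a small change in how prompts are templated at the two stages. A minimal sketch, assuming a chat-style message format; the safety string and function names are illustrative, not the paper's exact templates.

```python
# "Pure Tuning, Safe Testing": no safety prompt in fine-tuning examples,
# safety prompt restored at inference. Illustrative sketch only.
SAFETY_PROMPT = "You are a helpful, harmless assistant. Refuse unsafe requests."

def format_for_finetuning(user_msg: str, response: str) -> dict:
    # No safety system prompt during fine-tuning ("Pure Tuning").
    return {"messages": [{"role": "user", "content": user_msg},
                         {"role": "assistant", "content": response}]}

def format_for_inference(user_msg: str) -> list:
    # Safety system prompt prepended at test time ("Safe Testing").
    return [{"role": "system", "content": SAFETY_PROMPT},
            {"role": "user", "content": user_msg}]

print(format_for_inference("How do I solve 3x + 1 = 10?"))
```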
null | https://openreview.net/forum?id=xNZEjFe0mh | @inproceedings{
guo2024communicationefficient,
title={Communication-Efficient Federated Group Distributionally Robust Optimization},
author={Zhishuai Guo and Tianbao Yang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xNZEjFe0mh}
} | Federated learning faces challenges due to the heterogeneity in data volumes and distributions at different clients, which can compromise the model's ability to generalize to various distributions.
Existing approaches to address this issue based on group distributionally robust optimization (GDRO) often lead to high communication and sample complexity.
To this end, this work introduces algorithms tailored for communication-efficient Federated Group Distributionally Robust Optimization (FGDRO). Our contributions are threefold: Firstly, we introduce the FGDRO-CVaR algorithm, which optimizes the average top-K losses while reducing communication complexity to $O(1/\epsilon^4)$, where $\epsilon$ denotes the desired precision level. Secondly, our FGDRO-KL algorithm is crafted to optimize KL regularized FGDRO, cutting communication complexity to $O(1/\epsilon^3)$. Lastly, we propose FGDRO-KL-Adam to utilize Adam-type local updates in FGDRO-KL, which not only maintains a communication cost of $O(1/\epsilon^3)$ but also shows potential to surpass SGD-type local steps in practical applications.
The effectiveness of our algorithms has been demonstrated on a variety of real-world tasks, including natural language processing and computer vision. | Communication-Efficient Federated Group Distributionally Robust Optimization | [
"Zhishuai Guo",
"Tianbao Yang"
] | NeurIPS.cc/2024/Conference | 2410.06369 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
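The FGDRO-CVaR objective mentioned above, the average of the top-$K$ losses, is itself one line; the paper's contribution is optimizing it with low communication in the federated setting. A hedged sketch with made-up per-client losses:

```python
# Average-top-K (CVaR-style) objective over per-client losses; the values
# and K below are illustrative assumptions.
import torch

client_losses = torch.tensor([0.2, 1.5, 0.7, 2.1, 0.4, 0.9])
K = 3
cvar_loss = torch.topk(client_losses, K).values.mean()  # mean of worst-K clients
print("average top-K loss:", cvar_loss.item())
```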
null | https://openreview.net/forum?id=xM5m7J6Lbl | @inproceedings{
berdoz2024can,
title={Can an {AI} Agent Safely Run a Government? Existence of Probably Approximately Aligned Policies},
author={Fr{\'e}d{\'e}ric Berdoz and Roger Wattenhofer},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xM5m7J6Lbl}
} | While autonomous agents often surpass humans in their ability to handle vast and complex data, their potential misalignment (i.e., lack of transparency regarding their true objective) has thus far hindered their use in critical applications such as social decision processes. More importantly, existing alignment methods provide no formal guarantees on the safety of such models. Drawing from utility and social choice theory, we provide a novel quantitative definition of alignment in the context of social decision-making. Building on this definition, we introduce probably approximately aligned (i.e., near-optimal) policies, and we derive a sufficient condition for their existence. Lastly, recognizing the practical difficulty of satisfying this condition, we introduce the relaxed concept of safe (i.e., nondestructive) policies, and we propose a simple yet robust method to safeguard the black-box policy of any autonomous agent, ensuring all its actions are verifiably safe for the society. | Can an AI Agent Safely Run a Government? Existence of Probably Approximately Aligned Policies | [
"Frédéric Berdoz",
"Roger Wattenhofer"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=xL7Ve14AHA | @inproceedings{
huang2024regularized,
title={Regularized Adaptive Momentum Dual Averaging with an Efficient Inexact Subproblem Solver for Training Structured Neural Network},
author={Zih-Syuan Huang and Ching-pei Lee},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xL7Ve14AHA}
} | We propose a Regularized Adaptive Momentum Dual Averaging (RAMDA) algorithm for training structured neural networks. Similar to existing regularized adaptive methods, the subproblem for computing the update directions of RAMDA involves a nonsmooth regularizer and a diagonal preconditioner, and therefore does not possess a closed-form solution in general. We thus also carefully devise an implementable inexactness condition that retains convergence guarantees similar to the exact versions, and propose a companion efficient solver for the subproblems of both RAMDA and existing methods to make them practically feasible. We leverage the theory of manifold identification in variational analysis to show that, even in the presence of such inexactness, the iterates of RAMDA attain the ideal structure induced by the regularizer at the stationary point of asymptotic convergence. This structure is locally optimal near the point of convergence, so RAMDA is guaranteed to obtain the best structure possible among all methods converging to the same point, making it the first regularized adaptive method to output models that possess outstanding predictive performance while being (locally) optimally structured. Extensive numerical experiments in large-scale modern computer vision, language modeling, and speech tasks show that the proposed RAMDA is efficient and consistently outperforms the state of the art for training structured neural networks. Our code is available at (removed for anonymous review). | Regularized Adaptive Momentum Dual Averaging with an Efficient Inexact Subproblem Solver for Training Structured Neural Network | [
"Zih-Syuan Huang",
"Ching-pei Lee"
] | NeurIPS.cc/2024/Conference | 2403.14398 | [
"https://github.com/ismoptgroup/ramda_exp"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=xImeJtdUiw | @inproceedings{
garau-luis2024multimodal,
title={Multi-modal Transfer Learning between Biological Foundation Models},
author={Juan Jose Garau-Luis and Patrick Philippe Bordes and Liam Gonzalez and Ma{\v{s}}a Roller and Bernardo P de Almeida and Christopher F. Blum and Lorenz Hexemer and Stefan Laurent and Maren Lang and Thomas PIERROT and Guillaume Richard},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xImeJtdUiw}
} | Biological sequences encode fundamental instructions for the building blocks of life, in the form of DNA, RNA, and proteins. Modeling these sequences is key to understanding disease mechanisms and is an active research area in computational biology. Recently, Large Language Models have shown great promise in solving certain biological tasks but current approaches are limited to a single sequence modality (DNA, RNA, or protein). Key problems in genomics intrinsically involve multiple modalities, but it remains unclear how to adapt general-purpose sequence models to those cases. In this work we propose a multi-modal model that connects DNA, RNA, and proteins by leveraging information from different pre-trained modality-specific encoders. We demonstrate its capabilities by applying it to the largely unsolved problem of predicting how multiple RNA transcript isoforms originate from the same gene (i.e. same DNA sequence) and map to different transcription expression levels across various human tissues. We show that our model, dubbed IsoFormer, is able to accurately predict differential transcript expression, outperforming existing methods and leveraging the use of multiple modalities. Our framework also achieves efficient knowledge transfer from the encoders' pre-training as well as between modalities. We open-source our model, paving the way for new multi-modal gene expression approaches. | Multi-modal Transfer Learning between Biological Foundation Models | [
"Juan Jose Garau-Luis",
"Patrick Philippe Bordes",
"Liam Gonzalez",
"Maša Roller",
"Bernardo P de Almeida",
"Christopher F. Blum",
"Lorenz Hexemer",
"Stefan Laurent",
"Maren Lang",
"Thomas PIERROT",
"Guillaume Richard"
] | NeurIPS.cc/2024/Conference | 2406.14150 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=xDrKZOZEOc | @inproceedings{
li2024fast,
title={Fast T2T: Optimization Consistency Speeds Up Diffusion-Based Training-to-Testing Solving for Combinatorial Optimization},
author={Yang Li and Jinpei Guo and Runzhong Wang and Hongyuan Zha and Junchi Yan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xDrKZOZEOc}
} | Diffusion models have recently advanced Combinatorial Optimization (CO) as a powerful backbone for neural solvers. However, their iterative sampling process requiring denoising across multiple noise levels incurs substantial overhead. We propose to learn direct mappings from different noise levels to the optimal solution for a given instance, facilitating high-quality generation with minimal shots. This is achieved through an optimization consistency training protocol, which, for a given instance, minimizes the difference among samples originating from varying generative trajectories and time steps relative to the optimal solution. The proposed model enables fast single-step solution generation while retaining the option of multi-step sampling to trade extra computation for sampling quality, which offers a more effective and efficient alternative backbone for neural solvers. In addition, within the training-to-testing (T2T) framework, to bridge the gap between training on historical instances and solving new instances, we introduce a novel consistency-based gradient search scheme during the test stage, enabling more effective exploration of the solution space learned during training. It is achieved by updating the latent solution probabilities under objective gradient guidance during the alternation of noise injection and denoising steps. We refer to this model as Fast T2T. Extensive experiments on two popular tasks, the Traveling Salesman Problem (TSP) and Maximal Independent Set (MIS), demonstrate the superiority of Fast T2T regarding both solution quality and efficiency, even outperforming LKH given limited time budgets. Notably, Fast T2T with merely one-step generation and one-step gradient search can mostly outperform the SOTA diffusion-based counterparts that require hundreds of steps, while achieving tens of times speedup. | Fast T2T: Optimization Consistency Speeds Up Diffusion-Based Training-to-Testing Solving for Combinatorial Optimization | [
"Yang Li",
"Jinpei Guo",
"Runzhong Wang",
"Hongyuan Zha",
"Junchi Yan"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=xCUXJqQySD | @inproceedings{
kuang2024medrealsim,
title={Med-Real2Sim: Non-Invasive Medical Digital Twins using Physics-Informed Self-Supervised Learning},
author={Keying Kuang and Frances Dean and Jack B. Jedlicki and David Ouyang and Anthony Philippakis and David Sontag and Ahmed Alaa},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xCUXJqQySD}
} | A digital twin is a virtual replica of a real-world physical phenomenon that uses mathematical modeling to characterize and simulate its defining features. By constructing digital twins for disease processes, we can perform in-silico simulations that mimic patients' health conditions and counterfactual outcomes under hypothetical interventions in a virtual setting. This eliminates the need for invasive procedures or uncertain treatment decisions. In this paper, we propose a method to identify digital twin model parameters using only noninvasive patient health data. We approach digital twin modeling as a composite inverse problem, and observe that its structure resembles pretraining and finetuning in self-supervised learning (SSL). Leveraging this, we introduce a physics-informed SSL algorithm that initially pretrains a neural network on the pretext task of learning a differentiable simulator of a physiological process. Subsequently, the model is trained to reconstruct physiological measurements from noninvasive modalities while being constrained by the physical equations learned in pretraining. We apply our method to identify digital twins of cardiac hemodynamics using noninvasive echocardiogram videos, and demonstrate its utility in unsupervised disease detection and in-silico clinical trials. | Med-Real2Sim: Non-Invasive Medical Digital Twins using Physics-Informed Self-Supervised Learning | [
"Keying Kuang",
"Frances Dean",
"Jack B. Jedlicki",
"David Ouyang",
"Anthony Philippakis",
"David Sontag",
"Ahmed Alaa"
] | NeurIPS.cc/2024/Conference | 2403.00177 | [
"https://github.com/alaalab/cardiopinn"
] | https://huggingface.co/papers/2403.00177 | 1 | 0 | 0 | 7 | [] | [] | [
"alaa-lab/Med-Real2Sim"
] | [] | [] | [
"alaa-lab/Med-Real2Sim"
] | 1 | poster |
null | https://openreview.net/forum?id=xCIbVuXwPM | @inproceedings{
nueve2024trading,
title={Trading off Consistency and Dimensionality of Convex Surrogates for Multiclass Classification},
author={Enrique Nueve and Dhamma Kimpara and Bo Waggoner and Jessica Finocchiaro},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xCIbVuXwPM}
} | In multiclass classification over $n$ outcomes, we typically optimize some surrogate loss $L: \mathbb{R}^d \times\mathcal{Y} \to \mathbb{R}$ assigning real-valued error to predictions in $\mathbb{R}^d$. In this paradigm, outcomes must be embedded into the reals with dimension $d \approx n$ in order to design a consistent surrogate loss. Consistent losses are well-motivated theoretically, yet for large $n$, such as in information retrieval and structured prediction tasks, their optimization may be computationally infeasible. In practice, outcomes are typically embedded into some $\mathbb{R}^d$ for $d \ll n$, with little known about their suitability for multiclass classification. We investigate two approaches for trading off consistency and dimensionality in multiclass classification while using a convex surrogate loss. We first formalize partial consistency when the optimized surrogate has dimension $d \ll n$.
We then check if partial consistency holds under a given embedding and low-noise assumption, providing insight into when to use a particular embedding into $\mathbb{R}^d$. Finally, we present a new method to construct (fully) consistent losses with $d \ll n$ out of multiple problem instances. Our practical approach leverages parallelism to sidestep lower bounds on $d$. | Trading off Consistency and Dimensionality of Convex Surrogates for Multiclass Classification | [
"Enrique Nueve",
"Dhamma Kimpara",
"Bo Waggoner",
"Jessica Finocchiaro"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=x9eFgahVBI | @inproceedings{
wibisono2024from,
title={From Unstructured Data to In-Context Learning: Exploring What Tasks Can Be Learned and When},
author={Kevin Christian Wibisono and Yixin Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=x9eFgahVBI}
} | Large language models (LLMs) like transformers demonstrate impressive in-context learning (ICL) capabilities, allowing them to make
predictions for new tasks based on prompt exemplars without parameter updates. While existing ICL theories often assume structured training data resembling ICL tasks (e.g., x-y pairs for linear regression), LLMs are typically trained unsupervised on unstructured text, such as web content, which lacks clear parallels to tasks like word analogy. To address this gap, we examine what enables ICL in models trained on unstructured data, focusing on critical sequence model requirements and training data structure. We find that many ICL capabilities can
emerge simply from co-occurrence of semantically related word pairs in unstructured data; word analogy completion, for example, can provably arise purely through co-occurrence modeling, using classical language models like continuous bag of words (CBOW), without needing positional information or attention mechanisms. However, positional information becomes crucial for logic reasoning tasks requiring generalization to unseen tokens. Finally, we identify two cases where ICL fails: one in logic reasoning tasks that require generalizing to new, unseen patterns, and another in analogy completion where relevant word pairs appear only in fixed training positions. These findings suggest that LLMs' ICL abilities depend heavily on the structural elements within their training data. | From Unstructured Data to In-Context Learning: Exploring What Tasks Can Be Learned and When | [
"Kevin Christian Wibisono",
"Yixin Wang"
] | NeurIPS.cc/2024/Conference | 2406.00131 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
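The co-occurrence finding above lends itself to a tiny demonstration: with a toy corpus of (capital, country) pairs, analogy completion reduces to reading off co-occurrence statistics, with no positional information or attention involved. A minimal sketch, assuming a made-up corpus and a deliberately simple scoring rule (not the paper's experimental setup):

```python
# Analogy completion from raw co-occurrence counts alone; illustrative toy.
import numpy as np

corpus = [("paris", "france"), ("rome", "italy"), ("berlin", "germany"),
          ("paris", "france"), ("rome", "italy")]
vocab = sorted({w for pair in corpus for w in pair})
idx = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts over the observed pairs.
C = np.zeros((len(vocab), len(vocab)))
for a, b in corpus:
    C[idx[a], idx[b]] += 1
    C[idx[b], idx[a]] += 1

# "paris : france :: rome : ?" -- with pure co-occurrence modeling, the
# completion is simply the word that co-occurs most strongly with "rome".
scores = C[idx["rome"]].copy()
scores[idx["rome"]] = -np.inf        # exclude the query word itself
print("analogy answer:", vocab[int(scores.argmax())])   # -> "italy"
```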
null | https://openreview.net/forum?id=x7usmidzxj | @inproceedings{
hong2024on,
title={On Convergence of Adam for Stochastic Optimization under Relaxed Assumptions},
author={Yusu Hong and Junhong Lin},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=x7usmidzxj}
} | In this paper, we study Adam in non-convex smooth scenarios with potentially unbounded gradients and affine variance noise. We consider a general noise model which governs affine variance noise, bounded noise, and sub-Gaussian noise. We show that Adam with a specific hyper-parameter setup can find a stationary point with a $\mathcal{O}(\text{poly}(\log T)/\sqrt{T})$ rate in high probability under this general noise model, where $T$ denotes the total number of iterations, matching the lower-bound rate of stochastic first-order algorithms up to logarithmic factors. We also provide a probabilistic convergence result for Adam under a generalized smoothness condition which allows unbounded smoothness parameters and has been shown empirically to capture the smoothness of many practical objective functions more accurately. | On Convergence of Adam for Stochastic Optimization under Relaxed Assumptions | [
"Yusu Hong",
"Junhong Lin"
] | NeurIPS.cc/2024/Conference | 2402.03982 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=x7pjdDod6Z | @inproceedings{
liu2024meshformer,
title={MeshFormer : High-Quality Mesh Generation with 3D-Guided Reconstruction Model},
author={Minghua Liu and Chong Zeng and Xinyue Wei and Ruoxi Shi and Linghao Chen and Chao Xu and Mengqi Zhang and Zhaoning Wang and Xiaoshuai Zhang and Isabella Liu and Hongzhi Wu and Hao Su},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=x7pjdDod6Z}
} | Open-world 3D reconstruction models have recently garnered significant attention. However, without sufficient 3D inductive bias, existing methods typically entail expensive training costs and struggle to extract high-quality 3D meshes. In this work, we introduce MeshFormer, a sparse-view reconstruction model that explicitly leverages 3D native structure, input guidance, and training supervision. Specifically, instead of using a triplane representation, we store features in 3D sparse voxels and combine transformers with 3D convolutions to leverage an explicit 3D structure and projective bias. In addition to sparse-view RGB input, we require the network to take input and generate corresponding normal maps. The input normal maps can be predicted by 2D diffusion models, significantly aiding in the guidance and refinement of the geometry's learning. Moreover, by combining Signed Distance Function (SDF) supervision with surface rendering, we directly learn to generate high-quality meshes without the need for complex multi-stage training processes. By incorporating these explicit 3D biases, MeshFormer can be trained efficiently and deliver high-quality textured meshes with fine-grained geometric details. It can also be integrated with 2D diffusion models to enable fast single-image-to-3D and text-to-3D tasks. **Videos are available at https://meshformer3d.github.io/** | MeshFormer : High-Quality Mesh Generation with 3D-Guided Reconstruction Model | [
"Minghua Liu",
"Chong Zeng",
"Xinyue Wei",
"Ruoxi Shi",
"Linghao Chen",
"Chao Xu",
"Mengqi Zhang",
"Zhaoning Wang",
"Xiaoshuai Zhang",
"Isabella Liu",
"Hongzhi Wu",
"Hao Su"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=x7AD0343Jz | @inproceedings{
thomm2024limits,
title={Limits of Transformer Language Models on Learning to Compose Algorithms},
author={Jonathan Thomm and Giacomo Camposampiero and Aleksandar Terzic and Michael Hersche and Bernhard Sch{\"o}lkopf and Abbas Rahimi},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=x7AD0343Jz}
} | We analyze the capabilities of Transformer language models in learning compositional discrete tasks. To this end, we evaluate training LLaMA models and prompting GPT-4 and Gemini on four tasks that demand learning a composition of several discrete sub-tasks. In particular, we measure how well these models can reuse primitives observable in the sub-tasks to learn the composition task. Our results indicate that compositional learning in state-of-the-art Transformer language models is highly sample inefficient: LLaMA requires more data samples to learn the compositional task than it would to relearn all sub-tasks from scratch; in-context prompting with few samples is unreliable and fails at executing the sub-tasks or correcting the errors in multi-round code generation. Further, by leveraging complexity theory, we support these findings with a theoretical analysis focused on the sample inefficiency of gradient descent in memorizing feedforward models. We open source our code at https://github.com/IBM/limitations-lm-algorithmic-compositional-learning. | Limits of Transformer Language Models on Learning to Compose Algorithms | [
"Jonathan Thomm",
"Giacomo Camposampiero",
"Aleksandar Terzic",
"Michael Hersche",
"Bernhard Schölkopf",
"Abbas Rahimi"
] | NeurIPS.cc/2024/Conference | 2402.05785 | [
"https://github.com/ibm/limitations-lm-algorithmic-compositional-learning"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=x69O84Df2G | @inproceedings{
russo2024multireward,
title={Multi-Reward Best Policy Identification},
author={Alessio Russo and Filippo Vannella},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=x69O84Df2G}
} | Rewards are a critical aspect of formulating Reinforcement Learning (RL) problems; often, one may be interested in testing multiple reward functions, or the problem may naturally involve multiple rewards.
In this study, we investigate the _Multi-Reward Best Policy Identification_ (MR-BPI) problem, where the goal is to determine the best policy for all rewards in a given set $\mathcal{R}$ with minimal sample complexity and a prescribed confidence level. We derive a fundamental instance-specific lower bound on the sample complexity required by any Probably Correct (PC) algorithm in this setting. This bound guides the design of an optimal exploration policy attaining minimal sample complexity. However, this lower bound involves solving a hard non-convex optimization problem. We address this challenge by devising a convex approximation, enabling the design of sample-efficient algorithms. We propose MR-NaS, a PC algorithm with competitive performance on hard-exploration tabular environments. Extending this approach to Deep RL (DRL), we also introduce DBMR-BPI, an efficient algorithm for model-free exploration in multi-reward settings. | Multi-Reward Best Policy Identification | [
"Alessio Russo",
"Filippo Vannella"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=x4Kk4FxLs3 | @inproceedings{
zhao2024pard,
title={Pard: Permutation-Invariant Autoregressive Diffusion for Graph Generation},
author={Lingxiao Zhao and Xueying Ding and Leman Akoglu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=x4Kk4FxLs3}
} | Graph generation has been dominated by autoregressive models due to their simplicity and effectiveness, despite their sensitivity to ordering. Yet diffusion models have garnered increasing attention, as they offer comparable performance while being permutation-invariant. Current graph diffusion models generate graphs in a one-shot fashion, but they require extra features and thousands of denoising steps to achieve optimal performance. We introduce PARD, a Permutation-invariant Auto Regressive Diffusion model that integrates diffusion models with autoregressive methods. PARD harnesses the effectiveness and efficiency of the autoregressive model while maintaining permutation invariance without ordering sensitivity. Specifically, we show that contrary to sets, elements in a graph are not entirely unordered, and there is a unique partial order for nodes and edges. With this partial order, PARD generates a graph in a block-by-block, autoregressive fashion, where each block’s probability is conditionally modeled by a shared diffusion model with an equivariant network. To ensure efficiency while being expressive, we further propose a higher-order graph transformer, which integrates the transformer with PPGN (Maron et al., 2019). Like GPT, we extend the higher-order graph transformer to support parallel training of all blocks. Without any extra features, PARD achieves state-of-the-art performance on molecular and non-molecular datasets, and scales to large datasets like MOSES containing 1.9M molecules. | Pard: Permutation-Invariant Autoregressive Diffusion for Graph Generation | [
"Lingxiao Zhao",
"Xueying Ding",
"Leman Akoglu"
] | NeurIPS.cc/2024/Conference | 2402.03687 | [
"https://github.com/lingxiaoshawn/pard"
] | https://huggingface.co/papers/2402.03687 | 0 | 0 | 0 | 3 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=x4HMnqs6IE | @inproceedings{
xu2024textid,
title={$\text{ID}^3$: Identity-Preserving-yet-Diversified Diffusion Models for Synthetic Face Recognition},
author={Jianqing Xu and Shen Li and Jiaying Wu and Miao Xiong and Ailin Deng and Jiazhen Ji and Yuge Huang and Guodong Mu and Wenjie Feng and Shouhong Ding and Bryan Hooi},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=x4HMnqs6IE}
} | Synthetic face recognition (SFR) aims to generate synthetic face datasets that mimic the distribution of real face data, which allows for training face recognition models in a privacy-preserving manner. Despite the remarkable potential of diffusion models in image generation, current diffusion-based SFR models struggle with generalization to real-world faces. To address this limitation, we outline three key objectives for SFR: (1) promoting diversity across identities (inter-class diversity), (2) ensuring diversity within each identity by injecting various facial attributes (intra-class diversity), and (3) maintaining identity consistency within each identity group (intra-class identity preservation). Inspired by these goals, we introduce a diffusion-fueled SFR model termed $\text{ID}^3$. $\text{ID}^3$ employs an ID-preserving loss to generate diverse yet identity-consistent facial appearances. Theoretically, we show that minimizing this loss is equivalent to maximizing the lower bound of an adjusted conditional log-likelihood over ID-preserving data. This equivalence motivates an ID-preserving sampling algorithm, which operates over an adjusted gradient vector field, enabling the generation of fake face recognition datasets that approximate the distribution of real-world faces. Extensive experiments across five challenging benchmarks validate the advantages of $\text{ID}^3$. | ID^3: Identity-Preserving-yet-Diversified Diffusion Models for Synthetic Face Recognition | [
"Jianqing Xu",
"Shen Li",
"Jiaying Wu",
"Miao Xiong",
"Ailin Deng",
"Jiazhen Ji",
"Yuge Huang",
"Guodong Mu",
"Wenjie Feng",
"Shouhong Ding",
"Bryan Hooi"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
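
A minimal sketch of an identity-preserving loss of the kind the $\text{ID}^3$ abstract describes: penalize the angular distance between a frozen face encoder's embedding of the generated image and the target identity embedding. The exact $\text{ID}^3$ loss, its weighting against the diffusion objective, and the encoder are not reproduced here; `face_encoder`, the shapes, and the toy usage are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def id_preserving_loss(x0_pred, id_embedding, face_encoder):
    """x0_pred: (B,3,H,W) clean-image estimate; id_embedding: (B,D) identity target."""
    z = F.normalize(face_encoder(x0_pred), dim=-1)
    e = F.normalize(id_embedding, dim=-1)
    return (1.0 - (z * e).sum(dim=-1)).mean()   # 1 - cosine similarity

# Toy usage with a random stand-in "encoder" (a real system would use a
# pretrained face-recognition backbone here):
face_encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
x0_pred = torch.randn(4, 3, 32, 32, requires_grad=True)
e_id = torch.randn(4, 128)
loss = id_preserving_loss(x0_pred, e_id, face_encoder)
loss.backward()
print(float(loss))
```
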
null | https://openreview.net/forum?id=x4EoTQW7ka | @inproceedings{
woo2024dropbp,
title={Drop{BP}: Accelerating Fine-Tuning of Large Language Models by Dropping Backward Propagation},
author={Sunghyeon Woo and Baeseong park and Byeongwook Kim and Minjung Jo and Se Jung Kwon and Dongsuk Jeon and Dongsoo Lee},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=x4EoTQW7ka}
} | Large language models (LLMs) have achieved significant success across various domains. However, training these LLMs typically involves substantial memory and computational costs during both forward and backward propagation. While parameter-efficient fine-tuning (PEFT) considerably reduces the training memory associated with parameters, it does not address the significant computational costs and activation memory. In this paper, we propose Dropping Backward Propagation (DropBP), a novel approach designed to reduce computational costs and activation memory while maintaining accuracy. DropBP randomly drops layers during backward propagation, which is essentially equivalent to training shallow submodules generated by undropped layers and residual connections. Additionally, DropBP calculates the sensitivity of each layer to assign an appropriate drop rate, thereby stabilizing the training process. DropBP is not only applicable to full fine-tuning but can also be orthogonally integrated with all types of PEFT by dropping layers during backward propagation. Specifically, DropBP can reduce training time by 44% with comparable accuracy to the baseline, accelerate convergence to the same perplexity by 1.5$\times$, and enable training with a sequence length 6.2$\times$ larger on a single NVIDIA A100 GPU. Furthermore, our DropBP enabled a throughput increase of 79% on an NVIDIA A100 GPU and 117% on an Intel Gaudi2 HPU. The code is available at [https://github.com/WooSunghyeon/dropbp](https://github.com/WooSunghyeon/dropbp). | DropBP: Accelerating Fine-Tuning of Large Language Models by Dropping Backward Propagation | [
"Sunghyeon Woo",
"Baeseong park",
"Byeongwook Kim",
"Minjung Jo",
"Se Jung Kwon",
"Dongsuk Jeon",
"Dongsoo Lee"
] | NeurIPS.cc/2024/Conference | 2402.17812 | [
"https://github.com/woosunghyeon/dropbp"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
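
The core DropBP mechanism — keeping a layer's forward output while skipping it in the backward pass — can be approximated in a few lines of PyTorch by running a residual branch under `no_grad` with some probability: the forward value is unchanged, but gradients flow only through the skip connection. This is a hedged sketch of the idea, not the authors' implementation (see the linked repository); in particular, the sensitivity-based per-layer drop rates are omitted and the fixed `p_drop` is an assumption.

```python
import torch
import torch.nn as nn

class BPDropBlock(nn.Module):
    """Residual block whose branch is skipped in the *backward* pass only."""
    def __init__(self, layer, p_drop=0.5):
        super().__init__()
        self.layer = layer
        self.p_drop = p_drop

    def forward(self, x):
        if self.training and torch.rand(()).item() < self.p_drop:
            with torch.no_grad():          # branch value kept, no graph recorded
                branch = self.layer(x)
            return x + branch              # backward flows through identity only
        return x + self.layer(x)

blocks = nn.Sequential(*[BPDropBlock(nn.Linear(64, 64), p_drop=0.4) for _ in range(8)])
x = torch.randn(2, 64, requires_grad=True)
blocks(x).sum().backward()                 # only undropped branches get gradients
print([b.layer.weight.grad is not None for b in blocks])
```
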
null | https://openreview.net/forum?id=x33oWJQyH0 | @inproceedings{
longa2024unsupervised,
title={Unsupervised Object Detection with Theoretical Guarantees},
author={Marian Longa and Joao F. Henriques},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=x33oWJQyH0}
} | Unsupervised object detection using deep neural networks is typically a difficult problem with few to no guarantees about the learned representation. In this work we present the first unsupervised object detection method that is theoretically guaranteed to recover the true object positions up to quantifiable small shifts. We develop an unsupervised object detection architecture and prove that the learned variables correspond to the true object positions up to small shifts related to the encoder and decoder receptive field sizes, the object sizes, and the widths of the Gaussians used in the rendering process. We perform a detailed analysis of how the error depends on each of these variables and run synthetic experiments validating our theoretical predictions down to the precision of individual pixels. We also perform experiments on CLEVR-based data and show that, unlike current SOTA object detection methods (SAM, CutLER), our method's prediction errors always lie within our theoretical bounds. We hope that this work helps open up an avenue of research into object detection methods with theoretical guarantees. | Unsupervised Object Detection with Theoretical Guarantees | [
"Marian Longa",
"Joao F. Henriques"
] | NeurIPS.cc/2024/Conference | 2406.07284 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=x2zY4hZcmg | @inproceedings{
banerjee2024dynamic,
title={Dynamic Model Predictive Shielding for Provably Safe Reinforcement Learning},
author={Arko Banerjee and Kia Rahmani and Joydeep Biswas and Isil Dillig},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=x2zY4hZcmg}
} | Among approaches for provably safe reinforcement learning, Model Predictive Shielding (MPS) has proven effective at complex tasks in continuous, high-dimensional state spaces, by leveraging a *backup policy* to ensure safety when the learned policy attempts to take risky actions. However, while MPS can ensure safety both during and after training, it often hinders task progress due to the conservative and task-oblivious nature of backup policies.
This paper introduces *Dynamic Model Predictive Shielding* (DMPS), which optimizes reinforcement learning objectives while maintaining provable safety. DMPS employs a local planner to dynamically select safe recovery actions that maximize both short-term progress as well as long-term rewards. Crucially, the planner and the neural policy play a synergistic role in DMPS. When planning recovery actions for ensuring safety, the planner utilizes the neural policy to estimate long-term rewards, allowing it to *observe* beyond its short-term planning horizon.
Conversely, the neural policy under training learns from the recovery plans proposed by the planner, converging to policies that are both *high-performing* and *safe* in practice.
This approach guarantees safety during and after training, with bounded recovery regret that decreases exponentially with planning horizon depth. Experimental results demonstrate that DMPS converges to policies that rarely require shield interventions after training and achieve higher rewards compared to several state-of-the-art baselines. | Dynamic Model Predictive Shielding for Provably Safe Reinforcement Learning | [
"Arko Banerjee",
"Kia Rahmani",
"Joydeep Biswas",
"Isil Dillig"
] | NeurIPS.cc/2024/Conference | 2405.13863 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
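
A schematic of the shield loop the DMPS abstract describes, in a toy 1-D world: take the learned policy's action when its successor is safe; otherwise enumerate short recovery plans and score each by discounted reward plus the learned critic's value at the plan's endpoint, so the planner "observes" beyond its horizon through the critic; fall back to a conservative backup if no safe plan exists. Every component below (dynamics, policy, critic, backup, action set) is a placeholder stub, not the paper's instantiation.

```python
import itertools

ACTIONS = [-0.2, 0.0, 0.2]
model = lambda s, a: s + a                        # known dynamics
is_safe = lambda s: abs(s) <= 1.0                 # safe set: |x| <= 1
reward = lambda s, a: -abs(s - 0.9)               # task: hover near x = 0.9
policy = lambda s: 0.2                            # "learned" policy: push right
critic = lambda s: -10.0 * abs(s - 0.9)           # "learned" value estimate
backup = lambda s: -0.2 if s > 0 else 0.2         # conservative safe fallback

def shielded_action(s, horizon=3, gamma=0.99):
    a = policy(s)
    if is_safe(model(s, a)):
        return a                                  # no intervention needed
    best_plan, best_score = None, float("-inf")
    for plan in itertools.product(ACTIONS, repeat=horizon):
        x, ret, ok = s, 0.0, True
        for t, u in enumerate(plan):
            nx = model(x, u)
            if not is_safe(nx):
                ok = False
                break
            ret += gamma**t * reward(x, u)
            x = nx
        if ok:
            # The critic lets the planner "see" past its short horizon.
            score = ret + gamma**horizon * critic(x)
            if score > best_score:
                best_plan, best_score = plan, score
    return best_plan[0] if best_plan else backup(s)

print(shielded_action(0.95))   # the policy's +0.2 would exit the safe set
```
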
null | https://openreview.net/forum?id=x2780VcMOI | @inproceedings{
simon2024a,
title={A Polar coordinate system represents syntax in large language models},
author={Pablo J. Diego Simon and St{\'e}phane d'Ascoli and Emmanuel Chemla and Yair Lakretz and Jean-Remi King},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=x2780VcMOI}
} | Originally formalized with symbolic representations, syntactic trees may also be effectively represented in the activations of large language models (LLMs). Indeed, a ''Structural Probe'' can find a subspace of neural activations, where syntactically-related words are relatively close to one another. However, this syntactic code remains incomplete: the distance between the Structural Probe word embeddings can represent the *existence* but not the type and direction of syntactic relations. Here, we hypothesize that syntactic relations are, in fact, coded by the relative direction between nearby embeddings. To test this hypothesis, we introduce a ''Polar Probe'' trained to read syntactic relations from both the distance and the direction between word embeddings. Our approach reveals three main findings. First, our Polar Probe successfully recovers the type and direction of syntactic relations, and substantially outperforms the Structural Probe by nearly two-fold. Second, we confirm that this polar coordinate system exists in a low-dimensional subspace of the intermediate layers of many LLMs and becomes increasingly precise in the latest frontier models. Third, we demonstrate with a new benchmark that similar syntactic relations are coded similarly across the nested levels of syntactic trees. Overall, this work shows that LLMs spontaneously learn a geometry of neural activations that explicitly represents the main symbolic structures of linguistic theory. | A Polar coordinate system represents syntax in large language models | [
"Pablo J. Diego Simon",
"Stéphane d'Ascoli",
"Emmanuel Chemla",
"Yair Lakretz",
"Jean-Remi King"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
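
A toy illustration of reading relation type and direction from the *relative direction* between projected embeddings: a learned projection followed by a classifier on the difference vector. The actual Polar Probe's parameterization and training data differ; the synthetic "LLM activations" below are constructed so that direction, not distance, carries the relation label.

```python
import torch
import torch.nn as nn

D_MODEL, D_PROBE, N_REL = 256, 64, 8
torch.manual_seed(0)

# Fake head/dependent pairs: each relation type shifts the dependent along
# its own latent direction, so direction (not distance) is informative.
rel_dirs = torch.randn(N_REL, D_MODEL)
labels = torch.randint(0, N_REL, (4096,))
h_head = torch.randn(4096, D_MODEL)
h_dep = h_head + rel_dirs[labels] + 0.1 * torch.randn(4096, D_MODEL)

B = nn.Linear(D_MODEL, D_PROBE, bias=False)       # probe projection
clf = nn.Linear(D_PROBE, N_REL)                   # relation-type classifier
opt = torch.optim.Adam([*B.parameters(), *clf.parameters()], lr=1e-2)

for step in range(200):
    logits = clf(B(h_dep) - B(h_head))            # relative direction in probe space
    loss = nn.functional.cross_entropy(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

acc = (clf(B(h_dep) - B(h_head)).argmax(-1) == labels).float().mean()
print(f"relation accuracy: {acc:.2f}")            # near 1.0 on this synthetic data
```
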
null | https://openreview.net/forum?id=wzof7Y66xs | @inproceedings{
goren2024hierarchical,
title={Hierarchical Selective Classification},
author={Shani Goren and Ido Galil and Ran El-Yaniv},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wzof7Y66xs}
} | Deploying deep neural networks for risk-sensitive tasks necessitates an uncertainty estimation mechanism. This paper introduces *hierarchical selective classification*, extending selective classification to a hierarchical setting. Our approach leverages the inherent structure of class relationships, enabling models to reduce the specificity of their predictions when faced with uncertainty. In this paper, we first formalize hierarchical risk and coverage, and introduce hierarchical risk-coverage curves. Next, we develop algorithms for hierarchical selective classification (which we refer to as "inference rules"), and propose an efficient algorithm that guarantees a target accuracy constraint with high probability. Lastly, we conduct extensive empirical studies on over a thousand ImageNet classifiers, revealing that training regimes such as CLIP, pretraining on ImageNet21k and knowledge distillation boost hierarchical selective performance. | Hierarchical Selective Classification | [
"Shani Goren",
"Ido Galil",
"Ran El-Yaniv"
] | NeurIPS.cc/2024/Conference | 2405.11533 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
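
One simple "inference rule" in the sense formalized above: climb from the most likely leaf toward the root and return the most specific node whose subtree probability mass clears a confidence threshold. The paper's algorithm calibrates such a threshold to guarantee a target accuracy with high probability; the fixed threshold and toy hierarchy below are assumptions for illustration.

```python
# Toy class tree: leaves carry softmax probabilities; internal nodes
# aggregate the mass of their subtrees.
parents = {"husky": "dog", "beagle": "dog", "tabby": "cat",
           "dog": "animal", "cat": "animal", "animal": None}
leaves = ["husky", "beagle", "tabby"]

def _ancestor(node, leaf):
    while leaf is not None:
        leaf = parents.get(leaf)
        if leaf == node:
            return True
    return False

def subtree_mass(node, probs):
    return sum(p for leaf, p in probs.items()
               if leaf == node or _ancestor(node, leaf))

def hierarchical_predict(probs, threshold=0.8):
    node = max(leaves, key=probs.get)             # start at the most likely leaf
    while node is not None and subtree_mass(node, probs) < threshold:
        node = parents[node]                      # retreat to a coarser class
    return node or "animal"                       # root: the always-correct answer

print(hierarchical_predict({"husky": 0.45, "beagle": 0.40, "tabby": 0.15}))
# -> "dog": neither leaf clears 0.8, but their parent does (0.85)
```
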
null | https://openreview.net/forum?id=wz2KvvEk44 | @inproceedings{
zhang2024focus,
title={Focus On What Matters: Separated Models For Visual-Based {RL} Generalization},
author={Di Zhang and Bowen Lv and Hai Zhang and Feifan Yang and Junqiao Zhao and Hang Yu and Chang Huang and Hongtu Zhou and Chen Ye and changjun jiang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wz2KvvEk44}
} | A primary challenge for visual-based Reinforcement Learning (RL) is to generalize effectively across unseen environments. Although previous studies have explored different auxiliary tasks to enhance generalization, few adopt image reconstruction due to concerns about exacerbating overfitting to task-irrelevant features during training. Perceiving the pre-eminence of image reconstruction in representation learning, we propose SMG (**S**eparated **M**odels for **G**eneralization), a novel approach that exploits image reconstruction for generalization. SMG introduces two model branches to extract task-relevant and task-irrelevant representations separately from visual observations via cooperative reconstruction. Built upon this architecture, we further emphasize the importance of task-relevant features for generalization. Specifically, SMG incorporates two additional consistency losses to guide the agent's focus toward task-relevant areas across different scenarios, thereby avoiding overfitting. Extensive experiments in DMC demonstrate the SOTA performance of SMG in generalization, particularly excelling in video-background settings. Evaluations on robotic manipulation tasks further confirm the robustness of SMG in real-world applications. Source code is available at https://anonymous.4open.science/r/SMG/. | Focus On What Matters: Separated Models For Visual-Based RL Generalization | [
"Di Zhang",
"Bowen Lv",
"Hai Zhang",
"Feifan Yang",
"Junqiao Zhao",
"Hang Yu",
"Chang Huang",
"Hongtu Zhou",
"Chen Ye",
"changjun jiang"
] | NeurIPS.cc/2024/Conference | 2410.10834 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wyYsCI3K7U | @inproceedings{
j{\"a}{\"a}saari2024lorann,
title={Lo{RANN}: Low-Rank Matrix Factorization for Approximate Nearest Neighbor Search},
author={Elias J{\"a}{\"a}saari and Ville Hyv{\"o}nen and Teemu Roos},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wyYsCI3K7U}
} | Approximate nearest neighbor (ANN) search is a key component in many modern machine learning pipelines; recent use cases include retrieval-augmented generation (RAG) and vector databases. Clustering-based ANN algorithms, which use score computation methods based on product quantization (PQ), are often used in industrial-scale applications due to their scalability and suitability for distributed and disk-based implementations. However, they have slower query times than the leading graph-based ANN algorithms. In this work, we propose a new supervised score computation method based on the observation that inner product approximation is a multivariate (multi-output) regression problem that can be solved efficiently by reduced-rank regression. Our experiments show that on modern high-dimensional data sets, the proposed reduced-rank regression (RRR) method is superior to PQ in both query latency and memory usage. We also introduce LoRANN, a clustering-based ANN library that leverages the proposed score computation method. LoRANN is competitive with the leading graph-based algorithms and outperforms the state-of-the-art GPU ANN methods on high-dimensional data sets. | LoRANN: Low-Rank Matrix Factorization for Approximate Nearest Neighbor Search | [
"Elias Jääsaari",
"Ville Hyvönen",
"Teemu Roos"
] | NeurIPS.cc/2024/Conference | 2410.18926 | [
"https://github.com/ejaasaari/lorann-experiments"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
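
A sketch of the reduced-rank-regression score computation on one cluster: fit a rank-$r$ map so that scoring a query costs $O(dr + rn)$ instead of the $O(dn)$ exact inner products. This follows the textbook RRR recipe (OLS fit, then projection onto the top right singular vectors of the fitted values); LoRANN's actual training queries, normalization, and quantization details may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 500, 128, 16                 # points per cluster, dimension, rank
X = rng.normal(size=(n, d))            # the cluster's database vectors
Q = rng.normal(size=(2000, d))         # training queries
S = Q @ X.T                            # exact scores (regression targets)

W_ols = np.linalg.pinv(Q) @ S          # (d, n) full-rank least squares
U, s, Vt = np.linalg.svd(Q @ W_ols, full_matrices=False)
V_r = Vt[:r].T                         # top-r right singular vectors
A, B = W_ols @ V_r, V_r.T              # rank-r factors: (d, r) and (r, n)

q = rng.normal(size=d)
approx = (q @ A) @ B                   # cheap scoring: O(dr + rn)
exact = q @ X.T                        # exact scoring: O(dn)
top_exact = set(np.argsort(-exact)[:10])
top_approx = set(np.argsort(-approx)[:10])
print("top-10 overlap:", len(top_exact & top_approx))   # improves with r
```
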
null | https://openreview.net/forum?id=ww62xltEfB | @inproceedings{
kinoshita2024a,
title={A provable control of sensitivity of neural networks through a direct parameterization of the overall bi-Lipschitzness},
author={Yuri Kinoshita and Taro Toyoizumi},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ww62xltEfB}
} | While neural networks can enjoy outstanding flexibility and exhibit unprecedented performance, the mechanism behind their behavior is still not well understood. To tackle this fundamental challenge, researchers have tried to restrict and manipulate some of their properties in order to gain new insights and better control over them. In particular, throughout the past few years, the concept of *bi-Lipschitzness* has proven to be a beneficial inductive bias in many areas. However, due to its complexity, the design and control of bi-Lipschitz architectures are falling behind, and a model that is precisely designed for bi-Lipschitzness, realizing a direct and simple control of the constants along with solid theoretical analysis, is lacking. In this work, we investigate and propose a novel framework for bi-Lipschitzness that can achieve such a clear and tight control based on convex neural networks and the Legendre-Fenchel duality. Its desirable properties are demonstrated with concrete experiments that illustrate its broad range of applications. | A provable control of sensitivity of neural networks through a direct parameterization of the overall bi-Lipschitzness | [
"Yuri Kinoshita",
"Taro Toyoizumi"
] | NeurIPS.cc/2024/Conference | 2404.09821 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wvQHQgnpGN | @inproceedings{
zhou2024solving,
title={Solving Zero-Sum Markov Games with Continuous State via Spectral Dynamic Embedding},
author={Chenhao Zhou and Zebang Shen and Chao Zhang and Hanbin Zhao and Hui Qian},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wvQHQgnpGN}
} | In this paper, we propose a provably efficient natural policy gradient algorithm called Spectral Dynamic Embedding Policy Optimization (SDEPO) for two-player zero-sum stochastic Markov games with continuous state space and finite action space.
In the policy evaluation procedure of our algorithm, a novel kernel embedding method is employed to construct a finite-dimensional linear approximation to the state-action value function.
We explicitly analyze the approximation error in policy evaluation, and show that SDEPO achieves an $\tilde{O}(\frac{1}{(1-\gamma)^3\epsilon})$ last-iterate convergence to the $\epsilon$-optimal Nash equilibrium, which is independent of the cardinality of the state space.
The complexity result matches the best-known results for global convergence of policy gradient algorithms in the single-agent setting.
Moreover, we also propose a practical variant of SDEPO to deal with continuous action spaces, and empirical results demonstrate the practical superiority of the proposed method. | Solving Zero-Sum Markov Games with Continuous State via Spectral Dynamic Embedding | [
"Chenhao Zhou",
"Zebang Shen",
"Chao Zhang",
"Hanbin Zhao",
"Hui Qian"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=wsqDJHPUHN | @inproceedings{
lei2024on,
title={On the Ability of Developers' Training Data Preservation of Learnware},
author={Hao-Yi Lei and Zhi-Hao Tan and Zhi-Hua Zhou},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wsqDJHPUHN}
} | The learnware paradigm aims to enable users to leverage numerous existing well-trained models instead of building machine learning models from scratch. In this paradigm, developers worldwide can submit their well-trained models spontaneously into a learnware dock system, and the system helps developers generate a specification for each model to form a learnware. As the key component, a specification should characterize the capabilities of the model, enabling it to be adequately identified and reused, while preserving the developer's original data. Recently, the RKME (Reduced Kernel Mean Embedding) specification was proposed and most commonly utilized. This paper provides a theoretical analysis of the RKME specification's ability to preserve the developer's training data. By modeling it as a geometric problem on manifolds and utilizing tools from geometric analysis, we prove that the RKME specification discloses none of the developer's original data and possesses a robust defense against common inference attacks, while preserving sufficient information for effective learnware identification. | On the Ability of Developers' Training Data Preservation of Learnware | [
"Hao-Yi Lei",
"Zhi-Hao Tan",
"Zhi-Hua Zhou"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
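
For context, a minimal construction of a reduced kernel mean embedding of the kind analyzed above: a handful of weighted pseudo-points $(Z, \beta)$ whose kernel mean matches that of the private dataset, found by placing pseudo-points at k-means centers and solving for the MMD-minimizing weights in closed form. The learnware system's actual specification-generation procedure may differ; the kernel bandwidth and sizes below are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def rbf(A, B, gamma=0.5):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))                     # developer's private data
Z = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X).cluster_centers_

# Closed-form weights minimizing the MMD between sum_i beta_i k(z_i, .) and
# the empirical kernel mean (1/n) sum_j k(x_j, .):
K_zz = rbf(Z, Z)
K_zx = rbf(Z, X)
beta = np.linalg.solve(K_zz + 1e-6 * np.eye(len(Z)), K_zx.mean(axis=1))

# Squared MMD between the reduced and the full embedding (should be small):
mmd2 = beta @ K_zz @ beta - 2 * beta @ K_zx.mean(axis=1) + rbf(X, X).mean()
print("squared MMD:", mmd2)
```
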
null | https://openreview.net/forum?id=wsHMb4J2o9 | @inproceedings{
chizat2024the,
title={The Feature Speed Formula: a flexible approach to scale hyper-parameters of deep neural networks},
author={L{\'e}na{\"\i}c Chizat and Praneeth Netrapalli},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wsHMb4J2o9}
} | Deep learning succeeds by doing hierarchical feature learning, yet tuning hyper-parameters (HPs) such as initialization scales and learning rates gives only indirect control over this behavior. In this paper, we introduce a key notion to predict and control feature learning: the angle $\theta_\ell$ between the feature updates and the backward pass (at layer index $\ell$). We show that the magnitude of feature updates after one GD step, at any training time, can be expressed via a simple and general *feature speed formula* in terms of this angle $\theta_\ell$, the loss decay, and the magnitude of the backward pass. This angle $\theta_\ell$ is controlled by the conditioning of the layer-to-layer Jacobians and at random initialization, it is determined by the spectrum of a certain kernel, which coincides with the Neural Tangent Kernel when $\ell=\text{depth}$. Given $\theta_\ell$, the feature speed formula provides us with rules to adjust HPs (scales and learning rates) so as to satisfy certain dynamical properties, such as feature learning and loss decay. We investigate the implications of our approach for ReLU MLPs and ResNets in the large width-then-depth limit. Relying on prior work, we show that in ReLU MLPs with iid initialization, the angle degenerates with depth as $\cos(\theta_\ell)=\Theta(1/\sqrt{\ell})$. In contrast, ResNets with branch scale $O(1/\sqrt{\text{depth}})$ maintain a non-degenerate angle $\cos(\theta_\ell)=\Theta(1)$. We use these insights to recover key properties of known HP scalings (such as $\mu$P), and also introduce a new HP scaling for large depth ReLU MLPs with favorable theoretical properties. | The Feature Speed Formula: a flexible approach to scale hyper-parameters of deep neural networks | [
"Lénaïc Chizat",
"Praneeth Netrapalli"
] | NeurIPS.cc/2024/Conference | 2311.18718 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
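
A rough empirical probe of the paper's central quantity: the cosine of the angle $\theta_\ell$ between the one-step feature update $\Delta h_\ell$ and (minus) the backward pass $\partial L / \partial h_\ell$, measured numerically on a toy ReLU MLP after a single GD step. This only illustrates how one could inspect the angle; it does not reproduce the feature speed formula or the paper's HP scalings.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
depth, width, lr = 8, 256, 1e-2
net = nn.Sequential(*[nn.Sequential(nn.Linear(width, width), nn.ReLU())
                      for _ in range(depth)])
x = torch.randn(32, width)

def features(model, x):
    hs, h = [], x
    for block in model:
        h = block(h)
        hs.append(h)
    return hs

hs = features(net, x)
for h in hs:
    h.retain_grad()                     # keep dL/dh_l after backward
loss = hs[-1].pow(2).mean()
loss.backward()
grads = [h.grad.detach().clone() for h in hs]

with torch.no_grad():                   # one plain GD step on all weights
    for p in net.parameters():
        p -= lr * p.grad

hs_new = features(net, x)               # features after the step
for l, (h_old, h_new, g) in enumerate(zip(hs, hs_new, grads)):
    dh = (h_new - h_old).flatten()
    g = -g.flatten()                    # updates should align with -grad
    cos = torch.dot(dh, g) / (dh.norm() * g.norm() + 1e-12)
    print(f"layer {l}: cos(theta) = {cos:.3f}")
```
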
null | https://openreview.net/forum?id=wsGzvhnoaX | @inproceedings{
liu2024quantum,
title={Quantum Algorithms for Non-smooth Non-convex Optimization},
author={Chengchang Liu and Chaowen Guan and Jianhao He and John C.S. Lui},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wsGzvhnoaX}
} | This paper considers the problem of finding a $(\delta,\epsilon)$-Goldstein stationary point of a Lipschitz continuous objective, a rich function class that covers a great number of important applications.
We construct a novel zeroth-order quantum estimator for the gradient of the smoothed surrogate.
Based on such estimator, we propose a novel quantum algorithm that achieves a query complexity of $\tilde{\mathcal{O}}(d^{3/2}\delta^{-1}\epsilon^{-3})$ on the stochastic function value oracle, where $d$ is the dimension of the problem.
We also enhance the query complexity to $\tilde{\mathcal{O}}(d^{3/2}\delta^{-1}\epsilon^{-7/3})$ by introducing a variance reduction variant.
Our findings demonstrate the clear advantages of utilizing quantum techniques for non-convex non-smooth optimization, as they outperform the optimal classical methods on the dependency of $\epsilon$ by a factor of $\epsilon^{-2/3}$. | Quantum Algorithms for Non-smooth Non-convex Optimization | [
"Chengchang Liu",
"Chaowen Guan",
"Jianhao He",
"John C.S. Lui"
] | NeurIPS.cc/2024/Conference | 2410.16189 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
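
For orientation, the classical analogue of the estimator family involved: a two-point zeroth-order estimate of the gradient of the randomized-smoothing surrogate $f_\delta(x) = \mathbb{E}_u[f(x+\delta u)]$ with $u$ uniform on the unit sphere. The quantum speedup comes from querying superpositions of such evaluations, which this classical sketch cannot capture; the smoothing radius, batch size, and test objective are assumptions.

```python
import numpy as np

def zo_gradient(f, x, delta=0.1, batch=64, rng=np.random.default_rng(0)):
    """Two-point sphere-sampling estimate of grad f_delta(x)."""
    d = x.size
    g = np.zeros(d)
    for _ in range(batch):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)                       # uniform on the unit sphere
        g += (d / (2 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u
    return g / batch

f = lambda x: np.abs(x).sum()                        # non-smooth test objective
x = np.array([0.5, -1.0, 2.0])
print(zo_gradient(f, x))                             # approx sign(x) = [1, -1, 1]
```
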
null | https://openreview.net/forum?id=wqs2RMq4CW | @inproceedings{
liu2024corruptionrobust,
title={Corruption-Robust Linear Bandits: Minimax Optimality and Gap-Dependent Misspecification},
author={Haolin Liu and Artin Tajdini and Andrew Wagenmaker and Chen-Yu Wei},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wqs2RMq4CW}
} | In linear bandits, how can a learner effectively learn when facing corrupted rewards? While significant work has explored this question, a holistic understanding across different adversarial models and corruption measures is lacking, as is a full characterization of the minimax regret bounds. In this work, we compare two types of corruptions commonly considered: strong corruption, where the corruption level depends on the learner’s chosen action, and weak corruption, where the corruption level does not depend on the learner’s chosen action. We provide a unified framework to analyze these corruptions. For stochastic linear bandits, we fully characterize the gap between the minimax regret under strong and weak corruptions. We also initiate the study of corrupted adversarial linear bandits, obtaining upper and lower bounds with matching dependencies on the corruption level. Next, we reveal a connection between corruption-robust learning and learning with gap-dependent misspecification—a setting first studied by Liu et al. (2023a), where the misspecification level of an action or policy is proportional to its suboptimality. We present a general reduction that enables any corruption-robust algorithm to handle gap-dependent misspecification. This allows us to recover the results of Liu et al. (2023a) in a black-box manner and significantly generalize them to settings like linear MDPs, yielding the first results for gap-dependent misspecification in reinforcement learning. However, this general reduction does not attain the optimal rate for gap-dependent misspecification. Motivated by this, we develop a specialized algorithm that achieves optimal bounds for gap-dependent misspecification in linear bandits, thus answering an open question posed by Liu et al. (2023a). | Corruption-Robust Linear Bandits: Minimax Optimality and Gap-Dependent Misspecification | [
"Haolin Liu",
"Artin Tajdini",
"Andrew Wagenmaker",
"Chen-Yu Wei"
] | NeurIPS.cc/2024/Conference | 2410.07533 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wqLC4G1GN3 | @inproceedings{
li2024solving,
title={Solving Inverse Problems via Diffusion Optimal Control},
author={Henry Li and Marcus Aloysius Pereira},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wqLC4G1GN3}
} | Existing approaches to diffusion-based inverse problem solvers frame the signal recovery task as a probabilistic sampling episode, where the solution is drawn from the desired posterior distribution. This framework suffers from several critical drawbacks, including the intractability of the conditional likelihood function, strict dependence on the score network approximation, and poor $\mathbf{x}_0$ prediction quality. We demonstrate that these limitations can be sidestepped by reframing the generative process as a discrete optimal control episode. We derive a diffusion-based optimal controller inspired by the iterative Linear Quadratic Regulator (iLQR) algorithm. This framework is fully general and able to handle any differentiable forward measurement operator, including super-resolution, inpainting, Gaussian deblurring, nonlinear deblurring, and even highly nonlinear neural classifiers. Furthermore, we show that the idealized posterior sampling equation can be recovered as a special case of our algorithm. We then evaluate our method against a selection of neural inverse problem solvers, and establish a new baseline in image reconstruction with inverse problems. | Solving Inverse Problems via Diffusion Optimal Control | [
"Henry Li",
"Marcus Aloysius Pereira"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=wpGJ2AX6SZ | @inproceedings{
alur2024human,
title={Human Expertise in Algorithmic Prediction},
author={Rohan Alur and Manish Raghavan and Devavrat Shah},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wpGJ2AX6SZ}
} | We introduce a novel framework for incorporating human expertise into algorithmic predictions. Our approach leverages human judgment to distinguish inputs which are *algorithmically indistinguishable*, or "look the same" to predictive algorithms. We argue that this framing clarifies the problem of human-AI collaboration in prediction tasks, as experts often form judgments by drawing on information which is not encoded in an algorithm's training data. Algorithmic indistinguishability yields a natural test for assessing whether experts incorporate this kind of "side information", and further provides a simple but principled method for selectively incorporating human feedback into algorithmic predictions. We show that this method provably improves the performance of any feasible algorithmic predictor and precisely quantify this improvement. We find empirically that although algorithms often outperform their human counterparts *on average*, human judgment can improve algorithmic predictions on *specific* instances (which can be identified ex-ante). In an X-ray classification task, we find that this subset constitutes nearly 30% of the patient population. Our approach provides a natural way of uncovering this heterogeneity and thus enabling effective human-AI collaboration. | Human Expertise in Algorithmic Prediction | [
"Rohan Alur",
"Manish Raghavan",
"Devavrat Shah"
] | NeurIPS.cc/2024/Conference | 2402.00793 | [
"https://github.com/ralur/heap-repl"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
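
One crude instantiation of "selectively incorporating human feedback on algorithmically indistinguishable inputs": bucket examples by the model's score (a stand-in for indistinguishability), and on a calibration split defer to the human only in buckets where human predictions beat the model. The paper's notion of indistinguishability is richer (multicalibration-style), and the synthetic data below merely simulates side information visible only to the human.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
side_info = rng.integers(0, 2, n)                      # visible only to the human
y = (rng.random(n) < 0.3 + 0.4 * side_info).astype(int)
model_score = np.clip(0.5 + 0.2 * (y - 0.5) + 0.2 * rng.normal(size=n), 0, 1)
human_pred = np.where(rng.random(n) < 0.8, side_info, 1 - side_info)

bins = np.digitize(model_score, np.linspace(0, 1, 11))  # "indistinguishable" cells
train = rng.random(n) < 0.5                             # calibration split
defer = {}
for b in np.unique(bins):
    m = train & (bins == b)
    acc_model = ((model_score[m] > 0.5) == y[m]).mean()
    acc_human = (human_pred[m] == y[m]).mean()
    defer[b] = acc_human > acc_model                    # human adds value here

test = ~train
combined = np.where([defer[b] for b in bins[test]],
                    human_pred[test], (model_score[test] > 0.5).astype(int))
print("model :", ((model_score[test] > 0.5) == y[test]).mean())
print("hybrid:", (combined == y[test]).mean())
```
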
null | https://openreview.net/forum?id=woRFmNJiLp | @inproceedings{
liang2024alignment,
title={Alignment at Pre-training! Towards Native Alignment for Arabic {LLM}s},
author={Juhao Liang and Zhenyang Cai and Jianqing Zhu and Huang Huang and Kewei Zong and Bang An and Mosen Alharthi and Juncai He and Lian Zhang and Haizhou Li and Benyou Wang and Jinchao Xu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=woRFmNJiLp}
} | The alignment of large language models (LLMs) is critical for developing effective and safe language models. Traditional approaches focus on aligning models during the instruction tuning or reinforcement learning stages, referred to in this paper as *post alignment*. We argue that alignment during the pre-training phase, which we term *native alignment*, warrants investigation. Native alignment aims to prevent unaligned content from the beginning, rather than relying on post-hoc processing. This approach leverages extensively aligned pre-training data to enhance the effectiveness and usability of pre-trained models. Our study specifically explores the application of native alignment in the context of Arabic LLMs. We conduct comprehensive experiments and ablation studies to evaluate the impact of native alignment on model performance and alignment stability. Additionally, we release open-source Arabic LLMs that demonstrate state-of-the-art performance on various benchmarks, providing significant benefits to the Arabic LLM community. | Alignment at Pre-training! Towards Native Alignment for Arabic LLMs | [
"Juhao Liang",
"Zhenyang Cai",
"Jianqing Zhu",
"Huang Huang",
"Kewei Zong",
"Bang An",
"Mosen Alharthi",
"Juncai He",
"Lian Zhang",
"Haizhou Li",
"Benyou Wang",
"Jinchao Xu"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=woENr7FJaI | @inproceedings{
zhang2024automated,
title={Automated Multi-level Preference for {MLLM}s},
author={Mengxi Zhang and Wenhao Wu and Yu Lu and YuXin Song and KANG RONG and Huanjin Yao and Jianbo Zhao and Fanglong Liu and Haocheng Feng and Jingdong Wang and Yifan Sun},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=woENr7FJaI}
} | Current multimodal Large Language Models (MLLMs) suffer from ''hallucination'', occasionally generating responses that are not grounded in the input images. To tackle this challenge, one promising path is to utilize reinforcement learning from human feedback (RLHF), which steers MLLMs towards learning superior responses while avoiding inferior ones. We rethink the common practice of using binary preferences (*i.e.*, superior, inferior), and find that adopting multi-level preferences (*e.g.*, superior, medium, inferior) is better for two benefits: 1) It narrows the gap between adjacent levels, thereby encouraging MLLMs to discern subtle differences. 2) It further integrates cross-level comparisons (beyond adjacent-level comparisons), thus providing a broader range of comparisons with hallucination examples. To verify our viewpoint, we present the Automated Multi-level Preference (**AMP**) framework for MLLMs. To facilitate this framework, we first develop an automated dataset generation pipeline that provides high-quality multi-level preference datasets without any human annotators. Furthermore, we design the Multi-level Direct Preference Optimization (MDPO) algorithm to robustly conduct complex multi-level preference learning. Additionally, we propose a new hallucination benchmark, MRHal-Bench. Extensive experiments across public hallucination and general benchmarks, as well as our MRHal-Bench, demonstrate the effectiveness of our proposed method. Code is available at https://github.com/takomc/amp. | Automated Multi-level Preference for MLLMs | [
"Mengxi Zhang",
"Wenhao Wu",
"Yu Lu",
"YuXin Song",
"KANG RONG",
"Huanjin Yao",
"Jianbo Zhao",
"Fanglong Liu",
"Haocheng Feng",
"Jingdong Wang",
"Yifan Sun"
] | NeurIPS.cc/2024/Conference | 2405.11165 | [
"https://github.com/takomc/amp"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
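
A sketch of a multi-level preference loss in the spirit of the MDPO idea above: with responses ranked across several levels, sum a DPO-style logistic loss over *all* ordered pairs, including cross-level comparisons, rather than only a single binary pair. The exact MDPO objective (pair weights, margins) is not reproduced; the log-probabilities below are random stand-ins.

```python
import torch
import torch.nn.functional as F

def multilevel_dpo_loss(logp_policy, logp_ref, beta=0.1):
    """logp_*: (B, L) sequence log-probs for L responses, ordered best-first."""
    advantages = beta * (logp_policy - logp_ref)      # implicit reward per response
    B, L = advantages.shape
    loss, n_pairs = 0.0, 0
    for w in range(L):                                # winner level
        for l in range(w + 1, L):                     # every strictly lower level
            loss = loss - F.logsigmoid(advantages[:, w] - advantages[:, l]).mean()
            n_pairs += 1
    return loss / n_pairs

# Toy usage with fake log-probs for 3 preference levels:
logp_policy = torch.randn(4, 3, requires_grad=True)
logp_ref = torch.randn(4, 3)
multilevel_dpo_loss(logp_policy, logp_ref).backward()
print(logp_policy.grad.shape)
```
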