Dataset schema (one record per paper):

  bibtex_url                   null
  proceedings                  string (length 42-42)
  bibtext                      string (length 197-848)
  abstract                     string (length 303-3.45k)
  title                        string (length 10-159)
  authors                      sequence (length 1-34)
  id                           string (44 distinct values)
  arxiv_id                     string (length 0-10)
  GitHub                       sequence (length 1-1)
  paper_page                   string (899 distinct values)
  n_linked_authors             int64 (-1 to 13)
  upvotes                      int64 (-1 to 109)
  num_comments                 int64 (-1 to 13)
  n_authors                    int64 (-1 to 92)
  Models                       sequence (length 0-100)
  Datasets                     sequence (length 0-19)
  Spaces                       sequence (length 0-100)
  old_Models                   sequence (length 0-100)
  old_Datasets                 sequence (length 0-19)
  old_Spaces                   sequence (length 0-100)
  paper_page_exists_pre_conf   int64 (0 or 1)
  type                         string (2 distinct values: "poster" or "oral")
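A minimal sketch of reading these records with the Hugging Face `datasets` library. The dataset repository ID below is a placeholder, not the actual name:

```python
# Sketch: loading the paper records described by the schema above.
# NOTE: "some-org/neurips-2024-papers" is a hypothetical dataset ID.
from datasets import load_dataset

ds = load_dataset("some-org/neurips-2024-papers", split="train")

row = ds[0]
print(row["title"])            # paper title
print(row["proceedings"])      # OpenReview forum URL
print(row["type"])             # "poster" or "oral"

# arxiv_id is an empty string (length 0) when no arXiv ID is linked.
arxiv_id = row["arxiv_id"] or None

# A sentinel of -1 marks missing counts such as upvotes and num_comments.
upvotes = row["upvotes"] if row["upvotes"] >= 0 else None
```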
null
https://openreview.net/forum?id=v9RqRFSLQ2
@inproceedings{ zhu2024learning, title={Learning from Uncertain Data: From Possible Worlds to Possible Models}, author={Jiongli Zhu and Su Feng and Boris Glavic and Babak Salimi}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=v9RqRFSLQ2} }
We introduce an efficient method for learning linear models from uncertain data, where uncertainty is represented as a set of possible variations in the data, leading to predictive multiplicity. Our approach leverages abstract interpretation and zonotopes, a type of convex polytope, to compactly represent these dataset variations, enabling the symbolic execution of gradient descent on all possible worlds simultaneously. We develop techniques to ensure that this process converges to a fixed point and derive closed-form solutions for this fixed point. Our method provides sound over-approximations of all possible optimal models and viable prediction ranges. We demonstrate the effectiveness of our approach through theoretical and empirical analysis, highlighting its potential to reason about model and prediction uncertainty due to data quality issues in training data.
Learning from Uncertain Data: From Possible Worlds to Possible Models
[ "Jiongli Zhu", "Su Feng", "Boris Glavic", "Babak Salimi" ]
NeurIPS.cc/2024/Conference
2405.18549
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
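To make the zonotope idea above concrete: for ordinary least squares the gradient is affine in the parameters and labels, so a zonotope (a center vector plus a generator matrix) is closed under the gradient-descent update and can be propagated exactly. A toy numpy sketch under those assumptions, not the authors' implementation:

```python
# Toy symbolic gradient descent over a zonotope of label vectors.
# Labels live in {y_c + Y_G @ e : e in [-1, 1]^k}; since the least-squares
# gradient is affine in (theta, y), center and generators update in lockstep.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 3, 5                       # samples, features, noise symbols
X = rng.normal(size=(n, d))
y_c = X @ rng.normal(size=d)             # center of the label zonotope
Y_G = 0.1 * rng.normal(size=(n, k))      # generators: possible label variations

theta_c = np.zeros(d)                    # parameter zonotope center ...
Theta_G = np.zeros((d, k))               # ... and generators (shared symbols)

eta = 0.1
for _ in range(200):
    # grad(theta) = (2/n) X^T (X theta - y), applied to center and generators
    theta_c -= eta * (2 / n) * X.T @ (X @ theta_c - y_c)
    Theta_G -= eta * (2 / n) * X.T @ (X @ Theta_G - Y_G)

# Interval over-approximation of all possible optimal parameters
radius = np.abs(Theta_G).sum(axis=1)
print(np.round(theta_c - radius, 3), np.round(theta_c + radius, 3))
```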
null
https://openreview.net/forum?id=v8X70gTodR
@inproceedings{ tan2024analysing, title={Analysing the Generalisation and Reliability of Steering Vectors}, author={Daniel Chee Hian Tan and David Chanin and Aengus Lynch and Brooks Paige and Dimitrios Kanoulas and Adri{\`a} Garriga-Alonso and Robert Kirk}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=v8X70gTodR} }
Steering vectors (SVs) are a new approach to efficiently adjust language model behaviour at inference time by intervening on intermediate model activations. They have shown promise in terms of improving both capabilities and model alignment. However, the reliability and generalisation properties of this approach are unknown. In this work, we rigorously investigate these properties, and show that steering vectors have substantial limitations both in- and out-of-distribution. In-distribution, steerability is highly variable across different inputs. Depending on the concept, spurious biases can substantially contribute to how effective steering is for each input, presenting a challenge for the widespread use of steering vectors. Out-of-distribution, steering vectors often generalise well, but for several concepts they are brittle to reasonable changes in the prompt and consequently fail to generalise. Overall, our findings show that while steering can work well in the right circumstances, there remain many technical difficulties in applying steering vectors to guide models' behaviour at scale.
Analysing the Generalisation and Reliability of Steering Vectors
[ "Daniel Chee Hian Tan", "David Chanin", "Aengus Lynch", "Brooks Paige", "Dimitrios Kanoulas", "Adrià Garriga-Alonso", "Robert Kirk" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
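The intervention the abstract above studies amounts to adding a fixed vector to intermediate activations at inference time. A minimal PyTorch forward-hook sketch with GPT-2 as a small stand-in; the layer index, scale, and random vector are illustrative placeholders (a real steering vector would be derived from contrastive activations):

```python
# Sketch: steering a language model by editing a block's hidden states.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

layer_idx, scale = 6, 4.0                       # assumed hyperparameters
vec = torch.randn(model.config.hidden_size)     # placeholder steering vector
vec = vec / vec.norm()

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple; hidden states are the first element.
    hs = output[0] + scale * vec.to(output[0].dtype)
    return (hs,) + output[1:]

handle = model.transformer.h[layer_idx].register_forward_hook(add_steering)
ids = tok("The weather today is", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=20, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
handle.remove()                                 # restore unsteered behaviour
```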
null
https://openreview.net/forum?id=v8RRFNbJ43
@inproceedings{ kokhlikyan2024measuring, title={Measuring Dejavu Memorization Efficiently}, author={Narine Kokhlikyan and Bargav Jayaraman and Florian Bordes and Chuan Guo and Kamalika Chaudhuri}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=v8RRFNbJ43} }
Recent research has shown that representation learning models may accidentally memorize their training data. For example, the déjà vu method shows that for certain representation learning models and training images, it is sometimes possible to correctly predict the foreground label given only the representation of the background – better than through dataset-level correlations. However, this measurement method requires training two models – one to estimate dataset-level correlations and the other to estimate memorization. This multiple-model setup becomes infeasible for large open-source models. In this work, we propose alternative simple methods to estimate dataset-level correlations, and show that these can be used to approximate an off-the-shelf model’s memorization ability without any retraining. This enables, for the first time, the measurement of memorization in pre-trained open-source image representation and vision-language models. Our results show that different ways of measuring memorization yield very similar aggregate results. We also find that open-source models typically have lower aggregate memorization than similar models trained on a subset of the data. The code is available both for vision (https://github.com/facebookresearch/DejaVuOSS) and vision language (https://github.com/facebookresearch/VLMDejaVu) models.
Measuring Dejavu Memorization Efficiently
[ "Narine Kokhlikyan", "Bargav Jayaraman", "Florian Bordes", "Chuan Guo", "Kamalika Chaudhuri" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=v7vYVvmfru
@inproceedings{ gong2024an, title={An Accelerated Algorithm for Stochastic Bilevel Optimization under Unbounded Smoothness}, author={Xiaochuan Gong and Jie Hao and Mingrui Liu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=v7vYVvmfru} }
This paper investigates a class of stochastic bilevel optimization problems where the upper-level function is nonconvex with potentially unbounded smoothness and the lower-level problem is strongly convex. These problems have significant applications in sequential data learning, such as text classification using recurrent neural networks. The unbounded smoothness is characterized by the smoothness constant of the upper-level function scaling linearly with the gradient norm, lacking a uniform upper bound. Existing state-of-the-art algorithms require $\widetilde{O}(\epsilon^{-4})$ oracle calls of stochastic gradient or Hessian/Jacobian-vector product to find an $\epsilon$-stationary point. However, it remains unclear if we can further improve the convergence rate when the assumptions for the function at the population level also hold for each random realization almost surely (e.g., Lipschitzness of each realization of the stochastic gradient). To address this issue, we propose a new Accelerated Bilevel Optimization algorithm named AccBO. The algorithm updates the upper-level variable by normalized stochastic gradient descent with recursive momentum and the lower-level variable by the stochastic Nesterov accelerated gradient descent algorithm with averaging. We prove that our algorithm achieves an oracle complexity of $\widetilde{O}(\epsilon^{-3})$ to find an $\epsilon$-stationary point. Our proof relies on a novel lemma characterizing the dynamics of stochastic Nesterov accelerated gradient descent algorithm under distribution drift with high probability for the lower-level variable, which is of independent interest and also plays a crucial role in analyzing the hypergradient estimation error over time. Experimental results on various tasks confirm that our proposed algorithm achieves the predicted theoretical acceleration and significantly outperforms baselines in bilevel optimization.
An Accelerated Algorithm for Stochastic Bilevel Optimization under Unbounded Smoothness
[ "Xiaochuan Gong", "Jie Hao", "Mingrui Liu" ]
NeurIPS.cc/2024/Conference
2409.19212
[ "https://github.com/mingruiliu-ml-lab/accelerated-bilevel-optimization-unbounded-smoothness" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
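The upper-level update described above, normalized SGD with recursive (STORM-style) momentum, can be sketched in isolation. The bilevel structure and the lower-level Nesterov loop are elided; a toy quadratic stands in for the stochastic hypergradient oracle:

```python
# Sketch: normalized SGD with recursive momentum on a toy objective.
import numpy as np

rng = np.random.default_rng(0)

def grad(x, xi):
    """Stochastic gradient of f(x) = ||x||^2 / 2 with shared noise sample xi."""
    return x + 0.1 * xi

x = rng.normal(size=10)
eta, alpha = 0.05, 0.2                  # step size, momentum parameter
d = grad(x, rng.normal(size=10))        # momentum estimator d_t
for _ in range(500):
    x_prev = x
    x = x - eta * d / (np.linalg.norm(d) + 1e-12)       # normalized step
    xi = rng.normal(size=10)            # one sample, evaluated at both points
    d = grad(x, xi) + (1 - alpha) * (d - grad(x_prev, xi))

print(np.linalg.norm(x))                # should be driven near zero
```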
null
https://openreview.net/forum?id=v6W55lCkhN
@inproceedings{ zhao2024cenas, title={{CE}-{NAS}: An End-to-End Carbon-Efficient Neural Architecture Search Framework}, author={Yiyang Zhao and Yunzhuo Liu and Bo Jiang and Tian Guo}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=v6W55lCkhN} }
This work presents a novel approach to neural architecture search (NAS) that aims to increase carbon efficiency for the model design process. The proposed framework, CE-NAS, addresses the key challenge of the high carbon cost associated with NAS by exploiting variations in the carbon intensity of energy and the differing energy demands of NAS algorithms. At a high level, CE-NAS leverages a reinforcement-learning agent to dynamically adjust GPU resources based on carbon intensity, predicted by a time-series transformer, to balance energy-efficient sampling and energy-intensive evaluation tasks. Furthermore, CE-NAS leverages a recently proposed multi-objective optimizer to effectively reduce the NAS search space. We demonstrate the efficacy of CE-NAS in lowering carbon emissions while achieving SOTA results for both NAS datasets and open-domain NAS tasks. For example, on the HW-NasBench dataset, CE-NAS reduces carbon emissions by up to 7.22X while maintaining a search efficiency comparable to vanilla NAS. For open-domain NAS tasks, CE-NAS achieves SOTA results with 97.35% top-1 accuracy on CIFAR-10 with only 1.68M parameters and a carbon consumption of 38.53 lbs of CO2. On ImageNet, our searched model achieves 80.6% top-1 accuracy with a 0.78 ms TensorRT latency using FP16 on NVIDIA V100, consuming only 909.86 lbs of CO2, making it comparable to other one-shot-based NAS baselines. Our code is available at https://github.com/cake-lab/CE-NAS.
CE-NAS: An End-to-End Carbon-Efficient Neural Architecture Search Framework
[ "Yiyang Zhao", "Yunzhuo Liu", "Bo Jiang", "Tian Guo" ]
NeurIPS.cc/2024/Conference
2406.01414
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=v5Un2QqnRf
@inproceedings{ jiao2024lumen, title={Lumen: Unleashing Versatile Vision-Centric Capabilities of Large Multimodal Models}, author={Yang Jiao and Shaoxiang Chen and ZEQUN JIE and Jingjing Chen and Lin Ma and Yu-Gang Jiang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=v5Un2QqnRf} }
Large Multimodal Models (LMMs) are a hot research topic in the computer vision area and have also demonstrated remarkable potential across multiple disciplinary fields. A recent trend is to further extend and enhance the perception capabilities of LMMs. Current methods follow the paradigm of adapting visual task outputs to the format of the language model, which is the main component of an LMM. While this adaptation enables convenient development of such LMMs with minimal modifications, it overlooks the intrinsic characteristics of diverse visual tasks and hinders the learning of perception capabilities. To address this issue, we propose a novel LMM architecture named Lumen, a Large multimodal model with versatile vision-centric capability enhancement. We decouple the LMM's learning of perception capabilities into task-agnostic and task-specific stages. Lumen first promotes fine-grained vision-language concept alignment, which is the fundamental capability for various visual tasks; the output of the task-agnostic stage is thus a shared representation for all the tasks we address in this paper. Task-specific decoding is then carried out by flexibly routing the shared representation to lightweight task decoders with negligible training effort. Comprehensive experimental results on a series of vision-centric and VQA benchmarks indicate that our Lumen model not only achieves or surpasses the performance of existing LMM-based approaches on a range of vision-centric tasks but also maintains general visual understanding and instruction-following capabilities.
Lumen: Unleashing Versatile Vision-Centric Capabilities of Large Multimodal Models
[ "Yang Jiao", "Shaoxiang Chen", "ZEQUN JIE", "Jingjing Chen", "Lin Ma", "Yu-Gang Jiang" ]
NeurIPS.cc/2024/Conference
2403.07304
[ "https://github.com/sxjyjay/lumen" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=v4dXL3LsGX
@inproceedings{ liang2024learning, title={Learning to Cooperate with Humans using Generative Agents}, author={Yancheng Liang and Daphne Chen and Abhishek Gupta and Simon Shaolei Du and Natasha Jaques}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=v4dXL3LsGX} }
Training agents that can coordinate zero-shot with humans is a key mission in multi-agent reinforcement learning (MARL). Current algorithms focus on training simulated human partner policies which are then used to train a Cooperator agent. The simulated human is produced either through behavior cloning over a dataset of human cooperation behavior, or by using MARL to create a population of simulated agents. However, these approaches often struggle to produce a Cooperator that can coordinate well with real humans, since the simulated humans fail to cover the diverse strategies and styles employed by people in the real world. We show \emph{learning a generative model of human partners} can effectively address this issue. Our model learns a latent variable representation of the human that can be regarded as encoding the human's unique strategy, intention, experience, or style. This generative model can be flexibly trained from any (human or neural policy) agent interaction data. By sampling from the latent space, we can use the generative model to produce different partners to train Cooperator agents. We evaluate our method---Generative Agent Modeling for Multi-agent Adaptation (GAMMA)---on Overcooked, a challenging cooperative cooking game that has become a standard benchmark for zero-shot coordination. We conduct an evaluation with real human teammates, and the results show that GAMMA consistently improves performance, whether the generative model is trained on simulated populations or human datasets. Further, we propose a method for posterior sampling from the generative model that is biased towards the human data, enabling us to efficiently improve performance with only a small amount of expensive human interaction data.
Learning to Cooperate with Humans using Generative Agents
[ "Yancheng Liang", "Daphne Chen", "Abhishek Gupta", "Simon Shaolei Du", "Natasha Jaques" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=v416YLOQuU
@inproceedings{ ahn2024adam, title={Adam with model exponential moving average is effective for nonconvex optimization}, author={Kwangjun Ahn and Ashok Cutkosky}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=v416YLOQuU} }
In this work, we offer a theoretical analysis of two modern optimization techniques for training large and complex models: (i) adaptive optimization algorithms, such as Adam, and (ii) the model exponential moving average (EMA). Specifically, we demonstrate that a clipped version of Adam with model EMA achieves the optimal convergence rates in various nonconvex optimization settings, both smooth and nonsmooth. Moreover, when the scale varies significantly across different coordinates, we demonstrate that the coordinate-wise adaptivity of Adam is provably advantageous. Notably, unlike previous analyses of Adam, our analysis crucially relies on its core elements---momentum and discounting factors---as well as model EMA, motivating their wide applications in practice.
Adam with model exponential moving average is effective for nonconvex optimization
[ "Kwangjun Ahn", "Ashok Cutkosky" ]
NeurIPS.cc/2024/Conference
2405.18199
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
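A minimal PyTorch sketch of the combination analyzed above: Adam with gradient clipping plus an exponential moving average of the model weights. The decay, clipping threshold, and toy objective are illustrative assumptions, not the paper's settings:

```python
# Sketch: clipped Adam with a model EMA; deploy ema_model, not model.
import copy
import torch

model = torch.nn.Linear(10, 1)
ema_model = copy.deepcopy(model)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
decay = 0.999

for step in range(100):
    x = torch.randn(32, 10)
    loss = model(x).pow(2).mean()                   # toy objective
    opt.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    opt.step()
    with torch.no_grad():                           # EMA of the weights
        for p_ema, p in zip(ema_model.parameters(), model.parameters()):
            p_ema.mul_(decay).add_(p, alpha=1 - decay)
```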
null
https://openreview.net/forum?id=v3y785TN7B
@inproceedings{ xue2024geonlf, title={Geo{NLF}: Geometry guided Pose-Free Neural Li{DAR} Fields}, author={Weiyi Xue and Zehan Zheng and Fan Lu and Haiyun Wei and Guang Chen and changjun jiang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=v3y785TN7B} }
Although recent efforts have extended Neural Radiance Field (NeRF) into LiDAR point cloud synthesis, the majority of existing works exhibit a strong dependence on precomputed poses. However, point cloud registration methods struggle to achieve precise global pose estimation, whereas previous pose-free NeRFs overlook geometric consistency in global reconstruction. In light of this, we explore the geometric insights of point clouds, which provide explicit registration priors for reconstruction. Based on this, we propose Geometry guided Neural LiDAR Fields (GeoNLF), a hybrid framework that alternates between global neural reconstruction and pure geometric pose optimization. Furthermore, NeRFs tend to overfit individual frames and easily get stuck in local minima under sparse-view inputs. To tackle this issue, we develop a selective-reweighting strategy and introduce geometric constraints for robust optimization. Extensive experiments on NuScenes and KITTI-360 datasets demonstrate the superiority of GeoNLF in both novel view synthesis and multi-view registration of low-frequency large-scale point clouds.
GeoNLF: Geometry guided Pose-Free Neural LiDAR Fields
[ "Weiyi Xue", "Zehan Zheng", "Fan Lu", "Haiyun Wei", "Guang Chen", "changjun jiang" ]
NeurIPS.cc/2024/Conference
2407.05597
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=v3jHuoxMw8
@inproceedings{ liu2024visionlanguage, title={Vision-Language Navigation with Energy-Based Policy}, author={Rui Liu and Wenguan Wang and Yi Yang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=v3jHuoxMw8} }
Vision-language navigation (VLN) requires an agent to execute actions following human instructions. Existing VLN models are optimized through expert demonstrations by supervised behavioural cloning or incorporating manual reward engineering. While straightforward, these efforts overlook the accumulation of errors in the Markov decision process, and struggle to match the distribution of the expert policy. Going beyond this, we propose an Energy-based Navigation Policy (ENP) to model the joint state-action distribution using an energy-based model. At each step, low energy values correspond to the state-action pairs that the expert is most likely to perform, and vice versa. Theoretically, the optimization objective is equivalent to minimizing the forward divergence between the occupancy measure of the expert and ours. Consequently, ENP learns to globally align with the expert policy by maximizing the likelihood of the actions and modeling the dynamics of the navigation states in a collaborative manner. With a variety of VLN architectures, ENP achieves promising performances on R2R, REVERIE, RxR, and R2R-CE, unleashing the power of existing VLN models.
Vision-Language Navigation with Energy-Based Policy
[ "Rui Liu", "Wenguan Wang", "Yi Yang" ]
NeurIPS.cc/2024/Conference
2410.14250
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=v1kpc060aC
@inproceedings{ dahan2024weight, title={Weight for Robustness: A Comprehensive Approach towards Optimal Fault-Tolerant Asynchronous {ML}}, author={Tehila Dahan and Kfir Yehuda Levy}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=v1kpc060aC} }
We address the challenges of Byzantine-robust training in asynchronous distributed machine learning systems, aiming to enhance efficiency amid massive parallelization and heterogeneous compute resources. Asynchronous systems, marked by independently operating workers and intermittent updates, uniquely struggle with maintaining integrity against Byzantine failures, which encompass malicious or erroneous actions that disrupt learning. The inherent delays in such settings not only introduce additional bias to the system but also obscure the disruptions caused by Byzantine faults. To tackle these issues, we adapt the Byzantine framework to asynchronous dynamics by introducing a novel weighted robust aggregation framework. This allows for the extension of robust aggregators and a recent meta-aggregator to their weighted versions, mitigating the effects of delayed updates. By further incorporating a recent variance-reduction technique, we achieve an optimal convergence rate for the first time in an asynchronous Byzantine environment. Our methodology is rigorously validated through empirical and theoretical analysis, demonstrating its effectiveness in enhancing fault tolerance and optimizing performance in asynchronous ML systems.
Weight for Robustness: A Comprehensive Approach towards Optimal Fault-Tolerant Asynchronous ML
[ "Tehila Dahan", "Kfir Yehuda Levy" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=v1BIm8wESL
@inproceedings{ ye2024skinned, title={Skinned Motion Retargeting with Dense Geometric Interaction Perception}, author={Zijie Ye and Jia-Wei Liu and Jia Jia and Shikun Sun and Mike Zheng Shou}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=v1BIm8wESL} }
Capturing and maintaining geometric interactions among different body parts is crucial for successful motion retargeting in skinned characters. Existing approaches often overlook body geometries or add a geometry correction stage after skeletal motion retargeting. This results in conflicts between skeleton interaction and geometry correction, leading to issues such as jitter, interpenetration, and contact mismatches. To address these challenges, we introduce a new retargeting framework, MeshRet, which directly models the dense geometric interactions in motion retargeting. Initially, we establish dense mesh correspondences between characters using semantically consistent sensors (SCS), effective across diverse mesh topologies. Subsequently, we develop a novel spatio-temporal representation called the dense mesh interaction (DMI) field. This field, a collection of interacting SCS feature vectors, skillfully captures both contact and non-contact interactions between body geometries. By aligning the DMI field during retargeting, MeshRet not only preserves motion semantics but also prevents self-interpenetration and ensures contact preservation. Extensive experiments on the public Mixamo dataset and our newly-collected ScanRet dataset demonstrate that MeshRet achieves state-of-the-art performance. Code available at https://github.com/abcyzj/MeshRet.
Skinned Motion Retargeting with Dense Geometric Interaction Perception
[ "Zijie Ye", "Jia-Wei Liu", "Jia Jia", "Shikun Sun", "Mike Zheng Shou" ]
NeurIPS.cc/2024/Conference
2410.20986
[ "https://github.com/abcyzj/meshret" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=v07KRLYxDX
@inproceedings{ sun2024achieving, title={Achieving Domain-Independent Certified Robustness via Knowledge Continuity}, author={Alan Sun and Chiyu Ma and Kenneth Ge and Soroush Vosoughi}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=v07KRLYxDX} }
We present *knowledge continuity*, a novel definition inspired by Lipschitz continuity which aims to certify the robustness of neural networks across input domains (such as continuous and discrete domains in vision and language, respectively). Most existing approaches that seek to certify robustness, especially Lipschitz continuity, lie within the continuous domain with norm and distribution-dependent guarantees. In contrast, our proposed definition yields certification guarantees that depend only on the loss function and the intermediate learned metric spaces of the neural network. These bounds are independent of domain modality, norms, and distribution. We further demonstrate that the expressiveness of a model class is not at odds with its knowledge continuity. This implies that achieving robustness by maximizing knowledge continuity should not theoretically hinder inferential performance. Finally, to complement our theoretical results, we present several applications of knowledge continuity, such as regularization and a certification algorithm, and show that knowledge continuity can be used to localize vulnerable components of a neural network.
Achieving Domain-Independent Certified Robustness via Knowledge Continuity
[ "Alan Sun", "Chiyu Ma", "Kenneth Ge", "Soroush Vosoughi" ]
NeurIPS.cc/2024/Conference
2411.01644
[ "https://github.com/alansun17904/kc" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=uzIWqRzjEP
@inproceedings{ jones2024learning, title={Learning to Edit Visual Programs with Self-Supervision}, author={R. Kenny Jones and Renhao Zhang and Aditya Ganeshan and Daniel Ritchie}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uzIWqRzjEP} }
We design a system that learns how to edit visual programs. Our edit network consumes a complete input program and a visual target. From this input, we task our network with predicting a local edit operation that could be applied to the input program to improve its similarity to the target. In order to apply this scheme for domains that lack program annotations, we develop a self-supervised learning approach that integrates this edit network into a bootstrapped finetuning loop along with a network that predicts entire programs in one-shot. Our joint finetuning scheme, when coupled with an inference procedure that initializes a population from the one-shot model and evolves members of this population with the edit network, helps to infer more accurate visual programs. Over multiple domains, we experimentally compare our method against the alternative of using only the one-shot model, and find that even under equal search-time budgets, our editing-based paradigm provides significant advantages.
Learning to Edit Visual Programs with Self-Supervision
[ "R. Kenny Jones", "Renhao Zhang", "Aditya Ganeshan", "Daniel Ritchie" ]
NeurIPS.cc/2024/Conference
2406.02383
[ "https://github.com/rkjones4/vpi-edit" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=uyqjpycMbU
@inproceedings{ vepa2024integrating, title={Integrating Deep Metric Learning with Coreset for Active Learning in 3D Segmentation}, author={Arvind Murari Vepa and ZUKANG YANG and Andrew Choi and Jungseock Joo and Fabien Scalzo and Yizhou Sun}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uyqjpycMbU} }
Deep learning has driven remarkable advancements in machine learning, yet it often demands extensive annotated data. Tasks like 3D semantic segmentation impose a substantial annotation burden, especially in domains like medicine, where expert annotations drive up the cost. Active learning (AL) holds great potential to alleviate this annotation burden in 3D medical segmentation. The majority of existing AL methods, however, are not tailored to the medical domain. While weakly-supervised methods have been explored to reduce annotation burden, the fusion of AL with weak supervision remains unexplored, despite its potential to significantly reduce annotation costs. Additionally, there is little focus on slice-based AL for 3D segmentation, which can also significantly reduce costs in comparison to conventional volume-based AL. This paper introduces a novel metric learning method for Coreset to perform slice-based active learning in 3D medical segmentation. By merging contrastive learning with inherent data groupings in medical imaging, we learn a metric that emphasizes the relevant differences in samples for training 3D medical segmentation models. We perform comprehensive evaluations using both weak and full annotations across four datasets (medical and non-medical). Our findings demonstrate that our approach surpasses existing active learning techniques on both weak and full annotations and obtains superior performance with low annotation budgets, which is crucial in medical imaging. Source code for this project is available in the supplementary materials and on GitHub: https://github.com/arvindmvepa/al-seg.
Integrating Deep Metric Learning with Coreset for Active Learning in 3D Segmentation
[ "Arvind Murari Vepa", "ZUKANG YANG", "Andrew Choi", "Jungseock Joo", "Fabien Scalzo", "Yizhou Sun" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=uyLtEFnpQP
@inproceedings{ zheng2024duin, title={Du-{IN}: Discrete units-guided mask modeling for decoding speech from Intracranial Neural signals}, author={Hui Zheng and Haiteng Wang and Weibang Jiang and Zhongtao Chen and Li He and Peiyang Lin and Penghu Wei and Guoguang Zhao and Yunzhe Liu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uyLtEFnpQP} }
Invasive brain-computer interfaces with Electrocorticography (ECoG) have shown promise for high-performance speech decoding in medical applications, but less damaging methods like intracranial stereo-electroencephalography (sEEG) remain underexplored. With rapid advances in representation learning, leveraging abundant recordings to enhance speech decoding is increasingly attractive. However, popular methods often pre-train temporal models based on brain-level tokens, overlooking that brain activities in different regions are highly desynchronized during tasks. Alternatively, they pre-train spatial-temporal models based on channel-level tokens but fail to evaluate them on challenging tasks like speech decoding, which requires intricate processing in specific language-related areas. To address this issue, we collected a well-annotated Chinese word-reading sEEG dataset targeting language-related brain networks from 12 subjects. Using this benchmark, we developed the Du-IN model, which extracts contextual embeddings based on region-level tokens through discrete codex-guided mask modeling. Our model achieves state-of-the-art performance on the 61-word classification task, surpassing all baselines. Model comparisons and ablation studies reveal that our design choices, including (i) temporal modeling based on region-level tokens by utilizing 1D depthwise convolution to fuse channels in the ventral sensorimotor cortex (vSMC) and superior temporal gyrus (STG) and (ii) self-supervision through discrete codex-guided mask modeling, significantly contribute to this performance. Overall, our approach -- inspired by neuroscience findings and capitalizing on region-level representations from specific brain regions -- is suitable for invasive brain modeling and represents a promising neuro-inspired AI approach in brain-computer interfaces. Code and dataset are available at https://github.com/liulab-repository/Du-IN.
Du-IN: Discrete units-guided mask modeling for decoding speech from Intracranial Neural signals
[ "Hui Zheng", "Haiteng Wang", "Weibang Jiang", "Zhongtao Chen", "Li He", "Peiyang Lin", "Penghu Wei", "Guoguang Zhao", "Yunzhe Liu" ]
NeurIPS.cc/2024/Conference
2405.11459
[ "https://github.com/liulab-repository/du-in" ]
https://huggingface.co/papers/2405.11459
0
0
0
9
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=uwSaDHLlYc
@inproceedings{ du2024diversitydriven, title={Diversity-Driven Synthesis: Enhancing Dataset Distillation through Directed Weight Adjustment}, author={Jiawei Du and Xin Zhang and Juncheng Hu and Wenxin Huang and Joey Tianyi Zhou}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uwSaDHLlYc} }
The sharp increase in data-related expenses has motivated research into condensing datasets while retaining the most informative features. Dataset distillation has thus recently come to the fore. This paradigm generates synthetic datasets that are representative enough to replace the original dataset in training a neural network. To avoid redundancy in these synthetic datasets, it is crucial that each element contains unique features and remains diverse from others during the synthesis stage. In this paper, we provide a thorough theoretical and empirical analysis of diversity within synthesized datasets. We argue that enhancing diversity can improve the parallelizable yet isolated synthesizing approach. Specifically, we introduce a novel method that employs dynamic and directed weight adjustment techniques to modulate the synthesis process, thereby maximizing the representativeness and diversity of each synthetic instance. Our method ensures that each batch of synthetic data mirrors the characteristics of a large, varying subset of the original dataset. Extensive experiments across multiple datasets, including CIFAR, Tiny-ImageNet, and ImageNet-1K, demonstrate the superior performance of our method, highlighting its effectiveness in producing diverse and representative synthetic datasets with minimal computational expense. Our code is available at https://github.com/AngusDujw/Diversity-Driven-Synthesis.
Diversity-Driven Synthesis: Enhancing Dataset Distillation through Directed Weight Adjustment
[ "Jiawei Du", "Xin Zhang", "Juncheng Hu", "Wenxin Huang", "Joey Tianyi Zhou" ]
NeurIPS.cc/2024/Conference
2409.17612
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=uvFDaeFR9X
@inproceedings{ agafonov2024exploring, title={Exploring Jacobian Inexactness in Second-Order Methods for Variational Inequalities: Lower Bounds, Optimal Algorithms and Quasi-Newton Approximations}, author={Artem Agafonov and Petr Ostroukhov and Roman Mozhaev and Konstantin Yakovlev and Eduard Gorbunov and Martin Tak{\'a}{\v{c}} and Alexander Gasnikov and Dmitry Kamzolov}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uvFDaeFR9X} }
Variational inequalities represent a broad class of problems, including minimization and min-max problems, commonly found in machine learning. Existing second-order and high-order methods for variational inequalities require precise computation of derivatives, often resulting in prohibitively high iteration costs. In this work, we study the impact of Jacobian inaccuracy on second-order methods. For the smooth and monotone case, we establish a lower bound with explicit dependence on the level of Jacobian inaccuracy and propose an optimal algorithm for this key setting. When derivatives are exact, our method converges at the same rate as exact optimal second-order methods. To reduce the cost of solving the auxiliary problem, which arises in all high-order methods with global convergence, we introduce several Quasi-Newton approximations. Our method with Quasi-Newton updates achieves a global sublinear convergence rate. We extend our approach with a tensor generalization for inexact high-order derivatives and support the theory with experiments.
Exploring Jacobian Inexactness in Second-Order Methods for Variational Inequalities: Lower Bounds, Optimal Algorithms and Quasi-Newton Approximations
[ "Artem Agafonov", "Petr Ostroukhov", "Roman Mozhaev", "Konstantin Yakovlev", "Eduard Gorbunov", "Martin Takáč", "Alexander Gasnikov", "Dmitry Kamzolov" ]
NeurIPS.cc/2024/Conference
2405.15990
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=uuQQwrjMzb
@inproceedings{ mittal2024adaptive, title={Adaptive Labeling for Efficient Out-of-distribution Model Evaluation}, author={Daksh Mittal and Yuanzhe Ma and Shalmali Joshi and Hongseok Namkoong}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uuQQwrjMzb} }
Datasets often suffer severe selection bias; clinical labels are only available on patients for whom doctors ordered medical exams. To assess model performance outside the support of available data, we present a computational framework for adaptive labeling, providing cost-efficient model evaluations under severe distribution shifts. We formulate the problem as a Markov Decision Process over states defined by posterior beliefs on model performance. Each batch of new labels incurs a “state transition” to sharper beliefs, and we choose batches to minimize uncertainty on model performance at the end of the label collection process. Instead of relying on high-variance REINFORCE policy gradient estimators that do not scale, our adaptive labeling policy is optimized using path-wise policy gradients computed by auto-differentiating through simulated roll-outs. Our framework is agnostic to different uncertainty quantification approaches and highlights the virtue of planning in adaptive labeling. On synthetic and real datasets, we empirically demonstrate even a one-step lookahead policy substantially outperforms active learning-inspired heuristics.
Adaptive Labeling for Efficient Out-of-distribution Model Evaluation
[ "Daksh Mittal", "Yuanzhe Ma", "Shalmali Joshi", "Hongseok Namkoong" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=utMOhsgXzB
@inproceedings{ gerych2024bendvlm, title={Bend{VLM}: Test-Time Debiasing of Vision-Language Embeddings}, author={Walter Gerych and Haoran Zhang and Kimia Hamidieh and Eileen Pan and Maanas Sharma and Thomas Hartvigsen and Marzyeh Ghassemi}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=utMOhsgXzB} }
Vision-language (VL) embedding models have been shown to encode biases present in their training data, such as societal biases that prescribe negative characteristics to members of various racial and gender identities. Due to their widespread adoption for various tasks ranging from few-shot classification to text-guided image generation, debiasing VL models is crucial. Debiasing approaches that fine-tune the VL model often suffer from catastrophic forgetting. On the other hand, fine-tuning-free methods typically utilize a "one-size-fits-all" approach that assumes that correlation with the spurious attribute can be explained using a single linear direction across all possible inputs. In this work, we propose a nonlinear, fine-tuning-free approach for VL embedding model debiasing that tailors the debiasing operation to each unique input. This allows for a more flexible debiasing approach. Additionally, we do not require knowledge of the set of inputs a priori to inference time, making our method more appropriate for online tasks such as retrieval and text-guided image generation.
BendVLM: Test-Time Debiasing of Vision-Language Embeddings
[ "Walter Gerych", "Haoran Zhang", "Kimia Hamidieh", "Eileen Pan", "Maanas Sharma", "Thomas Hartvigsen", "Marzyeh Ghassemi" ]
NeurIPS.cc/2024/Conference
2411.04420
[ "https://github.com/waltergerych/bend_vlm" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=uqxSLoCw3K
@inproceedings{ wang2024mixture, title={Mixture of Demonstrations for In-Context Learning}, author={Song Wang and Zihan Chen and Chengshuai Shi and Cong Shen and Jundong Li}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uqxSLoCw3K} }
In-Context Learning (ICL) empowers Large Language Models (LLMs) to tackle various tasks by providing input-output examples as additional inputs, referred to as demonstrations. Nevertheless, the performance of ICL can be easily impacted by the quality of selected demonstrations. Existing efforts generally learn a retriever model to score each demonstration for selecting suitable demonstrations; however, the effect is suboptimal due to the large search space and the noise from unhelpful demonstrations. In this study, we introduce MoD, which partitions the demonstration pool into groups, each governed by an expert, to reduce the search space. We further design an expert-wise training strategy to alleviate the impact of unhelpful demonstrations when optimizing the retriever model. During inference, experts collaboratively retrieve demonstrations for the input query to enhance the ICL performance. We validate MoD via experiments across a range of NLP datasets and tasks, demonstrating its state-of-the-art performance and shedding new light on the future design of retrieval methods for ICL.
Mixture of Demonstrations for In-Context Learning
[ "Song Wang", "Zihan Chen", "Chengshuai Shi", "Cong Shen", "Jundong Li" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
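A rough sketch of the pipeline the abstract above describes: partition the demonstration pool into groups, then let each group's expert contribute its most similar demonstrations for a query. The embeddings here are random placeholders, and the grouping and scoring details are assumptions, not the paper's method:

```python
# Sketch: expert groups over a demonstration pool, pooled retrieval per query.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
demo_emb = rng.normal(size=(500, 64))               # stand-in demo embeddings
demo_emb /= np.linalg.norm(demo_emb, axis=1, keepdims=True)

n_experts = 8
groups = KMeans(n_clusters=n_experts, n_init=10, random_state=0).fit_predict(demo_emb)

def retrieve(query_emb, per_expert=1):
    """Each expert returns its top demos by cosine similarity; results are pooled."""
    q = query_emb / np.linalg.norm(query_emb)
    picked = []
    for g in range(n_experts):
        idx = np.where(groups == g)[0]
        sims = demo_emb[idx] @ q
        picked.extend(idx[np.argsort(-sims)[:per_expert]].tolist())
    return picked

print(retrieve(rng.normal(size=64)))                # demo indices for a query
```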
null
https://openreview.net/forum?id=uqWfLgZpV1
@inproceedings{ li2024on, title={On the Necessity of Collaboration for Online Model Selection with Decentralized Data}, author={Junfan Li and Zheshun Wu and Zenglin Xu and Irwin King}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uqWfLgZpV1} }
We consider online model selection with decentralized data over $M$ clients, and study the necessity of collaboration among clients. Previous work proposed various federated algorithms without demonstrating their necessity, while we answer the question from a novel perspective of computational constraints. We prove lower bounds on the regret, and propose a federated algorithm and analyze the upper bound. Our results show (i) collaboration is unnecessary in the absence of computational constraints on clients; (ii) collaboration is necessary if the computational cost on each client is limited to $o(K)$, where $K$ is the number of candidate hypothesis spaces. We clarify the unnecessary nature of collaboration in previous federated algorithms for distributed online multi-kernel learning, and improve the regret bounds at a smaller computational and communication cost. Our algorithm relies on three new techniques including an improved Bernstein's inequality for martingales, a federated online mirror descent framework, and decoupling model selection and prediction, which might be of independent interest.
On the Necessity of Collaboration for Online Model Selection with Decentralized Data
[ "Junfan Li", "Zheshun Wu", "Zenglin Xu", "Irwin King" ]
NeurIPS.cc/2024/Conference
2404.09494
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=up4tWnwRol
@inproceedings{ alman2024the, title={The Fine-Grained Complexity of Gradient Computation for Training Large Language Models}, author={Josh Alman and Zhao Song}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=up4tWnwRol} }
Large language models (LLMs) have made fundamental contributions over the last few years. To train an LLM, one needs to alternately run 'forward' and 'backward' computations. The forward computation can be viewed as attention function evaluation, and the backward computation can be viewed as a gradient computation. In previous work by [Alman and Song, NeurIPS 2023], it was proved that the forward step can be performed in almost-linear time in certain parameter regimes, but that there is no truly sub-quadratic time algorithm in the remaining parameter regimes unless the popular hypothesis $\mathsf{SETH}$ is false. In this work, we show nearly identical results for the harder-seeming problem of computing the gradient of the loss function of a one-layer attention network, and thus for the entire process of LLM training. This completely characterizes the fine-grained complexity of every step of LLM training.
The Fine-Grained Complexity of Gradient Computation for Training Large Language Models
[ "Josh Alman", "Zhao Song" ]
NeurIPS.cc/2024/Conference
2402.04497
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
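For reference, the two objects this result concerns, attention evaluation (the forward step) and its gradient (the backward step), computed in the straightforward quadratic-time way with autograd:

```python
# Reference quadratic-time attention forward/backward (not a fast algorithm).
import torch

n, d = 128, 16
X = torch.randn(n, d, requires_grad=True)
Wq, Wk, Wv = (torch.randn(d, d, requires_grad=True) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv
A = torch.softmax(Q @ K.T / d**0.5, dim=-1)     # n x n attention matrix
out = A @ V                                     # forward: attention evaluation

loss = out.pow(2).mean()                        # toy loss
loss.backward()                                 # backward: gradient computation
print(Wq.grad.shape, X.grad.shape)
```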
null
https://openreview.net/forum?id=uoJQ9qadjY
@inproceedings{ jaiswal2024learning, title={Learning to Reason Iteratively and Parallelly for Complex Visual Reasoning Scenarios}, author={Shantanu Jaiswal and Debaditya Roy and Basura Fernando and Cheston Tan}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uoJQ9qadjY} }
Complex visual reasoning and question answering (VQA) is a challenging task that requires compositional multi-step processing and higher-level reasoning capabilities beyond the immediate recognition and localization of objects and events. Here, we introduce a fully neural Iterative and Parallel Reasoning Mechanism (IPRM) that combines two distinct forms of computation -- iterative and parallel -- to better address complex VQA scenarios. Specifically, IPRM's "iterative" computation facilitates compositional step-by-step reasoning for scenarios wherein individual operations need to be computed, stored, and recalled dynamically (e.g. when computing the query “determine the color of pen to the left of the child in red t-shirt sitting at the white table”). Meanwhile, its "parallel" computation allows for the simultaneous exploration of different reasoning paths and benefits more robust and efficient execution of operations that are mutually independent (e.g. when counting individual colors for the query: "determine the maximum occurring color amongst all t-shirts"). We design IPRM as a lightweight and fully-differentiable neural module that can be conveniently applied to both transformer and non-transformer vision-language backbones. It notably outperforms prior task-specific methods and transformer-based attention modules across various image and video VQA benchmarks testing distinct complex reasoning capabilities such as compositional spatiotemporal reasoning (AGQA), situational reasoning (STAR), multi-hop reasoning generalization (CLEVR-Humans) and causal event linking (CLEVRER-Humans). Further, IPRM's internal computations can be visualized across reasoning steps, aiding interpretability and diagnosis of its errors.
Learning to Reason Iteratively and Parallelly for Complex Visual Reasoning Scenarios
[ "Shantanu Jaiswal", "Debaditya Roy", "Basura Fernando", "Cheston Tan" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=unA5hxIn6v
@inproceedings{ chen2024meanfield, title={Mean-Field Analysis for Learning Subspace-Sparse Polynomials with Gaussian Input}, author={Ziang Chen and Rong Ge}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=unA5hxIn6v} }
In this work, we study the mean-field flow for learning subspace-sparse polynomials using stochastic gradient descent and two-layer neural networks, where the input distribution is standard Gaussian and the output only depends on the projection of the input onto a low-dimensional subspace. We establish a necessary condition for SGD-learnability, involving both the characteristics of the target function and the expressiveness of the activation function. In addition, we prove that the condition is almost sufficient, in the sense that a condition slightly stronger than the necessary condition can guarantee the exponential decay of the loss functional to zero.
Mean-Field Analysis for Learning Subspace-Sparse Polynomials with Gaussian Input
[ "Ziang Chen", "Rong Ge" ]
NeurIPS.cc/2024/Conference
2402.08948
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=umukvCdGI6
@inproceedings{ chen2024dofen, title={{DOFEN}: Deep Oblivious Forest {EN}semble}, author={Kuan-Yu Chen and Ping-Han Chiang and Hsin-Rung Chou and Chih-Sheng Chen and Tien-Hao Chang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=umukvCdGI6} }
Deep Neural Networks (DNNs) have revolutionized artificial intelligence, achieving impressive results on diverse data types, including images, videos, and texts. However, DNNs still lag behind Gradient Boosting Decision Trees (GBDT) on tabular data, a format extensively utilized across various domains. This paper introduces DOFEN, which stands for Deep Oblivious Forest ENsemble. DOFEN is a novel DNN architecture inspired by oblivious decision trees and achieves on-off sparse selection of columns. DOFEN surpasses other DNNs on tabular data, achieving state-of-the-art performance on the well-recognized benchmark: Tabular Benchmark, which includes 73 total datasets spanning a wide array of domains. The code of DOFEN is available at: https://github.com/Sinopac-Digital-Technology-Division/DOFEN
DOFEN: Deep Oblivious Forest ENsemble
[ "Kuan-Yu Chen", "Ping-Han Chiang", "Hsin-Rung Chou", "Chih-Sheng Chen", "Tien-Hao Chang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ujwIlTNrAP
@inproceedings{ sun2024altermoma, title={Alter{MOMA}: Fusion Redundancy Pruning for Camera-Li{DAR} Fusion Models with Alternative Modality Masking}, author={shiqi sun and Yantao Lu and Ning Liu and Bo Jiang and Jinchao Chen and Ying Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ujwIlTNrAP} }
Camera-LiDAR fusion models significantly enhance perception performance in autonomous driving. The fusion mechanism leverages the strengths of each modality while minimizing their weaknesses. Moreover, in practice, camera-LiDAR fusion models utilize pre-trained backbones for efficient training. However, we argue that directly loading single-modal pre-trained camera and LiDAR backbones into camera-LiDAR fusion models introduces similar feature redundancy across modalities due to the nature of the fusion mechanism. Unfortunately, existing pruning methods are developed explicitly for single-modal models, and thus, they struggle to effectively identify these specific redundant parameters in camera-LiDAR fusion models. To address this issue in camera-LiDAR fusion models, we propose a novel pruning framework, Alternative Modality Masking Pruning (AlterMOMA), which employs alternative masking on each modality and identifies the redundant parameters. Specifically, when one modality's parameters are masked (deactivated), the absence of features from the masked backbone compels the model to reactivate previous redundant features of the other modality backbone. Therefore, these redundant features and relevant redundant parameters can be identified via the reactivation process. The redundant parameters can be pruned by our proposed importance score evaluation function, Alternative Evaluation (AlterEva), which is based on the observation of the loss changes when certain modality parameters are activated and deactivated. Extensive experiments on the nuScenes and KITTI datasets encompassing diverse tasks, baseline models, and pruning algorithms showcase that AlterMOMA outperforms existing pruning methods, attaining state-of-the-art performance.
AlterMOMA: Fusion Redundancy Pruning for Camera-LiDAR Fusion Models with Alternative Modality Masking
[ "shiqi sun", "Yantao Lu", "Ning Liu", "Bo Jiang", "Jinchao Chen", "Ying Zhang" ]
NeurIPS.cc/2024/Conference
2409.17728
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ujk0XrNTQZ
@inproceedings{ mehta2024drago, title={Drago: Primal-Dual Coupled Variance Reduction for Faster Distributionally Robust Optimization}, author={Ronak Mehta and Jelena Diakonikolas and Zaid Harchaoui}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ujk0XrNTQZ} }
We consider the penalized distributionally robust optimization (DRO) problem with a closed, convex uncertainty set, a setting that encompasses learning using $f$-DRO and spectral/$L$-risk minimization. We present Drago, a stochastic primal-dual algorithm which combines cyclic and randomized components with a carefully regularized primal update to achieve dual variance reduction. Owing to its design, Drago enjoys a state-of-the-art linear convergence rate on strongly convex-strongly concave DRO problems with a fine-grained dependency on primal and dual condition numbers. The theoretical results are supported with numerical benchmarks on regression and classification tasks.
Drago: Primal-Dual Coupled Variance Reduction for Faster Distributionally Robust Optimization
[ "Ronak Mehta", "Jelena Diakonikolas", "Zaid Harchaoui" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ujE83r50tR
@inproceedings{ zhao2024octopus, title={Octopus: A Multi-modal {LLM} with Parallel Recognition and Sequential Understanding}, author={Chuyang Zhao and YuXin Song and Junru Chen and KANG RONG and Haocheng Feng and Gang Zhang and Shufan Ji and Jingdong Wang and Errui Ding and Yifan Sun}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ujE83r50tR} }
Mainstream Multi-modal Large Language Models (MLLMs) have two essential functions, i.e., visual recognition (e.g., grounding) and understanding (e.g., visual question answering). Presently, all these MLLMs integrate visual recognition and understanding in the same sequential manner in the LLM head, i.e., generating the response token-by-token for both recognition and understanding. We think unifying them in the same sequential manner is not optimal for two reasons: 1) parallel recognition is more efficient than sequential recognition and is actually prevailing in deep visual recognition, and 2) the recognition results can be integrated to help high-level cognition (while the current manner does not). Thus motivated, this paper proposes a novel “parallel recognition → sequential understanding” framework for MLLMs. The bottom LLM layers are utilized for parallel recognition, and the recognition results are relayed into the top LLM layers for sequential understanding. Specifically, parallel recognition in the bottom LLM layers is implemented via object queries, a popular mechanism in DEtection TRansformer, which we find to harmonize well with the LLM layers. Empirical studies show our MLLM, named Octopus, improves accuracy on popular MLLM tasks and is up to 5× faster on visual grounding tasks.
Octopus: A Multi-modal LLM with Parallel Recognition and Sequential Understanding
[ "Chuyang Zhao", "YuXin Song", "Junru Chen", "KANG RONG", "Haocheng Feng", "Gang Zhang", "Shufan Ji", "Jingdong Wang", "Errui Ding", "Yifan Sun" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ujDKXWTbJX
@inproceedings{ zhou2024jiuzhang, title={JiuZhang3.0: Efficiently Improving Mathematical Reasoning by Training Small Data Synthesis Models}, author={Kun Zhou and Beichen Zhang and jiapeng wang and Zhipeng Chen and Xin Zhao and Jing Sha and Zhichao Sheng and Shijin Wang and Ji-Rong Wen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ujDKXWTbJX} }
Mathematical reasoning is an important capability of large language models (LLMs) for real-world applications. To enhance this capability, existing work either collects large-scale math-related texts for pre-training, or relies on stronger LLMs (e.g., GPT-4) to synthesize massive math problems. Both types of work generally lead to large costs in training or synthesis. To reduce the cost, based on openly available open-source texts, we propose an efficient way that trains a small LLM for math problem synthesis, to efficiently generate sufficient high-quality pre-training data. To achieve this, we create a dataset using GPT-4 to distill its data synthesis capability into the small LLM. Concretely, we craft a set of prompts based on human education stages to guide GPT-4 to synthesize problems covering diverse math knowledge and difficulty levels. Besides, we adopt a gradient-based influence estimation method to select the most valuable math-related texts. Both are fed into GPT-4 to create the knowledge distillation dataset used to train the small LLM. We leverage it to synthesize 6 million math problems for pre-training our JiuZhang3.0 model. The whole process only needs to invoke the GPT-4 API 9.3k times and use 4.6B data for training. Experimental results have shown that JiuZhang3.0 achieves state-of-the-art performance on several mathematical reasoning datasets, under both natural language reasoning and tool manipulation settings. Our code and data are publicly released at https://github.com/RUCAIBox/JiuZhang3.0.
JiuZhang3.0: Efficiently Improving Mathematical Reasoning by Training Small Data Synthesis Models
[ "Kun Zhou", "Beichen Zhang", "jiapeng wang", "Zhipeng Chen", "Xin Zhao", "Jing Sha", "Zhichao Sheng", "Shijin Wang", "Ji-Rong Wen" ]
NeurIPS.cc/2024/Conference
2405.14365
[ "https://github.com/rucaibox/jiuzhang3.0" ]
https://huggingface.co/papers/2405.14365
0
0
0
9
[ "ToheartZhang/JiuZhang3.0-7B", "ToheartZhang/JiuZhang3.0-Synthesis-7B", "ToheartZhang/JiuZhang3.0-8x7B", "ToheartZhang/JiuZhang3.0-8B", "RichardErkhov/ToheartZhang_-_JiuZhang3.0-Synthesis-7B-gguf", "RichardErkhov/ToheartZhang_-_JiuZhang3.0-7B-gguf" ]
[]
[]
[ "ToheartZhang/JiuZhang3.0-7B", "ToheartZhang/JiuZhang3.0-Synthesis-7B", "ToheartZhang/JiuZhang3.0-8x7B", "ToheartZhang/JiuZhang3.0-8B", "RichardErkhov/ToheartZhang_-_JiuZhang3.0-Synthesis-7B-gguf", "RichardErkhov/ToheartZhang_-_JiuZhang3.0-7B-gguf" ]
[]
[]
1
poster
null
https://openreview.net/forum?id=uikhNa4wam
@inproceedings{ kim2024fifodiffusion, title={{FIFO}-Diffusion: Generating Infinite Videos from Text without Training}, author={Jihwan Kim and Junoh Kang and Jinyoung Choi and Bohyung Han}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uikhNa4wam} }
We propose a novel inference technique based on a pretrained diffusion model for text-conditional video generation. Our approach, called FIFO-Diffusion, is conceptually capable of generating infinitely long videos without additional training. This is achieved by iteratively performing diagonal denoising, which simultaneously processes a series of consecutive frames with increasing noise levels in a queue; our method dequeues a fully denoised frame at the head while enqueuing a new random noise frame at the tail (a toy sketch of these queue mechanics follows this entry). However, diagonal denoising is a double-edged sword: the frames near the tail can take advantage of cleaner frames via forward reference, but such a strategy induces a discrepancy between training and inference. Hence, we introduce latent partitioning to reduce the training-inference gap and lookahead denoising to leverage the benefit of forward referencing. Practically, FIFO-Diffusion consumes a constant amount of memory regardless of the target video length given a baseline model, while being well-suited for parallel inference on multiple GPUs. We demonstrate the promising results and effectiveness of the proposed methods on existing text-to-video generation baselines. Generated video examples and source code are available at our project page.
FIFO-Diffusion: Generating Infinite Videos from Text without Training
[ "Jihwan Kim", "Junoh Kang", "Jinyoung Choi", "Bohyung Han" ]
NeurIPS.cc/2024/Conference
2405.11473
[ "https://github.com/jjihwan/FIFO-Diffusion_public" ]
https://huggingface.co/papers/2405.11473
2
53
5
4
[]
[]
[]
[]
[]
[]
1
poster
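The diagonal-denoising queue in the FIFO-Diffusion entry above is easy to mimic with a toy loop. The sketch below is a minimal numpy illustration under stated assumptions: `denoise_step` is a stand-in for a pretrained text-to-video diffusion model and the noise-level schedule is arbitrary; it only shows the dequeue/enqueue mechanics that give constant memory and unbounded video length.

```python
from collections import deque
import numpy as np

rng = np.random.default_rng(0)
F, D = 8, 16                                 # queue length (frames), latent dim

def denoise_step(latents, noise_levels):
    # Stand-in for one reverse-diffusion update applied to every frame at
    # its own noise level (a real model would be text-conditioned).
    return latents * (1.0 - 0.1 / (noise_levels[:, None] + 1.0))

queue = deque(rng.standard_normal((F, D)))   # head = least noisy frame
levels = np.arange(1.0, F + 1.0)             # noise increases toward the tail

video = []
for _ in range(24):                          # any target length: memory stays O(F)
    latents = denoise_step(np.stack(list(queue)), levels)
    queue = deque(latents)
    video.append(queue.popleft())            # dequeue the fully denoised head
    queue.append(rng.standard_normal(D))     # enqueue fresh noise at the tail
print(len(video), video[0].shape)            # 24 (16,)
```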
null
https://openreview.net/forum?id=uhki1rE2NZ
@inproceedings{ ziyin2024parameter, title={Parameter Symmetry and Noise Equilibrium of Stochastic Gradient Descent}, author={Liu Ziyin and Mingze Wang and Hongchao Li and Lei Wu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uhki1rE2NZ} }
Symmetries are prevalent in deep learning and can significantly influence the learning dynamics of neural networks. In this paper, we examine how exponential symmetries -- a broad subclass of continuous symmetries present in the model architecture or loss function -- interplay with stochastic gradient descent (SGD). We first prove that gradient noise creates a systematic motion (a "Noether flow") of the parameters $\theta$ along the degenerate direction to a unique initialization-independent fixed point $\theta^*$. These points are referred to as the noise equilibria because, at these points, noise contributions from different directions are balanced and aligned. Then, we show that the balance and alignment of gradient noise can serve as a novel alternative mechanism for explaining important phenomena such as progressive sharpening/flattening and representation formation within neural networks and have practical implications for understanding techniques like representation normalization and warmup.
Parameter Symmetry and Noise Equilibrium of Stochastic Gradient Descent
[ "Liu Ziyin", "Mingze Wang", "Hongchao Li", "Lei Wu" ]
NeurIPS.cc/2024/Conference
2402.07193
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ugqx9tgyum
@inproceedings{ nie2024incorporating, title={Incorporating Test-Time Optimization into Training with Dual Networks for Human Mesh Recovery}, author={Yongwei Nie and Mingxian Fan and Chengjiang Long and Qing Zhang and Jian Zhu and Xuemiao Xu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ugqx9tgyum} }
Human Mesh Recovery (HMR) is the task of estimating a parameterized 3D human mesh from an image. One class of methods first trains a regression model for this problem, then further optimizes the pretrained regression model on each specific sample individually at test time. However, the pretrained model may not provide an ideal starting point for the test-time optimization. Inspired by meta-learning, we incorporate the test-time optimization into training, performing a step of test-time optimization for each sample in the training batch before conducting the training optimization over all the training samples (a minimal sketch of this inner-outer loop follows this entry). In this way, we obtain a meta-model whose meta-parameters are friendly to the test-time optimization. At test time, after several test-time optimization steps starting from the meta-parameters, we obtain much higher HMR accuracy than test-time optimization starting from a simply pretrained regression model. Furthermore, we find that test-time HMR objectives differ from training-time objectives, which reduces the effectiveness of learning the meta-model. To solve this problem, we propose a dual-network architecture that unifies the training-time and test-time objectives. Our method, armed with meta-learning and the dual networks, outperforms state-of-the-art regression-based and optimization-based HMR approaches, as validated by extensive experiments. The code is available at https://github.com/fmx789/Meta-HMR.
Incorporating Test-Time Optimization into Training with Dual Networks for Human Mesh Recovery
[ "Yongwei Nie", "Mingxian Fan", "Chengjiang Long", "Qing Zhang", "Jian Zhu", "Xuemiao Xu" ]
NeurIPS.cc/2024/Conference
2401.14121
[ "https://github.com/fmx789/meta-hmr" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
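To make the "test-time optimization inside training" recipe of the Meta-HMR entry above concrete, here is a minimal MAML-style sketch. The linear regressor and both losses are toy assumptions; the real method uses HMR-specific objectives and a dual-network design, but the inner adapt / outer update structure is the same idea.

```python
import torch

def self_supervised_loss(pred, x):   # stand-in test-time objective
    return ((pred - x.mean(dim=1, keepdim=True)) ** 2).mean()

def supervised_loss(pred, y):        # stand-in training objective (ground truth)
    return ((pred - y) ** 2).mean()

model = torch.nn.Linear(10, 3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr = 1e-2

for _ in range(100):
    x, y = torch.randn(32, 10), torch.randn(32, 3)
    # Inner step: simulate one test-time optimization step from the
    # current (meta-)parameters, keeping the graph for meta-gradients.
    loss_tt = self_supervised_loss(model(x), x)
    grads = torch.autograd.grad(loss_tt, list(model.parameters()),
                                create_graph=True)
    fast = [p - inner_lr * g for p, g in zip(model.parameters(), grads)]
    # Outer step: train the meta-parameters through the adapted weights,
    # so the starting point becomes friendly to test-time optimization.
    pred = torch.nn.functional.linear(x, fast[0], fast[1])
    opt.zero_grad()
    supervised_loss(pred, y).backward()
    opt.step()
```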
null
https://openreview.net/forum?id=ugXKInqDCC
@inproceedings{ hu2024adaflow, title={AdaFlow: Imitation Learning with Variance-Adaptive Flow-Based Policies}, author={Xixi Hu and qiang liu and Xingchao Liu and Bo Liu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ugXKInqDCC} }
Diffusion-based imitation learning improves Behavioral Cloning (BC) on multi-modal decision-making, but comes at the cost of significantly slower inference due to the recursion in the diffusion process. This urges us to design efficient policy generators while keeping the ability to generate diverse actions. To address this challenge, we propose AdaFlow, an imitation learning framework based on flow-based generative modeling. AdaFlow represents the policy with state-conditioned ordinary differential equations (ODEs), which are known as probability flows. We reveal an intriguing connection between the conditional variance of their training loss and the discretization error of the ODEs. With this insight, we propose a variance-adaptive ODE solver that can adjust its step size in the inference stage, making AdaFlow an adaptive decision-maker that offers rapid inference without sacrificing diversity (a toy sketch of the variance-adaptive solver follows this entry). Interestingly, it automatically reduces to a one-step generator when the action distribution is uni-modal. Our comprehensive empirical evaluation shows that AdaFlow achieves high performance with fast inference speed.
AdaFlow: Imitation Learning with Variance-Adaptive Flow-Based Policies
[ "Xixi Hu", "qiang liu", "Xingchao Liu", "Bo Liu" ]
NeurIPS.cc/2024/Conference
2402.04292
[ "https://github.com/hxixixh/adaflow" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
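A variance-adaptive ODE solver of the kind the AdaFlow entry above describes can be sketched in a few lines. Below, `vel` plays the probability-flow velocity field and `var` its predicted conditional variance; both are untrained stand-ins, and the step-size rule `h ~ 1/(1 + var/eps)` is an illustrative assumption that merely shows how low variance collapses sampling toward a single Euler step.

```python
import torch

vel = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 2))   # probability-flow velocity
var = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1),
                          torch.nn.Softplus())      # predicted conditional variance

@torch.no_grad()
def sample_action(noise, max_step=1.0, eps=1e-2):
    x, t, steps = noise, 0.0, 0
    while t < 1.0 - 1e-9:
        inp = torch.cat([x, torch.tensor([t])])
        # Step size shrinks where variance is high; if variance ~ 0 the
        # solver degenerates to one Euler step (one-step generation).
        h = min(max_step / (1.0 + var(inp).item() / eps), 1.0 - t)
        x = x + h * vel(inp)
        t += h
        steps += 1
    return x, steps

action, n_steps = sample_action(torch.randn(2))
print(action, n_steps)
```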
null
https://openreview.net/forum?id=ugL2D9idAD
@inproceedings{ yi2024filternet, title={FilterNet: Harnessing Frequency Filters for Time Series Forecasting}, author={Kun Yi and Jingru Fei and Qi Zhang and Hui He and Shufeng Hao and Defu Lian and Wei Fan}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ugL2D9idAD} }
Given the ubiquitous presence of time series data across various domains, precise forecasting of time series holds significant importance and finds widespread real-world applications such as energy, weather, healthcare, etc. While numerous forecasters have been proposed using different network architectures, Transformer-based models achieve state-of-the-art performance in time series forecasting. However, Transformer-based forecasters still suffer from vulnerability to high-frequency signals, computational inefficiency, and a bottleneck in full-spectrum utilization, which are essentially the cornerstones of accurately predicting time series with thousands of points. In this paper, we explore a signal-processing perspective on deep time series forecasting. Inspired by the filtering process, we introduce a simple yet effective network, FilterNet, built upon our proposed learnable frequency filters, which extract key informative temporal patterns by selectively passing or attenuating certain components of time series signals. Concretely, we propose two kinds of learnable filters in FilterNet: (i) a plain shaping filter, which adopts a universal frequency kernel for signal filtering and temporal modeling (a minimal sketch follows this entry); (ii) a contextual shaping filter, which utilizes filtered frequencies examined in terms of their compatibility with input signals for dependency learning. Equipped with the two filters, FilterNet can approximately surrogate the linear and attention mappings widely adopted in the time series literature, while enjoying superb abilities in handling high-frequency noise and utilizing the whole frequency spectrum, which is beneficial for forecasting. Finally, we conduct extensive experiments on eight time series forecasting benchmarks, and the results demonstrate superior performance in terms of both effectiveness and efficiency compared with state-of-the-art methods. Our code is available at https://github.com/aikunyi/filternet.
FilterNet: Harnessing Frequency Filters for Time Series Forecasting
[ "Kun Yi", "Jingru Fei", "Qi Zhang", "Hui He", "Shufeng Hao", "Defu Lian", "Wei Fan" ]
NeurIPS.cc/2024/Conference
2411.01623
[ "https://github.com/aikunyi/filternet" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
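The "plain shaping filter" from the FilterNet entry above reduces to multiplying the series' spectrum by a learnable complex kernel. The sketch below shows that minimal version in PyTorch; the initialization scale and the per-frequency (rather than per-variate) kernel are assumptions, and the contextual shaping filter is omitted.

```python
import torch
import torch.nn as nn

class PlainShapingFilter(nn.Module):
    def __init__(self, seq_len):
        super().__init__()
        n_freq = seq_len // 2 + 1
        # Learnable real/imag parts of a universal frequency kernel.
        self.w = nn.Parameter(torch.randn(n_freq, 2) * 0.02)

    def forward(self, x):                     # x: (batch, seq_len)
        spec = torch.fft.rfft(x, dim=-1)      # to the frequency domain
        kernel = torch.view_as_complex(self.w)
        # Selectively pass or attenuate frequency components, then invert.
        return torch.fft.irfft(spec * kernel, n=x.size(-1), dim=-1)

f = PlainShapingFilter(seq_len=96)
print(f(torch.randn(4, 96)).shape)            # (4, 96)
```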
null
https://openreview.net/forum?id=ufPPf9ghzP
@inproceedings{ arya2024a, title={A Neural Network Approach for Efficiently Answering Most Probable Explanation Queries in Probabilistic Models}, author={Shivvrat Arya and Tahrima Rahman and Vibhav Giridhar Gogate}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ufPPf9ghzP} }
We propose a novel neural-network-based approach to efficiently answer arbitrary Most Probable Explanation (MPE) queries—a well-known NP-hard task—in large probabilistic models such as Bayesian and Markov networks, probabilistic circuits, and neural auto-regressive models. By arbitrary MPE queries, we mean that there is no predefined partition of variables into evidence and non-evidence variables. The key idea is to distill all MPE queries over a given probabilistic model into a neural network and then use the latter for answering queries, eliminating the need for time-consuming inference algorithms that operate directly on the probabilistic model. We improve upon this idea by incorporating inference-time optimization with self-supervised loss to iteratively improve the solutions and employ a teacher-student framework that provides a better initial network, which in turn, helps reduce the number of inference-time optimization steps. The teacher network utilizes a self-supervised loss function optimized for getting the exact MPE solution, while the student network learns from the teacher's near-optimal outputs through supervised loss. We demonstrate the efficacy and scalability of our approach on various datasets and a broad class of probabilistic models, showcasing its practical effectiveness.
A Neural Network Approach for Efficiently Answering Most Probable Explanation Queries in Probabilistic Models
[ "Shivvrat Arya", "Tahrima Rahman", "Vibhav Giridhar Gogate" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=ufKBRvYxtp
@inproceedings{ ghai2024sampleefficient, title={Sample-Efficient Agnostic Boosting}, author={Udaya Ghai and Karan Singh}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ufKBRvYxtp} }
The theory of boosting provides a computational framework for aggregating approximate weak learning algorithms, which perform marginally better than a random predictor, into an accurate strong learner. In the realizable case, the success of the boosting approach is underscored by the remarkable fact that the resultant sample complexity matches that of a computationally demanding alternative, namely Empirical Risk Minimization (ERM). This in particular implies that the realizable boosting methodology has the potential to offer computational relief without compromising on sample efficiency. Despite recent progress, in agnostic boosting, where assumptions on the conditional distribution of labels given feature descriptions are absent, ERM outstrips the agnostic boosting methodology in being quadratically more sample efficient than all known agnostic boosting algorithms. In this paper, we make progress on closing this gap, and give a substantially more sample efficient agnostic boosting algorithm than those known, without compromising on the computational (or oracle) complexity. A key feature of our algorithm is that it leverages the ability to reuse samples across multiple rounds of boosting, while guaranteeing a generalization error strictly better than those obtained by blackbox applications of uniform convergence arguments. We also apply our approach to other previously studied learning problems, including boosting for reinforcement learning, and demonstrate improved results.
Sample-Efficient Agnostic Boosting
[ "Udaya Ghai", "Karan Singh" ]
NeurIPS.cc/2024/Conference
2410.23632
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=udZKVMPf3S
@inproceedings{ xie2024calibrating, title={Calibrating Reasoning in Language Models with Internal Consistency}, author={Zhihui Xie and Jizhou Guo and Tong Yu and Shuai Li}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=udZKVMPf3S} }
Large language models (LLMs) have demonstrated impressive capabilities in various reasoning tasks, aided by techniques like chain-of-thought prompting that elicit verbalized reasoning. However, LLMs often generate text with obvious mistakes and contradictions, raising doubts about their ability to robustly process and utilize generated rationales. In this work, we investigate reasoning in LLMs through the lens of internal representations, focusing on how these representations are influenced by generated rationales. Our preliminary analysis reveals that while generated rationales improve answer accuracy, inconsistencies emerge between the model’s internal representations in middle layers and those in final layers, potentially undermining the reliability of their reasoning processes. To address this, we propose internal consistency as a measure of the model’s confidence by examining the agreement of latent predictions decoded from intermediate layers (a minimal logit-lens-style sketch follows this entry). Extensive empirical studies across different models and datasets demonstrate that internal consistency effectively distinguishes between correct and incorrect reasoning paths. Motivated by this, we propose a new approach to calibrate reasoning by up-weighting reasoning paths with high internal consistency, resulting in a significant boost in reasoning performance. Further analysis uncovers distinct patterns in attention and feed-forward modules across layers, providing insights into the emergence of internal inconsistency. In summary, our results demonstrate the potential of using internal representations for self-evaluation of LLMs. Our code is available at [github.com/zhxieml/internal-consistency](https://github.com/zhxieml/internal-consistency).
Calibrating Reasoning in Language Models with Internal Consistency
[ "Zhihui Xie", "Jizhou Guo", "Tong Yu", "Shuai Li" ]
NeurIPS.cc/2024/Conference
2405.18711
[ "" ]
https://huggingface.co/papers/2405.18711
2
6
0
4
[]
[]
[]
[]
[]
[]
1
poster
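A rough way to reproduce the internal-consistency idea from the entry above is logit-lens decoding: apply the final layer norm and output head to intermediate hidden states and check agreement with the last layer. The sketch below does this for GPT-2 via Hugging Face transformers; the choice of middle-to-late layers and the plain agreement rate are assumptions, and attribute names like `model.transformer.ln_f` are GPT-2-specific.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, output_hidden_states=True)
model.eval()

@torch.no_grad()
def internal_consistency(prompt):
    ids = tok(prompt, return_tensors="pt").input_ids
    hs = model(ids).hidden_states             # embeddings + every layer output
    # Decode the next-token prediction latent in each layer ("logit lens").
    final = model.lm_head(model.transformer.ln_f(hs[-1][0, -1])).argmax()
    votes = [model.lm_head(model.transformer.ln_f(h[0, -1])).argmax()
             for h in hs[len(hs) // 2:-1]]    # middle-to-late layers
    return sum(v == final for v in votes) / max(len(votes), 1)

print(internal_consistency("2 + 2 ="))
```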
null
https://openreview.net/forum?id=udTwwF7tks
@inproceedings{ ramachandran2024iteratively, title={Iteratively Refined Early Interaction Alignment for Subgraph Matching based Graph Retrieval}, author={Ashwin Ramachandran and Vaibhav Raj and Indradyumna Roy and Soumen Chakrabarti and Abir De}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=udTwwF7tks} }
Graph retrieval based on subgraph isomorphism has several real-world applications such as scene graph retrieval, molecular fingerprint detection and circuit design. Roy et al. [35] proposed IsoNet, a late interaction model for subgraph matching, which first computes the node and edge embeddings of each graph independently of paired graph and then computes a trainable alignment map. Here, we present $\texttt{IsoNet++}$, an early interaction graph neural network (GNN), based on several technical innovations. First, we compute embeddings of all nodes by passing messages within and across the two input graphs, guided by an *injective alignment* between their nodes. Second, we update this alignment in a lazy fashion over multiple *rounds*. Within each round, we run a layerwise GNN from scratch, based on the current state of the alignment. After the completion of one round of GNN, we use the last-layer embeddings to update the alignments, and proceed to the next round. Third, $\texttt{IsoNet++}$ incorporates a novel notion of node-pair partner interaction. Traditional early interaction computes attention between a node and its potential partners in the other graph, the attention then controlling messages passed across graphs. We consider *node pairs* (not single nodes) as potential partners. Existence of an edge between the nodes in one graph and non-existence in the other provide vital signals for refining the alignment. Our experiments on several datasets show that the alignments get progressively refined with successive rounds, resulting in significantly better retrieval performance than existing methods. We demonstrate that all three innovations contribute to the enhanced accuracy. Our code and datasets are publicly available at https://github.com/structlearning/isonetpp.
Iteratively Refined Early Interaction Alignment for Subgraph Matching based Graph Retrieval
[ "Ashwin Ramachandran", "Vaibhav Raj", "Indradyumna Roy", "Soumen Chakrabarti", "Abir De" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ud0RBkdBfE
@inproceedings{ han2024convergence, title={Convergence Analysis of Split Federated Learning on Heterogeneous Data}, author={Pengchao Han and Chao Huang and Geng Tian and Ming Tang and Xin Liu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ud0RBkdBfE} }
Split federated learning (SFL) is a recent distributed approach for collaborative model training among multiple clients. In SFL, a global model is typically split into two parts, where clients train one part in a parallel federated manner, and a main server trains the other. Despite the recent research on SFL algorithm development, the convergence analysis of SFL is missing in the literature, and this paper aims to fill this gap. The analysis of SFL can be more challenging than that of federated learning (FL), due to the potential dual-paced updates at the clients and the main server (a minimal one-round sketch of SFL follows this entry). We provide convergence analysis of SFL for strongly convex and general convex objectives on heterogeneous data. The convergence rates are $O(1/T)$ and $O(1/\sqrt[3]{T})$, respectively, where $T$ denotes the total number of rounds for SFL training. We further extend the analysis to non-convex objectives and to settings where some clients may be unavailable during training. Numerical experiments validate our theoretical results and show that SFL outperforms FL and split learning (SL) when data is highly heterogeneous across a large number of clients.
Convergence Analysis of Split Federated Learning on Heterogeneous Data
[ "Pengchao Han", "Chao Huang", "Geng Tian", "Ming Tang", "Xin Liu" ]
NeurIPS.cc/2024/Conference
2402.15166
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
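The split-federated setup analysed in the entry above can be illustrated with one round of toy training. In the sketch below, the cut layer, client count, losses, and plain federated averaging are generic assumptions; it only mirrors the structure the analysis cares about: parallel client updates of the bottom part and dual-paced server updates of the top part.

```python
import copy
import torch

bottom = torch.nn.Linear(10, 8)                     # client-side model part
top = torch.nn.Linear(8, 1)                         # main-server model part
top_opt = torch.optim.SGD(top.parameters(), lr=0.1)

def sfl_round(clients_data, lr=0.1):
    client_models = [copy.deepcopy(bottom) for _ in clients_data]
    for m, (x, y) in zip(client_models, clients_data):
        act = m(x)                                  # forward to the cut layer
        loss = ((top(act) - y) ** 2).mean()         # server completes the forward
        top_opt.zero_grad()
        loss.backward()                             # grads flow to both parts
        top_opt.step()                              # server update (its own pace)
        for p in m.parameters():                    # client local update
            p.data -= lr * p.grad
    with torch.no_grad():                           # federated averaging of bottoms
        for name, p in bottom.named_parameters():
            p.copy_(torch.stack([dict(m.named_parameters())[name]
                                 for m in client_models]).mean(0))

data = [(torch.randn(16, 10), torch.randn(16, 1)) for _ in range(4)]
sfl_round(data)
```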
null
https://openreview.net/forum?id=ucxQrked0d
@inproceedings{ wang2024making, title={Making Offline {RL} Online: Collaborative World Models for Offline Visual Reinforcement Learning}, author={Qi Wang and Junming Yang and Yunbo Wang and Xin Jin and Wenjun Zeng and Xiaokang Yang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ucxQrked0d} }
Training offline RL models using visual inputs poses two significant challenges, *i.e.*, the overfitting problem in representation learning and the overestimation bias for expected future rewards. Recent work has attempted to alleviate the overestimation bias by encouraging conservative behaviors. This paper, in contrast, tries to build more flexible constraints for value estimation without impeding the exploration of potential advantages. The key idea is to leverage off-the-shelf RL simulators, which can be easily interacted with in an online manner, as the “*test bed*” for offline policies. To enable effective online-to-offline knowledge transfer, we introduce CoWorld, a model-based RL approach that mitigates cross-domain discrepancies in state and reward spaces. Experimental results demonstrate the effectiveness of CoWorld, outperforming existing RL approaches by large margins.
Making Offline RL Online: Collaborative World Models for Offline Visual Reinforcement Learning
[ "Qi Wang", "Junming Yang", "Yunbo Wang", "Xin Jin", "Wenjun Zeng", "Xiaokang Yang" ]
NeurIPS.cc/2024/Conference
2305.15260
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ucXUtMPWhv
@inproceedings{ zhang2024elastst, title={Elas{TST}: Towards Robust Varied-Horizon Forecasting with Elastic Time-Series Transformer}, author={Jiawen Zhang and Shun Zheng and Xumeng Wen and Xiaofang Zhou and Jiang Bian and Jia Li}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ucXUtMPWhv} }
Numerous industrial sectors necessitate models capable of providing robust forecasts across various horizons. Despite the recent strides in crafting specific architectures for time-series forecasting and developing pre-trained universal models, a comprehensive examination of their capability in accommodating varied-horizon forecasting during inference is still lacking. This paper bridges this gap through the design and evaluation of the Elastic Time-Series Transformer (ElasTST). The ElasTST model incorporates a non-autoregressive design with placeholders and structured self-attention masks, ensuring that future outputs are invariant to adjustments in inference horizons. A tunable version of rotary position embedding is also integrated into ElasTST to capture time-series-specific periods and enhance adaptability to different horizons. Additionally, ElasTST employs a multi-scale patch design, effectively integrating both fine-grained and coarse-grained information. During the training phase, ElasTST uses a horizon reweighting strategy that approximates the effect of random sampling across multiple horizons with a single fixed horizon setting. Through comprehensive experiments and comparisons with state-of-the-art time-series architectures and contemporary foundation models, we demonstrate the efficacy of ElasTST's unique design elements. Our findings position ElasTST as a robust solution for the practical necessity of varied-horizon forecasting.
ElasTST: Towards Robust Varied-Horizon Forecasting with Elastic Time-Series Transformer
[ "Jiawen Zhang", "Shun Zheng", "Xumeng Wen", "Xiaofang Zhou", "Jiang Bian", "Jia Li" ]
NeurIPS.cc/2024/Conference
2411.01842
[ "https://github.com/microsoft/probts" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=uatPOPWzzU
@inproceedings{ duan2024unifying, title={Unifying Homophily and Heterophily for Spectral Graph Neural Networks via Triple Filter Ensembles}, author={Rui Duan and Mingjian Guang and Junli Wang and Chungang Yan and Hongda Qi and Wenkang Su and Can Tian and Haoran Yang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uatPOPWzzU} }
Polynomial-based learnable spectral graph neural networks (GNNs) use polynomials to approximate graph convolutions and have achieved impressive performance on graphs. Nevertheless, three progressive problems remain. Some models use polynomials with better approximation power for approximating filters, yet perform worse on real-world graphs. Carefully crafted graph learning methods, sophisticated polynomial approximations, and refined coefficient constraints lead to overfitting, which diminishes the generalization of the models. How can one design a model that retains the ability of polynomial-based spectral GNNs to approximate filters while possessing higher generalization and performance? In this paper, we propose a spectral GNN with a triple filter ensemble (TFE-GNN), which adaptively extracts homophily and heterophily from graphs with different levels of homophily while utilizing the initial features. Specifically, the first and second ensembles are combinations of a set of base low-pass and high-pass filters, respectively, after which the third ensemble combines them with two learnable coefficients to yield a graph convolution (TFE-Conv; a minimal sketch follows this entry). Theoretical analysis shows that the approximation ability of TFE-GNN is consistent with that of ChebNet under certain conditions, namely that it can learn arbitrary filters. TFE-GNN can be viewed as a reasonable combination of two unfolded and integrated spectral GNNs, which motivates its strong performance. Experiments show that TFE-GNN achieves high generalization and new state-of-the-art performance on various real-world datasets.
Unifying Homophily and Heterophily for Spectral Graph Neural Networks via Triple Filter Ensembles
[ "Rui Duan", "Mingjian Guang", "Junli Wang", "Chungang Yan", "Hongda Qi", "Wenkang Su", "Can Tian", "Haoran Yang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
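The TFE-Conv construction in the entry above (two base-filter ensembles mixed by a third with two learnable coefficients) looks roughly like the sketch below. The monomial low-pass/high-pass bases built from a normalized adjacency and its Laplacian are assumptions standing in for the paper's base filters.

```python
import torch
import torch.nn as nn

class TFEConv(nn.Module):
    def __init__(self, K=3):
        super().__init__()
        self.low_w = nn.Parameter(torch.ones(K) / K)         # low-pass ensemble
        self.high_w = nn.Parameter(torch.ones(K) / K)        # high-pass ensemble
        self.alpha = nn.Parameter(torch.tensor([0.5, 0.5]))  # third, mixing ensemble

    def forward(self, A_hat, X):
        L = torch.eye(A_hat.size(0)) - A_hat                 # normalized Laplacian
        low = sum(w * torch.linalg.matrix_power(A_hat, k + 1) @ X
                  for k, w in enumerate(self.low_w))
        high = sum(w * torch.linalg.matrix_power(L, k + 1) @ X
                   for k, w in enumerate(self.high_w))
        return self.alpha[0] * low + self.alpha[1] * high

n, d = 6, 4
A = torch.rand(n, n); A = (A + A.T) / 2                      # toy symmetric graph
d_inv = torch.diag(A.sum(1).rsqrt())                         # D^{-1/2}
print(TFEConv()(d_inv @ A @ d_inv, torch.randn(n, d)).shape) # (6, 4)
```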
null
https://openreview.net/forum?id=uaNZvF1VFe
@inproceedings{ jiang2024efficient, title={Efficient Sign-Based Optimization: Accelerating Convergence via Variance Reduction}, author={Wei Jiang and Sifan Yang and Wenhao Yang and Lijun Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uaNZvF1VFe} }
Sign stochastic gradient descent (signSGD) is a communication-efficient method that transmits only the sign of stochastic gradients for parameter updating. Existing literature has demonstrated that signSGD can achieve a convergence rate of $\mathcal{O}(d^{1/2}T^{-1/4})$, where $d$ represents the dimension and $T$ is the iteration number. In this paper, we improve this convergence rate to $\mathcal{O}(d^{1/2}T^{-1/3})$ by introducing the Sign-based Stochastic Variance Reduction (SSVR) method, which employs variance reduction estimators to track gradients and leverages their signs to update (a toy sketch of this update follows this entry). For finite-sum problems, our method can be further enhanced to achieve a convergence rate of $\mathcal{O}(m^{1/4}d^{1/2}T^{-1/2})$, where $m$ denotes the number of component functions. Furthermore, we investigate the heterogeneous majority vote in distributed settings and introduce two novel algorithms that attain improved convergence rates of $\mathcal{O}(d^{1/2}T^{-1/2} + dn^{-1/2})$ and $\mathcal{O}(d^{1/4}T^{-1/4})$ respectively, outperforming the previous results of $\mathcal{O}(dT^{-1/4} + dn^{-1/2})$ and $\mathcal{O}(d^{3/8}T^{-1/8})$, where $n$ represents the number of nodes. Numerical experiments across different tasks validate the effectiveness of our proposed methods.
Efficient Sign-Based Optimization: Accelerating Convergence via Variance Reduction
[ "Wei Jiang", "Sifan Yang", "Wenhao Yang", "Lijun Zhang" ]
NeurIPS.cc/2024/Conference
2406.00489
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
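The flavor of SSVR in the entry above, i.e., tracking the gradient with a recursive variance-reduced estimator and updating with its sign, can be shown on a toy quadratic. Everything below (objective, step size, momentum `beta`) is an illustrative assumption; the shared noise sample in the two gradient evaluations is what makes the tracking variance-reduced.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
A = rng.standard_normal((d, d)); A = A.T @ A / d     # toy quadratic f = x^T A x / 2

x = rng.standard_normal(d)
v = A @ x + 0.1 * rng.standard_normal(d)             # initial gradient estimate
lr, beta = 1e-3, 0.9
for _ in range(2000):
    x_new = x - lr * np.sign(v)                      # only signs drive the step
    xi = rng.standard_normal(d)                      # one shared sample xi_t
    g_new = A @ x_new + 0.1 * xi                     # stochastic grad at new point
    g_old = A @ x + 0.1 * xi                         # same sample at old point
    v = g_new + (1 - beta) * (v - g_old)             # recursive variance reduction
    x = x_new
print(np.linalg.norm(A @ x))                         # gradient norm near noise floor
```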
null
https://openreview.net/forum?id=uZi7H5Ac0X
@inproceedings{ jiang2024a, title={A Primal-Dual-Assisted Penalty Approach to Bilevel Optimization with Coupled Constraints}, author={Liuyuan Jiang and Quan Xiao and Victor M. Tenorio and Fernando Real-Rojas and Antonio Marques and Tianyi Chen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uZi7H5Ac0X} }
Interest in bilevel optimization has grown in recent years, partially due to its relevance for challenging machine-learning problems. Several exciting recent works have been centered around developing efficient gradient-based algorithms that can solve bilevel optimization problems with provable guarantees. However, the existing literature mainly focuses on bilevel problems either without constraints, or featuring only simple constraints that do not couple variables across the upper and lower levels, excluding a range of complex applications. Our paper studies this challenging but less explored scenario and develops a (fully) first-order algorithm, which we term BLOCC, to tackle BiLevel Optimization problems with Coupled Constraints. We establish rigorous convergence theory for the proposed algorithm and demonstrate its effectiveness on two well-known real-world applications - support vector machine (SVM) - based model training and infrastructure planning in transportation networks.
A Primal-Dual-Assisted Penalty Approach to Bilevel Optimization with Coupled Constraints
[ "Liuyuan Jiang", "Quan Xiao", "Victor M. Tenorio", "Fernando Real-Rojas", "Antonio Marques", "Tianyi Chen" ]
NeurIPS.cc/2024/Conference
2406.10148
[ "https://github.com/Liuyuan999/Penalty_Based_Lagrangian_Bilevel" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=uYZTzcHaQB
@inproceedings{ zhou2024motiforiented, title={Motif-oriented influence maximization for viral marketing in large-scale social networks}, author={Mingyang Zhou and Weiji Cao and Hao Liao and Rui Mao}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uYZTzcHaQB} }
The influence maximization (IM) problem aims to identify a budgeted set of nodes with the highest potential to influence the largest number of users under a cascade model, a key challenge in viral marketing. Traditional IM approaches consider each user/node independently as a potential target customer. However, in many scenarios, the target customers comprise motifs, where activating only one or a few users within a motif is insufficient for effective viral marketing, a setting that has nevertheless received little attention. For instance, for a motif of three friends planning to dine together, targeting all three simultaneously is crucial for a restaurant advertisement to succeed. In this paper, we address the motif-oriented influence maximization problem under the linear threshold model. We prove that the motif-oriented IM problem is NP-hard and that the influence function is neither supermodular nor submodular, in contrast to the classical IM setting. To simplify the problem, we establish submodular upper and lower bounds for the influence function. By leveraging the submodular property, we propose a natural greedy strategy that simultaneously maximizes both bounds (a toy sketch of the greedy step follows this entry). Our algorithm has an approximation ratio of $\tau\cdot (1-1/e-\varepsilon)$ and a near-linear time complexity of $O((k+l)(m+\eta)\log \eta/\varepsilon^2)$. Experimental results on diverse datasets confirm the effectiveness of our approach in motif maximization.
Motif-oriented influence maximization for viral marketing in large-scale social networks
[ "Mingyang Zhou", "Weiji Cao", "Hao Liao", "Rui Mao" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
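The greedy strategy from the entry above can be caricatured on a toy instance, shown below. The coverage function is a deliberately crude stand-in (a motif counts only once all of its members are seeded), not the paper's submodular bounds; it also makes visible why single-node marginal gains can be zero until a motif completes, which is the intuition behind the objective being non-submodular.

```python
import random

random.seed(0)
nodes = list(range(30))
motifs = [random.sample(nodes, 3) for _ in range(15)]

def covered(seeds):
    # Toy objective: a motif is activated only when ALL of its members
    # are in the seed set (hence single-node gains are often zero).
    return sum(all(u in seeds for u in m) for m in motifs)

def greedy(budget):
    seeds = set()
    for _ in range(budget):
        gain = lambda u: covered(seeds | {u}) - covered(seeds)
        # Tie-break toward nodes appearing in many motifs, so progress is
        # made even when every immediate gain is zero.
        freq = lambda u: sum(u in m for m in motifs)
        best = max((u for u in nodes if u not in seeds),
                   key=lambda u: (gain(u), freq(u)))
        seeds.add(best)
    return seeds

S = greedy(9)
print(sorted(S), covered(S))
```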
null
https://openreview.net/forum?id=uXuObobJHO
@inproceedings{ lai2024hamiltonian, title={Hamiltonian Monte Carlo Inference of Marginalized Linear Mixed-Effects Models}, author={Jinlin Lai and Daniel Sheldon and Justin Domke}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uXuObobJHO} }
Bayesian reasoning in linear mixed-effects models (LMMs) is challenging and often requires advanced sampling techniques like Markov chain Monte Carlo (MCMC). A common approach is to write the model in a probabilistic programming language and then sample via Hamiltonian Monte Carlo (HMC). However, there are many ways a user can transform a model that make inference more or less efficient. In particular, marginalizing some variables can greatly improve inference but is difficult for users to do manually. We develop an algorithm to easily marginalize random effects in LMMs (a small numerical illustration follows this entry). A naive approach introduces cubic-time operations within an inference algorithm like HMC, but we reduce the running time to linear using fast linear algebra techniques. We show that marginalization is always beneficial when applicable and highlight improvements in various models, especially ones from the cognitive sciences.
Hamiltonian Monte Carlo Inference of Marginalized Linear Mixed-Effects Models
[ "Jinlin Lai", "Daniel Sheldon", "Justin Domke" ]
NeurIPS.cc/2024/Conference
2410.24079
[ "https://github.com/lll6924/hamiltonian_lme" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
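The speed-up in the entry above comes from marginalizing the random effects analytically, so HMC only samples the remaining parameters, and from evaluating the resulting Gaussian marginal with the Woodbury identity and the matrix determinant lemma. Below is a small numpy illustration of that marginal log-likelihood for a toy model y = Xb + Zu + e with u ~ N(0, s_u^2 I); the single-group structure and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q = 500, 3, 8
X, Z = rng.standard_normal((n, p)), rng.standard_normal((n, q))
y = X @ np.ones(p) + Z @ rng.standard_normal(q) + 0.5 * rng.standard_normal(n)

def marginal_loglik(beta, s_u, s_e):
    # y | beta ~ N(X beta, s_e^2 I + s_u^2 Z Z^T) after integrating out u.
    r = y - X @ beta
    # Woodbury: invert via a q x q solve instead of an n x n one.
    M = np.eye(q) / s_u**2 + Z.T @ Z / s_e**2
    Zr = Z.T @ r
    quad = (r @ r) / s_e**2 - Zr @ np.linalg.solve(M, Zr) / s_e**4
    # Matrix determinant lemma for log|s_e^2 I + s_u^2 Z Z^T|.
    logdet = n * np.log(s_e**2) + q * np.log(s_u**2) + np.linalg.slogdet(M)[1]
    return -0.5 * (quad + logdet + n * np.log(2 * np.pi))

print(marginal_loglik(np.ones(p), 1.0, 0.5))
```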
null
https://openreview.net/forum?id=uXJlgkWdcI
@inproceedings{ zhu2024pace, title={{PACE}: Pacing Operator Learning to Accurate Optical Field Simulation for Complicated Photonic Devices}, author={Hanqing Zhu and Wenyan Cong and Guojin Chen and Shupeng Ning and Ray Chen and Jiaqi Gu and David Z. Pan}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uXJlgkWdcI} }
Electromagnetic field simulation is central to designing, optimizing, and validating photonic devices and circuits. However, costly computation associated with numerical simulation poses a significant bottleneck, hindering scalability and turnaround time in the photonic circuit design process. Neural operators offer a promising alternative, but the existing SOTA approach, Neurolight, struggles with predicting high-fidelity fields for real-world complicated photonic devices, with a best reported normalized mean absolute error of 0.38. The interplay of highly complex light-matter interactions (e.g., scattering and resonance), sensitivity to local structure details, non-uniform learning complexity for full-domain simulation, and rich frequency information contributes to the failure of existing neural PDE solvers. In this work, we boost prediction fidelity to an unprecedented level for simulating complex photonic devices with a novel operator design driven by the above challenges. We propose a novel cross-axis factorized PACE operator with a strong long-distance modeling capacity to connect the full-domain complex field pattern with local device structures. Inspired by human learning, we further divide and conquer the simulation task for extremely hard cases into two progressively easier tasks, with a first-stage model learning an initial solution that is refined by a second model. On various complicated photonic device benchmarks, we demonstrate that a single PACE model achieves 73% lower error with 50% fewer parameters compared with various recent ML-for-PDE solvers. The two-stage setup further advances high-fidelity simulation for even more intricate cases. In terms of runtime, PACE demonstrates 154-577x and 11.8-12x simulation speedup over a numerical solver using scipy or the highly-optimized pardiso solver, respectively. We open-sourced the code and *complicated* optical device dataset at [PACE-Light](https://github.com/zhuhanqing/PACE-Light).
PACE: Pacing Operator Learning to Accurate Optical Field Simulation for Complicated Photonic Devices
[ "Hanqing Zhu", "Wenyan Cong", "Guojin Chen", "Shupeng Ning", "Ray Chen", "Jiaqi Gu", "David Z. Pan" ]
NeurIPS.cc/2024/Conference
2411.03527
[ "https://github.com/zhuhanqing/PACE-Light" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=uSKzEaj9zJ
@inproceedings{ yu2024nonlocal, title={Nonlocal Attention Operator: Materializing Hidden Knowledge Towards Interpretable Physics Discovery}, author={Yue Yu and Ning Liu and Fei Lu and Tian Gao and Siavash Jafarzadeh and Stewart A Silling}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uSKzEaj9zJ} }
Despite recent popularity of attention-based neural architectures in core AI fields like natural language processing (NLP) and computer vision (CV), their potential in modeling complex physical systems remains under-explored. Learning problems in physical systems are often characterized as discovering operators that map between function spaces based on a few instances of function pairs. This task frequently presents a severely ill-posed PDE inverse problem. In this work, we propose a novel neural operator architecture based on the attention mechanism, which we coin Nonlocal Attention Operator (NAO), and explore its capability towards developing a foundation physical model. In particular, we show that the attention mechanism is equivalent to a double integral operator that enables nonlocal interactions among spatial tokens, with a data-dependent kernel characterizing the inverse mapping from data to the hidden parameter field of the underlying operator. As such, the attention mechanism extracts global prior information from training data generated by multiple systems, and suggests the exploratory space in the form of a nonlinear kernel map. Consequently, NAO can address ill-posedness and rank deficiency in inverse PDE problems by encoding regularization and achieving generalizability. Lastly, we empirically demonstrate the advantages of NAO over baseline neural models in terms of the generalizability to unseen data resolutions and system states. Our work not only suggests a novel neural operator architecture for learning an interpretable foundation model of physical systems, but also offers a new perspective towards understanding the attention mechanism.
Nonlocal Attention Operator: Materializing Hidden Knowledge Towards Interpretable Physics Discovery
[ "Yue Yu", "Ning Liu", "Fei Lu", "Tian Gao", "Siavash Jafarzadeh", "Stewart A Silling" ]
NeurIPS.cc/2024/Conference
2408.07307
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=uS0PwIBzC0
@inproceedings{ lingam2024svft, title={{SVFT}: Parameter-Efficient Fine-Tuning with Singular Vectors}, author={Vijay Lingam and Atula Tejaswi Neerkaje and Aditya Vavre and Aneesh Shetty and Gautham Krishna Gudur and Joydeep Ghosh and Eunsol Choi and Alex Dimakis and Aleksandar Bojchevski and sujay sanghavi}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uS0PwIBzC0} }
Popular parameter-efficient fine-tuning (PEFT) methods, such as LoRA and its variants, freeze pre-trained model weights $\mathbf{W}$ and inject learnable matrices $\mathbf{\Delta W}$. These $\mathbf{\Delta W}$ matrices are structured for efficient parameterization, often using techniques like low-rank approximations or scaling vectors. However, these methods typically exhibit a performance gap compared to full fine-tuning. While recent PEFT methods have narrowed this gap, they do so at the expense of additional learnable parameters. We propose SVFT, a *simple* approach that structures $\mathbf{\Delta W}$ based on the specific weight matrix $\mathbf{W}$. SVFT updates $\mathbf{W}$ as a sparse combination $M$ of outer products of its singular vectors, training only the coefficients of these combinations (a minimal sketch follows this entry). Crucially, we make additional off-diagonal elements in $M$ learnable, enabling a smooth trade-off between trainable parameters and expressivity—an aspect that distinctly sets our approach apart from previous works leveraging singular values. Extensive experiments on language and vision benchmarks show that SVFT recovers up to **96%** of full fine-tuning performance while training only **0.006 to 0.25%** of parameters, outperforming existing methods that achieve only up to **85%** performance with **0.03 to 0.8%** of the trainable parameter budget.
SVFT: Parameter-Efficient Fine-Tuning with Singular Vectors
[ "Vijay Lingam", "Atula Tejaswi Neerkaje", "Aditya Vavre", "Aneesh Shetty", "Gautham Krishna Gudur", "Joydeep Ghosh", "Eunsol Choi", "Alex Dimakis", "Aleksandar Bojchevski", "sujay sanghavi" ]
NeurIPS.cc/2024/Conference
2405.19597
[ "https://github.com/vijaylingam95/svft" ]
https://huggingface.co/papers/2405.19597
1
0
1
10
[]
[]
[]
[]
[]
[]
1
poster
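The SVFT update in the entry above fixes the singular vectors of W and learns only coefficients that mix their outer products. The sketch below implements that with a banded mask, so a few off-diagonals of M are trainable alongside the diagonal; the band width and zero initialization are assumptions.

```python
import torch
import torch.nn as nn

class SVFTLinear(nn.Module):
    def __init__(self, weight, off_diagonals=2):
        super().__init__()
        U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
        self.register_buffer("U", U)
        self.register_buffer("Vh", Vh)
        self.register_buffer("weight", weight)    # frozen pre-trained W
        r = S.numel()
        # Mask: diagonal plus a few nearby bands of M are trainable.
        mask = sum(torch.diag(torch.ones(r - k), k) +
                   (torch.diag(torch.ones(r - k), -k) if k else 0)
                   for k in range(off_diagonals + 1))
        self.register_buffer("mask", (mask > 0).float())
        self.m = nn.Parameter(torch.zeros(r, r))  # learnable coefficients

    def forward(self, x):
        delta = self.U @ (self.m * self.mask) @ self.Vh   # Delta W = U M V^T
        return x @ (self.weight + delta).T

layer = SVFTLinear(torch.randn(32, 64))
print(layer(torch.randn(4, 64)).shape)            # (4, 32)
print(int(layer.mask.sum()), "trainable coefficients")
```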
null
https://openreview.net/forum?id=uRnTYPkF3V
@inproceedings{ liu2024sequential, title={Sequential Probability Assignment with Contexts: Minimax Regret, Contextual Shtarkov Sums, and Contextual Normalized Maximum Likelihood}, author={Ziyi Liu and Idan Attias and Daniel M. Roy}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uRnTYPkF3V} }
We study the fundamental problem of sequential probability assignment, also known as online learning with logarithmic loss, with respect to an arbitrary, possibly nonparametric hypothesis class. Our goal is to obtain a complexity measure for the hypothesis class that characterizes the minimax regret and to determine a general, minimax optimal algorithm. Notably, the sequential $\ell_{\infty}$ entropy, extensively studied in the literature (Rakhlin and Sridharan, 2015, Bilodeau et al., 2020, Wu et al., 2023), was shown to not characterize minimax regret in general. Inspired by the seminal work of Shtarkov (1987) and Rakhlin, Sridharan, and Tewari (2010), we introduce a novel complexity measure, the \emph{contextual Shtarkov sum}, corresponding to the Shtarkov sum after projection onto a multiary context tree, and show that the worst case log contextual Shtarkov sum equals the minimax regret. Using the contextual Shtarkov sum, we derive the minimax optimal strategy, dubbed \emph{contextual Normalized Maximum Likelihood} (cNML). Our results hold for sequential experts, beyond binary labels, which are settings rarely considered in prior work. To illustrate the utility of this characterization, we provide a short proof of a new regret upper bound in terms of sequential $\ell_{\infty}$ entropy, unifying and sharpening state-of-the-art bounds by Bilodeau et al. (2020) and Wu et al. (2023).
Sequential Probability Assignment with Contexts: Minimax Regret, Contextual Shtarkov Sums, and Contextual Normalized Maximum Likelihood
[ "Ziyi Liu", "Idan Attias", "Daniel M. Roy" ]
NeurIPS.cc/2024/Conference
2410.03849
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=uOvrwVW1yA
@inproceedings{ cheng2024sample, title={Sample Complexity of Algorithm Selection Using Neural Networks and Its Applications to Branch-and-Cut}, author={Hongyu Cheng and Sammy Khalife and Barbara Fiedorowicz and Amitabh Basu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uOvrwVW1yA} }
Data-driven algorithm design is a paradigm that uses statistical and machine learning techniques to select, from a class of algorithms for a computational problem, an algorithm that has the best expected performance with respect to some (unknown) distribution over instances of the problem. We build upon recent work in this line of research by considering the setup where, instead of selecting a single algorithm that has the best performance, we allow the possibility of selecting an algorithm based on the instance to be solved, using neural networks. In particular, given a representative sample of instances, we learn a neural network that maps an instance of the problem to the most appropriate algorithm *for that instance*. We formalize this idea and derive rigorous sample complexity bounds for this learning problem, in the spirit of recent work in data-driven algorithm design. We then apply this approach to the problem of making good decisions in the branch-and-cut framework for mixed-integer optimization (e.g., which cut to add?). In other words, the neural network will take as input a mixed-integer optimization instance and output a decision that will result in a small branch-and-cut tree for that instance. Our computational results provide evidence that our particular way of using neural networks for cut selection can make a significant impact in reducing branch-and-cut tree sizes, compared to previous data-driven approaches.
Sample Complexity of Algorithm Selection Using Neural Networks and Its Applications to Branch-and-Cut
[ "Hongyu Cheng", "Sammy Khalife", "Barbara Fiedorowicz", "Amitabh Basu" ]
NeurIPS.cc/2024/Conference
2402.02328
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=uO53206oLJ
@inproceedings{ zhang2024nonconvex, title={Nonconvex Federated Learning on Compact Smooth Submanifolds With Heterogeneous Data}, author={Jiaojiao Zhang and Jiang Hu and Anthony Man-Cho So and Mikael Johansson}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uO53206oLJ} }
Many machine learning tasks, such as principal component analysis and low-rank matrix completion, give rise to manifold optimization problems. Although there is a large body of work studying the design and analysis of algorithms for manifold optimization in the centralized setting, there are currently very few works addressing the federated setting. In this paper, we consider nonconvex federated learning over a compact smooth submanifold in the setting of heterogeneous client data. We propose an algorithm that leverages stochastic Riemannian gradients and a manifold projection operator to improve computational efficiency, uses local updates to improve communication efficiency, and avoids client drift. Theoretically, we show that our proposed algorithm converges sub-linearly to a neighborhood of a first-order optimal solution by using a novel analysis that jointly exploits the manifold structure and properties of the loss functions. Numerical experiments demonstrate that our algorithm has significantly smaller computational and communication overhead than existing methods.
Nonconvex Federated Learning on Compact Smooth Submanifolds With Heterogeneous Data
[ "Jiaojiao Zhang", "Jiang Hu", "Anthony Man-Cho So", "Mikael Johansson" ]
NeurIPS.cc/2024/Conference
2406.08465
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=uNKlTQ8mBD
@inproceedings{ poesia2024learning, title={Learning Formal Mathematics From Intrinsic Motivation}, author={Gabriel Poesia and David Broman and Nick Haber and Noah Goodman}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uNKlTQ8mBD} }
How did humanity coax mathematics from the aether? We explore the Platonic view that mathematics can be discovered from its axioms---a game of conjecture and proof. We describe an agent that jointly learns to pose challenging problems for itself (conjecturing) and solve them (theorem proving). Given a mathematical domain axiomatized in dependent type theory, we first combine methods for constrained decoding and type-directed synthesis to sample valid conjectures from a language model. Our method guarantees well-formed conjectures by construction, even as we start with a randomly initialized model. We use the same model to represent a policy and value function for guiding proof search. Our agent targets generating hard but provable conjectures --- a moving target, since its own theorem proving ability also improves as it trains. We propose novel methods for hindsight relabeling on proof search trees to significantly improve the agent's sample efficiency in both tasks. Experiments on 3 axiomatic domains (propositional logic, arithmetic and group theory) demonstrate that our agent can bootstrap from only the axioms, self-improving in generating true and challenging conjectures and in finding proofs.
Learning Formal Mathematics From Intrinsic Motivation
[ "Gabriel Poesia", "David Broman", "Nick Haber", "Noah Goodman" ]
NeurIPS.cc/2024/Conference
2407.00695
[ "https://github.com/gpoesia/minimo" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=uM3rQ14iex
@inproceedings{ elahi2024partial, title={Partial Structure Discovery is Sufficient for No-regret Learning in Causal Bandits}, author={Muhammad Qasim Elahi and Mahsa Ghasemi and Murat Kocaoglu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uM3rQ14iex} }
Causal knowledge about the relationships among decision variables and a reward variable in a bandit setting can accelerate the learning of an optimal decision. Current works often assume the causal graph is known, but this knowledge may not be available a priori. Motivated by this challenge, we focus on the causal bandit problem in scenarios where the underlying causal graph is unknown and may include latent confounders. While intervention on the parents of the reward node is optimal in the absence of latent confounders, this is not necessarily the case in general. Instead, one must consider a set of possibly optimal arms/interventions, each being a special subset of the ancestors of the reward node, making causal discovery beyond the parents of the reward node essential. For regret minimization, we identify that discovering the full causal structure is unnecessary; however, no existing work provides the necessary and sufficient components of the causal graph. We formally characterize the set of necessary and sufficient latent confounders one needs to detect or learn to ensure that all possibly optimal arms are identified correctly. We also propose a randomized algorithm for learning the causal graph with a limited number of samples, providing a sample complexity guarantee for any desired confidence level. In the causal bandit setup, we propose a two-stage approach. In the first stage, we learn the induced subgraph on ancestors of the reward, along with a necessary and sufficient subset of latent confounders, to construct the set of possibly optimal arms. We show that for our proposed algorithm, the number of intervention samples required to learn the set of possibly optimal arms scales polynomially with respect to the number of nodes. The second phase involves the application of a standard bandit algorithm, such as the UCB algorithm. We also establish a regret bound for our two-phase approach, which is sublinear in the number of rounds.
Partial Structure Discovery is Sufficient for No-regret Learning in Causal Bandits
[ "Muhammad Qasim Elahi", "Mahsa Ghasemi", "Murat Kocaoglu" ]
NeurIPS.cc/2024/Conference
2411.04054
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=uLGyoBn7hm
@inproceedings{ li2024disentangled, title={Disentangled Representation Learning in Non-Markovian Causal Systems}, author={Adam Li and Yushu Pan and Elias Bareinboim}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uLGyoBn7hm} }
Considering various data modalities, such as images, videos, and text, humans perform causal reasoning using high-level causal variables, as opposed to operating at the low, pixel level from which the data comes. In practice, most causal reasoning methods assume that the data is described as granular as the underlying causal generative factors, which is often violated in various AI tasks. This mismatch translates into a lack of guarantees in various tasks such as generative modeling, decision-making, fairness, and generalizability, to cite a few. In this paper, we acknowledge this issue and study the problem of causal disentangled representation learning from a combination of data gathered from various heterogeneous domains and assumptions in the form of a latent causal graph. To the best of our knowledge, the proposed work is the first to consider i) non-Markovian causal settings, where there may be unobserved confounding, ii) arbitrary distributions that arise from multiple domains, and iii) a relaxed version of disentanglement. Specifically, we introduce graphical criteria that allow for disentanglement under various conditions. Building on these results, we develop an algorithm that returns a causal disentanglement map, highlighting which latent variables can be disentangled given the combination of data and assumptions. The theory is corroborated by experiments.
Disentangled Representation Learning in Non-Markovian Causal Systems
[ "Adam Li", "Yushu Pan", "Elias Bareinboim" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=uHs6RJFDsg
@inproceedings{ zong2024mova, title={Mo{VA}: Adapting Mixture of Vision Experts to Multimodal Context}, author={Zhuofan Zong and Bingqi Ma and Dazhong Shen and Guanglu Song and Hao Shao and Dongzhi Jiang and Hongsheng Li and Yu Liu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uHs6RJFDsg} }
As the key component in multimodal large language models (MLLMs), the ability of the vision encoder greatly affects an MLLM's understanding of diverse image content. Although some large-scale pretrained vision encoders, such as those in CLIP and DINOv2, have brought promising performance, we found that there is still no single vision encoder that dominates across varieties of image content understanding; e.g., the CLIP vision encoder leads to outstanding results on general image understanding but poor performance on document or chart content. To alleviate the bias of the CLIP vision encoder, we first delve into the inherent behavior of different pre-trained vision encoders and then propose MoVA, a powerful and novel MLLM that adaptively routes and fuses task-specific vision experts with a coarse-to-fine mechanism. In the coarse-grained stage, we design a context-aware expert routing strategy to dynamically select the most suitable vision experts according to the user instruction, input image, and expertise of the vision experts. This benefits from the powerful model function understanding ability of the large language model (LLM). In the fine-grained stage, we carefully design the mixture-of-vision-expert adapter (MoV-Adapter) to extract and fuse task-specific knowledge from various experts. This coarse-to-fine paradigm effectively leverages representations from experts based on multimodal context and model expertise, further enhancing the generalization ability. We conduct extensive experiments to evaluate the effectiveness of the proposed approach. Without any bells and whistles, MoVA achieves significant performance gains over current state-of-the-art methods on a wide range of challenging multimodal benchmarks.
MoVA: Adapting Mixture of Vision Experts to Multimodal Context
[ "Zhuofan Zong", "Bingqi Ma", "Dazhong Shen", "Guanglu Song", "Hao Shao", "Dongzhi Jiang", "Hongsheng Li", "Yu Liu" ]
NeurIPS.cc/2024/Conference
2404.13046
[ "https://github.com/templex98/mova" ]
https://huggingface.co/papers/2404.13046
2
1
0
8
[ "zongzhuofan/llama3-mova-8b" ]
[]
[]
[ "zongzhuofan/llama3-mova-8b" ]
[]
[]
1
poster
null
https://openreview.net/forum?id=uHml6eyoVF
@inproceedings{ szekely2024learning, title={Learning from higher-order correlations, efficiently: hypothesis tests, random features, and neural networks}, author={Eszter Szekely and Lorenzo Bardone and Federica Gerace and Sebastian Goldt}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uHml6eyoVF} }
Neural networks excel at discovering statistical patterns in high-dimensional data sets. In practice, higher-order cumulants, which quantify the non-Gaussian correlations between three or more variables, are particularly important for the performance of neural networks. But how efficient are neural networks at extracting features from higher-order cumulants? We study this question in the spiked cumulant model, where the statistician needs to recover a privileged direction or "spike'' from the order-$p\ge 4$ cumulants of $d$-dimensional inputs. We first discuss the fundamental statistical and computational limits of recovering the spike by analysing the number of samples $n$ required to strongly distinguish between inputs from the spiked cumulant model and isotropic Gaussian inputs. Existing literature established the presence of a wide statistical-to-computational gap in this problem. We deepen this line of work by finding an exact formula for the likelihood ratio norm which proves that statistical distinguishability requires $n\gtrsim d$ samples, while distinguishing the two distributions in polynomial time requires $n \gtrsim d^2$ samples for a wide class of algorithms, i.e. those covered by the low-degree conjecture. Numerical experiments show that neural networks do indeed learn to distinguish the two distributions with quadratic sample complexity, while ``lazy'' methods like random features are not better than random guessing in this regime. Our results show that neural networks extract information from higher-order correlations in the spiked cumulant model efficiently, and reveal a large gap in the amount of data required by neural networks and random features to learn from higher-order cumulants.
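As a toy illustration of distinguishing a non-Gaussian "spike" from isotropic Gaussian inputs, the sketch below plants a heavy-tailed direction whose mean and covariance match the null, so the signal lives purely in cumulants of order four and above. The generator and the kurtosis statistic are simplifications for illustration, not the paper's exact spiked cumulant model or its low-degree analysis.

```python
# Toy stand-in for the spiked cumulant setting.
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 20000
beta = rng.standard_normal(d); beta /= np.linalg.norm(beta)

# Gaussian orthogonal part + a zero-mean, unit-variance, heavy-tailed
# coordinate along beta, so low-order moments match the Gaussian null.
z = rng.standard_normal((n, d))
z -= np.outer(z @ beta, beta)                  # remove the beta component
u = rng.laplace(scale=1 / np.sqrt(2), size=n)  # unit variance, excess kurtosis 3
x_spiked = z + np.outer(u, beta)
x_null = rng.standard_normal((n, d))

def kurtosis_along(x, v):
    s = x @ v
    return np.mean(s**4) / np.mean(s**2) ** 2 - 3.0

print("spiked:", kurtosis_along(x_spiked, beta))  # ~3 (non-Gaussian along beta)
print("null:  ", kurtosis_along(x_null, beta))    # ~0 (Gaussian)
```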
Learning from higher-order correlations, efficiently: hypothesis tests, random features, and neural networks
[ "Eszter Szekely", "Lorenzo Bardone", "Federica Gerace", "Sebastian Goldt" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=uHcG5Y6fdB
@inproceedings{ oko2024pretrained, title={Pretrained Transformer Efficiently Learns Low-Dimensional Target Functions In-Context}, author={Kazusato Oko and Yujin Song and Taiji Suzuki and Denny Wu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uHcG5Y6fdB} }
Transformers can efficiently learn in-context from example demonstrations. Most existing theoretical analyses studied the in-context learning (ICL) ability of transformers for linear function classes, where it is typically shown that the minimizer of the pretraining loss implements one gradient descent step on the least squares objective. However, this simplified linear setting arguably does not demonstrate the statistical efficiency of ICL, since the pretrained transformer does not outperform directly solving linear regression on the test prompt. In this paper, we study ICL of a nonlinear function class via transformer with nonlinear MLP layer: given a class of \textit{single-index} target functions $f_*(\boldsymbol{x}) = \sigma_*(\langle\boldsymbol{x},\boldsymbol{\beta}\rangle)$, where the index features $\boldsymbol{\beta}\in\mathbb{R}^d$ are drawn from a $r$-dimensional subspace, we show that a nonlinear transformer optimized by gradient descent (with a pretraining sample complexity that depends on the \textit{information exponent} of the link functions $\sigma_*$) learns $f_*$ in-context with a prompt length that only depends on the dimension of the distribution of target functions $r$; in contrast, any algorithm that directly learns $f_*$ on test prompt yields a statistical complexity that scales with the ambient dimension $d$. Our result highlights the adaptivity of the pretrained transformer to low-dimensional structures of the function class, which enables sample-efficient ICL that outperforms estimators that only have access to the in-context data.
Pretrained Transformer Efficiently Learns Low-Dimensional Target Functions In-Context
[ "Kazusato Oko", "Yujin Song", "Taiji Suzuki", "Denny Wu" ]
NeurIPS.cc/2024/Conference
2411.02544
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=uFXGsiYkkX
@inproceedings{ haldar2024baku, title={{BAKU}: An Efficient Transformer for Multi-Task Policy Learning}, author={Siddhant Haldar and Zhuoran Peng and Lerrel Pinto}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uFXGsiYkkX} }
Training generalist agents capable of solving diverse tasks is challenging, often requiring large datasets of expert demonstrations. This is particularly problematic in robotics, where each data point requires physical execution of actions in the real world. Thus, there is a pressing need for architectures that can effectively leverage the available training data. In this work, we present BAKU, a simple transformer architecture that enables efficient learning of multi-task robot policies. BAKU builds upon recent advancements in offline imitation learning and meticulously combines observation trunks, action chunking, multi-sensory observations, and action heads to substantially improve upon prior work. Our experiments on 129 simulated tasks across LIBERO, Meta-World suite, and the Deepmind Control suite exhibit an overall 18% absolute improvement over RT-1 and MT-ACT, with a 36% improvement on the harder LIBERO benchmark. On 30 real-world manipulation tasks, given an average of just 17 demonstrations per task, BAKU achieves a 91% success rate. Videos of the robot are best viewed at baku-robot.github.io.
BAKU: An Efficient Transformer for Multi-Task Policy Learning
[ "Siddhant Haldar", "Zhuoran Peng", "Lerrel Pinto" ]
NeurIPS.cc/2024/Conference
2406.07539
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=uDxhMgjVJB
@inproceedings{ blanchet2024automatic, title={Automatic Outlier Rectification via Optimal Transport}, author={Jose Blanchet and Jiajin Li and Markus Pelger and Greg Zanotti}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uDxhMgjVJB} }
In this paper, we propose a novel conceptual framework to detect outliers using optimal transport with a concave cost function. Conventional outlier detection approaches typically use a two-stage procedure: first, outliers are detected and removed, and then estimation is performed on the cleaned data. However, this approach does not inform outlier removal with the estimation task, leaving room for improvement. To address this limitation, we propose an automatic outlier rectification mechanism that integrates rectification and estimation within a joint optimization framework. We take the first step to utilize the optimal transport distance with a concave cost function to construct a rectification set in the space of probability distributions. Then, we select the best distribution within the rectification set to perform the estimation task. Notably, the concave cost function we introduce in this paper is the key to making our estimator effectively identify outliers during the optimization process. We demonstrate the effectiveness of our approach over conventional approaches in simulations and empirical analyses for mean estimation, least absolute regression, and the fitting of option implied volatility surfaces.
Automatic Outlier Rectification via Optimal Transport
[ "Jose Blanchet", "Jiajin Li", "Markus Pelger", "Greg Zanotti" ]
NeurIPS.cc/2024/Conference
2403.14067
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=uDD44NROOt
@inproceedings{ hoang2024sprinql, title={{SPRINQL}: Sub-optimal Demonstrations driven Offline Imitation Learning}, author={Huy Hoang and Tien Anh Mai and Pradeep Varakantham}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uDD44NROOt} }
We focus on offline imitation learning (IL), which aims to mimic an expert's behavior using demonstrations without any interaction with the environment. One of the main challenges in offline IL is the limited support of expert demonstrations, which typically cover only a small fraction of the state-action space. While it may not be feasible to obtain numerous expert demonstrations, it is often possible to gather a larger set of sub-optimal demonstrations. For example, in treatment optimization problems, there are varying levels of doctor treatments available for different chronic conditions. These range from treatment specialists and experienced general practitioners to less experienced general practitioners. Similarly, when robots are trained to imitate humans in routine tasks, they might learn from individuals with different levels of expertise and efficiency. In this paper, we propose an offline IL approach that leverages the larger set of sub-optimal demonstrations while effectively mimicking expert trajectories. Existing offline IL methods based on behavior cloning or distribution matching often face issues such as overfitting to the limited set of expert demonstrations or inadvertently imitating sub-optimal trajectories from the larger dataset. Our approach, which is based on inverse soft-Q learning, learns from both expert and sub-optimal demonstrations. It assigns higher importance (through learned weights) to aligning with expert demonstrations and lower importance to aligning with sub-optimal ones. A key contribution of our approach, called SPRINQL, is transforming the offline IL problem into a convex optimization over the space of Q functions. Through comprehensive experimental evaluations, we demonstrate that the SPRINQL algorithm achieves state-of-the-art (SOTA) performance on offline IL benchmarks. Code is available at https://github.com/hmhuy0/SPRINQL .
SPRINQL: Sub-optimal Demonstrations driven Offline Imitation Learning
[ "Huy Hoang", "Tien Anh Mai", "Pradeep Varakantham" ]
NeurIPS.cc/2024/Conference
2402.13147
[ "https://github.com/hmhuy0/SPRINQL" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=uCvdw0IOuU
@inproceedings{ yao2024addressing, title={Addressing Asynchronicity in Clinical Multimodal Fusion via Individualized Chest X-ray Generation}, author={Wenfang Yao and Chen Liu and Kejing Yin and William K. Cheung and Jing Qin}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uCvdw0IOuU} }
Integrating multi-modal clinical data, such as electronic health records (EHR) and chest X-ray images (CXR), is particularly beneficial for clinical prediction tasks. However, in a temporal setting, multi-modal data are often inherently asynchronous. EHR can be continuously collected but CXR is generally taken with a much longer interval due to its high cost and radiation dose. When clinical prediction is needed, the last available CXR image might have been outdated, leading to suboptimal predictions. To address this challenge, we propose DDL-CXR, a method that dynamically generates an up-to-date latent representation of the individualized CXR images. Our approach leverages latent diffusion models for patient-specific generation strategically conditioned on a previous CXR image and EHR time series, providing information regarding anatomical structures and disease progressions, respectively. In this way, the interaction across modalities could be better captured by the latent CXR generation process, ultimately improving the prediction performance. Experiments using MIMIC datasets show that the proposed model could effectively address asynchronicity in multimodal fusion and consistently outperform existing methods.
Addressing Asynchronicity in Clinical Multimodal Fusion via Individualized Chest X-ray Generation
[ "Wenfang Yao", "Chen Liu", "Kejing Yin", "William K. Cheung", "Jing Qin" ]
NeurIPS.cc/2024/Conference
2410.17918
[ "https://github.com/chenliu-svg/ddl-cxr" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=uCgFk8nP0Z
@inproceedings{ garrido2024dushapley, title={{DU}-Shapley: A Shapley Value Proxy for Efficient Dataset Valuation}, author={Felipe Garrido and Benjamin Heymann and Maxime Vono and Patrick Loiseau and Vianney Perchet}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uCgFk8nP0Z} }
We consider the dataset valuation problem, that is, the problem of quantifying the incremental gain, to some relevant pre-defined utility of a machine learning task, of aggregating an individual dataset with others. The Shapley value is a natural tool to perform dataset valuation due to its formal axiomatic justification, which can be combined with Monte Carlo integration to overcome the computational tractability challenges. Such generic approximation methods, however, remain expensive in some cases. In this paper, we exploit the knowledge about the structure of the dataset valuation problem to devise more efficient Shapley value estimators. We propose a novel approximation, referred to as discrete uniform Shapley, which is expressed as an expectation under a discrete uniform distribution with support of reasonable size. We justify the relevancy of the proposed framework via asymptotic and non-asymptotic theoretical guarantees and illustrate its benefits via an extensive set of numerical experiments.
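The core idea, averaging marginal contributions with the coalition size drawn from a discrete uniform distribution, can be illustrated with a generic Monte Carlo sampler. The paper's DU-Shapley estimator has a closed form that exploits problem structure; the sampler and the toy square-root utility below are illustrative assumptions.

```python
# Monte Carlo sketch of Shapley-style dataset valuation with a discrete
# uniform coalition-size distribution.
import numpy as np

def du_shapley_mc(utility, n_players, i, n_samples=2000, seed=0):
    rng = np.random.default_rng(seed)
    others = [j for j in range(n_players) if j != i]
    total = 0.0
    for _ in range(n_samples):
        k = rng.integers(0, n_players)  # coalition size uniform on {0, ..., n-1}
        coalition = rng.choice(others, size=k, replace=False) if k else []
        s = set(coalition)
        total += utility(s | {i}) - utility(s)  # marginal contribution of i
    return total / n_samples

# Toy utility: value of a coalition of datasets = sqrt of pooled size.
sizes = np.array([100, 400, 900])
u = lambda s: np.sqrt(sum(sizes[j] for j in s)) if s else 0.0
print([round(du_shapley_mc(u, 3, i), 2) for i in range(3)])
```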
DU-Shapley: A Shapley Value Proxy for Efficient Dataset Valuation
[ "Felipe Garrido", "Benjamin Heymann", "Maxime Vono", "Patrick Loiseau", "Vianney Perchet" ]
NeurIPS.cc/2024/Conference
2306.02071
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=uCZI8gSfD4
@inproceedings{ cheng2024training, title={Training Compute-Optimal Protein Language Models}, author={Xingyi Cheng and Bo Chen and Pan Li and Jing Gong and Jie Tang and Le Song}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uCZI8gSfD4} }
We explore optimally training protein language models, an area of significant interest in biological research where guidance on best practices is limited. Most models are trained with extensive compute resources until performance gains plateau, focusing primarily on increasing model sizes rather than optimizing the efficient compute frontier that balances performance and compute budgets. Our investigation is grounded in a massive dataset consisting of 939 million protein sequences. We trained over 300 models ranging from 3.5 million to 10.7 billion parameters on 5 to 200 billion unique tokens, to investigate the relations between model sizes, training token numbers, and objectives. First, we observed the effect of diminishing returns for the Causal Language Model (CLM) and that of overfitting for the Masked Language Model (MLM) when repeating the commonly used Uniref database. To address this, we included metagenomic protein sequences in the training set to increase the diversity and avoid the plateau or overfitting effects. Second, we obtained the scaling laws of CLM and MLM on Transformer, tailored to the specific characteristics of protein sequence data. Third, we observe a transfer scaling phenomenon from CLM to MLM, further demonstrating the effectiveness of transfer through scaling behaviors based on estimated Effectively Transferred Tokens. Finally, to validate our scaling laws, we compare the large-scale versions of ESM-2 and PROGEN2 on downstream tasks, encompassing evaluations of protein generation as well as structure- and function-related tasks, all within less or equivalent pre-training compute budgets.
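A scaling law of the kind described here is typically fit by regressing loss on model size and token count across many runs. The sketch below fits a Chinchilla-style parametric form to synthetic (params, tokens, loss) triples; the functional form, synthetic data, and initial guesses are illustrative assumptions, not the paper's protein-specific coefficients.

```python
# Sketch: fit L(N, D) = E + A*N^-alpha + B*D^-beta to training runs.
import numpy as np
from scipy.optimize import curve_fit

def law(X, E, A, alpha, B, beta):
    N, D = X
    return E + A * N ** (-alpha) + B * D ** (-beta)

rng = np.random.default_rng(0)
N = rng.uniform(3.5e6, 1e10, 200)   # model sizes (parameters)
D = rng.uniform(5e9, 2e11, 200)     # training tokens
loss = law((N, D), 1.7, 520.0, 0.34, 1200.0, 0.28) + rng.normal(0, 0.01, 200)

p0 = [2.0, 100.0, 0.3, 100.0, 0.3]  # rough initial guesses
popt, _ = curve_fit(law, (N, D), loss, p0=p0, maxfev=20000)
E, A, alpha, B, beta = popt
print(f"E={E:.2f} alpha={alpha:.3f} beta={beta:.3f}")
# A compute-optimal allocation then follows from minimizing law(N, D)
# subject to a FLOPs budget coupling N and D.
```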
Training Compute-Optimal Protein Language Models
[ "Xingyi Cheng", "Bo Chen", "Pan Li", "Jing Gong", "Jie Tang", "Le Song" ]
NeurIPS.cc/2024/Conference
2411.02142
[ "https://github.com/cxysteven/scalingproteinlm" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=uBVCPAMDGk
@inproceedings{ golan2024enhancing, title={Enhancing Consistency-Based Image Generation via Adversarialy-Trained Classification and Energy-Based Discrimination}, author={Shelly Golan and Roy Ganz and Michael Elad}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uBVCPAMDGk} }
The recently introduced Consistency models pose an efficient alternative to diffusion algorithms, enabling rapid and good quality image synthesis. These methods overcome the slowness of diffusion models by directly mapping noise to data, while maintaining a (relatively) simpler training. Consistency models enable a fast one- or few-step generation, but they typically fall somewhat short in sample quality when compared to their diffusion origins. In this work we propose a novel and highly effective technique for post-processing Consistency-based generated images, enhancing their perceptual quality. Our approach utilizes a joint classifier-discriminator model, in which both portions are trained adversarially. While the classifier aims to grade an image based on its assignment to a designated class, the discriminator portion of the very same network leverages the softmax values to assess the proximity of the input image to the targeted data manifold, thereby serving as an Energy-based Model. By employing example-specific projected gradient iterations under the guidance of this joint machine, we refine synthesized images and achieve improved FID scores on the ImageNet 64x64 dataset for both Consistency-Training and Consistency-Distillation techniques.
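A minimal sketch of the refinement loop described above: gradient ascent on a joint class score and softmax-derived energy of a frozen network, applied to a generated sample. The stand-in network, the sign-gradient step, the step size, and the number of steps are illustrative assumptions; the paper's joint classifier-discriminator is trained adversarially.

```python
# Sketch of projected-gradient post-processing guided by a joint
# classifier/energy score.
import torch

def refine(x, model, target_class, steps=10, eta=0.01):
    x = x.clone().requires_grad_(True)
    for _ in range(steps):
        logits = model(x)
        # class score plus a softmax-derived energy term (logsumexp ~ -E(x))
        score = logits[:, target_class].sum() + torch.logsumexp(logits, dim=1).sum()
        grad, = torch.autograd.grad(score, x)
        with torch.no_grad():
            x += eta * grad.sign()   # projected-gradient-style ascent step
            x.clamp_(-1, 1)          # stay in the image range
    return x.detach()

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 1000))
sample = torch.rand(2, 3, 64, 64) * 2 - 1  # stand-in for a consistency sample
print(refine(sample, model, target_class=7).shape)
```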
Enhancing Consistency-Based Image Generation via Adversarialy-Trained Classification and Energy-Based Discrimination
[ "Shelly Golan", "Roy Ganz", "Michael Elad" ]
NeurIPS.cc/2024/Conference
2405.16260
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=uAzhODjALU
@inproceedings{ wang2024the, title={The Mamba in the Llama: Distilling and Accelerating Hybrid Models}, author={Junxiong Wang and Daniele Paliotta and Avner May and Alexander M Rush and Tri Dao}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=uAzhODjALU} }
Linear RNN architectures, like Mamba, can be competitive with Transformer models in language modeling while having advantageous deployment characteristics. Given the focus on training large-scale Transformer models, we consider the challenge of converting these pretrained models for deployment. We demonstrate that it is feasible to distill large Transformers into linear RNNs by reusing the linear projection weights from attention layers with academic GPU resources. The resulting hybrid model, which incorporates a quarter of the attention layers, achieves performance comparable to the original Transformer in chat benchmarks and outperforms open-source hybrid Mamba models trained from scratch with trillions of tokens in both chat benchmarks and general benchmarks. Moreover, we introduce a hardware-aware speculative decoding algorithm that accelerates the inference speed of Mamba and hybrid models. Overall we show how, with limited computation resources, we can remove many of the original attention layers and generate from the resulting model more efficiently. Our top-performing model, distilled from Llama3-8B-Instruct, achieves a 29.61 length-controlled win rate on AlpacaEval 2 against GPT-4 and 7.35 on MT-Bench, surpassing the best 8B scale instruction-tuned linear RNN model.
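The distillation starts by reusing the pretrained attention layer's linear projection weights when building the linear-RNN replacement. The sketch below shows the initialization idea only; the attribute names and the specific Q/K/V/O-to-RNN mapping are assumptions for illustration, and the actual correspondence is defined in the paper and its repository.

```python
# Minimal sketch: initialize a linear-RNN layer from attention projections.
import torch.nn as nn

def init_linear_rnn_from_attention(attn, rnn):
    # Hypothetical mapping: V/O -> state in/out maps, Q/K -> gating maps.
    rnn.in_proj.weight.data.copy_(attn.v_proj.weight.data)
    rnn.out_proj.weight.data.copy_(attn.o_proj.weight.data)
    rnn.gate_b.weight.data.copy_(attn.k_proj.weight.data)
    rnn.gate_c.weight.data.copy_(attn.q_proj.weight.data)

d = 64
attn, rnn = nn.Module(), nn.Module()
for name in ("q_proj", "k_proj", "v_proj", "o_proj"):
    setattr(attn, name, nn.Linear(d, d, bias=False))  # stand-in attention layer
for name in ("in_proj", "out_proj", "gate_b", "gate_c"):
    setattr(rnn, name, nn.Linear(d, d, bias=False))   # stand-in linear RNN
init_linear_rnn_from_attention(attn, rnn)
```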
The Mamba in the Llama: Distilling and Accelerating Hybrid Models
[ "Junxiong Wang", "Daniele Paliotta", "Avner May", "Alexander M Rush", "Tri Dao" ]
NeurIPS.cc/2024/Conference
2408.15237
[ "https://github.com/jxiw/mambainllama" ]
https://huggingface.co/papers/2408.15237
5
37
3
5
[ "JunxiongWang/mamba_0_875_dpo_ep3", "JunxiongWang/Mamba2InLlama_1", "JunxiongWang/mamba_0_75_dpo_ep3", "JunxiongWang/mamba_0_5_dpo_ep3", "JunxiongWang/mamba_0_5_dpo_ep1", "JunxiongWang/mamba_0_75_dpo_ep1", "JunxiongWang/mamba_0_875_dpo_ep1", "JunxiongWang/MambaInLlama_0_50", "JunxiongWang/Mamba2InLlama_0_50", "JunxiongWang/MambaInLlama_0_75", "JunxiongWang/Mamba2InLlama_0_75", "JunxiongWang/Mamba2InLlama_0_875", "JunxiongWang/MambaInLlama_0_875", "JunxiongWang/Llama3.2-Mamba2-3B-distill", "JunxiongWang/Llama3.2-Mamba2-3B-dpo", "JunxiongWang/Llama3.1-Mamba2-8B-distill", "JunxiongWang/Llama3.2-Mamba-3B-distill", "JunxiongWang/Llama3.1-Mamba-8B-distill", "JunxiongWang/Llama3.1-Mamba2-8B-dpo", "JunxiongWang/Llama3.1-Mamba-8B-dpo", "JunxiongWang/Llama3.2-Mamba-3B-dpo" ]
[ "JunxiongWang/sftdatasetv3" ]
[]
[ "JunxiongWang/mamba_0_875_dpo_ep3", "JunxiongWang/Mamba2InLlama_1", "JunxiongWang/mamba_0_75_dpo_ep3", "JunxiongWang/mamba_0_5_dpo_ep3", "JunxiongWang/mamba_0_5_dpo_ep1", "JunxiongWang/mamba_0_75_dpo_ep1", "JunxiongWang/mamba_0_875_dpo_ep1", "JunxiongWang/MambaInLlama_0_50", "JunxiongWang/Mamba2InLlama_0_50", "JunxiongWang/MambaInLlama_0_75", "JunxiongWang/Mamba2InLlama_0_75", "JunxiongWang/Mamba2InLlama_0_875", "JunxiongWang/MambaInLlama_0_875", "JunxiongWang/Llama3.2-Mamba2-3B-distill", "JunxiongWang/Llama3.2-Mamba2-3B-dpo", "JunxiongWang/Llama3.1-Mamba2-8B-distill", "JunxiongWang/Llama3.2-Mamba-3B-distill", "JunxiongWang/Llama3.1-Mamba-8B-distill", "JunxiongWang/Llama3.1-Mamba2-8B-dpo", "JunxiongWang/Llama3.1-Mamba-8B-dpo", "JunxiongWang/Llama3.2-Mamba-3B-dpo" ]
[ "JunxiongWang/sftdatasetv3" ]
[]
1
poster
null
https://openreview.net/forum?id=u9ShP64FJV
@inproceedings{ liu2024protecting, title={Protecting Your {LLM}s with Information Bottleneck}, author={Zichuan Liu and Zefan Wang and Linjie Xu and Jinyu Wang and Lei Song and Tianchun Wang and Chunlin Chen and Wei Cheng and Jiang Bian}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=u9ShP64FJV} }
The advent of large language models (LLMs) has revolutionized the field of natural language processing, yet they can be attacked to produce harmful content. Despite efforts to ethically align LLMs, such alignment is often fragile and can be circumvented by jailbreaking attacks through optimized or manual adversarial prompts. To address this, we introduce the Information Bottleneck Protector (IBProtector), a defense mechanism grounded in the information bottleneck principle, and we modify the objective to avoid trivial solutions. The IBProtector selectively compresses and perturbs prompts, facilitated by a lightweight and trainable extractor, preserving only essential information for the target LLMs to respond with the expected answer. Moreover, we further consider a situation where the gradient is not visible, to be compatible with any LLM. Our empirical evaluations show that IBProtector outperforms current defense methods in mitigating jailbreak attempts, without overly affecting response quality or inference speed. Its effectiveness and adaptability across various attack methods and target LLMs underscore the potential of IBProtector as a novel, transferable defense that bolsters the security of LLMs without requiring modifications to the underlying models.
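The information-bottleneck trade-off at the heart of such a protector can be sketched as a lightweight extractor that scores tokens for keeping, trained with a task term (preserve the target model's expected answer) plus a compression term (keep few tokens). The architecture, relaxation, placeholder task loss, and weight below are illustrative assumptions, not the IBProtector objective itself.

```python
# Sketch of an IB-style prompt compressor: keep probabilities per token.
import torch
import torch.nn as nn

class Extractor(nn.Module):
    def __init__(self, d=128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, 1))

    def forward(self, tok_emb):           # (B, T, d) token embeddings
        logits = self.score(tok_emb).squeeze(-1)
        return torch.sigmoid(logits)      # (B, T) relaxed keep mask

extractor = Extractor()
emb = torch.randn(4, 32, 128)
mask = extractor(emb)
task_loss = torch.tensor(0.42)            # placeholder: target LLM answer loss
ib_loss = task_loss + 0.1 * mask.mean()   # compression term penalizes kept tokens
ib_loss.backward()                        # gradients flow into the extractor
```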
Protecting Your LLMs with Information Bottleneck
[ "Zichuan Liu", "Zefan Wang", "Linjie Xu", "Jinyu Wang", "Lei Song", "Tianchun Wang", "Chunlin Chen", "Wei Cheng", "Jiang Bian" ]
NeurIPS.cc/2024/Conference
2404.13968
[ "https://github.com/zichuan-liu/ib4llms" ]
https://huggingface.co/papers/2404.13968
0
1
0
9
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=u7okTt4ZyE
@inproceedings{ cui2024taming, title={Taming Diffusion Prior for Image Super-Resolution with Domain Shift {SDE}s}, author={Qinpeng Cui and Yi'xuan Liu and Xinyi Zhang and Qiqi Bao and Qingmin Liao and liwang Amd and Lu Tian and Zicheng Liu and Zhongdao Wang and Emad Barsoum}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=u7okTt4ZyE} }
Diffusion-based image super-resolution (SR) models have attracted substantial interest due to their powerful image restoration capabilities. However, prevailing diffusion models often struggle to strike an optimal balance between efficiency and performance. Typically, they either neglect to exploit the potential of existing extensive pretrained models, limiting their generative capacity, or they necessitate dozens of forward passes starting from random noise, compromising inference efficiency. In this paper, we present DoSSR, a $\textbf{Do}$main $\textbf{S}$hift diffusion-based SR model that capitalizes on the generative powers of pretrained diffusion models while significantly enhancing efficiency by initiating the diffusion process with low-resolution (LR) images. At the core of our approach is a domain shift equation that integrates seamlessly with existing diffusion models. This integration not only improves the use of diffusion prior but also boosts inference efficiency. Moreover, we advance our method by transitioning the discrete shift process to a continuous formulation, termed as DoS-SDEs. This advancement leads to fast and customized solvers that further enhance sampling efficiency. Empirical results demonstrate that our proposed method achieves state-of-the-art performance on synthetic and real-world datasets, while notably requiring $\textbf{\emph{only 5 sampling steps}}$. Compared to previous diffusion prior based methods, our approach achieves a remarkable speedup of 5-7 times, demonstrating its superior efficiency.
Taming Diffusion Prior for Image Super-Resolution with Domain Shift SDEs
[ "Qinpeng Cui", "Yi'xuan Liu", "Xinyi Zhang", "Qiqi Bao", "Qingmin Liao", "liwang Amd", "Lu Tian", "Zicheng Liu", "Zhongdao Wang", "Emad Barsoum" ]
NeurIPS.cc/2024/Conference
2409.17778
[ "https://github.com/qinpengcui/dossr" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=u7JRmrGutT
@inproceedings{ jain2024graph, title={Graph Edit Distance with General Costs Using Neural Set Divergence}, author={Eeshaan Jain and Indradyumna Roy and Saswat Meher and Soumen Chakrabarti and Abir De}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=u7JRmrGutT} }
Graph Edit Distance (GED) measures the (dis-)similarity between two given graphs in terms of the minimum-cost edit sequence, which transforms one graph to the other. GED is related to other notions of graph similarity, such as graph and subgraph isomorphism, maximum common subgraph, etc. However, the computation of exact GED is NP-Hard, which has recently motivated the design of neural models for GED estimation. Yet these models do not explicitly account for edit operations with different costs. In response, we propose $\texttt{GraphEdX}$, a neural GED estimator that can work with general costs specified for the four edit operations, viz., edge deletion, edge addition, node deletion, and node addition. We first present GED as a quadratic assignment problem (QAP) that incorporates these four costs. Then, we represent each graph as a set of node and edge embeddings and use them to design a family of neural set divergence surrogates. We replace the QAP terms corresponding to each operation with their surrogates. Computing such neural set divergence requires aligning nodes and edges of the two graphs. We learn these alignments using a Gumbel-Sinkhorn permutation generator, additionally ensuring that the node and edge alignments are consistent with each other. Moreover, these alignments are cognizant of both the presence and absence of edges between node pairs. Through extensive experiments on several datasets, along with a variety of edit cost settings, we show that $\texttt{GraphEdX}$ consistently outperforms state-of-the-art methods and heuristics in terms of prediction error. The code is available at https://github.com/structlearning/GraphEdX.
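For intuition on GED with general costs, a brute-force exact computation for tiny unlabeled graphs is shown below; it is useful as ground truth when sanity-checking a neural estimator like this one. The graph encoding and zero node-substitution cost are simplifying assumptions, and the enumeration is exponential, so it only works at toy sizes.

```python
# Brute-force exact GED with general costs for tiny unlabeled graphs.
from itertools import permutations

def exact_ged(g1, g2, c_nd, c_ni, c_ed, c_ei):
    n1, e1 = g1
    n2, e2 = g2
    n = max(n1, n2)                     # pad the smaller graph with dummy nodes
    best = float("inf")
    for perm in permutations(range(n)):
        cost = 0.0
        for u in range(n):
            v = perm[u]
            if u >= n1 and v < n2: cost += c_ni   # node insertion
            if u < n1 and v >= n2: cost += c_nd   # node deletion
        mapped = {frozenset((perm[a], perm[b])) for a, b in e1}
        real2 = {frozenset(e) for e in e2}
        cost += c_ed * len(mapped - real2)        # edges that must be deleted
        cost += c_ei * len(real2 - mapped)        # edges that must be added
        best = min(best, cost)
    return best

tri = (3, {(0, 1), (1, 2), (0, 2)})   # triangle
path = (3, {(0, 1), (1, 2)})          # 3-node path
print(exact_ged(tri, path, 1.0, 1.0, 1.0, 1.0))  # 1.0: delete one edge
```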
Graph Edit Distance with General Costs Using Neural Set Divergence
[ "Eeshaan Jain", "Indradyumna Roy", "Saswat Meher", "Soumen Chakrabarti", "Abir De" ]
NeurIPS.cc/2024/Conference
2409.17687
[ "https://github.com/structlearning/graphedx" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=u6XxyuD3Ro
@inproceedings{ pasteris2024online, title={Online Convex Optimisation: The Optimal Switching Regret for all Segmentations Simultaneously}, author={Stephen Pasteris and Chris Hicks and Vasilios Mavroudis and Mark Herbster}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=u6XxyuD3Ro} }
We consider the classic problem of online convex optimisation. Whereas the notion of static regret is relevant for stationary problems, the notion of switching regret is more appropriate for non-stationary problems. A switching regret is defined relative to any segmentation of the trial sequence, and is equal to the sum of the static regrets of each segment. In this paper we show that, perhaps surprisingly, we can achieve the asymptotically optimal switching regret on every possible segmentation simultaneously. Our algorithm for doing so is very efficient: having a space and per-trial time complexity that is logarithmic in the time-horizon. Our algorithm also obtains novel bounds on its dynamic regret: being adaptive to variations in the rate of change of the comparator sequence.
Online Convex Optimisation: The Optimal Switching Regret for all Segmentations Simultaneously
[ "Stephen Pasteris", "Chris Hicks", "Vasilios Mavroudis", "Mark Herbster" ]
NeurIPS.cc/2024/Conference
2405.20824
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=u6FuiKzT1K
@inproceedings{ chen2024leveraging, title={Leveraging Contrastive Learning for Enhanced Node Representations in Tokenized Graph Transformers}, author={Jinsong Chen and Hanpeng Liu and John E. Hopcroft and Kun He}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=u6FuiKzT1K} }
While tokenized graph Transformers have demonstrated strong performance in node classification tasks, their reliance on a limited subset of nodes with high similarity scores for constructing token sequences overlooks valuable information from other nodes, hindering their ability to fully harness graph information for learning optimal node representations. To address this limitation, we propose a novel graph Transformer called GCFormer. Unlike previous approaches, GCFormer develops a hybrid token generator to create two types of token sequences, positive and negative, to capture diverse graph information. A tailored Transformer-based backbone is then adopted to learn meaningful node representations from these generated token sequences. Additionally, GCFormer introduces contrastive learning to extract valuable information from both positive and negative token sequences, enhancing the quality of learned node representations. Extensive experimental results across various datasets, including homophily and heterophily graphs, demonstrate the superiority of GCFormer in node classification when compared to representative graph neural networks (GNNs) and graph Transformers.
Leveraging Contrastive Learning for Enhanced Node Representations in Tokenized Graph Transformers
[ "Jinsong Chen", "Hanpeng Liu", "John E. Hopcroft", "Kun He" ]
NeurIPS.cc/2024/Conference
2406.19258
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=u5enPCwaLt
@inproceedings{ bellot2024towards, title={Towards Estimating Bounds on the Effect of Policies under Unobserved Confounding}, author={Alexis Bellot and Silvia Chiappa}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=u5enPCwaLt} }
As many practical fields transition to provide personalized decisions, data is increasingly relevant to support the evaluation of candidate plans and policies (e.g., guidelines for the treatment of disease, government directives, etc.). In the machine learning literature, significant efforts have been put into developing machinery to predict the effectiveness of policies efficiently. The challenge is that, in practice, the effectiveness of a candidate policy is not always identifiable, i.e., not uniquely estimable from the combination of the available data and assumptions about the domain at hand (e.g., encoded in a causal graph). In this paper, we develop graphical characterizations and estimation tools to bound the effect of policies given a causal graph and observational data collected in non-identifiable settings. Specifically, our contributions are two-fold: (1) we derive analytical bounds for general probabilistic and conditional policies that are tighter than existing results, (2) we develop an estimation framework to estimate bounds from finite samples, applicable in higher-dimensional spaces and continuously-valued data. We further show that the resulting estimators have favourable statistical properties such as fast convergence and robustness to model misspecification.
Towards Estimating Bounds on the Effect of Policies under Unobserved Confounding
[ "Alexis Bellot", "Silvia Chiappa" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=u5BkOgWWZW
@inproceedings{ zhong2024explaining, title={Explaining Datasets in Words: Statistical Models with Natural Language Parameters}, author={Ruiqi Zhong and Heng Wang and Dan Klein and Jacob Steinhardt}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=u5BkOgWWZW} }
To make sense of massive data, we often first fit simplified models and then interpret the parameters; for example, we cluster the text embeddings and then interpret the mean parameters of each cluster. However, these parameters are often high-dimensional and hard to interpret. To make model parameters directly interpretable, we introduce a family of statistical models---including clustering, time series, and classification models---parameterized by *natural language predicates*. For example, a cluster of text about COVID could be parameterized by the predicate ``*discusses COVID*''. To learn these statistical models effectively, we develop a model-agnostic algorithm that optimizes continuous relaxations of predicate parameters with gradient descent and discretizes them by prompting language models (LMs). Finally, we apply our framework to a wide range of problems: taxonomizing user chat dialogues, characterizing how they evolve across time, finding categories where one language model is better than the other, clustering math problems based on subareas, and explaining visual features in memorable images. Our framework is highly versatile, applicable to both textual and visual domains, can be easily steered to focus on specific properties (e.g. subareas), and explains sophisticated concepts that classical methods (e.g. n-gram analysis) struggle to produce.
Explaining Datasets in Words: Statistical Models with Natural Language Parameters
[ "Ruiqi Zhong", "Heng Wang", "Dan Klein", "Jacob Steinhardt" ]
NeurIPS.cc/2024/Conference
2409.08466
[ "https://github.com/ruiqi-zhong/nlparam" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=u3mZzd0Pdx
@inproceedings{ wang2024lower, title={Lower Bounds of Uniform Stability in Gradient-Based Bilevel Algorithms for Hyperparameter Optimization}, author={Rongzhen Wang and Chenyu Zheng and Guoqiang Wu and Xu Min and Xiaolu Zhang and JUN ZHOU and Chongxuan Li}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=u3mZzd0Pdx} }
Gradient-based bilevel programming leverages unrolling differentiation (UD) or implicit function theorem (IFT) to solve hyperparameter optimization (HO) problems, and is proven effective and scalable in practice. To understand their generalization behavior, existing works establish upper bounds on the uniform stability of these algorithms, while their tightness is still unclear. To this end, this paper attempts to establish stability lower bounds for UD-based and IFT-based algorithms. A central technical challenge arises from the dependency of each outer-level update on the concurrent stage of inner optimization in bilevel programming. To address this problem, we introduce lower-bounded expansion properties to characterize the instability in update rules which can serve as general tools for lower-bound analysis. These properties guarantee the hyperparameter divergence at the outer level and the Lipschitz constant of inner output at the inner level in the context of HO. Guided by these insights, we construct a quadratic example that yields tight lower bounds for the UD-based algorithm and meaningful bounds for a representative IFT-based algorithm. Our tight result indicates that uniform stability has reached its limit in stability analysis for the UD-based algorithm.
Lower Bounds of Uniform Stability in Gradient-Based Bilevel Algorithms for Hyperparameter Optimization
[ "Rongzhen Wang", "Chenyu Zheng", "Guoqiang Wu", "Xu Min", "Xiaolu Zhang", "JUN ZHOU", "Chongxuan Li" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=u2gzfXRLaN
@inproceedings{ montasser2024transformationinvariant, title={Transformation-Invariant Learning and Theoretical Guarantees for {OOD} Generalization}, author={Omar Montasser and Han Shao and Emmanuel Abbe}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=u2gzfXRLaN} }
Learning with identical train and test distributions has been extensively investigated both practically and theoretically. Much remains to be understood, however, in statistical learning under distribution shifts. This paper focuses on a distribution shift setting where train and test distributions can be related by classes of (data) transformation maps. We initiate a theoretical study for this framework, investigating learning scenarios where the target class of transformations is either known or unknown. We establish learning rules and algorithmic reductions to Empirical Risk Minimization (ERM), accompanied by learning guarantees. We obtain upper bounds on the sample complexity in terms of the VC dimension of the class composing predictors with transformations, which we show in many cases is not much larger than the VC dimension of the class of predictors. We highlight that the learning rules we derive offer a game-theoretic viewpoint on distribution shift: a learner searching for predictors and an adversary searching for transformation maps to respectively minimize and maximize the worst-case loss.
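The game-theoretic viewpoint can be made concrete with a toy minimax loop: the adversary picks, from a known finite transformation class, the map with the largest current loss, and the learner takes a gradient step on that worst-case loss. The transformation class, linear model, and data below are illustrative assumptions.

```python
# Toy minimax training over a known finite class of transformations.
import torch

transforms = [lambda x: x, lambda x: -x, lambda x: 2.0 * x]  # known class
w = torch.zeros(5, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)

X = torch.randn(256, 5)
y = (X.sum(dim=1) > 0).float()

for step in range(200):
    # Adversary: pick the transformation with the largest current loss.
    losses = [
        torch.nn.functional.binary_cross_entropy_with_logits(t(X) @ w, y)
        for t in transforms
    ]
    worst = torch.stack(losses).max()
    opt.zero_grad(); worst.backward(); opt.step()  # learner minimizes worst case

print([f"{l.item():.3f}" for l in losses])  # per-transformation losses at the end
```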
Transformation-Invariant Learning and Theoretical Guarantees for OOD Generalization
[ "Omar Montasser", "Han Shao", "Emmanuel Abbe" ]
NeurIPS.cc/2024/Conference
2410.23461
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=u1mNGLYN74
@inproceedings{ shen2024draco, title={{DRACO}: A Denoising-Reconstruction Autoencoder for Cryo-{EM}}, author={YingJun Shen and Haizhao Dai and Qihe Chen and Yan Zeng and Jiakai Zhang and Yuan Pei and Jingyi Yu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=u1mNGLYN74} }
Foundation models in computer vision have demonstrated exceptional performance in zero-shot and few-shot tasks by extracting multi-purpose features from large-scale datasets through self-supervised pre-training methods. However, these models often overlook the severe corruption of cryogenic electron microscopy (cryo-EM) images by high levels of noise. We introduce DRACO, a Denoising-Reconstruction Autoencoder for CryO-EM, inspired by the Noise2Noise (N2N) approach. By processing cryo-EM movies into odd and even images and treating them as independent noisy observations, we apply a denoising-reconstruction hybrid training scheme. We mask both images to create denoising and reconstruction tasks. For DRACO's pre-training, the quality of the dataset is essential; we hence build a high-quality, diverse dataset from an uncurated public database, including over 270,000 movies or micrographs. After pre-training, DRACO naturally serves as a generalizable cryo-EM image denoiser and a foundation model for various cryo-EM downstream tasks. DRACO demonstrates the best performance in denoising, micrograph curation, and particle picking tasks compared to state-of-the-art baselines.
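A rough sketch of the Noise2Noise-style objective with odd/even splits: each half-movie is an independent noisy observation of the same signal, so a masked autoencoder can be trained to predict one half from the other. The stand-in model, masking scheme, and loss mix below are illustrative assumptions rather than the paper's hybrid scheme.

```python
# Sketch of a denoising-reconstruction hybrid loss on odd/even frame averages.
import torch
import torch.nn.functional as F

def draco_style_loss(model, odd, even, mask_ratio=0.75):
    mask = (torch.rand_like(odd[:, :1]) > mask_ratio).float()  # 1 = visible
    pred = model(odd * mask)
    denoise = F.mse_loss(pred, even)  # N2N term: predict the other noisy half
    recon = F.mse_loss(pred * (1 - mask), odd * (1 - mask))  # masked recon term
    return denoise + recon

model = torch.nn.Conv2d(1, 1, 3, padding=1)  # stand-in for the autoencoder
odd = torch.randn(2, 1, 64, 64)
even = odd + 0.5 * torch.randn_like(odd)     # second independent noisy view
print(draco_style_loss(model, odd, even).item())
```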
DRACO: A Denoising-Reconstruction Autoencoder for Cryo-EM
[ "YingJun Shen", "Haizhao Dai", "Qihe Chen", "Yan Zeng", "Jiakai Zhang", "Yuan Pei", "Jingyi Yu" ]
NeurIPS.cc/2024/Conference
2410.11373
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=u1Z3HWz4VJ
@inproceedings{ jiang2024ramp, title={{RAMP}: Boosting Adversarial Robustness Against Multiple \$l\_p\$ Perturbations for Universal Robustness}, author={Enyi Jiang and Gagandeep Singh}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=u1Z3HWz4VJ} }
Most existing works focus on improving robustness against adversarial attacks bounded by a single $l_p$ norm using adversarial training (AT). However, these AT models' multiple-norm robustness (union accuracy) is still low, which is crucial since in the real world an adversary is not necessarily bounded by a single norm. The tradeoffs among robustness against multiple $l_p$ perturbations and accuracy/robustness make obtaining good union and clean accuracy challenging. We design a logit pairing loss to improve the union accuracy by analyzing the tradeoffs from the lens of distribution shifts. We connect natural training (NT) with AT via gradient projection, to incorporate useful information from NT into AT, where we empirically and theoretically show it moderates the accuracy/robustness tradeoff. We propose a novel training framework \textbf{RAMP}, to boost the robustness against multiple $l_p$ perturbations. \textbf{RAMP} can be easily adapted for robust fine-tuning and full AT. For robust fine-tuning, \textbf{RAMP} obtains a union accuracy up to $53.3\%$ on CIFAR-10, and $29.1\%$ on ImageNet. For training from scratch, \textbf{RAMP} achieves a union accuracy of $44.6\%$ and good clean accuracy of $81.2\%$ on ResNet-18 against AutoAttack on CIFAR-10. Beyond multi-norm robustness, \textbf{RAMP}-trained models achieve superior \textit{universal robustness}, effectively generalizing against a range of unseen adversaries and natural corruptions.
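The gradient-projection idea connecting NT and AT can be sketched as follows: when the NT gradient conflicts with the AT gradient, its opposing component is projected out before mixing. The projection rule below is a common heuristic used here as an illustration of the idea, not RAMP's exact update.

```python
# Sketch of projecting a natural-training gradient before mixing with the
# adversarial-training gradient.
import torch

def project_and_mix(g_at, g_nt, lam=0.5):
    dot = torch.dot(g_nt, g_at)
    if dot < 0:  # conflict: remove the component of g_nt opposing g_at
        g_nt = g_nt - dot / g_at.norm().pow(2) * g_at
    return g_at + lam * g_nt

g_at = torch.tensor([1.0, 0.0])
g_nt = torch.tensor([-0.5, 1.0])    # partially conflicting NT gradient
print(project_and_mix(g_at, g_nt))  # tensor([1.0000, 0.5000]): conflict removed
```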
RAMP: Boosting Adversarial Robustness Against Multiple l_p Perturbations for Universal Robustness
[ "Enyi Jiang", "Gagandeep Singh" ]
NeurIPS.cc/2024/Conference
[ "https://github.com/uiuc-focal-lab/ramp" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=tz83Nyb71l
@inproceedings{ wang2024yolov, title={{YOLO}v10: Real-Time End-to-End Object Detection}, author={Ao Wang and Hui Chen and Lihao Liu and Kai CHEN and Zijia Lin and Jungong Han and Guiguang Ding}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=tz83Nyb71l} }
Over the past years, YOLOs have emerged as the predominant paradigm in the field of real-time object detection owing to their effective balance between computational cost and detection performance. Researchers have explored the architectural designs, optimization objectives, data augmentation strategies, and others for YOLOs, achieving notable progress. However, the reliance on non-maximum suppression (NMS) for post-processing hampers the end-to-end deployment of YOLOs and adversely impacts the inference latency. Besides, the design of various components in YOLOs lacks comprehensive and thorough inspection, resulting in noticeable computational redundancy and limiting the model's capability. This renders efficiency suboptimal, while leaving considerable potential for performance improvements. In this work, we aim to further advance the performance-efficiency boundary of YOLOs from both the post-processing and the model architecture. To this end, we first present consistent dual assignments for NMS-free training of YOLOs, which brings competitive performance and low inference latency simultaneously. Moreover, we introduce a holistic efficiency-accuracy-driven model design strategy for YOLOs. We comprehensively optimize various components of YOLOs from both the efficiency and accuracy perspectives, which greatly reduces the computational overhead and enhances the capability. The outcome of our effort is a new generation of YOLO series for real-time end-to-end object detection, dubbed YOLOv10. Extensive experiments show that YOLOv10 achieves state-of-the-art performance and efficiency across various model scales. For example, our YOLOv10-S is 1.8$\times$ faster than RT-DETR-R18 under the similar AP on COCO, meanwhile enjoying 2.8$\times$ smaller number of parameters and FLOPs. Compared with YOLOv9-C, YOLOv10-B has 46\% less latency and 25\% fewer parameters for the same performance. Code and models are available at https://github.com/THU-MIG/yolov10.
YOLOv10: Real-Time End-to-End Object Detection
[ "Ao Wang", "Hui Chen", "Lihao Liu", "Kai CHEN", "Zijia Lin", "Jungong Han", "Guiguang Ding" ]
NeurIPS.cc/2024/Conference
2405.14458
[ "https://github.com/THU-MIG/yolov10" ]
https://huggingface.co/papers/2405.14458
0
6
0
7
[ "kadirnar/Yolov10", "jameslahm/yolov10x", "jameslahm/yolov10n", "jameslahm/yolov10s", "jameslahm/yolov10m", "jameslahm/yolov10l", "jameslahm/yolov10b", "kadirnar/yolov10m", "kairess/baby-face-detection-yolov10", "m3/yolov10s-transformers", "hyokwan/yolov10_hkcode", "hyokwan/yolov10_vita500", "tacmatic/yolov10-finetuned", "vanillo/fireyolov10n", "vanillo/yolov10n", "mamamamamaasdasd/yolov10", "lemon3853/yolov10n", "kadirnar/yolov10b", "kadirnar/yolov10n", "kadirnar/yolov10l", "kadirnar/yolov10x", "kadirnar/yolov10s", "LimaLimao/Yolo-IFMT", "nielsr/yolov10n", "nielsr/yolov10l", "BriannaHa/yolov10-finetuned-DnDdice", "Sj210033/yolov10-dummy-finetuned", "Sj210033/yolov10-finetuned", "Koshti10/yolov10x-trained-Kitti-2D-detection", "Koshti10/yolov10n-trained-Kitti-2D-detection", "Koshti10/yolov10x-finetuned-Kitti-2D-detection", "Koshti10/yolov10n-finetuned-Kitti-2D-detection", "Koshti10/yolov10n_vanilla_final", "Koshti10/yolov10n_pt_final", "Koshti10/yolov10x_pt_final", "Koshti10/yolov10x_vanilla_final", "minhah/yolov10-finetuned-scalp", "minhah/yolov10-finetuned-scalp-new" ]
[]
[ "SkalskiP/YOLO-ARENA", "jameslahm/YOLOv10", "freddyaboulton/webrtc-yolov10n", "xqt/Segment-Anything-2-Assist", "MSaadTariq/YoloV10", "Sovenok-Hacker/Yolov10", "yasserrmd/DailySnap", "1ngsm0del/Pill_name_detection_YOLOv10", "xiaoming32236046/yolov10_CTC", "Ean7/YOLOv10", "jeff86/Yolov10", "StevenChen16/YOLOv10", "silver-A/awesome-yolov10", "kasper-boy/Evolving-YOLO-V8-V9-V10", "1ngsm0del/Drug_Recongnition_YOLOv10_Show_Input_Output_Name", "mbar0075/YOLO-Playground", "kevin159/color-detection" ]
[ "kadirnar/Yolov10", "jameslahm/yolov10x", "jameslahm/yolov10n", "jameslahm/yolov10s", "jameslahm/yolov10m", "jameslahm/yolov10l", "jameslahm/yolov10b", "kadirnar/yolov10m", "kairess/baby-face-detection-yolov10", "m3/yolov10s-transformers", "hyokwan/yolov10_hkcode", "hyokwan/yolov10_vita500", "tacmatic/yolov10-finetuned", "vanillo/fireyolov10n", "vanillo/yolov10n", "mamamamamaasdasd/yolov10", "lemon3853/yolov10n", "kadirnar/yolov10b", "kadirnar/yolov10n", "kadirnar/yolov10l", "kadirnar/yolov10x", "kadirnar/yolov10s", "LimaLimao/Yolo-IFMT", "nielsr/yolov10n", "nielsr/yolov10l", "BriannaHa/yolov10-finetuned-DnDdice", "Sj210033/yolov10-dummy-finetuned", "Sj210033/yolov10-finetuned", "Koshti10/yolov10x-trained-Kitti-2D-detection", "Koshti10/yolov10n-trained-Kitti-2D-detection", "Koshti10/yolov10x-finetuned-Kitti-2D-detection", "Koshti10/yolov10n-finetuned-Kitti-2D-detection", "Koshti10/yolov10n_vanilla_final", "Koshti10/yolov10n_pt_final", "Koshti10/yolov10x_pt_final", "Koshti10/yolov10x_vanilla_final", "minhah/yolov10-finetuned-scalp", "minhah/yolov10-finetuned-scalp-new" ]
[]
[ "SkalskiP/YOLO-ARENA", "jameslahm/YOLOv10", "freddyaboulton/webrtc-yolov10n", "xqt/Segment-Anything-2-Assist", "MSaadTariq/YoloV10", "Sovenok-Hacker/Yolov10", "yasserrmd/DailySnap", "1ngsm0del/Pill_name_detection_YOLOv10", "xiaoming32236046/yolov10_CTC", "Ean7/YOLOv10", "jeff86/Yolov10", "StevenChen16/YOLOv10", "silver-A/awesome-yolov10", "kasper-boy/Evolving-YOLO-V8-V9-V10", "1ngsm0del/Drug_Recongnition_YOLOv10_Show_Input_Output_Name", "mbar0075/YOLO-Playground", "kevin159/color-detection" ]
1
poster
null
https://openreview.net/forum?id=tyPcIETPWM
@inproceedings{ givens2024conditional, title={Conditional Outcome Equivalence: A Quantile Alternative to {CATE}}, author={Josh Givens and Henry Reeve and Song Liu and Katarzyna Reluga}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=tyPcIETPWM} }
The conditional quantile treatment effect (CQTE) can provide insight into the effect of a treatment beyond the conditional average treatment effect (CATE). This ability to provide information over multiple quantiles of the response makes the CQTE especially valuable in cases where the effect of a treatment is not well-modelled by a location shift, even conditionally on the covariates. Nevertheless, the estimation of the CQTE is challenging and often depends upon the smoothness of the individual quantiles as a function of the covariates rather than smoothness of the CQTE itself. This is in stark contrast to the CATE where it is possible to obtain high-quality estimates which have less dependency upon the smoothness of the nuisance parameters when the CATE itself is smooth. Moreover, relative smoothness of the CQTE lacks the interpretability of smoothness of the CATE, making it less clear whether it is a reasonable assumption to make. We combine the desirable properties of the CATE and CQTE by considering a new estimand, the conditional quantile comparator (CQC). The CQC not only retains information about the whole treatment distribution, similar to the CQTE, but also has more natural examples of smoothness and is able to leverage simplicity in an auxiliary estimand. We provide finite sample bounds on the error of our estimator, demonstrating its ability to exploit simplicity. We validate our theory in numerical simulations which show that our method produces more accurate estimates than baselines. Finally, we apply our methodology to a study on the effect of employment incentives on earnings across different age groups. We see that our method is able to reveal heterogeneity of the effect across different quantiles.
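A plug-in picture of the quantile comparator: given control and treated outcome samples at (or near) a covariate value x, the comparator maps a control outcome y to the treated outcome at the same conditional quantile, i.e., y maps to Q1(F0(y|x)|x). The empirical-quantile estimator and the toy distributions below are for illustration only; the paper's estimator and guarantees are more refined.

```python
# Plug-in sketch of the conditional quantile comparator at a fixed x.
import numpy as np

rng = np.random.default_rng(0)
y_control = rng.normal(0.0, 1.0, 5000)          # outcomes under control at x
y_treated = np.exp(rng.normal(0.3, 1.0, 5000))  # non-location-shift effect

def cqc(y, y0_sample, y1_sample):
    tau = np.mean(y0_sample <= y)   # empirical F0(y | x)
    return np.quantile(y1_sample, tau)  # empirical Q1(tau | x)

for y in (-1.0, 0.0, 1.0):
    print(y, "->", round(cqc(y, y_control, y_treated), 3))
```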
Conditional Outcome Equivalence: A Quantile Alternative to CATE
[ "Josh Givens", "Henry Reeve", "Song Liu", "Katarzyna Reluga" ]
NeurIPS.cc/2024/Conference
2410.12454
[ "https://github.com/joshgivens/ConditionalOutcomeEquivalence" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=twpPD9UMUN
@inproceedings{ ma2024look, title={Look, Listen, and Answer: Overcoming Biases for Audio-Visual Question Answering}, author={Jie Ma and Min Hu and Pinghui Wang and Wangchun Sun and Lingyun Song and Hongbin Pei and Jun Liu and Youtian Du}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=twpPD9UMUN} }
Audio-Visual Question Answering (AVQA) is a complex multi-modal reasoning task, demanding intelligent systems to accurately respond to natural language queries based on audio-video input pairs. Nevertheless, prevalent AVQA approaches are prone to overlearning dataset biases, resulting in poor robustness. Furthermore, current datasets may not provide a precise diagnostic for these methods. To tackle these challenges, firstly, we propose a novel dataset, *MUSIC-AVQA-R*, crafted in two steps: rephrasing questions within the test split of a public dataset (*MUSIC-AVQA*) and subsequently introducing distribution shifts to split questions. The former leads to a large, diverse test space, while the latter results in a comprehensive robustness evaluation on rare, frequent, and overall questions. Secondly, we propose a robust architecture that utilizes a multifaceted cycle collaborative debiasing strategy to overcome bias learning. Experimental results show that this architecture achieves state-of-the-art performance on MUSIC-AVQA-R, notably obtaining a significant improvement of 9.32\%. Extensive ablation experiments are conducted on the two datasets mentioned to analyze the component effectiveness within the debiasing strategy. Additionally, we highlight the limited robustness of existing multi-modal QA methods through the evaluation on our dataset. We also conduct experiments combining various baselines with our proposed strategy on two datasets to verify its plug-and-play capability. Our dataset and code are available at <https://github.com/reml-group/MUSIC-AVQA-R>.
Look, Listen, and Answer: Overcoming Biases for Audio-Visual Question Answering
[ "Jie Ma", "Min Hu", "Pinghui Wang", "Wangchun Sun", "Lingyun Song", "Hongbin Pei", "Jun Liu", "Youtian Du" ]
NeurIPS.cc/2024/Conference
2404.12020
[ "https://github.com/reml-group/music-avqa-r" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=twYE75Mnkt
@inproceedings{ larsen2024derandomizing, title={Derandomizing Multi-Distribution Learning}, author={Kasper Green Larsen and Omar Montasser and Nikita Zhivotovskiy}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=twYE75Mnkt} }
Multi-distribution or collaborative learning involves learning a single predictor that works well across multiple data distributions, using samples from each during training. Recent research on multi-distribution learning, focusing on binary loss and finite VC dimension classes, has shown near-optimal sample complexity that is achieved with oracle efficient algorithms. That is, these algorithms are computationally efficient given an efficient ERM for the class. Unlike in classical PAC learning, where the optimal sample complexity is achieved with deterministic predictors, current multi-distribution learning algorithms output randomized predictors. This raises the question: can these algorithms be derandomized to produce a deterministic predictor for multiple distributions? Through a reduction to discrepancy minimization, we show that derandomizing multi-distribution learning is computationally hard, even when ERM is computationally efficient. On the positive side, we identify a structural condition enabling an efficient black-box reduction, converting existing randomized multi-distribution predictors into deterministic ones.
Derandomizing Multi-Distribution Learning
[ "Kasper Green Larsen", "Omar Montasser", "Nikita Zhivotovskiy" ]
NeurIPS.cc/2024/Conference
2409.17567
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=tvQ3XCKWbB
@inproceedings{ zhang2024enriching, title={Enriching Disentanglement: From Logical Definitions to Quantitative Metrics}, author={Yivan Zhang and Masashi Sugiyama}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=tvQ3XCKWbB} }
Disentangling the explanatory factors in complex data is a promising approach for generalizable and data-efficient representation learning. While a variety of quantitative metrics for learning and evaluating disentangled representations have been proposed, it remains unclear what properties these metrics truly quantify. In this work, we establish algebraic relationships between logical definitions and quantitative metrics to derive theoretically grounded disentanglement metrics. Concretely, we introduce a compositional approach for converting a higher-order predicate into a real-valued quantity by replacing (i) equality with a strict premetric, (ii) the Heyting algebra of binary truth values with a quantale of continuous values, and (iii) quantifiers with aggregators. The metrics induced by logical definitions have strong theoretical guarantees, and some of them are easily differentiable and can be used as learning objectives directly. Finally, we empirically demonstrate the effectiveness of the proposed metrics by isolating different aspects of disentangled representations.
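As a concrete illustration of the recipe in this abstract, here is a hedged toy sketch (function names and the choice of premetric/aggregator are assumptions for illustration, not the paper's code): equality becomes a strict premetric, truth values become nonnegative costs with 0 meaning "true", and the universal quantifier becomes a worst-case aggregator over samples.

```python
import numpy as np

def d(a, b):
    # strict premetric standing in for equality: d(a, b) = 0 iff a == b
    return np.abs(a - b)

def forall(costs):
    # universal quantifier replaced by a worst-case aggregator
    return float(np.max(costs))

# Logical property: forall x, x': f(x) = f(x')  ("f is constant").
# Quantitative version: aggregate d(f(x), f(x')) over all sample pairs;
# the score is 0 exactly when the logical property holds.
rng = np.random.default_rng(0)
fx = rng.normal(size=50)                       # sampled values of f
score = forall(d(fx[:, None], fx[None, :]))    # pairwise costs, then max
print(score)                                   # > 0: the property is violated
```

A differentiable variant would swap `max` for a soft aggregator such as a mean or log-sum-exp, matching the abstract's remark that some induced metrics can serve directly as learning objectives.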
Enriching Disentanglement: From Logical Definitions to Quantitative Metrics
[ "Yivan Zhang", "Masashi Sugiyama" ]
NeurIPS.cc/2024/Conference
2305.11512
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=tuiqq1G8I5
@inproceedings{ murti2024discedit, title={Dis{CE}dit: Model Editing by Identifying Discriminative Components}, author={Chaitanya Murti and Chiranjib Bhattacharyya}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=tuiqq1G8I5} }
Model editing is a growing area of research that is particularly valuable in contexts where modifying key model components, like neurons or filters, can significantly impact the model’s performance. The key challenge lies in identifying important components useful to the model’s predictions. We apply model editing to address two active areas of research, Structured Pruning and Selective Class Forgetting. In this work, we adopt a distributional approach to the problem of identifying important components, leveraging the recently proposed discriminative filters hypothesis, which states that well-trained (convolutional) models possess discriminative filters that are essential to prediction. To do so, we define discriminative ability in terms of the Bayes error rate associated with the feature distributions, which is equivalent to computing the Total Variation (TV) distance between the distributions. However, computing the TV distance is intractable, motivating us to derive novel witness function-based lower bounds on the TV distance that require no assumptions on the underlying distributions; these bounds generalize prior work such as Murti et al. [39], which relied on unrealistic Gaussianity assumptions on the feature distributions. With these bounds, we are able to discover critical subnetworks responsible for classwise predictions, and derive DISCEDIT-SP and DISCEDIT-U, algorithms for structured pruning (requiring no access to the training data or loss function) and selective forgetting, respectively. We apply DISCEDIT-U to selective class forgetting on models trained on CIFAR10 and CIFAR100, and we show that on average, we can reduce accuracy on a single class by over 80% with a minimal reduction in test accuracy on the remaining classes. Similarly, on structured pruning problems, we obtain 40.8% sparsity on ResNet50 on ImageNet, with only a 2.6% drop in accuracy with minimal fine-tuning.
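For orientation, the variational form of total variation that witness-function lower bounds instantiate is standard (the paper's specific bounds may be tighter or differently parameterized):

$$
\mathrm{TV}(P, Q) = \sup_{\|f\|_\infty \le 1} \tfrac{1}{2}\bigl|\mathbb{E}_{P}[f] - \mathbb{E}_{Q}[f]\bigr| \;\ge\; \tfrac{1}{2}\bigl|\mathbb{E}_{P}[g] - \mathbb{E}_{Q}[g]\bigr|
$$

for any fixed witness $g$ with $\|g\|_\infty \le 1$. The right-hand side is estimable from samples with no assumptions on $P$ or $Q$, which is what lets such bounds replace the Gaussianity assumptions of earlier work.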
DisCEdit: Model Editing by Identifying Discriminative Components
[ "Chaitanya Murti", "Chiranjib Bhattacharyya" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=tu1oC7zHGW
@inproceedings{ zhang2024unveiling, title={Unveiling the Tapestry of Consistency in Large Vision-Language Models}, author={Yuan Zhang and Fei xiao and Tao Huang and Chun-Kai Fan and Hongyuan Dong and Jiawen Li and Jiacong Wang and Kuan Cheng and Shanghang Zhang and Haoyuan Guo}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=tu1oC7zHGW} }
Large vision-language models (LVLMs) have recently achieved rapid progress, exhibiting great perception and reasoning abilities concerning visual information. However, when faced with prompts over solution spaces of different sizes, LVLMs do not always give consistent answers regarding the same knowledge point. This inconsistency of answers between different solution spaces is prevalent in LVLMs and erodes trust. To this end, we provide the multi-modal benchmark ConBench to intuitively analyze how LVLMs perform when the solution space of a prompt revolves around a knowledge point. Based on the ConBench tool, we are the first to reveal this tapestry, with the following findings: (1) In the discriminative realm, the larger the solution space of the prompt, the lower the accuracy of the answers. (2) Relating the discriminative and generative realms, the accuracy of a discriminative question type exhibits a strong positive correlation with its Consistency with the caption. (3) Compared to open-source models, closed-source models exhibit a pronounced bias advantage in terms of Consistency. Finally, we improve the consistency of LVLMs via trigger-based diagnostic refinement, indirectly improving the performance of their captions. We hope this paper helps the research community better evaluate their models and encourages future advances in the consistency domain.
Unveiling the Tapestry of Consistency in Large Vision-Language Models
[ "Yuan Zhang", "Fei xiao", "Tao Huang", "Chun-Kai Fan", "Hongyuan Dong", "Jiawen Li", "Jiacong Wang", "Kuan Cheng", "Shanghang Zhang", "Haoyuan Guo" ]
NeurIPS.cc/2024/Conference
2405.14156
[ "https://github.com/foundation-multimodal-models/conbench" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ttUXtV2YrA
@inproceedings{ zhu2024revisiting, title={Revisiting the Integration of Convolution and Attention for Vision Backbone}, author={Lei Zhu and Xinjiang Wang and Wayne Zhang and Rynson W. H. Lau}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ttUXtV2YrA} }
Convolutions (Convs) and multi-head self-attentions (MHSAs) are typically considered alternatives to each other for building vision backbones. Although some works try to integrate both, they apply the two operators simultaneously at the finest pixel granularity. With Convs responsible for per-pixel feature extraction already, the question is whether we still need to include the heavy MHSAs at such a fine-grained level. In fact, this is the root cause of the scalability issue w.r.t. the input resolution for vision transformers. To address this important problem, we propose in this work to use MHSAs and Convs in parallel \textbf{at different granularity levels} instead. Specifically, in each layer, we use two different ways to represent an image: a fine-grained regular grid and a coarse-grained set of semantic slots. We apply different operations to these two representations: Convs to the grid for local features, and MHSAs to the slots for global features. A pair of fully differentiable soft clustering and dispatching modules is introduced to bridge the grid and set representations, thus enabling local-global fusion. Through extensive experiments on various vision tasks, we empirically verify the potential of the proposed integration scheme, named \textit{GLMix}: by offloading the burden of fine-grained features to light-weight Convs, it is sufficient to use MHSAs in a few (e.g., 64) semantic slots to match the performance of recent state-of-the-art backbones, while being more efficient. Our visualization results also demonstrate that the soft clustering module produces a meaningful semantic grouping effect with only IN1k classification supervision, which may induce better interpretability and inspire new weakly-supervised semantic segmentation approaches. Code will be available at \url{https://github.com/rayleizhu/GLMix}.
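A hedged sketch of the grid-to-slot scheme this abstract describes (shapes, layer choices, and the lack of assignment normalization are assumptions for illustration, not the authors' implementation):

```python
import torch
import torch.nn as nn

class GridSlotBlock(nn.Module):
    def __init__(self, dim=64, num_slots=64, heads=4):
        super().__init__()
        self.local = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)  # Convs on the fine grid
        self.slots = nn.Parameter(torch.randn(num_slots, dim))      # learnable semantic slots
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                        # x: (B, C, H, W)
        B, C, H, W = x.shape
        local = self.local(x)                    # fine-grained branch
        feats = x.flatten(2).transpose(1, 2)     # (B, HW, C)
        assign = (feats @ self.slots.t()).softmax(dim=-1)   # soft clustering: pixel -> slot
        slots = assign.transpose(1, 2) @ feats   # (B, K, C): pool pixels into slots
        slots, _ = self.attn(slots, slots, slots)  # MHSA on a few slots (coarse, global)
        back = (assign @ slots).transpose(1, 2).reshape(B, C, H, W)  # dispatch back to grid
        return local + back                      # local-global fusion
```

Because the MHSA runs over only `num_slots` tokens rather than all `H*W` pixels, its cost is independent of input resolution, which is the scalability point the abstract makes.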
Revisiting the Integration of Convolution and Attention for Vision Backbone
[ "Lei Zhu", "Xinjiang Wang", "Wayne Zhang", "Rynson W. H. Lau" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ttLcbEkaj6
@inproceedings{ lim2024airsketch, title={AirSketch: Generative Motion to Sketch}, author={Hui Xian Grace Lim and Xuanming Cui and Yogesh S Rawat and Ser-Nam Lim}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ttLcbEkaj6} }
Illustration is a fundamental mode of human expression and communication. Certain types of motion that accompany speech can provide this illustrative mode of communication. While Augmented and Virtual Reality technologies (AR/VR) have introduced tools for producing drawings with hand motions (air drawing), they typically require costly hardware and additional digital markers, thereby limiting their accessibility and portability. Furthermore, air drawing demands considerable skill to achieve aesthetic results. To address these challenges, we introduce the concept of AirSketch, aimed at generating faithful and visually coherent sketches directly from hand motions, eliminating the need for complicated headsets or markers. We devise a simple augmentation-based self-supervised training procedure, enabling a controllable image diffusion model to learn to translate from highly noisy hand tracking images to clean, aesthetically pleasing sketches, while preserving the essential visual cues from the original tracking data. We present two air drawing datasets to study this problem. Our findings demonstrate that beyond producing photo-realistic images from precise spatial inputs, controllable image diffusion can effectively produce a refined, clear sketch from a noisy input. Our work serves as an initial step towards marker-less air drawing and reveals distinct applications of controllable diffusion models to AirSketch and AR/VR in general.
AirSketch: Generative Motion to Sketch
[ "Hui Xian Grace Lim", "Xuanming Cui", "Yogesh S Rawat", "Ser-Nam Lim" ]
NeurIPS.cc/2024/Conference
2407.08906
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=tsIKrvexBd
@inproceedings{ wu2024leveraging, title={Leveraging Tumor Heterogeneity: Heterogeneous Graph Representation Learning for Cancer Survival Prediction in Whole Slide Images}, author={Junxian Wu and Xinyi Ke and Xiaoming Jiang and Huanwen Wu and Youyong Kong and Lizhi Shao}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=tsIKrvexBd} }
Survival prediction is a significant challenge in cancer management. The tumor microenvironment is a highly sophisticated ecosystem consisting of cancer cells, immune cells, endothelial cells, fibroblasts, nerves, and extracellular matrix. The intratumor heterogeneity and the interactions across multiple tissue types profoundly impact the prognosis. However, current methods often neglect the fact that the contribution to prognosis differs across tissue types. In this paper, we propose ProtoSurv, a novel heterogeneous graph model for WSI survival prediction. The learning process of ProtoSurv is not only driven by data but also incorporates pathological domain knowledge, including the awareness of tissue heterogeneity, the emphasis on prior knowledge of prognostic-related tissues, and the depiction of spatial interaction across multiple tissues. We validate ProtoSurv across five different cancer types from TCGA (i.e., BRCA, LGG, LUAD, COAD and PAAD), and demonstrate the superiority of our method over the state-of-the-art methods.
Leveraging Tumor Heterogeneity: Heterogeneous Graph Representation Learning for Cancer Survival Prediction in Whole Slide Images
[ "Junxian Wu", "Xinyi Ke", "Xiaoming Jiang", "Huanwen Wu", "Youyong Kong", "Lizhi Shao" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=tnh4LK72yj
@inproceedings{ yi2024get, title={Get Rid of Isolation: A Continuous Multi-task Spatio-Temporal Learning Framework}, author={Zhongchao Yi and Zhengyang Zhou and Qihe Huang and Yanjiang Chen and Liheng Yu and Xu Wang and Yang Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=tnh4LK72yj} }
Spatiotemporal learning has become a pivotal technique to enable urban intelligence. Traditional spatiotemporal models mostly focus on a specific task by assuming the same distribution between training and testing sets. However, given that urban systems are usually dynamic and multi-sourced with imbalanced data distributions, current task-specific models fail to generalize to new urban conditions and adapt to new domains without explicitly modeling interdependencies across various dimensions and types of urban data. To this end, we argue that it is essential to propose a Continuous Multi-task Spatio-Temporal learning framework (CMuST) to empower collective urban intelligence, which reforms urban spatiotemporal learning from single-domain to cooperative multi-dimensional, multi-task learning. Specifically, CMuST proposes a new multi-dimensional spatiotemporal interaction network (MSTI) to allow cross-interactions between context and main observations as well as self-interactions within spatial and temporal aspects to be exposed, which is also the core for capturing task-level commonality and personalization. To ensure continuous task learning, a novel Rolling Adaptation training scheme (RoAda) is devised, which not only preserves task uniqueness by constructing data summarization-driven task prompts, but also harnesses correlated patterns among tasks by iterative model behavior modeling. We further establish a benchmark of three cities for multi-task spatiotemporal learning, and empirically demonstrate the superiority of CMuST via extensive evaluations on these datasets. Impressive improvements over existing SOTA methods are achieved on both few-shot streaming data and new-domain tasks. Code is available at https://github.com/DILab-USTCSZ/CMuST.
Get Rid of Isolation: A Continuous Multi-task Spatio-Temporal Learning Framework
[ "Zhongchao Yi", "Zhengyang Zhou", "Qihe Huang", "Yanjiang Chen", "Liheng Yu", "Xu Wang", "Yang Wang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=tnQbciDjVf
@inproceedings{ guo2024transagent, title={TransAgent: Transfer Vision-Language Foundation Models with Heterogeneous Agent Collaboration}, author={Yiwei Guo and Shaobin Zhuang and Kunchang Li and Yu Qiao and Yali Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=tnQbciDjVf} }
Vision-language foundation models (such as CLIP) have recently shown their power in transfer learning, owing to large-scale image-text pre-training. However, target domain data in the downstream tasks can be highly different from the pre-training phase, which makes it hard for such a single model to generalize well. Alternatively, there exists a wide range of expert models that contain diversified vision and/or language knowledge pre-trained on different modalities, tasks, networks, and datasets. Unfortunately, these models are "isolated agents" with heterogeneous structures, and how to integrate their knowledge for generalizing CLIP-like models has not been fully explored. To bridge this gap, we propose a general and concise TransAgent framework, which transports the knowledge of the isolated agents in a unified manner, and effectively guides CLIP to generalize with multi-source knowledge distillation. With such a distinct framework, we flexibly collaborate with 11 heterogeneous agents to empower vision-language foundation models, without further cost in the inference phase. Finally, our TransAgent achieves state-of-the-art performance on 11 visual recognition datasets. Under the same low-shot setting, it outperforms the popular CoOp by around 10\% on average, and by 20\% on EuroSAT, which contains large domain shifts.
TransAgent: Transfer Vision-Language Foundation Models with Heterogeneous Agent Collaboration
[ "Yiwei Guo", "Shaobin Zhuang", "Kunchang Li", "Yu Qiao", "Yali Wang" ]
NeurIPS.cc/2024/Conference
2410.12183
[ "https://github.com/markywg/transagent" ]
https://huggingface.co/papers/2410.12183
3
3
2
5
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=tmX1AUmkl6
@inproceedings{ liao2024evaluation, title={Evaluation of Text-to-Video Generation Models: A Dynamics Perspective}, author={Mingxiang Liao and Hannan Lu and Qixiang Ye and Wangmeng Zuo and Fang Wan and Tianyu Wang and Yuzhong Zhao and Jingdong Wang and Xinyu Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=tmX1AUmkl6} }
Comprehensive and constructive evaluation protocols play an important role when developing sophisticated text-to-video (T2V) generation models. Existing evaluation protocols primarily focus on temporal consistency and content continuity, yet largely ignore the dynamics of video content. Such dynamics are an essential dimension, measuring the visual vividness of video content and its faithfulness to text prompts. In this study, we propose an effective evaluation protocol, termed DEVIL, which centers on the dynamics dimension to evaluate T2V generation models, as well as improving existing evaluation metrics. In practice, we define a set of dynamics scores corresponding to multiple temporal granularities, and a new benchmark of text prompts under multiple dynamics grades. Upon this text prompt benchmark, we assess the generation capacity of T2V models, characterized by metrics of dynamics ranges and T2V alignment. Moreover, we analyze the relevance of existing metrics to dynamics metrics, improving them from the perspective of dynamics. Experiments show that DEVIL evaluation metrics enjoy up to about 90\% consistency with human ratings, demonstrating the potential to advance T2V generation models.
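One simple way to realize "dynamics scores at multiple temporal granularities" is mean inter-frame change at several strides; this is an illustrative assumption, not the official DEVIL metric:

```python
import numpy as np

def dynamics_scores(video, strides=(1, 4, 16)):
    """video: (T, H, W, C) array in [0, 1]; returns one score per temporal granularity."""
    scores = {}
    for s in strides:
        diffs = np.abs(video[s:] - video[:-s])   # change over a horizon of s frames
        scores[s] = float(diffs.mean())
    return scores

rng = np.random.default_rng(0)
video = rng.random((32, 8, 8, 3))                # toy 32-frame clip
print(dynamics_scores(video))
```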
Evaluation of Text-to-Video Generation Models: A Dynamics Perspective
[ "Mingxiang Liao", "Hannan Lu", "Qixiang Ye", "Wangmeng Zuo", "Fang Wan", "Tianyu Wang", "Yuzhong Zhao", "Jingdong Wang", "Xinyu Zhang" ]
NeurIPS.cc/2024/Conference
2407.01094
[ "https://github.com/mingxiangl/devil" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=tmQH8prqLc
@inproceedings{ jiang2024adaptive, title={Adaptive Variance Reduction for Stochastic Optimization under Weaker Assumptions}, author={Wei Jiang and Sifan Yang and Yibo Wang and Lijun Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=tmQH8prqLc} }
This paper explores adaptive variance reduction methods for stochastic optimization based on the STORM technique. Existing adaptive extensions of STORM rely on strong assumptions like bounded gradients and bounded function values, or suffer an additional $\mathcal{O}(\log T)$ term in the convergence rate. To address these limitations, we introduce a novel adaptive STORM method that achieves an optimal convergence rate of $\mathcal{O}(T^{-1/3})$ for non-convex functions with our newly designed learning rate strategy. Compared with existing approaches, our method requires weaker assumptions and attains the optimal convergence rate without the additional $\mathcal{O}(\log T)$ term. We also extend the proposed technique to stochastic compositional optimization, obtaining the same optimal rate of $\mathcal{O}(T^{-1/3})$. Furthermore, we investigate the non-convex finite-sum problem and develop another innovative adaptive variance reduction method that achieves an optimal convergence rate of $\mathcal{O}(n^{1/4} T^{-1/2})$, where $n$ represents the number of component functions. Numerical experiments across various tasks validate the effectiveness of our method.
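For readers unfamiliar with STORM, a minimal sketch of its recursive variance-reduced estimator follows; the step-size schedule here is a generic $\mathcal{O}(t^{-1/3})$ placeholder, not the paper's adaptive rule, and in practice both gradients in the update share the same fresh minibatch:

```python
import numpy as np

def storm(grad, x0, T=1000, eta0=0.1, a=0.5):
    x = x0.copy()
    d = grad(x)                              # d_1: plain stochastic gradient
    for t in range(1, T):
        eta = eta0 / (t + 1) ** (1.0 / 3.0)  # placeholder O(t^{-1/3}) step size
        x_new = x - eta * d
        # recursive estimator: d_{t+1} = g(x_{t+1}) + (1 - a) * (d_t - g(x_t))
        d = grad(x_new) + (1.0 - a) * (d - grad(x))
        x = x_new
    return x

f_grad = lambda x: 2.0 * (x - 3.0)           # deterministic toy gradient for the demo
print(storm(f_grad, np.zeros(1))[0])         # approaches the minimizer 3.0
```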
Adaptive Variance Reduction for Stochastic Optimization under Weaker Assumptions
[ "Wei Jiang", "Sifan Yang", "Yibo Wang", "Lijun Zhang" ]
NeurIPS.cc/2024/Conference
2406.01959
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=tk0uaRynhH
@inproceedings{ kerrigan2024dynamic, title={Dynamic Conditional Optimal Transport through Simulation-Free Flows}, author={Gavin Kerrigan and Giosue Migliorini and Padhraic Smyth}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=tk0uaRynhH} }
We study the geometry of conditional optimal transport (COT) and prove a dynamic formulation which generalizes the Benamou-Brenier Theorem. Equipped with these tools, we propose a simulation-free flow-based method for conditional generative modeling. Our method couples an arbitrary source distribution to a specified target distribution through a triangular COT plan, and a conditional generative model is obtained by approximating the geodesic path of measures induced by this COT plan. Our theory and methods are applicable in infinite-dimensional settings, making them well suited for a wide class of Bayesian inverse problems. Empirically, we demonstrate that our method is competitive on several challenging conditional generation tasks, including an infinite-dimensional inverse problem.
Dynamic Conditional Optimal Transport through Simulation-Free Flows
[ "Gavin Kerrigan", "Giosue Migliorini", "Padhraic Smyth" ]
NeurIPS.cc/2024/Conference
2404.04240
[ "https://github.com/gavinkerrigan/cot_fm" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=tj8nsfxi5r
@inproceedings{ wang2024from, title={From News to Forecast: Integrating Event Analysis in {LLM}-Based Time Series Forecasting with Reflection}, author={Xinlei Wang and Maike Feng and Jing Qiu and Jinjin Gu and Junhua Zhao}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=tj8nsfxi5r} }
This paper introduces a novel approach that leverages Large Language Models (LLMs) and Generative Agents to enhance time series forecasting by reasoning across both text and time series data. With language as a medium, our method adaptively integrates social events into forecasting models, aligning news content with time series fluctuations to provide richer insights. Specifically, we utilize LLM-based agents to iteratively filter out irrelevant news and employ human-like reasoning to evaluate predictions. This enables the model to analyze complex events, such as unexpected incidents and shifts in social behavior, and continuously refine the selection logic of news and the robustness of the agent's output. By integrating selected news events with time series data, we fine-tune a pre-trained LLM to predict sequences of digits in time series. The results demonstrate significant improvements in forecasting accuracy, suggesting a potential paradigm shift in time series forecasting through the effective utilization of unstructured news data.
From News to Forecast: Integrating Event Analysis in LLM-Based Time Series Forecasting with Reflection
[ "Xinlei Wang", "Maike Feng", "Jing Qiu", "Jinjin Gu", "Junhua Zhao" ]
NeurIPS.cc/2024/Conference
2409.17515
[ "https://github.com/ameliawong1996/From_News_to_Forecast" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=thUf6ZBlPp
@inproceedings{ cai2024eigenvi, title={Eigen{VI}: score-based variational inference with orthogonal function expansions}, author={Diana Cai and Chirag Modi and Charles Margossian and Robert M. Gower and David Blei and Lawrence K. Saul}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=thUf6ZBlPp} }
We develop EigenVI, an eigenvalue-based approach for black-box variational inference (BBVI). EigenVI constructs its variational approximations from orthogonal function expansions. For distributions over $\mathbb{R}^D$, the lowest order term in these expansions provides a Gaussian variational approximation, while higher-order terms provide a systematic way to model non-Gaussianity. These approximations are flexible enough to model complex distributions (multimodal, asymmetric), but they are simple enough that one can calculate their low-order moments and draw samples from them. EigenVI can also model other types of random variables (e.g., nonnegative, bounded) by constructing variational approximations from different families of orthogonal functions. Within these families, EigenVI computes the variational approximation that best matches the score function of the target distribution by minimizing a stochastic estimate of the Fisher divergence. Notably, this optimization reduces to solving a minimum eigenvalue problem, so that EigenVI effectively sidesteps the iterative gradient-based optimizations that are required for many other BBVI algorithms. (Gradient-based methods can be sensitive to learning rates, termination criteria, and other tunable hyperparameters.) We use EigenVI to approximate a variety of target distributions, including a benchmark suite of Bayesian models from posteriordb. On these distributions, we find that EigenVI is more accurate than existing methods for Gaussian BBVI.
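The abstract's central computational claim is easy to state concretely: minimizing a quadratic form $\alpha^\top M \alpha$ over unit-norm expansion weights $\alpha$ is solved exactly by the eigenvector of $M$ with the smallest eigenvalue. The construction of $M$ below is a stand-in, not EigenVI's Fisher-divergence estimator:

```python
import numpy as np

def fit_min_eigvec(M):
    # argmin_{||a|| = 1} a^T M a for symmetric M: smallest-eigenvalue eigenvector
    eigvals, eigvecs = np.linalg.eigh(M)     # eigh returns eigenvalues in ascending order
    return eigvecs[:, 0]

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
M = A @ A.T                                  # any symmetric PSD matrix works for the demo
alpha = fit_min_eigvec(M)
print(alpha @ M @ alpha, np.linalg.eigvalsh(M)[0])   # attained value equals min eigenvalue
```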
EigenVI: score-based variational inference with orthogonal function expansions
[ "Diana Cai", "Chirag Modi", "Charles Margossian", "Robert M. Gower", "David Blei", "Lawrence K. Saul" ]
NeurIPS.cc/2024/Conference
2410.24054
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=teVxVdy8R2
@inproceedings{ guo2024prediction, title={Prediction with Action: Visual Policy Learning via Joint Denoising Process}, author={Yanjiang Guo and Yucheng Hu and Jianke Zhang and Yen-Jen Wang and Xiaoyu Chen and Chaochao Lu and Jianyu Chen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=teVxVdy8R2} }
Diffusion models have demonstrated remarkable capabilities in image generation tasks, including image editing and video creation, representing a good understanding of the physical world. Along another line, diffusion models have also shown promise in robotic control tasks by denoising actions, an approach known as diffusion policy. Although the diffusion generative model and diffusion policy exhibit distinct capabilities, namely image prediction and robotic action, they technically follow a similar denoising process. In robotic tasks, the ability to predict future images and generate actions is highly correlated since they share the same underlying dynamics of the physical world. Building on this insight, we introduce \textbf{PAD}, a novel visual policy learning framework that unifies image \textbf{P}rediction and robot \textbf{A}ction within a joint \textbf{D}enoising process. Specifically, PAD utilizes Diffusion Transformers (DiT) to seamlessly integrate images and robot states, enabling the simultaneous prediction of future images and robot actions. Additionally, PAD supports co-training on both robotic demonstrations and large-scale video datasets and can be easily extended to other robotic modalities, such as depth images. PAD outperforms previous methods, achieving a significant 38.9\% relative improvement on the full Metaworld benchmark, by utilizing a single text-conditioned visual policy within a data-efficient imitation learning setting. Furthermore, PAD demonstrates superior generalization to unseen tasks in real-world robot manipulation settings, with a 28.0\% success rate increase compared to the strongest baseline. Videos of PAD can be found at https://sites.google.com/view/pad-paper
Prediction with Action: Visual Policy Learning via Joint Denoising Process
[ "Yanjiang Guo", "Yucheng Hu", "Jianke Zhang", "Yen-Jen Wang", "Xiaoyu Chen", "Chaochao Lu", "Jianyu Chen" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=te6VagJf6G
@inproceedings{ weir2024learning, title={Learning to Reason via Program Generation, Emulation, and Search}, author={Nathaniel Weir and Muhammad Khalifa and Linlu Qiu and Orion Weller and Peter Clark}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=te6VagJf6G} }
Program synthesis with language models (LMs) has unlocked a large set of reasoning abilities; code-tuned LMs have proven adept at generating programs that solve a wide variety of algorithmic symbolic manipulation tasks (e.g., word concatenation). However, not all reasoning tasks are easily expressible as code, e.g., tasks involving commonsense reasoning, moral decision-making, and sarcasm understanding. Our goal is to extend an LM’s program synthesis skills to such tasks and evaluate the results via pseudo-programs, namely Python programs where some leaf function calls are left undefined. To that end, we propose Code Generation and Emulated EXecution (COGEX). COGEX works by (1) training LMs to generate pseudo-programs, (2) teaching them to emulate their generated programs’ execution, including those leaf functions, allowing the LM’s knowledge to fill in the execution gaps, and (3) using them to search over many programs to find an optimal one. To adapt the COGEX model to a new task, we introduce a method for performing program search to find a single program whose pseudo-execution yields optimal performance when applied to all the instances of a given dataset. We show that our approach yields large improvements compared to standard in-context learning approaches on a battery of tasks, both algorithmic and soft reasoning. This result thus demonstrates that code synthesis can be applied to a much broader class of problems than previously considered.
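A toy pseudo-program in the style this abstract describes (names and stub values are illustrative assumptions). In COGEX the leaf calls would be left undefined and their execution emulated by the LM; stubs are included here only so the sketch runs end to end:

```python
def recall_commonsense(question):
    return ["ice cream melts quickly in heat"]   # stub for an LM-emulated leaf call

def weigh_evidence(facts):
    return 0.1                                   # stub for an LM-emulated leaf call

def answer(question: str) -> str:
    facts = recall_commonsense(question)
    judgement = weigh_evidence(facts)
    return "yes" if judgement > 0.5 else "no"

print(answer("Would ice cream survive a day in the sun?"))   # -> "no"
```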
Learning to Reason via Program Generation, Emulation, and Search
[ "Nathaniel Weir", "Muhammad Khalifa", "Linlu Qiu", "Orion Weller", "Peter Clark" ]
NeurIPS.cc/2024/Conference
2405.16337
[ "https://github.com/nweir127/cogex" ]
https://huggingface.co/papers/2405.16337
3
0
0
5
[]
[ "mkhalifa/CoGEX" ]
[]
[]
[ "mkhalifa/CoGEX" ]
[]
1
poster
null
https://openreview.net/forum?id=tdZLKY9usl
@inproceedings{ duan2024reimagining, title={Reimagining Mutual Information for Enhanced Defense against Data Leakage in Collaborative Inference}, author={Lin Duan and Jingwei Sun and Jinyuan Jia and Yiran Chen and Maria Gorlatova}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=tdZLKY9usl} }
Edge-cloud collaborative inference empowers resource-limited IoT devices to support deep learning applications without disclosing their raw data to the cloud server, thus protecting users' data. Nevertheless, prior research has shown that collaborative inference still results in the exposure of inputs and predictions from edge devices. To defend against such data leakage in collaborative inference, we introduce InfoScissors, a defense strategy designed to reduce the mutual information between a model's intermediate outcomes and the device's inputs and predictions. We evaluate our defense on several datasets in the context of diverse attacks. Besides the empirical comparison, we provide a theoretical analysis of the inadequacies of recent defense strategies that also utilize mutual information, particularly focusing on those based on the Variational Information Bottleneck (VIB) approach. We illustrate the superiority of our method and offer a theoretical analysis of it.
Reimagining Mutual Information for Enhanced Defense against Data Leakage in Collaborative Inference
[ "Lin Duan", "Jingwei Sun", "Jinyuan Jia", "Yiran Chen", "Maria Gorlatova" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=tb1MlJCY5g
@inproceedings{ pang2024kalm, title={{KALM}: Knowledgeable Agents by Offline Reinforcement Learning from Large Language Model Rollouts}, author={Jing-Cheng Pang and Si-Hang Yang and Kaiyuan Li and Jiaji Zhang and Xiong-Hui Chen and Nan Tang and Yang Yu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=tb1MlJCY5g} }
Reinforcement learning (RL) traditionally trains agents using interaction data, which limits their capabilities to the scope of the training data. To create more knowledgeable agents, leveraging knowledge from large language models (LLMs) has emerged as a promising approach. Despite various attempts to combine LLMs with RL, there is commonly a semantic gap between action signals and LLM tokens, which hinders their integration. This paper introduces a novel approach, KALM (Knowledgeable Agents from Language Model Rollouts), to learn knowledgeable agents by bridging this gap. KALM extracts knowledge from LLMs in the form of imaginary rollouts, from which agents can learn through offline RL. To overcome the limitation that LLMs are inherently text-based and may be incompatible with numerical environmental data, KALM fine-tunes the LLM to perform bidirectional translation between textual goals and rollouts. This process enables the LLM to understand the environment better, facilitating the generation of meaningful rollouts. Experiments on robotic manipulation tasks demonstrate that KALM allows agents to rephrase complex goals and tackle novel tasks requiring new optimal behaviors. KALM achieves a 46% success rate in completing 1400 various novel goals, significantly outperforming the 26% success rate of baseline methods. Project homepage: https://kalmneurips2024.github.io.
KALM: Knowledgeable Agents by Offline Reinforcement Learning from Large Language Model Rollouts
[ "Jing-Cheng Pang", "Si-Hang Yang", "Kaiyuan Li", "Jiaji Zhang", "Xiong-Hui Chen", "Nan Tang", "Yang Yu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=tacb2bFZcm
@inproceedings{ zhou2024ups, title={{UPS}: Unified Projection Sharing for Lightweight Single-Image Super-resolution and Beyond}, author={Kun Zhou and Xinyu Lin and Zhonghang LIU and Xiaoguang Han and Jiangbo Lu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=tacb2bFZcm} }
To date, transformer-based frameworks have demonstrated impressive results in single-image super-resolution (SISR). However, under practical lightweight scenarios, the complex interaction of deep image feature extraction and similarity modeling limits the performance of these methods, since they require simultaneous layer-specific optimization of both tasks. In this work, we introduce a novel Unified Projection Sharing algorithm (UPS) to decouple the feature extraction and similarity modeling, achieving notable performance. To do this, we establish a unified projection space, defined by a learnable projection matrix, for similarity calculation across all self-attention layers. As a result, deep image feature extraction remains a per-layer optimization, while similarity modeling is carried out by projecting these image features onto the shared projection space. Extensive experiments demonstrate that our proposed UPS achieves state-of-the-art performance relative to leading lightweight SISR methods, as verified by various popular benchmarks. Moreover, our unified optimized projection space exhibits encouraging robustness on unseen data (degraded and depth images). Finally, UPS also demonstrates promising results across various image restoration tasks, including real-world and classic SISR, image denoising, and image deblocking.
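A hedged sketch of the shared-projection idea (head structure, normalization, and module names are assumptions, not the paper's code): one learnable projection defines the similarity space for every attention layer, while the feature path stays layer-specific:

```python
import torch
import torch.nn as nn

class SharedProjAttention(nn.Module):
    def __init__(self, dim, shared_proj: nn.Linear):
        super().__init__()
        self.proj = shared_proj            # the SAME module instance in every layer
        self.v = nn.Linear(dim, dim)       # per-layer feature extraction

    def forward(self, x):                  # x: (B, N, C)
        q = k = self.proj(x)               # similarity computed in the unified space
        attn = (q @ k.transpose(-2, -1)) / q.shape[-1] ** 0.5
        return attn.softmax(dim=-1) @ self.v(x)

dim = 32
shared = nn.Linear(dim, dim, bias=False)   # the unified projection matrix
layers = nn.ModuleList([SharedProjAttention(dim, shared) for _ in range(4)])
x = torch.randn(2, 64, dim)
for layer in layers:
    x = x + layer(x)                       # residual stack; all layers share `shared`
```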
UPS: Unified Projection Sharing for Lightweight Single-Image Super-resolution and Beyond
[ "Kun Zhou", "Xinyu Lin", "Zhonghang LIU", "Xiaoguang Han", "Jiangbo Lu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster