bibtex_url
null
proceedings
stringlengths
42
42
bibtext
stringlengths
197
848
abstract
stringlengths
303
3.45k
title
stringlengths
10
159
authors
sequencelengths
1
34
id
stringclasses
44 values
arxiv_id
stringlengths
0
10
GitHub
sequencelengths
1
1
paper_page
stringclasses
899 values
n_linked_authors
int64
-1
13
upvotes
int64
-1
109
num_comments
int64
-1
13
n_authors
int64
-1
92
Models
sequencelengths
0
100
Datasets
sequencelengths
0
19
Spaces
sequencelengths
0
100
old_Models
sequencelengths
0
100
old_Datasets
sequencelengths
0
19
old_Spaces
sequencelengths
0
100
paper_page_exists_pre_conf
int64
0
1
type
stringclasses
2 values
null
https://openreview.net/forum?id=zzOOqD6R1b
@inproceedings{ greenblatt2024stresstesting, title={Stress-Testing Capability Elicitation With Password-Locked Models}, author={Ryan Greenblatt and Fabien Roger and Dmitrii Krasheninnikov and David Krueger}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zzOOqD6R1b} }
To determine the safety of large language models (LLMs), AI developers must be able to assess their dangerous capabilities. But simple prompting strategies often fail to elicit an LLM’s full capabilities. One way to elicit capabilities more robustly is to fine-tune the LLM to complete the task. In this paper, we investigate the conditions under which fine-tuning-based elicitation suffices to elicit capabilities. To do this, we introduce password-locked models, LLMs fine-tuned such that some of their capabilities are deliberately hidden. Specifically, these LLMs are trained to exhibit these capabilities only when a password is present in the prompt, and to imitate a much weaker LLM otherwise. Password-locked models enable a novel method of evaluating capabilities elicitation methods, by testing whether these password-locked capabilities can be elicited without using the password. We find that a few high-quality demonstrations are often sufficient to fully elicit password-locked capabilities. More surprisingly, fine-tuning can elicit other capabilities that have been locked using the same password, or even different passwords. Furthermore, when only evaluations, and not demonstrations, are available, approaches like reinforcement learning are still often able to elicit capabilities. Overall, our findings suggest that fine-tuning is an effective method of eliciting hidden capabilities of current models but may be unreliable when high-quality demonstrations are not available, e.g., as may be the case when models’ (hidden) capabilities exceed those of human demonstrators.
Stress-Testing Capability Elicitation With Password-Locked Models
[ "Ryan Greenblatt", "Fabien Roger", "Dmitrii Krasheninnikov", "David Krueger" ]
NeurIPS.cc/2024/Conference
2405.19550
[ "https://github.com/FabienRoger/sandbagging" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zxSWIdyW3A
@inproceedings{ wang2024cooperative, title={Cooperative Hardware-Prompt Learning for Snapshot Compressive Imaging}, author={Jiamian Wang and Zongliang Wu and Yulun Zhang and Xin Yuan and Tao Lin and ZHIQIANG TAO}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zxSWIdyW3A} }
Existing reconstruction models in snapshot compressive imaging systems (SCI) are trained with a single well-calibrated hardware instance, making their perfor- mance vulnerable to hardware shifts and limited in adapting to multiple hardware configurations. To facilitate cross-hardware learning, previous efforts attempt to directly collect multi-hardware data and perform centralized training, which is impractical due to severe user data privacy concerns and hardware heterogeneity across different platforms/institutions. In this study, we explicitly consider data privacy and heterogeneity in cooperatively optimizing SCI systems by proposing a Federated Hardware-Prompt learning (FedHP) framework. Rather than mitigating the client drift by rectifying the gradients, which only takes effect on the learning manifold but fails to solve the heterogeneity rooted in the input data space, FedHP learns a hardware-conditioned prompter to align inconsistent data distribution across clients, serving as an indicator of the data inconsistency among different hardware (e.g., coded apertures). Extensive experimental results demonstrate that the proposed FedHP coordinates the pre-trained model to multiple hardware con- figurations, outperforming prevalent FL frameworks for 0.35dB under challenging heterogeneous settings. Moreover, a Snapshot Spectral Heterogeneous Dataset has been built upon multiple practical SCI systems. Data and code are aveilable at https://github.com/Jiamian-Wang/FedHP-Snapshot-Compressive-Imaging.git
Cooperative Hardware-Prompt Learning for Snapshot Compressive Imaging
[ "Jiamian Wang", "Zongliang Wu", "Yulun Zhang", "Xin Yuan", "Tao Lin", "ZHIQIANG TAO" ]
NeurIPS.cc/2024/Conference
2306.01176
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zw2K6LfFI9
@inproceedings{ ni2024peria, title={{PERIA}: Perceive, Reason, Imagine, Act via Holistic Language and Vision Planning for Manipulation}, author={Fei Ni and Jianye HAO and Shiguang Wu and Longxin Kou and Yifu Yuan and Zibin Dong and Jinyi Liu and MingZhi Li and Yuzheng Zhuang and YAN ZHENG}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zw2K6LfFI9} }
Long-horizon manipulation tasks with general instructions often implicitly encapsulate multiple sub-tasks, posing significant challenges in instruction following. While language planning is a common approach to decompose general instructions into stepwise sub-instructions, text-only guidance may lack expressiveness and lead to potential ambiguity. Considering that humans often imagine and visualize sub-instructions reasoning out before acting, the imagined subgoal images can provide more intuitive guidance and enhance the reliability of decomposition. Inspired by this, we propose **PERIA**(**PE**rceive, **R**eason, **I**magine, **A**ct), a novel framework that integrates holistic language planning and vision planning for long-horizon manipulation tasks with complex instructions, leveraging both logical and intuitive aspects of task decomposition. Specifically, we first perform a lightweight multimodal alignment on the encoding side to empower the MLLM to perceive visual details and language instructions. The MLLM is then jointly instruction-tuned with a pretrained image-editing model to unlock capabilities of simultaneous reasoning of language instructions and generation of imagined subgoals. Furthermore, we introduce a consistency alignment loss to encourage coherent subgoal images and align with their corresponding instructions, mitigating potential hallucinations and semantic conflicts between the two planning manners. Comprehensive evaluations across three task domains demonstrate that PERIA, benefiting from holistic language and vision planning, significantly outperforms competitive baselines in both instruction following accuracy and task success rate on complex manipulation tasks.
PERIA: Perceive, Reason, Imagine, Act via Holistic Language and Vision Planning for Manipulation
[ "Fei Ni", "Jianye HAO", "Shiguang Wu", "Longxin Kou", "Yifu Yuan", "Zibin Dong", "Jinyi Liu", "MingZhi Li", "Yuzheng Zhuang", "YAN ZHENG" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zv9gYC3xgF
@inproceedings{ xu2024toward, title={Toward Global Convergence of Gradient {EM} for Over-Paramterized Gaussian Mixture Models}, author={Weihang Xu and Maryam Fazel and Simon Shaolei Du}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zv9gYC3xgF} }
We study the gradient Expectation-Maximization (EM) algorithm for Gaussian Mixture Models (GMM) in the over-parameterized setting, where a general GMM with $n>1$ components learns from data that are generated by a single ground truth Gaussian distribution. While results for the special case of 2-Gaussian mixtures are well-known, a general global convergence analysis for arbitrary $n$ remains unresolved and faces several new technical barriers since the convergence becomes sub-linear and non-monotonic. To address these challenges, we construct a novel likelihood-based convergence analysis framework and rigorously prove that gradient EM converges globally with a sublinear rate $O(1/\sqrt{t})$. This is the first global convergence result for Gaussian mixtures with more than $2$ components. The sublinear convergence rate is due to the algorithmic nature of learning over-parameterized GMM with gradient EM. We also identify a new emerging technical challenge for learning general over-parameterized GMM: the existence of bad local regions that can trap gradient EM for an exponential number of steps.
Toward Global Convergence of Gradient EM for Over-Paramterized Gaussian Mixture Models
[ "Weihang Xu", "Maryam Fazel", "Simon Shaolei Du" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zv4UISZzp5
@inproceedings{ lin2024idgen, title={{IDG}en: Item Discrimination Induced Prompt Generation for {LLM} Evaluation}, author={Fan Lin and Shuyi Xie and Yong Dai and Wenlin Yao and TianJiao Lang and Yu Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zv4UISZzp5} }
As Large Language Models (LLMs) become more capable of handling increasingly complex tasks, the evaluation set must keep pace with these advancements to ensure it remains sufficiently discriminative. Item Discrimination (ID) theory, which is widely used in educational assessment, measures the ability of individual test items to differentiate between high and low performers. Inspired by this theory, we propose an ID-induced prompt synthesis framework for evaluating LLMs so that the evaluation set continually updates and refines according to model abilities. Our data synthesis framework prioritizes both breadth and specificity. It can generate prompts that comprehensively evaluate the capabilities of LLMs while revealing meaningful performance differences between models, allowing for effective discrimination of their relative strengths and weaknesses across various tasks and domains. To produce high-quality data, we incorporate a self-correct mechanism into our generalization framework and develop two models to predict prompt discrimination and difficulty score to facilitate our data synthesis framework, contributing valuable tools to evaluation data synthesis research. We apply our generated data to evaluate five SOTA models. Our data achieves an average score of 51.92, accompanied by a variance of 10.06. By contrast, previous works (i.e., SELF-INSTRUCT and WizardLM) obtain an average score exceeding 67, with a variance below 3.2. The results demonstrate that the data generated by our framework is more challenging and discriminative compared to previous works. We will release a dataset of over 3,000 carefully crafted prompts to facilitate evaluation research of LLMs.
IDGen: Item Discrimination Induced Prompt Generation for LLM Evaluation
[ "Fan Lin", "Shuyi Xie", "Yong Dai", "Wenlin Yao", "TianJiao Lang", "Yu Zhang" ]
NeurIPS.cc/2024/Conference
2409.18892
[ "https://github.com/DUTlf/IDGen" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zuwpeRkJNH
@inproceedings{ yuan2024procedureaware, title={Procedure-Aware Surgical Video-language Pretraining with Hierarchical Knowledge Augmentation}, author={Kun yuan and Vinkle Srivastav and Nassir Navab and Nicolas Padoy}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zuwpeRkJNH} }
Surgical video-language pretraining (VLP) faces unique challenges due to the knowledge domain gap and the scarcity of multi-modal data. This study aims to bridge the gap by addressing issues regarding textual information loss in surgical lecture videos and the spatial-temporal challenges of surgical VLP. To tackle these issues, we propose a hierarchical knowledge augmentation approach and a novel Procedure-Encoded Surgical Knowledge-Augmented Video-Language Pretraining (PeskaVLP) framework. The proposed knowledge augmentation approach uses large language models (LLM) to refine and enrich surgical concepts, thus providing comprehensive language supervision and reducing the risk of overfitting. The PeskaVLP framework combines language supervision with visual self-supervision, constructing hard negative samples and employing a Dynamic Time Warping (DTW) based loss function to effectively comprehend the cross-modal procedural alignment. Extensive experiments on multiple public surgical scene understanding and cross-modal retrieval datasets show that our proposed method significantly improves zero-shot transferring performance and offers a generalist visual repre- sentation for further advancements in surgical scene understanding. The source code will be available at https://github.com/CAMMA-public/PeskaVLP.
Procedure-Aware Surgical Video-language Pretraining with Hierarchical Knowledge Augmentation
[ "Kun yuan", "Vinkle Srivastav", "Nassir Navab", "Nicolas Padoy" ]
NeurIPS.cc/2024/Conference
2410.00263
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=zuwLGhgxtQ
@inproceedings{ he2024a, title={A Separation in Heavy-Tailed Sampling: Gaussian vs. Stable Oracles for Proximal Samplers}, author={Ye He and Alireza Mousavi-Hosseini and Krishna Balasubramanian and Murat A Erdogdu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zuwLGhgxtQ} }
We study the complexity of heavy-tailed sampling and present a separation result in terms of obtaining high-accuracy versus low-accuracy guarantees i.e., samplers that require only $\mathcal{O}(\log(1/\varepsilon))$ versus $\Omega(\text{poly}(1/\varepsilon))$ iterations to output a sample which is $\varepsilon$-close to the target in $\chi^2$-divergence. Our results are presented for proximal samplers that are based on Gaussian versus stable oracles. We show that proximal samplers based on the Gaussian oracle have a fundamental barrier in that they necessarily achieve only low-accuracy guarantees when sampling from a class of heavy-tailed targets. In contrast, proximal samplers based on the stable oracle exhibit high-accuracy guarantees, thereby overcoming the aforementioned limitation. We also prove lower bounds for samplers under the stable oracle and show that our upper bounds cannot be fundamentally improved.
A Separation in Heavy-Tailed Sampling: Gaussian vs. Stable Oracles for Proximal Samplers
[ "Ye He", "Alireza Mousavi-Hosseini", "Krishna Balasubramanian", "Murat A Erdogdu" ]
NeurIPS.cc/2024/Conference
2405.16736
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ztwl4ubnXV
@inproceedings{ delaney2024oxonfair, title={OxonFair: A Flexible Toolkit for Algorithmic Fairness}, author={Eoin D. Delaney and Zihao Fu and Sandra Wachter and Brent Mittelstadt and Chris Russell}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ztwl4ubnXV} }
We present OxonFair, a new open source toolkit for enforcing fairness in binary classification. Compared to existing toolkits: (i) We support NLP and Computer Vision classification as well as standard tabular problems. (ii) We support enforcing fairness on validation data, making us robust to a wide range of overfitting challenges. (iii) Our approach can optimize any measure based on True Positives, False Positive, False Negatives, and True Negatives. This makes it easily extensible and much more expressive than existing toolkits. It supports all 9 and all 10 of the decision-based group metrics of two popular review articles. (iv) We jointly optimize a performance objective alongside fairness constraints. This minimizes degradation while enforcing fairness, and even improves the performance of inadequately tuned unfair baselines. OxonFair is compatible with standard ML toolkits, including sklearn, Autogluon, and PyTorch and is available at https://github.com/oxfordinternetinstitute/oxonfair.
OxonFair: A Flexible Toolkit for Algorithmic Fairness
[ "Eoin D. Delaney", "Zihao Fu", "Sandra Wachter", "Brent Mittelstadt", "Chris Russell" ]
NeurIPS.cc/2024/Conference
2407.13710
[ "https://github.com/oxfordinternetinstitute/oxonfair" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zsXbGJJ7Oo
@inproceedings{ liu2024gd, title={G2D: From Global to Dense Radiography Representation Learning via Vision-Language Pre-training}, author={Che Liu and Cheng Ouyang and Sibo Cheng and Anand Shah and Wenjia Bai and Rossella Arcucci}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zsXbGJJ7Oo} }
Medical imaging tasks require an understanding of subtle and localized visual features due to the inherently detailed and area-specific nature of pathological patterns, which are crucial for clinical diagnosis. Although recent advances in medical vision-language pre-training (VLP) enable models to learn clinically relevant visual features by leveraging both medical images and their associated radiology reports, current medical VLP methods primarily focus on aligning images with entire reports. This focus hinders the learning of dense (pixel-level) visual features and is suboptimal for dense prediction tasks (e.g., medical image segmentation). To address this challenge, we propose a novel medical VLP framework, named **Global to Dense level representation learning (G2D)**, which aims to learn global and dense visual features simultaneously using only image-text pairs without extra annotations. In particular, G2D designs a **Pseudo Segmentation (PS)** task, which enables the model to learn dense visual features during VLP. Notably, generating PS masks can be performed on the fly during VLP, which does not incur extra trainable parameters. With this simple yet effective idea, G2D achieves superior performance across 5 medical imaging tasks and 25 diseases. Particularly, in the segmentation task which requires dense visual features, **G2D surpasses existing models even with just 1% of the training data for finetuning, compared to 100% used by other models**. The code can be found in https://github.com/cheliu-computation/G2D-NeurIPS24/tree/main.
G2D: From Global to Dense Radiography Representation Learning via Vision-Language Pre-training
[ "Che Liu", "Cheng Ouyang", "Sibo Cheng", "Anand Shah", "Wenjia Bai", "Rossella Arcucci" ]
NeurIPS.cc/2024/Conference
2312.01522
[ "https://github.com/cheliu-computation/g2d-neurips24" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zqLAMwVLkt
@inproceedings{ qiao2024generative, title={Generative Semi-supervised Graph Anomaly Detection}, author={Hezhe Qiao and Qingsong Wen and Xiaoli Li and Ee-Peng Lim and Guansong Pang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zqLAMwVLkt} }
This work considers a practical semi-supervised graph anomaly detection (GAD) scenario, where part of the nodes in a graph are known to be normal, contrasting to the extensively explored unsupervised setting with a fully unlabeled graph. We reveal that having access to the normal nodes, even just a small percentage of normal nodes, helps enhance the detection performance of existing unsupervised GAD methods when they are adapted to the semi-supervised setting. However, their utilization of these normal nodes is limited. In this paper, we propose a novel Generative GAD approach (namely GGAD) for the semi-supervised scenario to better exploit the normal nodes. The key idea is to generate pseudo anomaly nodes, referred to as 'outlier nodes', for providing effective negative node samples in training a discriminative one-class classifier. The main challenge here lies in the lack of ground truth information about real anomaly nodes. To address this challenge, GGAD is designed to leverage two important priors about the anomaly nodes -- asymmetric local affinity and egocentric closeness -- to generate reliable outlier nodes that assimilate anomaly nodes in both graph structure and feature representations. Comprehensive experiments on six real-world GAD datasets are performed to establish a benchmark for semi-supervised GAD and show that GGAD substantially outperforms state-of-the-art unsupervised and semi-supervised GAD methods with varying numbers of training normal nodes.
Generative Semi-supervised Graph Anomaly Detection
[ "Hezhe Qiao", "Qingsong Wen", "Xiaoli Li", "Ee-Peng Lim", "Guansong Pang" ]
NeurIPS.cc/2024/Conference
2402.11887
[ "https://github.com/mala-lab/ggad" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zpw6NmhvKU
@inproceedings{ hsu2024rashomongb, title={Rashomon{GB}: Analyzing the Rashomon Effect and Mitigating Predictive Multiplicity in Gradient Boosting}, author={Hsiang Hsu and Ivan Brugere and Shubham Sharma and Freddy Lecue and Chun-Fu Chen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zpw6NmhvKU} }
The Rashomon effect is a mixed blessing in responsible machine learning. It enhances the prospects of finding models that perform well in accuracy while adhering to ethical standards, such as fairness or interpretability. Conversely, it poses a risk to the credibility of machine decisions through predictive multiplicity. While recent studies have explored the Rashomon effect across various machine learning algorithms, its impact on gradient boosting---an algorithm widely applied to tabular datasets---remains unclear. This paper addresses this gap by systematically analyzing the Rashomon effect and predictive multiplicity in gradient boosting algorithms. We provide rigorous theoretical derivations to examine the Rashomon effect in the context of gradient boosting and offer an information-theoretic characterization of the Rashomon set. Additionally, we introduce a novel inference technique called RashomonGB to efficiently inspect the Rashomon effect in practice. On more than 20 datasets, our empirical results show that RashomonGB outperforms existing baselines in terms of improving the estimation of predictive multiplicity metrics and model selection with group fairness constraints. Lastly, we propose a framework to mitigate predictive multiplicity in gradient boosting and empirically demonstrate its effectiveness.
RashomonGB: Analyzing the Rashomon Effect and Mitigating Predictive Multiplicity in Gradient Boosting
[ "Hsiang Hsu", "Ivan Brugere", "Shubham Sharma", "Freddy Lecue", "Chun-Fu Chen" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zn6s6VQYb0
@inproceedings{ duan2024graphcroc, title={GraphCroc: Cross-Correlation Autoencoder for Graph Structural Reconstruction}, author={Shijin Duan and Ruyi Ding and Jiaxing He and Aidong Adam Ding and Yunsi Fei and Xiaolin Xu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zn6s6VQYb0} }
Graph-structured data is integral to many applications, prompting the development of various graph representation methods. Graph autoencoders (GAEs), in particular, reconstruct graph structures from node embeddings. Current GAE models primarily utilize self-correlation to represent graph structures and focus on node-level tasks, often overlooking multi-graph scenarios. Our theoretical analysis indicates that self-correlation generally falls short in accurately representing specific graph features such as islands, symmetrical structures, and directional edges, particularly in smaller or multiple graph contexts.To address these limitations, we introduce a cross-correlation mechanism that significantly enhances the GAE representational capabilities. Additionally, we propose the GraphCroc, a new GAE that supports flexible encoder architectures tailored for various downstream tasks and ensures robust structural reconstruction, through a mirrored encoding-decoding process. This model also tackles the challenge of representation bias during optimization by implementing a loss-balancing strategy. Both theoretical analysis and numerical evaluations demonstrate that our methodology significantly outperforms existing self-correlation-based GAEs in graph structure reconstruction.
GraphCroc: Cross-Correlation Autoencoder for Graph Structural Reconstruction
[ "Shijin Duan", "Ruyi Ding", "Jiaxing He", "Aidong Adam Ding", "Yunsi Fei", "Xiaolin Xu" ]
NeurIPS.cc/2024/Conference
2410.03396
[ "https://github.com/sjduan/graphcroc" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zm1LcgRpHm
@inproceedings{ grover2024segment, title={Segment, Shuffle, and Stitch: A Simple Layer for Improving Time-Series Representations}, author={Shivam Grover and Amin Jalali and Ali Etemad}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zm1LcgRpHm} }
Existing approaches for learning representations of time-series keep the temporal arrangement of the time-steps intact with the presumption that the original order is the most optimal for learning. However, non-adjacent sections of real-world time-series may have strong dependencies. Accordingly, we raise the question: Is there an alternative arrangement for time-series which could enable more effective representation learning? To address this, we propose a simple plug-and-play neural network layer called Segment, Shuffle, and Stitch (S3) designed to improve representation learning in time-series models. S3 works by creating non-overlapping segments from the original sequence and shuffling them in a learned manner that is optimal for the task at hand. It then re-attaches the shuffled segments back together and performs a learned weighted sum with the original input to capture both the newly shuffled sequence along with the original sequence. S3 is modular and can be stacked to achieve different levels of granularity, and can be added to many forms of neural architectures including CNNs or Transformers with negligible computation overhead. Through extensive experiments on several datasets and state-of-the-art baselines, we show that incorporating S3 results in significant improvements for the tasks of time-series classification, forecasting, and anomaly detection, improving performance on certain datasets by up to 68\%. We also show that S3 makes the learning more stable with a smoother training loss curve and loss landscape compared to the original baseline. The code is available at https://github.com/shivam-grover/S3-TimeSeries.
Segment, Shuffle, and Stitch: A Simple Layer for Improving Time-Series Representations
[ "Shivam Grover", "Amin Jalali", "Ali Etemad" ]
NeurIPS.cc/2024/Conference
2405.20082
[ "https://github.com/shivam-grover/s3-timeseries" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zlgfRk2CQa
@inproceedings{ bear2024rethinking, title={Rethinking Deep Thinking: Stable Learning of Algorithms using Lipschitz Constraints}, author={Jay Bear and Adam Prugel-Bennett and Jonathon Hare}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zlgfRk2CQa} }
Iterative algorithms solve problems by taking steps until a solution is reached. Models in the form of Deep Thinking (DT) networks have been demonstrated to learn iterative algorithms in a way that can scale to different sized problems at inference time using recurrent computation and convolutions. However, they are often unstable during training, and have no guarantees of convergence/termination at the solution. This paper addresses the problem of instability by analyzing the growth in intermediate representations, allowing us to build models (referred to as Deep Thinking with Lipschitz Constraints (DT-L)) with many fewer parameters and providing more reliable solutions. Additionally our DT-L formulation provides guarantees of convergence of the learned iterative procedure to a unique solution at inference time. We demonstrate DT-L is capable of robustly learning algorithms which extrapolate to harder problems than in the training set. We benchmark on the traveling salesperson problem to evaluate the capabilities of the modified system in an NP-hard problem where DT fails to learn.
Rethinking Deep Thinking: Stable Learning of Algorithms using Lipschitz Constraints
[ "Jay Bear", "Adam Prugel-Bennett", "Jonathon Hare" ]
NeurIPS.cc/2024/Conference
2410.23451
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zkhyrxlwqH
@inproceedings{ song2024unsupervised, title={Unsupervised Homography Estimation on Multimodal Image Pair via Alternating Optimization}, author={Sanghyeob Song and Jaihyun Lew and Hyemi Jang and Sungroh Yoon}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zkhyrxlwqH} }
Estimating the homography between two images is crucial for mid- or high-level vision tasks, such as image stitching and fusion. However, using supervised learning methods is often challenging or costly due to the difficulty of collecting ground-truth data. In response, unsupervised learning approaches have emerged. Most early methods, though, assume that the given image pairs are from the same camera or have minor lighting differences. Consequently, while these methods perform effectively under such conditions, they generally fail when input image pairs come from different domains, referred to as multimodal image pairs. To address these limitations, we propose AltO, an unsupervised learning framework for estimating homography in multimodal image pairs. Our method employs a two-phase alternating optimization framework, similar to Expectation-Maximization (EM), where one phase reduces the geometry gap and the other addresses the modality gap. To handle these gaps, we use Barlow Twins loss for the modality gap and propose an extended version, Geometry Barlow Twins, for the geometry gap. As a result, we demonstrate that our method, AltO, can be trained on multimodal datasets without any ground-truth data. It not only outperforms other unsupervised methods but is also compatible with various architectures of homography estimators. The source code can be found at: https://github.com/songsang7/AltO
Unsupervised Homography Estimation on Multimodal Image Pair via Alternating Optimization
[ "Sanghyeob Song", "Jaihyun Lew", "Hyemi Jang", "Sungroh Yoon" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zkfCa4oESF
@inproceedings{ chen2024tpr, title={{TPR}: Topology-Preserving Reservoirs for Generalized Zero-Shot Learning}, author={Hui Chen and Yanbin Liu and Yongqiang Ma and Nanning Zheng and Xin Yu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zkfCa4oESF} }
Pre-trained vision-language models (VLMs) such as CLIP have shown excellent performance for zero-shot classification. Based on CLIP, recent methods design various learnable prompts to evaluate the zero-shot generalization capability on a base-to-novel setting. This setting assumes test samples are already divided into either base or novel classes, limiting its application to realistic scenarios. In this paper, we focus on a more challenging and practical setting: generalized zero-shot learning (GZSL), i.e., testing with no information about the base/novel division. To address this challenging zero-shot problem, we introduce two unique designs that enable us to classify an image without the need of knowing whether it comes from seen or unseen classes. Firstly, most existing methods only adopt a single latent space to align visual and linguistic features, which has a limited ability to represent complex visual-linguistic patterns, especially for fine-grained tasks. Instead, we propose a dual-space feature alignment module that effectively augments the latent space with a novel attribute space induced by a well-devised attribute reservoir. In particular, the attribute reservoir consists of a static vocabulary and learnable tokens complementing each other for flexible control over feature granularity. Secondly, finetuning CLIP models (e.g., prompt learning) on seen base classes usually sacrifices the model's original generalization capability on unseen novel classes. To mitigate this issue, we present a new topology-preserving objective that can enforce feature topology structures of the combined base and novel classes to resemble the topology of CLIP. In this manner, our model will inherit the generalization ability of CLIP through maintaining the pairwise class angles in the attribute space. Extensive experiments on twelve object recognition datasets demonstrate that our model, termed Topology-Preserving Reservoir (TPR), outperforms strong baselines including both prompt learning and conventional generative-based zero-shot methods.
TPR: Topology-Preserving Reservoirs for Generalized Zero-Shot Learning
[ "Hui Chen", "Yanbin Liu", "Yongqiang Ma", "Nanning Zheng", "Xin Yu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ziehA15y8k
@inproceedings{ lyu2024enhancing, title={Enhancing Robustness of Graph Neural Networks on Social Media with Explainable Inverse Reinforcement Learning}, author={Yuefei Lyu and Chaozhuo Li and Sihong Xie and Xi Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ziehA15y8k} }
Adversarial attacks against graph neural networks (GNNs) through perturbations of the graph structure are increasingly common in social network tasks like rumor detection. Social media platforms capture diverse attack sequence samples through both machine and manual screening processes. Investigating effective ways to leverage these adversarial samples to enhance robustness is imperative. We improve the maximum entropy inverse reinforcement learning (IRL) method with the mixture-of-experts approach to address multi-source graph adversarial attacks. This method reconstructs the attack policy, integrating various attack models and providing feature-level explanations, subsequently generating additional adversarial samples to fortify the robustness of detection models. We develop precise sample guidance and a bidirectional update mechanism to reduce the deviation caused by imprecise feature representation and negative sampling within the large action space of social graphs, while also accelerating policy learning. We take rumor detector as an example targeted GNN model on real-world rumor datasets. By utilizing a small subset of samples generated by various graph adversarial attack methods, we reconstruct the attack policy, closely approximating the performance of the original attack method. We validate that samples generated by the learned policy enhance model robustness through adversarial training and data augmentation.
Enhancing Robustness of Graph Neural Networks on Social Media with Explainable Inverse Reinforcement Learning
[ "Yuefei Lyu", "Chaozhuo Li", "Sihong Xie", "Xi Zhang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=ziYC4FHRNr
@inproceedings{ modell2024entrywise, title={Entrywise error bounds for low-rank approximations of kernel matrices}, author={Alexander Modell}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ziYC4FHRNr} }
In this paper, we derive *entrywise* error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition (or singular value decomposition). While this approximation is well-known to be optimal with respect to the spectral and Frobenius norm error, little is known about the statistical behaviour of individual entries. Our error bounds fill this gap. A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues, which takes inspiration from the field of Random Matrix Theory. Finally, we validate our theory with an empirical study of a collection of synthetic and real-world datasets.
Entrywise error bounds for low-rank approximations of kernel matrices
[ "Alexander Modell" ]
NeurIPS.cc/2024/Conference
2405.14494
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zgh0ChWocO
@inproceedings{ yang2024learning, title={Learning the Optimal Policy for Balancing Short-Term and Long-Term Rewards}, author={Qinwei Yang and Xueqing Liu and Yan Zeng and Ruocheng Guo and Yang Liu and Peng Wu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zgh0ChWocO} }
Learning the optimal policy to balance multiple short-term and long-term rewards has extensive applications across various domains. Yet, there is a noticeable scarcity of research addressing policy learning strategies in this context. In this paper, we aim to learn the optimal policy capable of effectively balancing multiple short-term and long-term rewards, especially in scenarios where the long-term outcomes are often missing due to data collection challenges over extended periods. Towards this goal, the conventional linear weighting method, which aggregates multiple rewards into a single surrogate reward through weighted summation, can only achieve sub-optimal policies when multiple rewards are related. Motivated by this, we propose a novel decomposition-based policy learning (DPPL) method that converts the whole problem into subproblems. The DPPL method is capable of obtaining optimal policies even when multiple rewards are interrelated. Nevertheless, the DPPL method requires a set of preference vectors specified in advance, posing challenges in practical applications where selecting suitable preferences is non-trivial. To mitigate this, we further theoretically transform the optimization problem in DPPL into an $\varepsilon$-constraint problem, where $\varepsilon$ represents the minimum acceptable levels of other rewards while maximizing one reward. This transformation provides intuitive into the selection of preference vectors. Extensive experiments are conducted on the proposed method and the results validate the effectiveness of the method.
Learning the Optimal Policy for Balancing Short-Term and Long-Term Rewards
[ "Qinwei Yang", "Xueqing Liu", "Yan Zeng", "Ruocheng Guo", "Yang Liu", "Peng Wu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zeaBrGv7Ll
@inproceedings{ tang2024seeclear, title={SeeClear: Semantic Distillation Enhances Pixel Condensation for Video Super-Resolution}, author={Qi Tang and Yao Zhao and Meiqin Liu and Chao Yao}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zeaBrGv7Ll} }
Diffusion-based Video Super-Resolution (VSR) is renowned for generating perceptually realistic videos, yet it grapples with maintaining detail consistency across frames due to stochastic fluctuations. The traditional approach of pixel-level alignment is ineffective for diffusion-processed frames because of iterative disruptions. To overcome this, we introduce SeeClear--a novel VSR framework leveraging conditional video generation, orchestrated by instance-centric and channel-wise semantic controls. This framework integrates a Semantic Distiller and a Pixel Condenser, which synergize to extract and upscale semantic details from low-resolution frames. The Instance-Centric Alignment Module (InCAM) utilizes video-clip-wise tokens to dynamically relate pixels within and across frames, enhancing coherency. Additionally, the Channel-wise Texture Aggregation Memory (CaTeGory) infuses extrinsic knowledge, capitalizing on long-standing semantic textures. Our method also innovates the blurring diffusion process with the ResShift mechanism, finely balancing between sharpness and diffusion effects. Comprehensive experiments confirm our framework's advantage over state-of-the-art diffusion-based VSR techniques.
SeeClear: Semantic Distillation Enhances Pixel Condensation for Video Super-Resolution
[ "Qi Tang", "Yao Zhao", "Meiqin Liu", "Chao Yao" ]
NeurIPS.cc/2024/Conference
2410.05799
[ "https://github.com/tang1705/seeclear-neurips24" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zeYyq0GpXO
@inproceedings{ dong2024exploring, title={Exploring Context Window of Large Language Models via Decomposed Positional Vectors}, author={zican Dong and Junyi Li and Xin Men and Xin Zhao and Bingning Wang and Zhen Tian and weipeng chen and Ji-Rong Wen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zeYyq0GpXO} }
Transformer-based large language models (LLMs) typically have a limited context window, resulting in significant performance degradation when processing text beyond the length of the context window. Extensive studies have been proposed to extend the context window and achieve length extrapolation of LLMs, but there is still a lack of in-depth interpretation of these approaches. In this study, we explore the positional information within and beyond the context window for deciphering the underlying mechanism of LLMs. By using a mean-based decomposition method, we disentangle positional vectors from hidden states of LLMs and analyze their formation and effect on attention. Furthermore, when texts exceed the context window, we analyze the change of positional vectors in two settings, i.e., direct extrapolation and context window extension. Based on our findings, we design two training-free context window extension methods, positional vector replacement and attention window extension. Experimental results show that our methods can effectively extend the context window length.
Exploring Context Window of Large Language Models via Decomposed Positional Vectors
[ "zican Dong", "Junyi Li", "Xin Men", "Xin Zhao", "Bingning Wang", "Zhen Tian", "weipeng chen", "Ji-Rong Wen" ]
NeurIPS.cc/2024/Conference
2405.18009
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=zcEPOB9rCR
@inproceedings{ luo2024bridging, title={Bridging Geometric States via Geometric Diffusion Bridge}, author={Shengjie Luo and Yixian Xu and Di He and Shuxin Zheng and Tie-Yan Liu and Liwei Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zcEPOB9rCR} }
The accurate prediction of geometric state evolution in complex systems is critical for advancing scientific domains such as quantum chemistry and material modeling. Traditional experimental and computational methods face challenges in terms of environmental constraints and computational demands, while current deep learning approaches still fall short in terms of precision and generality. In this work, we introduce the Geometric Diffusion Bridge (GDB), a novel generative modeling framework that accurately bridges initial and target geometric states. GDB leverages a probabilistic approach to evolve geometric state distributions, employing an equivariant diffusion bridge derived by a modified version of Doob's $h$-transform for connecting geometric states. This tailored diffusion process is anchored by initial and target geometric states as fixed endpoints and governed by equivariant transition kernels. Moreover, trajectory data can be seamlessly leveraged in our GDB framework by using a chain of equivariant diffusion bridges, providing a more detailed and accurate characterization of evolution dynamics. Theoretically, we conduct a thorough examination to confirm our framework's ability to preserve joint distributions of geometric states and capability to completely model the underlying dynamics inducing trajectory distributions with negligible error. Experimental evaluations across various real-world scenarios show that GDB surpasses existing state-of-the-art approaches, opening up a new pathway for accurately bridging geometric states and tackling crucial scientific challenges with improved accuracy and applicability.
Bridging Geometric States via Geometric Diffusion Bridge
[ "Shengjie Luo", "Yixian Xu", "Di He", "Shuxin Zheng", "Tie-Yan Liu", "Liwei Wang" ]
NeurIPS.cc/2024/Conference
2410.24220
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zb8jLAh2VN
@inproceedings{ zhang2024inference, title={Inference of Neural Dynamics Using Switching Recurrent Neural Networks}, author={Yongxu Zhang and Shreya Saxena}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zb8jLAh2VN} }
Neural population activity often exhibits distinct dynamical features across time, which may correspond to distinct internal processes or behavior. Linear methods and variations thereof, such as Hidden Markov Model (HMM) and Switching Linear Dynamical System (SLDS), are often employed to identify discrete states with evolving neural dynamics. However, these techniques may not be able to capture the underlying nonlinear dynamics associated with neural propagation. Recurrent Neural Networks (RNNs) are commonly used to model neural dynamics thanks to their nonlinear characteristics. In our work, we develop Switching Recurrent Neural Networks (SRNN), RNNs with weights that switch across time, to reconstruct switching dynamics of neural time-series data. We apply these models to simulated data as well as cortical neural activity across mice and monkeys, which allows us to automatically detect discrete states that lead to the identification of varying neural dynamics. In a monkey reaching dataset with electrophysiology recordings, a mouse self-initiated lever pull dataset with widefield calcium recordings, and a mouse self-initiated decision making dataset with widefield calcium recording, SRNNs are able to automatically identify discrete states with distinct nonlinear neural dynamics. The inferred switches are aligned with the behavior, and the reconstructions show that the recovered neural dynamics are distinct across different stages of the behavior. We show that the neural dynamics have behaviorally-relevant switches across time and we are able to use SRNNs to successfully capture these switches and the corresponding dynamical features.
Inference of Neural Dynamics Using Switching Recurrent Neural Networks
[ "Yongxu Zhang", "Shreya Saxena" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zaXuMqOAF4
@inproceedings{ ma2024mesaextrapolation, title={Mesa-Extrapolation: A Weave Position Encoding Method for Enhanced Extrapolation in {LLM}s}, author={Xin Ma and Yang Liu and Jingjing Liu and Xiaoxu Ma}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zaXuMqOAF4} }
Large language models (LLMs), although having revolutionized many fields, still suffer from the challenging extrapolation problem, where the inference ability of LLMs sharply declines beyond their max training lengths. In this work, we conduct a theoretical analysis to better understand why No Position Encoding (NoPE) fails outside its effective range, as well as examining the power of Position Encoding (PE) in this context. Our findings reveal that with meticulous weave position, PE can indeed be extended beyond effective range. Our theorems establish that LLMs equipped with weave PE can achieve improved extrapolation performance without additional cost. Furthermore, we introduce a novel weave PE method, Mesa-Extrapolation, which utilizes a chunk-based triangular attention matrix and applies Stair PE to manage the final chunk. This method not only retains competitive performance but also offers substantial benefits such as significantly reduced memory demand and faster inference speed. Extensive experiments validate the effectiveness of Mesa-Extrapolation, demonstrating its potential as a scalable solution to enhancing LLMs’ applicative reach.
Mesa-Extrapolation: A Weave Position Encoding Method for Enhanced Extrapolation in LLMs
[ "Xin Ma", "Yang Liu", "Jingjing Liu", "Xiaoxu Ma" ]
NeurIPS.cc/2024/Conference
2410.15859
[ "https://github.com/soacker/mesa-extrapolation" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=za9Jx8yqUA
@inproceedings{ mazzaglia2024genrl, title={Gen{RL}: Multimodal-foundation world models for generalization in embodied agents}, author={Pietro Mazzaglia and Tim Verbelen and Bart Dhoedt and Aaron Courville and Sai Rajeswar}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=za9Jx8yqUA} }
Learning generalist embodied agents, able to solve multitudes of tasks in different domains is a long-standing problem. Reinforcement learning (RL) is hard to scale up as it requires a complex reward design for each task. In contrast, language can specify tasks in a more natural way. Current foundation vision-language models (VLMs) generally require fine-tuning or other adaptations to be adopted in embodied contexts, due to the significant domain gap. However, the lack of multimodal data in such domains represents an obstacle to developing foundation models for embodied applications. In this work, we overcome these problems by presenting multimodal-foundation world models, able to connect and align the representation of foundation VLMs with the latent space of generative world models for RL, without any language annotations. The resulting agent learning framework, GenRL, allows one to specify tasks through vision and/or language prompts, ground them in the embodied domain’s dynamics, and learn the corresponding behaviors in imagination. As assessed through large-scale multi-task benchmarking in locomotion and manipulation domains, GenRL enables multi-task generalization from language and visual prompts. Furthermore, by introducing a data-free policy learning strategy, our approach lays the groundwork for foundational policy learning using generative world models. Website, code and data: https://mazpie.github.io/genrl/
GenRL: Multimodal-foundation world models for generalization in embodied agents
[ "Pietro Mazzaglia", "Tim Verbelen", "Bart Dhoedt", "Aaron Courville", "Sai Rajeswar" ]
NeurIPS.cc/2024/Conference
2406.18043
[ "https://github.com/mazpie/genrl" ]
https://huggingface.co/papers/2406.18043
1
1
0
5
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=zZVqZRXSao
@inproceedings{ wang2024semantic, title={Semantic Feature Learning for Universal Unsupervised Cross-Domain Retrieval}, author={Lixu Wang and Xinyu Du and Qi Zhu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zZVqZRXSao} }
Cross-domain retrieval (CDR) is finding increasingly broad applications across various domains. However, existing efforts have several major limitations, with the most critical being their reliance on accurate supervision. Recent studies thus focus on achieving unsupervised CDR, but they typically assume that the category spaces across domains are identical, an assumption that is often unrealistic in real-world scenarios. This is because only through dedicated and comprehensive analysis can the category composition of a data domain be obtained, which contradicts the premise of unsupervised scenarios. Therefore, in this work, we introduce the problem of **U**niversal **U**nsupervised **C**ross-**D**omain **R**etrieval (U^2CDR) for the first time and design a two-stage semantic feature learning framework to address it. In the first stage, a cross-domain unified prototypical structure is established under the guidance of an instance-prototype-mixed contrastive loss and a semantic-enhanced loss, to counteract category space differences. In the second stage, through a modified adversarial training mechanism, we ensure minimal changes for the established prototypical structure during domain alignment, enabling more accurate nearest-neighbor searching. Extensive experiments across multiple datasets and scenarios, including close-set, partial, and open-set CDR, demonstrate that our approach significantly outperforms existing state-of-the-art CDR methods and other related methods in solving U^2CDR challenges.
Semantic Feature Learning for Universal Unsupervised Cross-Domain Retrieval
[ "Lixu Wang", "Xinyu Du", "Qi Zhu" ]
NeurIPS.cc/2024/Conference
2403.05690
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zXfhHJnMB2
@inproceedings{ kostic2024neural, title={Neural Conditional Probability for Uncertainty Quantification}, author={Vladimir R Kostic and gregoire pacreau and Giacomo Turri and Pietro Novelli and Karim Lounici and Massimiliano Pontil}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zXfhHJnMB2} }
We introduce Neural Conditional Probability (NCP), an operator-theoretic approach to learning conditional distributions with a focus on statistical inference tasks. NCP can be used to build conditional confidence regions and extract key statistics such as conditional quantiles, mean, and covariance. It offers streamlined learning via a single unconditional training phase, allowing efficient inference without the need for retraining even when conditioning changes. By leveraging the approximation capabilities of neural networks, NCP efficiently handles a wide variety of complex probability distributions. We provide theoretical guarantees that ensure both optimization consistency and statistical accuracy. In experiments, we show that NCP with a 2-hidden-layer network matches or outperforms leading methods. This demonstrates that a a minimalistic architecture with a theoretically grounded loss can achieve competitive results, even in the face of more complex architectures.
Neural Conditional Probability for Uncertainty Quantification
[ "Vladimir R Kostic", "gregoire pacreau", "Giacomo Turri", "Pietro Novelli", "Karim Lounici", "Massimiliano Pontil" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zWuHSIALBh
@inproceedings{ lin2024flame, title={{FLAME} : Factuality-Aware Alignment for Large Language Models}, author={Sheng-Chieh Lin and Luyu Gao and Barlas Oguz and Wenhan Xiong and Jimmy Lin and Wen-tau Yih and Xilun Chen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zWuHSIALBh} }
Alignment is a procedure to fine-tune pre-trained large language models (LLMs) to follow natural language instructions and serve as helpful AI assistants. We have observed, however, that the conventional alignment process fails to enhance the factual accuracy of LLMs, and often leads to the generation of more false facts (i.e., *hallucination*). In this paper, we study how to make the LLM alignment process more factual, by first identifying factors that lead to hallucination in both alignment steps: supervised fine-tuning (SFT) and reinforcement learning (RL). In particular, we find that training the LLM on new or unfamiliar knowledge can encourage hallucination. This makes SFT less factual as it trains on human-labeled data that may be novel to the LLM. Furthermore, reward functions used in standard RL often inadequately capture factuality and favor longer and more detailed responses, which inadvertently promote hallucination. Based on these observations, we propose *FactuaLity-aware AlignMEnt*, comprised of *factuality-aware SFT* and *factuality-aware RL* through direct preference optimization. Experiments show that our proposed *FLAME* guides LLMs to output more factual responses while maintaining their instruction-following capability.
FLAME : Factuality-Aware Alignment for Large Language Models
[ "Sheng-Chieh Lin", "Luyu Gao", "Barlas Oguz", "Wenhan Xiong", "Jimmy Lin", "Wen-tau Yih", "Xilun Chen" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zWnW4zqkuM
@inproceedings{ jin2024instructgi, title={InstructG2I: Synthesizing Images from Multimodal Attributed Graphs}, author={Bowen Jin and Ziqi Pang and Bingjun Guo and Yu-Xiong Wang and Jiaxuan You and Jiawei Han}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zWnW4zqkuM} }
In this paper, we approach an overlooked yet critical task Graph2Image: generating images from multimodal attributed graphs (MMAGs). This task poses significant challenges due to the explosion in graph size, dependencies among graph entities, and the need for controllability in graph conditions. To address these challenges, we propose a graph context-conditioned diffusion model called InstructG2I. InstructG2I first exploits the graph structure and multimodal information to conduct informative neighbor sampling by combining personalized page rank and re-ranking based on vision-language features. Then, a graph QFormer encoder adaptively encodes the graph nodes into an auxiliary set of graph prompts to guide the denoising process of diffusion. Finally, we propose graph classifier-free guidance, enabling controllable generation by varying the strength of graph guidance and multiple connected edges to a node. Extensive experiments conducted on three datasets from different domains demonstrate the effectiveness and controllability of our approach. The code is available at https://github.com/PeterGriffinJin/InstructG2I.
InstructG2I: Synthesizing Images from Multimodal Attributed Graphs
[ "Bowen Jin", "Ziqi Pang", "Bingjun Guo", "Yu-Xiong Wang", "Jiaxuan You", "Jiawei Han" ]
NeurIPS.cc/2024/Conference
2410.07157
[ "https://github.com/PeterGriffinJin/InstructG2I" ]
https://huggingface.co/papers/2410.07157
1
0
0
6
[ "PeterJinGo/VirtualArtist" ]
[]
[]
[ "PeterJinGo/VirtualArtist" ]
[]
[]
1
poster
null
https://openreview.net/forum?id=zVrQeoPIoQ
@inproceedings{ he2024rethinking, title={Rethinking No-reference Image Exposure Assessment from Holism to Pixel: Models, Datasets and Benchmarks}, author={Shuai He and Shuntian Zheng and Anlong Ming and Banyu Wu and Huadong Ma}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zVrQeoPIoQ} }
The past decade has witnessed an increasing demand for enhancing image quality through exposure, and as a crucial prerequisite in this endeavor, Image Exposure Assessment (IEA) is now being accorded serious attention. However, IEA encounters two persistent challenges that remain unresolved over the long term: the accuracy and generalizability of No-reference IEA are inadequate for practical applications; the scope of IEA is confined to qualitative and quantitative analysis of the entire image or subimage, such as providing only a score to evaluate the exposure level, thereby lacking intuitive and precise fine-grained evaluation for complex exposure conditions. The objective of this paper is to address the persistent bottleneck challenges from three perspectives: model, dataset, and benchmark. 1) Model-level: we propose a Pixel-level IEA Network (P-IEANet) that utilizes Haar discrete wavelet transform (DWT) to analyze, decompose, and assess exposure from both lightness and structural perspectives, capable of generating pixel-level assessment results under no-reference scenarios. 2) Dataset-level: we elaborately build an exposure-oriented dataset, IEA40K, containing 40K images, covering 17 typical lighting scenarios, 27 devices, and 50+ scenes, with each image densely annotated by more than 10 experts with pixel-level labels. 3) Benchmark-level: we develop a comprehensive benchmark of 19 methods based on IEA40K. Our P-IEANet not only achieves state-of-the-art (SOTA) performance on all metrics but also seamlessly integrates with existing exposure correction and lighting enhancement methods. To our knowledge, this is the first work that explicitly emphasizes assessing complex image exposure problems at a pixel level, providing a significant boost to the IEA and exposure-related community. The code and dataset are available in \href{https://github.com/mRobotit/Pixel-level-No-reference-Image-Exposure-Assessment}{\textcolor{red} {here}}.
Rethinking No-reference Image Exposure Assessment from Holism to Pixel: Models, Datasets and Benchmarks
[ "Shuai He", "Shuntian Zheng", "Anlong Ming", "Banyu Wu", "Huadong Ma" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zV2GDsZb5a
@inproceedings{ jin2024neural, title={Neural Gaffer: Relighting Any Object via Diffusion}, author={Haian Jin and Yuan Li and Fujun Luan and Yuanbo Xiangli and Sai Bi and Kai Zhang and Zexiang Xu and Jin Sun and Noah Snavely}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zV2GDsZb5a} }
Single-image relighting is a challenging task that involves reasoning about the complex interplay between geometry, materials, and lighting. Many prior methods either support only specific categories of images, such as portraits, or require special capture conditions, like using a flashlight. Alternatively, some methods explicitly decompose a scene into intrinsic components, such as normals and BRDFs, which can be inaccurate or under-expressive. In this work, we propose a novel end-to-end 2D relighting diffusion model, called Neural Gaffer, that takes a single image of any object and can synthesize an accurate, high-quality relit image under any novel environmental lighting condition, simply by conditioning an image generator on a target environment map, without an explicit scene decomposition. Our method builds on a pre-trained diffusion model, and fine-tunes it on a synthetic relighting dataset, revealing and harnessing the inherent understanding of lighting present in the diffusion model. We evaluate our model on both synthetic and in-the-wild Internet imagery and demonstrate its advantages in terms of generalization and accuracy. Moreover, by combining with other generative methods, our model enables many downstream 2D tasks, such as text-based relighting and object insertion. Our model can also operate as a strong relighting prior for 3D tasks, such as relighting a radiance field.
Neural Gaffer: Relighting Any Object via Diffusion
[ "Haian Jin", "Yuan Li", "Fujun Luan", "Yuanbo Xiangli", "Sai Bi", "Kai Zhang", "Zexiang Xu", "Jin Sun", "Noah Snavely" ]
NeurIPS.cc/2024/Conference
2406.07520
[ "" ]
https://huggingface.co/papers/2406.07520
6
5
2
9
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=zTu0QEpvtZ
@inproceedings{ yi2024towards, title={Towards Understanding the Working Mechanism of Text-to-Image Diffusion Model}, author={Mingyang Yi and Aoxue Li and Yi Xin and Zhenguo Li}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zTu0QEpvtZ} }
Recently, the strong latent Diffusion Probabilistic Model (DPM) has been applied to high-quality Text-to-Image (T2I) generation (e.g., Stable Diffusion), by injecting the encoded target text prompt into the gradually denoised diffusion image generator. Despite the success of DPM in practice, the mechanism behind it remains to be explored. To fill this blank, we begin by examining the intermediate statuses during the gradual denoising generation process in DPM. The empirical observations indicate, the shape of image is reconstructed after the first few denoising steps, and then the image is filled with details (e.g., texture). The phenomenon is because the low-frequency signal (shape relevant) of the noisy image is not corrupted until the final stage in the forward process (initial stage of generation) of adding noise in DPM. Inspired by the observations, we proceed to explore the influence of each token in the text prompt during the two stages. After a series of experiments of T2I generations conditioned on a set of text prompts. We conclude that in the earlier generation stage, the image is mostly decided by the special token [\texttt{EOS}] in the text prompt, and the information in the text prompt is already conveyed in this stage. After that, the diffusion model completes the details of generated images by information from themselves. Finally, we propose to apply this observation to accelerate the process of T2I generation by properly removing text guidance, which finally accelerates the sampling up to 25\%+.
Towards Understanding the Working Mechanism of Text-to-Image Diffusion Model
[ "Mingyang Yi", "Aoxue Li", "Yi Xin", "Zhenguo Li" ]
NeurIPS.cc/2024/Conference
2405.15330
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zO55ovdLJw
@inproceedings{ hu2024deep, title={Deep Correlated Prompting for Visual Recognition with Missing Modalities}, author={Lianyu Hu and Tongkai Shi and Wei Feng and Fanhua Shang and Liang Wan}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zO55ovdLJw} }
Large-scale multimodal models have shown excellent performance over a series of tasks powered by the large corpus of paired multimodal training data. Generally, they are always assumed to receive modality-complete inputs. However, this simple assumption may not always hold in the real world due to privacy constraints or collection difficulty, where models pretrained on modality-complete data easily demonstrate degraded performance on missing-modality cases. To handle this issue, we refer to prompt learning to adapt large pretrained multimodal models to handle missing-modality scenarios by regarding different missing cases as different types of input. Instead of only prepending independent prompts to the intermediate layers, we present to leverage the correlations between prompts and input features and excavate the relationships between different layers of prompts to carefully design the instructions. We also incorporate the complementary semantics of different modalities to guide the prompting design for each modality. Extensive experiments on three commonly-used datasets consistently demonstrate the superiority of our method compared to the previous approaches upon different missing scenarios. Plentiful ablations are further given to show the generalizability and reliability of our method upon different modality-missing ratios and types.
Deep Correlated Prompting for Visual Recognition with Missing Modalities
[ "Lianyu Hu", "Tongkai Shi", "Wei Feng", "Fanhua Shang", "Liang Wan" ]
NeurIPS.cc/2024/Conference
2410.06558
[ "https://github.com/hulianyuyy/deep_correlated_prompting" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zNiJZUAlxg
@inproceedings{ yao2024resad, title={Res{AD}: A Simple Framework for Class Generalizable Anomaly Detection}, author={Xincheng Yao and Zixin Chen and Chao Gao and Guangtao Zhai and Chongyang Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zNiJZUAlxg} }
This paper explores the problem of class-generalizable anomaly detection, where the objective is to train one unified AD model that can generalize to detect anomalies in diverse classes from different domains without any retraining or fine-tuning on the target data. Because normal feature representations vary significantly across classes, this will cause the widely studied one-for-one AD models to be poorly classgeneralizable (i.e., performance drops dramatically when used for new classes). In this work, we propose a simple but effective framework (called ResAD) that can be directly applied to detect anomalies in new classes. Our main insight is to learn the residual feature distribution rather than the initial feature distribution. In this way, we can significantly reduce feature variations. Even in new classes, the distribution of normal residual features would not remarkably shift from the learned distribution. Therefore, the learned model can be directly adapted to new classes. ResAD consists of three components: (1) a Feature Converter that converts initial features into residual features; (2) a simple and shallow Feature Constraintor that constrains normal residual features into a spatial hypersphere for further reducing feature variations and maintaining consistency in feature scales among different classes; (3) a Feature Distribution Estimator that estimates the normal residual feature distribution, anomalies can be recognized as out-of-distribution. Despite the simplicity, ResAD can achieve remarkable anomaly detection results when directly used in new classes. The code is available at https://github.com/xcyao00/ResAD.
ResAD: A Simple Framework for Class Generalizable Anomaly Detection
[ "Xincheng Yao", "Zixin Chen", "Chao Gao", "Guangtao Zhai", "Chongyang Zhang" ]
NeurIPS.cc/2024/Conference
2410.20047
[ "https://github.com/xcyao00/resad" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=zNIhPZnqhh
@inproceedings{ zheng2024continuous, title={Continuous Spatiotemporal Events Decoupling through Spike-based Bayesian Computation}, author={Yajing Zheng and Jiyuan Zhang and Tiejun Huang and Zhaofei Yu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zNIhPZnqhh} }
Numerous studies have demonstrated that the cognitive processes of the human brain can be modeled using the Bayesian theorem for probabilistic inference of the external world. Spiking neural networks (SNNs), capable of performing Bayesian computation with greater physiological interpretability, offer a novel approach to distributed information processing in the cortex. However, applying these models to real-world scenarios to harness the advantages of brain-like computation remains a challenge. Recently, bio-inspired sensors with high dynamic range and ultra-high temporal resolution have been widely used in extreme vision scenarios. Event streams, generated by various types of motion, represent spatiotemporal data. Inferring motion targets from these streams without prior knowledge remains a difficult task. The Bayesian inference-based Expectation-Maximization (EM) framework has proven effective for motion segmentation in event streams, allowing for decoupling without prior information about the motion or its source. This work demonstrates that Bayesian computation based on spiking neural networks can decouple event streams of different motions. The Winner-Take-All (WTA) circuits in the constructed network implement an equivalent E-step, while STDP achieves an equivalent optimization in M-step. Through theoretical analysis and experiments, we show that STDP-based learning can maximize the contrast of warped events under mixed motion models. Experimental results show that the constructed spiking network can effectively segment the motion contained in event streams.
Continuous Spatiotemporal Events Decoupling through Spike-based Bayesian Computation
[ "Yajing Zheng", "Jiyuan Zhang", "Tiejun Huang", "Zhaofei Yu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zMNd0JuceF
@inproceedings{ zheng2024improved, title={Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses}, author={Xiaosen Zheng and Tianyu Pang and Chao Du and Qian Liu and Jing Jiang and Min Lin}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zMNd0JuceF} }
Recently, Anil et al. (2024) show that many-shot (up to hundreds of) demonstrations can jailbreak state-of-the-art LLMs by exploiting their long-context capability. Nevertheless, is it possible to use few-shot demonstrations to efficiently jailbreak LLMs within limited context sizes? While the vanilla few-shot jailbreaking may be inefficient, we propose improved techniques such as injecting special system tokens like [/INST] and employing demo-level random search from a collected demo pool. These simple techniques result in surprisingly effective jailbreaking against aligned LLMs (even with advanced defenses). For example, our method achieves >80% (mostly >95%) ASRs on Llama-2-7B and Llama-3-8B without multiple restarts, even if the models are enhanced by strong defenses such as perplexity detection and/or SmoothLLM, which is challenging for suffix-based jailbreaking. In addition, we conduct comprehensive and elaborate (e.g., making sure to use correct system prompts) evaluations against other aligned LLMs and advanced defenses, where our method consistently achieves nearly 100% ASRs. Our code is available at https://github.com/sail-sg/I-FSJ.
Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses
[ "Xiaosen Zheng", "Tianyu Pang", "Chao Du", "Qian Liu", "Jing Jiang", "Min Lin" ]
NeurIPS.cc/2024/Conference
2406.01288
[ "https://github.com/sail-sg/i-fsj" ]
https://huggingface.co/papers/2406.01288
3
1
0
6
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=zLU21oQjD5
@inproceedings{ tong2024dartmath, title={{DART}-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving}, author={Yuxuan Tong and Xiwen Zhang and Rui Wang and Ruidong Wu and Junxian He}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zLU21oQjD5} }
Solving mathematical problems requires advanced reasoning abilities and presents notable challenges for large language models. Previous works usually synthesize data from proprietary models to augment existing datasets, followed by instruction tuning to achieve top-tier results. However, our analysis of these datasets reveals severe biases towards easy queries, with frequent failures to generate any correct response for the most challenging queries. Hypothesizing that difficult queries are crucial to learning complex reasoning, we propose *Difficulty-Aware Rejection Tuning* (`DART`), a method that allocates difficult queries more trials during the synthesis phase, enabling more extensive training on difficult samples. Utilizing `DART`, we have created new datasets for mathematical problem-solving that focus more on difficult queries and are substantially smaller than previous ones. Remarkably, our synthesis process solely relies on a 7B-sized open-weight model, without reliance on the commonly used proprietary GPT-4. We fine-tune various base models on our datasets ranging from 7B to 70B in size, resulting in a series of strong models called `DART-Math`. In comprehensive in-domain and out-of-domain evaluation on 6 mathematical benchmarks, `DART-Math` outperforms vanilla rejection tuning significantly, being superior or comparable to previous arts, despite using much smaller datasets and no proprietary models. Furthermore, our results position our synthetic datasets as the most effective and cost-efficient publicly available resources for advancing mathematical problem-solving. Our datasets, models and code are publicly available at https://github.com/hkust-nlp/dart-math.
DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving
[ "Yuxuan Tong", "Xiwen Zhang", "Rui Wang", "Ruidong Wu", "Junxian He" ]
NeurIPS.cc/2024/Conference
2407.13690
[ "https://github.com/hkust-nlp/dart-math" ]
https://huggingface.co/papers/2407.13690
1
1
2
5
[ "hkust-nlp/dart-math-dsmath-7b-prop2diff", "hkust-nlp/dart-math-mistral-7b-prop2diff", "hkust-nlp/dart-math-mistral-7b-uniform", "hkust-nlp/dart-math-llama3-8b-prop2diff", "hkust-nlp/dart-math-llama3-8b-uniform", "hkust-nlp/dart-math-llama3-70b-prop2diff", "hkust-nlp/dart-math-dsmath-7b-uniform", "hkust-nlp/dart-math-llama3-70b-uniform", "RichardErkhov/hkust-nlp_-_dart-math-llama3-8b-prop2diff-gguf", "RichardErkhov/hkust-nlp_-_dart-math-dsmath-7b-prop2diff-gguf" ]
[ "hkust-nlp/dart-math-hard", "hkust-nlp/dart-math-uniform", "hkust-nlp/dart-math-pool-math", "hkust-nlp/dart-math-pool-gsm8k-query-info", "hkust-nlp/dart-math-pool-gsm8k", "hkust-nlp/vrt-baseline", "hkust-nlp/dart-math-pool-math-query-info" ]
[]
[ "hkust-nlp/dart-math-dsmath-7b-prop2diff", "hkust-nlp/dart-math-mistral-7b-prop2diff", "hkust-nlp/dart-math-mistral-7b-uniform", "hkust-nlp/dart-math-llama3-8b-prop2diff", "hkust-nlp/dart-math-llama3-8b-uniform", "hkust-nlp/dart-math-llama3-70b-prop2diff", "hkust-nlp/dart-math-dsmath-7b-uniform", "hkust-nlp/dart-math-llama3-70b-uniform", "RichardErkhov/hkust-nlp_-_dart-math-llama3-8b-prop2diff-gguf", "RichardErkhov/hkust-nlp_-_dart-math-dsmath-7b-prop2diff-gguf" ]
[ "hkust-nlp/dart-math-hard", "hkust-nlp/dart-math-uniform", "hkust-nlp/dart-math-pool-math", "hkust-nlp/dart-math-pool-gsm8k-query-info", "hkust-nlp/dart-math-pool-gsm8k", "hkust-nlp/vrt-baseline", "hkust-nlp/dart-math-pool-math-query-info" ]
[]
1
poster
null
https://openreview.net/forum?id=zLClygeRK8
@inproceedings{ sakhi2024logarithmic, title={Logarithmic Smoothing for Pessimistic Off-Policy Evaluation, Selection and Learning}, author={Otmane Sakhi and Imad Aouali and Pierre Alquier and Nicolas Chopin}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zLClygeRK8} }
This work investigates the offline formulation of the contextual bandit problem, where the goal is to leverage past interactions collected under a behavior policy to evaluate, select, and learn new, potentially better-performing, policies. Motivated by critical applications, we move beyond point estimators. Instead, we adopt the principle of _pessimism_ where we construct upper bounds that assess a policy's worst-case performance, enabling us to confidently select and learn improved policies. Precisely, we introduce novel, fully empirical concentration bounds for a broad class of importance weighting risk estimators. These bounds are general enough to cover most existing estimators and pave the way for the development of new ones. In particular, our pursuit of the tightest bound within this class motivates a novel estimator (LS), that _logarithmically smoothes_ large importance weights. The bound for LS is provably tighter than its competitors, and naturally results in improved policy selection and learning strategies. Extensive policy evaluation, selection, and learning experiments highlight the versatility and favorable performance of LS.
Logarithmic Smoothing for Pessimistic Off-Policy Evaluation, Selection and Learning
[ "Otmane Sakhi", "Imad Aouali", "Pierre Alquier", "Nicolas Chopin" ]
NeurIPS.cc/2024/Conference
2405.14335
[ "https://github.com/otmhi/offpolicy_ls" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=zLBlin2zvW
@inproceedings{ rajamanoharan2024improving, title={Improving Sparse Decomposition of Language Model Activations with Gated Sparse Autoencoders}, author={Senthooran Rajamanoharan and Arthur Conmy and Lewis Smith and Tom Lieberum and Vikrant Varma and Janos Kramar and Rohin Shah and Neel Nanda}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zLBlin2zvW} }
Recent work has found that sparse autoencoders (SAEs) are an effective technique for unsupervised discovery of interpretable features in language models' (LMs) activations, by finding sparse, linear reconstructions of those activations. We introduce the Gated Sparse Autoencoder (Gated SAE), which achieves a Pareto improvement over training with prevailing methods. In SAEs, the L1 penalty used to encourage sparsity introduces many undesirable biases, such as shrinkage -- systematic underestimation of feature activations. The key insight of Gated SAEs is to separate the functionality of (a) determining which directions to use and (b) estimating the magnitudes of those directions: this enables us to apply the L1 penalty only to the former, limiting the scope of undesirable side effects. Through training SAEs on LMs of up to 7B parameters we find that, in typical hyper-parameter ranges, Gated SAEs solve shrinkage, are similarly interpretable, and require half as many firing features to achieve comparable reconstruction fidelity.
Improving Sparse Decomposition of Language Model Activations with Gated Sparse Autoencoders
[ "Senthooran Rajamanoharan", "Arthur Conmy", "Lewis Smith", "Tom Lieberum", "Vikrant Varma", "Janos Kramar", "Rohin Shah", "Neel Nanda" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zJremsKVyh
@inproceedings{ manela2024marginal, title={Marginal Causal Flows for Validation and Inference}, author={Daniel de Vassimon Manela and Laura Battaglia and Robin J. Evans}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zJremsKVyh} }
Investigating the marginal causal effect of an intervention on an outcome from complex data remains challenging due to the inflexibility of employed models and the lack of complexity in causal benchmark datasets, which often fail to reproduce intricate real-world data patterns. In this paper we introduce Frugal Flows, a likelihood-based machine learning model that uses normalising flows to flexibly learn the data-generating process, while also directly targeting the marginal causal quantities inferred from observational data. We provide a novel algorithm for fitting a model to observational data with a parametrically specified causal distribution, and propose that these models are exceptionally well suited for synthetic data generation to validate causal methods. Unlike existing data generation methods, Frugal Flows generate synthetic data that closely resembles the empirical dataset, while also automatically and exactly satisfying a user-defined average treatment effect. To our knowledge, Frugal Flows are the first generative model to both learn flexible data representations and also \textit{exactly} parameterise quantities such as the average treatment effect and the degree of unobserved confounding. We demonstrate the above with experiments on both simulated and real-world datasets.
Marginal Causal Flows for Validation and Inference
[ "Daniel de Vassimon Manela", "Laura Battaglia", "Robin J. Evans" ]
NeurIPS.cc/2024/Conference
2411.01295
[ "https://github.com/llaurabatt/frugal-flows" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zJNSbgl4UA
@inproceedings{ zhang2024slicing, title={Slicing Vision Transformer for Flexibile Inference}, author={Yitian Zhang and Huseyin Coskun and Xu Ma and Huan Wang and Ke Ma and Stephen Xi Chen and Derek Hao Hu and Yun Fu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zJNSbgl4UA} }
Vision Transformers (ViT) is known for its scalability. In this work, we target to scale down a ViT to fit in an environment with dynamic-changing resource constraints. We observe that smaller ViTs are intrinsically the sub-networks of a larger ViT with different widths. Thus, we propose a general framework, named Scala, to enable a single network to represent multiple smaller ViTs with flexible inference capability, which aligns with the inherent design of ViT to vary from widths. Concretely, Scala activates several subnets during training, introduces Isolated Activation to disentangle the smallest sub-network from other subnets, and leverages Scale Coordination to ensure each sub-network receives simplified, steady, and accurate learning objectives. Comprehensive empirical validations on different tasks demonstrate that with only one-shot training, Scala learns slimmable representation without modifying the original ViT structure and matches the performance of Separate Training. Compared with the prior art, Scala achieves an average improvement of 1.6% on ImageNet-1K with fewer parameters.
Slicing Vision Transformer for Flexibile Inference
[ "Yitian Zhang", "Huseyin Coskun", "Xu Ma", "Huan Wang", "Ke Ma", "Stephen Xi Chen", "Derek Hao Hu", "Yun Fu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zIr2QjU4hl
@inproceedings{ uehara2024bridging, title={Bridging Model-Based Optimization and Generative Modeling via Conservative Fine-Tuning of Diffusion Models}, author={Masatoshi Uehara and Yulai Zhao and Ehsan Hajiramezanali and Gabriele Scalia and G{\"o}kcen Eraslan and Avantika Lal and Sergey Levine and Tommaso Biancalani}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zIr2QjU4hl} }
AI-driven design problems, such as DNA/protein sequence design, are commonly tackled from two angles: generative modeling, which efficiently captures the feasible design space (e.g., natural images or biological sequences), and model-based optimization, which utilizes reward models for extrapolation. To combine the strengths of both approaches, we adopt a hybrid method that fine-tunes cutting-edge diffusion models by optimizing reward models through RL. Although prior work has explored similar avenues, they primarily focus on scenarios where accurate reward models are accessible. In contrast, we concentrate on an offline setting where a reward model is unknown, and we must learn from static offline datasets, a common scenario in scientific domains. In offline scenarios, existing approaches tend to suffer from overoptimization, as they may be misled by the reward model in out-of-distribution regions. To address this, we introduce a conservative fine-tuning approach, BRAID, by optimizing a conservative reward model, which includes additional penalization outside of offline data distributions. Through empirical and theoretical analysis, we demonstrate the capability of our approach to outperform the best designs in offline data, leveraging the extrapolation capabilities of reward models while avoiding the generation of invalid designs through pre-trained diffusion models.
Bridging Model-Based Optimization and Generative Modeling via Conservative Fine-Tuning of Diffusion Models
[ "Masatoshi Uehara", "Yulai Zhao", "Ehsan Hajiramezanali", "Gabriele Scalia", "Gökcen Eraslan", "Avantika Lal", "Sergey Levine", "Tommaso Biancalani" ]
NeurIPS.cc/2024/Conference
2405.19673
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zGN0YWy2he
@inproceedings{ wang2024scene, title={Scene Graph Disentanglement and Composition for Generalizable Complex Image Generation}, author={Yunnan Wang and Ziqiang Li and Wenyao Zhang and Zequn Zhang and Baao Xie and Xihui Liu and Wenjun Zeng and Xin Jin}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zGN0YWy2he} }
There has been exciting progress in generating images from natural language or layout conditions. However, these methods struggle to faithfully reproduce complex scenes due to the insufficient modeling of multiple objects and their relationships. To address this issue, we leverage the scene graph, a powerful structured representation, for complex image generation. Different from the previous works that directly use scene graphs for generation, we employ the generative capabilities of variational autoencoders and diffusion models in a generalizable manner, compositing diverse disentangled visual clues from scene graphs. Specifically, we first propose a Semantics-Layout Variational AutoEncoder (SL-VAE) to jointly derive (layouts, semantics) from the input scene graph, which allows a more diverse and reasonable generation in a one-to-many mapping. We then develop a Compositional Masked Attention (CMA) integrated with a diffusion model, incorporating (layouts, semantics) with fine-grained attributes as generation guidance. To further achieve graph manipulation while keeping the visual content consistent, we introduce a Multi-Layered Sampler (MLS) for an "isolated" image editing effect. Extensive experiments demonstrate that our method outperforms recent competitors based on text, layout, or scene graph, in terms of generation rationality and controllability.
Scene Graph Disentanglement and Composition for Generalizable Complex Image Generation
[ "Yunnan Wang", "Ziqiang Li", "Wenyao Zhang", "Zequn Zhang", "Baao Xie", "Xihui Liu", "Wenjun Zeng", "Xin Jin" ]
NeurIPS.cc/2024/Conference
2410.00447
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=zDaD8zv8tG
@inproceedings{ huang2024a, title={A teacher-teacher framework for clinical language representation learning}, author={Feiqing Huang and Shenghan Zhang and Sara Morini Sweet and Tianxi Cai}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zDaD8zv8tG} }
In recent years, there has been a proliferation of ready-to-use large language models (LLMs) designed for various applications, both general-purpose and domain-specific. Instead of advocating for the development of a new model or continuous pretraining of an existing one, this paper introduces a pragmatic teacher-teacher framework to facilitate mutual learning between two pre-existing models. By leveraging two teacher models possessing complementary knowledge, we introduce a LIghtweight kNowledge alignmEnt (LINE) module aimed at harmonizing their knowledge within a unified representation space. This framework is particularly valuable in clinical settings, where stringent regulations and privacy considerations dictate the handling of detailed clinical notes. Our trained LINE module excels in capturing critical information from clinical notes, leveraging highly de-identified data. Validation and downstream tasks further demonstrate the effectiveness of the proposed framework.
A teacher-teacher framework for clinical language representation learning
[ "Feiqing Huang", "Shenghan Zhang", "Sara Morini Sweet", "Tianxi Cai" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zBMKodNgKX
@inproceedings{ li2024fedne, title={Fed{NE}: Surrogate-Assisted Federated Neighbor Embedding for Dimensionality Reduction}, author={Ziwei Li and Xiaoqi Wang and Hong-You Chen and Han Wei Shen and Wei-Lun Chao}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zBMKodNgKX} }
Federated learning (FL) has rapidly evolved as a promising paradigm that enables collaborative model training across distributed participants without exchanging their local data. Despite its broad applications in fields such as computer vision, graph learning, and natural language processing, the development of a data projection model that can be effectively used to visualize data in the context of FL is crucial yet remains heavily under-explored. Neighbor embedding (NE) is an essential technique for visualizing complex high-dimensional data, but collaboratively learning a joint NE model is difficult. The key challenge lies in the objective function, as effective visualization algorithms like NE require computing loss functions among pairs of data. In this paper, we introduce \textsc{FedNE}, a novel approach that integrates the \textsc{FedAvg} framework with the contrastive NE technique, without any requirements of shareable data. To address the lack of inter-client repulsion which is crucial for the alignment in the global embedding space, we develop a surrogate loss function that each client learns and shares with each other. Additionally, we propose a data-mixing strategy to augment the local data, aiming to relax the problems of invisible neighbors and false neighbors constructed by the local $k$NN graphs. We conduct comprehensive experiments on both synthetic and real-world datasets. The results demonstrate that our \textsc{FedNE} can effectively preserve the neighborhood data structures and enhance the alignment in the global embedding space compared to several baseline methods.
FedNE: Surrogate-Assisted Federated Neighbor Embedding for Dimensionality Reduction
[ "Ziwei Li", "Xiaoqi Wang", "Hong-You Chen", "Han Wei Shen", "Wei-Lun Chao" ]
NeurIPS.cc/2024/Conference
2409.11509
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zBG7WogAvm
@inproceedings{ huang2024amortized, title={Amortized Bayesian Experimental Design for Decision-Making}, author={Daolang Huang and Yujia Guo and Luigi Acerbi and Samuel Kaski}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zBG7WogAvm} }
Many critical decisions, such as personalized medical diagnoses and product pricing, are made based on insights gained from designing, observing, and analyzing a series of experiments. This highlights the crucial role of experimental design, which goes beyond merely collecting information on system parameters as in traditional Bayesian experimental design (BED), but also plays a key part in facilitating downstream decision-making. Most recent BED methods use an amortized policy network to rapidly design experiments. However, the information gathered through these methods is suboptimal for down-the-line decision-making, as the experiments are not inherently designed with downstream objectives in mind. In this paper, we present an amortized decision-aware BED framework that prioritizes maximizing downstream decision utility. We introduce a novel architecture, the Transformer Neural Decision Process (TNDP), capable of instantly proposing the next experimental design, whilst inferring the downstream decision, thus effectively amortizing both tasks within a unified workflow. We demonstrate the performance of our method across several tasks, showing that it can deliver informative designs and facilitate accurate decision-making.
Amortized Bayesian Experimental Design for Decision-Making
[ "Daolang Huang", "Yujia Guo", "Luigi Acerbi", "Samuel Kaski" ]
NeurIPS.cc/2024/Conference
2411.02064
[ "https://github.com/huangdaolang/amortized-decision-aware-bed" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zAuerb1KGx
@inproceedings{ mao2024multilabel, title={Multi-Label Learning with Stronger Consistency Guarantees}, author={Anqi Mao and Mehryar Mohri and Yutao Zhong}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zAuerb1KGx} }
We present a detailed study of surrogate losses and algorithms for multi-label learning, supported by $H$-consistency bounds. We first show that, for the simplest form of multi-label loss (the popular Hamming loss), the well-known consistent binary relevance surrogate suffers from a sub-optimal dependency on the number of labels in terms of $H$-consistency bounds, when using smooth losses such as logistic losses. Furthermore, this loss function fails to account for label correlations. To address these drawbacks, we introduce a novel surrogate loss, *multi-label logistic loss*, that accounts for label correlations and benefits from label-independent $H$-consistency bounds. We then broaden our analysis to cover a more extensive family of multi-label losses, including all common ones and a new extension defined based on linear-fractional functions with respect to the confusion matrix. We also extend our multi-label logistic losses to more comprehensive multi-label comp-sum losses, adapting comp-sum losses from standard classification to the multi-label learning. We prove that this family of surrogate losses benefits from $H$-consistency bounds, and thus Bayes-consistency, across any general multi-label loss. Our work thus proposes a unified surrogate loss framework benefiting from strong consistency guarantees for any multi-label loss, significantly expanding upon previous work which only established Bayes-consistency and for specific loss functions. Additionally, we adapt constrained losses from standard classification to multi-label constrained losses in a similar way, which also benefit from $H$-consistency bounds and thus Bayes-consistency for any multi-label loss. We further describe efficient gradient computation algorithms for minimizing the multi-label logistic loss.
Multi-Label Learning with Stronger Consistency Guarantees
[ "Anqi Mao", "Mehryar Mohri", "Yutao Zhong" ]
NeurIPS.cc/2024/Conference
2407.13746
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zApFYcLg6K
@inproceedings{ chaudhuri2024on, title={On Differentially Private U Statistics}, author={Kamalika Chaudhuri and Po-Ling Loh and Shourya Pandey and Purnamrita Sarkar}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=zApFYcLg6K} }
We consider the problem of privately estimating a parameter $\mathbb{E}[h(X_1,\dots,X_k)]$, where $X_1$, $X_2$, $\dots$, $X_k$ are i.i.d. data from some distribution and $h$ is a permutation-invariant function. Without privacy constraints, the standard estimators for this task are U-statistics, which commonly arise in a wide range of problems, including nonparametric signed rank tests, symmetry testing, uniformity testing, and subgraph counts in random networks, and are the unique minimum variance unbiased estimators under mild conditions. Despite the recent outpouring of interest in private mean estimation, privatizing U-statistics has received little attention. While existing private mean estimation algorithms can be applied in a black-box manner to obtain confidence intervals, we show that they can lead to suboptimal private error, e.g., constant-factor inflation in the leading term, or even $\Theta(1/n)$ rather than $O(1/n^2)$ in degenerate settings. To remedy this, we propose a new thresholding-based approach that reweights different subsets of the data using _local Hájek projections_. This leads to nearly optimal private error for non-degenerate U-statistics and a strong indication of near-optimality for degenerate U-statistics.
On Differentially Private U Statistics
[ "Kamalika Chaudhuri", "Po-Ling Loh", "Shourya Pandey", "Purnamrita Sarkar" ]
NeurIPS.cc/2024/Conference
2407.04945
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=z86knmjoUq
@inproceedings{ wu2024pure, title={{PURE}: Prompt Evolution with Graph {ODE} for Out-of-distribution Fluid Dynamics Modeling}, author={Hao Wu and Changhu Wang and Fan Xu and Jinbao Xue and Chong Chen and Xian-Sheng Hua and Xiao Luo}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=z86knmjoUq} }
This work studies the problem of out-of-distribution fluid dynamics modeling. Previous works usually design effective neural operators to learn from mesh-based data structures. However, in real-world applications, they would suffer from distribution shifts from the variance of system parameters and temporal evolution of the dynamical system. In this paper, we propose a novel approach named \underline{P}rompt Evol\underline{u}tion with G\underline{r}aph OD\underline{E} (\method{}) for out-of-distribution fluid dynamics modeling. The core of our \method{} is to learn time-evolving prompts using a graph ODE to adapt spatio-temporal forecasting models to different scenarios. In particular, our \method{} first learns from historical observations and system parameters in the frequency domain to explore multi-view context information, which could effectively initialize prompt embeddings. More importantly, we incorporate the interpolation of observation sequences into a graph ODE, which can capture the temporal evolution of prompt embeddings for model adaptation. These time-evolving prompt embeddings are then incorporated into basic forecasting models to overcome temporal distribution shifts. We also minimize the mutual information between prompt embeddings and observation embeddings to enhance the robustness of our model to different distributions. Extensive experiments on various benchmark datasets validate the superiority of the proposed \method{} in comparison to various baselines.
PURE: Prompt Evolution with Graph ODE for Out-of-distribution Fluid Dynamics Modeling
[ "Hao Wu", "Changhu Wang", "Fan Xu", "Jinbao Xue", "Chong Chen", "Xian-Sheng Hua", "Xiao Luo" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=z7h7zMgyPJ
@inproceedings{ h{\o}gsgaard2024the, title={The Many Faces of Optimal Weak-to-Strong Learning}, author={Mikael M{\o}ller H{\o}gsgaard and Kasper Green Larsen and Markus Engelund Mathiasen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=z7h7zMgyPJ} }
Boosting is an extremely successful idea, allowing one to combine multiple low accuracy classifiers into a much more accurate voting classifier. In this work, we present a new and surprisingly simple Boosting algorithm that obtains a provably optimal sample complexity. Sample optimal Boosting algorithms have only recently been developed, and our new algorithm has the fastest runtime among all such algorithms and is the simplest to describe: Partition your training data into 5 disjoint pieces of equal size, run AdaBoost on each, and combine the resulting classifiers via a majority vote. In addition to this theoretical contribution, we also perform the first empirical comparison of the proposed sample optimal Boosting algorithms. Our pilot empirical study suggests that our new algorithm might outperform previous algorithms on large data sets.
The Many Faces of Optimal Weak-to-Strong Learning
[ "Mikael Møller Høgsgaard", "Kasper Green Larsen", "Markus Engelund Mathiasen" ]
NeurIPS.cc/2024/Conference
2408.17148
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=z6reLFqv6w
@inproceedings{ mcsharry2024learning, title={Learning diverse causally emergent representations from time series data}, author={David McSharry and Christos Kaplanis and Fernando E Rosas and Pedro A. M. Mediano}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=z6reLFqv6w} }
Cognitive processes usually take place at a macroscopic scale in systems characterised by emergent properties, which make the whole ‘more than the sum of its parts.’ While recent proposals have provided quantitative, information-theoretic metrics to detect emergence in time series data, it is often highly non-trivial to identify the relevant macroscopic variables a priori. In this paper we leverage recent advances in representation learning and differentiable information estimators to put forward a data-driven method to find emergent variables. The proposed method successfully detects emergent variables and recovers the ground-truth emergence values in a synthetic dataset. Furthermore, we show the method can be extended to learn multiple independent features, extracting a diverse set of emergent quantities. We finally show that a modified method scales to real experimental data from primate brain activity, paving the ground for future analyses uncovering the emergent structure of cognitive representations in biological and artificial intelligence systems.
Learning diverse causally emergent representations from time series data
[ "David McSharry", "Christos Kaplanis", "Fernando E Rosas", "Pedro A. M. Mediano" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=z6KNvOe9zQ
@inproceedings{ yang2024vision, title={Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning}, author={Chenyu Yang and Xizhou Zhu and Jinguo Zhu and Weijie Su and Junjie Wang and Xuan Dong and Wenhai Wang and Bin Li and Jie Zhou and Yu Qiao and Jifeng Dai}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=z6KNvOe9zQ} }
Recently, vision model pre-training has evolved from relying on manually annotated datasets to leveraging large-scale, web-crawled image-text data. Despite these advances, there is no pre-training method that effectively exploits the interleaved image-text data, which is very prevalent on the Internet. Inspired by the recent success of compression learning in natural language processing, we propose a novel vision model pre-training method called Latent Compression Learning (LCL) for interleaved image-text data. This method performs latent compression learning by maximizing the mutual information between the inputs and outputs of a causal attention model. The training objective can be decomposed into two basic tasks: 1) contrastive learning between visual representation and preceding context, and 2) generating subsequent text based on visual representation. Our experiments demonstrate that our method not only matches the performance of CLIP on paired pre-training datasets (e.g., LAION), but can also leverage interleaved pre-training data (e.g., MMC4) to learn robust visual representations from scratch, showcasing the potential of vision model pre-training with interleaved image-text data.
Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning
[ "Chenyu Yang", "Xizhou Zhu", "Jinguo Zhu", "Weijie Su", "Junjie Wang", "Xuan Dong", "Wenhai Wang", "Bin Li", "Jie Zhou", "Yu Qiao", "Jifeng Dai" ]
NeurIPS.cc/2024/Conference
2406.07543
[ "https://github.com/opengvlab/lcl" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=z4eVwH484M
@inproceedings{ kim2024unveiling, title={Unveiling the Hidden: Online Vectorized {HD} Map Construction with Clip-Level Token Interaction and Propagation}, author={Nayeon Kim and Hongje Seong and Daehyun Ji and Sujin Jang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=z4eVwH484M} }
Predicting and constructing road geometric information (e.g., lane lines, road markers) is a crucial task for safe autonomous driving, while such static map elements can be repeatedly occluded by various dynamic objects on the road. Recent studies have shown significantly improved vectorized high-definition (HD) map construction performance, but there has been insufficient investigation of temporal information across adjacent input frames (i.e., clips), which may lead to inconsistent and suboptimal prediction results. To tackle this, we introduce a novel paradigm of clip-level vectorized HD map construction, MapUnveiler, which explicitly unveils the occluded map elements within a clip input by relating dense image representations with efficient clip tokens. Additionally, MapUnveiler associates inter-clip information through clip token propagation, effectively utilizing long- term temporal map information. MapUnveiler runs efficiently with the proposed clip-level pipeline by avoiding redundant computation with temporal stride while building a global map relationship. Our extensive experiments demonstrate that MapUnveiler achieves state-of-the-art performance on both the nuScenes and Argoverse2 benchmark datasets. We also showcase that MapUnveiler significantly outperforms state-of-the-art approaches in a challenging setting, achieving +10.7% mAP improvement in heavily occluded driving road scenes. The project page can be found at https://mapunveiler.github.io.
Unveiling the Hidden: Online Vectorized HD Map Construction with Clip-Level Token Interaction and Propagation
[ "Nayeon Kim", "Hongje Seong", "Daehyun Ji", "Sujin Jang" ]
NeurIPS.cc/2024/Conference
2411.11002
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=z4duW3KzlD
@inproceedings{ hashempoor2024gated, title={Gated Inference Network: Inference and Learning State-Space Models}, author={Hamidreza Hashempoor and Wan Choi}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=z4duW3KzlD} }
This paper advances temporal reasoning within dynamically changing high-dimensional noisy observations, focusing on a latent space that characterizes the nonlinear dynamics of objects in their environment. We introduce the *Gated Inference Network* (GIN), an efficient approximate Bayesian inference algorithm for state space models (SSMs) with nonlinear state transitions and emissions. GIN disentangles two latent representations: one representing the object derived from a nonlinear mapping model, and another representing the latent state describing its dynamics. This disentanglement enables direct state estimation and missing data imputation as the world evolves. To infer the latent state, we utilize a deep extended Kalman filter (EKF) approach that integrates a novel compact RNN structure to compute both the Kalman Gain (KG) and smoothing gain (SG), completing the data flow. This design results in a computational cost per step that is linearly faster than EKF but introduces issues such as the exploding gradient problem. To mitigate the exploding gradients caused by the compact RNN structure in our model, we propose a specialized learning method that ensures stable training and inference. The model is then trained end-to-end on videos depicting a diverse range of simulated and real-world physical systems, and outperforms its ounterparts —RNNs, autoregressive models, and variational approaches— in state estimation and missing data imputation tasks.
Gated Inference Network: Inference and Learning State-Space Models
[ "Hamidreza Hashempoor", "Wan Choi" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=z4FaPUslma
@inproceedings{ markou2024guiding, title={Guiding Neural Collapse: Optimising Towards the Nearest Simplex Equiangular Tight Frame}, author={Evan Markou and Thalaiyasingam Ajanthan and Stephen Gould}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=z4FaPUslma} }
Neural Collapse (NC) is a recently observed phenomenon in neural networks that characterises the solution space of the final classifier layer when trained until zero training loss. Specifically, NC suggests that the final classifier layer converges to a Simplex Equiangular Tight Frame (ETF), which maximally separates the weights corresponding to each class. By duality, the penultimate layer feature means also converge to the same simplex ETF. Since this simple symmetric structure is optimal, our idea is to utilise this property to improve convergence speed. Specifically, we introduce the notion of \textit{nearest simplex ETF geometry} for the penultimate layer features at any given training iteration, by formulating it as a Riemannian optimisation. Then, at each iteration, the classifier weights are implicitly set to the nearest simplex ETF by solving this inner-optimisation, which is encapsulated within a declarative node to allow backpropagation. Our experiments on synthetic and real-world architectures on classification tasks demonstrate that our approach accelerates convergence and enhances training stability.
Guiding Neural Collapse: Optimising Towards the Nearest Simplex Equiangular Tight Frame
[ "Evan Markou", "Thalaiyasingam Ajanthan", "Stephen Gould" ]
NeurIPS.cc/2024/Conference
2411.01248
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=z2739hYuR3
@inproceedings{ li2024provably, title={Provably Efficient Reinforcement Learning with Multinomial Logit Function Approximation}, author={Long-Fei Li and Yu-Jie Zhang and Peng Zhao and Zhi-Hua Zhou}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=z2739hYuR3} }
We study a new class of MDPs that employs multinomial logit (MNL) function approximation to ensure valid probability distributions over the state space. Despite its benefits, introducing the non-linear function raises significant challenges in both *computational* and *statistical* efficiency. The best-known result of Hwang and Oh [2023] has achieved an $\widetilde{\mathcal{O}}(\kappa^{-1}dH^2\sqrt{K})$ regret, where $\kappa$ is a problem-dependent quantity, $d$ is the feature dimension, $H$ is the episode length, and $K$ is the number of episodes. While this result attains the same rate in $K$ as linear cases, the method requires storing all historical data and suffers from an $\mathcal{O}(K)$ computation cost per episode. Moreover, the quantity $\kappa$ can be exponentially small in the worst case, leading to a significant gap for the regret compared to linear function approximation. In this work, we first address the computational and storage issue by proposing an algorithm that achieves the same regret with only $\mathcal{O}(1)$ cost. Then, we design an enhanced algorithm that leverages local information to enhance statistical efficiency. It not only maintains an $\mathcal{O}(1)$ computation and storage cost per episode but also achieves an improved regret of $\widetilde{\mathcal{O}}(dH^2\sqrt{K} + d^2H^2\kappa^{-1})$, nearly closing the gap with linear function approximation. Finally, we establish the first lower bound for MNL function approximation, justifying the optimality of our results in $d$ and $K$.
Provably Efficient Reinforcement Learning with Multinomial Logit Function Approximation
[ "Long-Fei Li", "Yu-Jie Zhang", "Peng Zhao", "Zhi-Hua Zhou" ]
NeurIPS.cc/2024/Conference
2405.17061
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=z1GwaNoGnr
@inproceedings{ wang2024xmaskd, title={{XM}ask3D: Cross-modal Mask Reasoning for Open Vocabulary 3D Semantic Segmentation}, author={Ziyi Wang and Yanbo Wang and Xumin Yu and Jie Zhou and Jiwen Lu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=z1GwaNoGnr} }
Existing methodologies in open vocabulary 3D semantic segmentation primarily concentrate on establishing a unified feature space encompassing 3D, 2D, and textual modalities. Nevertheless, traditional techniques such as global feature alignment or vision-language model distillation tend to impose only approximate correspondence, struggling notably with delineating fine-grained segmentation boundaries. To address this gap, we propose a more meticulous mask-level alignment between 3D features and the 2D-text embedding space through a cross-modal mask reasoning framework, XMask3D. In our approach, we developed a mask generator based on the denoising UNet from a pre-trained diffusion model, leveraging its capability for precise textual control over dense pixel representations and enhancing the open-world adaptability of the generated masks. We further integrate 3D global features as implicit conditions into the pre-trained 2D denoising UNet, enabling the generation of segmentation masks with additional 3D geometry awareness. Subsequently, the generated 2D masks are employed to align mask-level 3D representations with the vision-language feature space, thereby augmenting the open vocabulary capability of 3D geometry embeddings. Finally, we fuse complementary 2D and 3D mask features, resulting in competitive performance across multiple benchmarks for 3D open vocabulary semantic segmentation. Code is available at https://github.com/wangzy22/XMask3D.
XMask3D: Cross-modal Mask Reasoning for Open Vocabulary 3D Semantic Segmentation
[ "Ziyi Wang", "Yanbo Wang", "Xumin Yu", "Jie Zhou", "Jiwen Lu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=z0I2SbjN0R
@inproceedings{ huang2024diffusionpde, title={Diffusion{PDE}: Generative {PDE}-Solving under Partial Observation}, author={Jiahe Huang and Guandao Yang and Zichen Wang and Jeong Joon Park}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=z0I2SbjN0R} }
We introduce a general framework for solving partial differential equations (PDEs) using generative diffusion models. In particular, we focus on the scenarios where we do not have the full knowledge of the scene necessary to apply classical solvers. Most existing forward or inverse PDE approaches perform poorly when the observations on the data or the underlying coefficients are incomplete, which is a common assumption for real-world measurements. In this work, we propose DiffusionPDE that can simultaneously fill in the missing information and solve a PDE by modeling the joint distribution of the solution and coefficient spaces. We show that the learned generative priors lead to a versatile framework for accurately solving a wide range of PDEs under partial observation, significantly outperforming the state-of-the-art methods for both forward and inverse directions.
DiffusionPDE: Generative PDE-Solving under Partial Observation
[ "Jiahe Huang", "Guandao Yang", "Zichen Wang", "Jeong Joon Park" ]
NeurIPS.cc/2024/Conference
2406.17763
[ "https://github.com/jhhuangchloe/DiffusionPDE" ]
https://huggingface.co/papers/2406.17763
4
23
1
4
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=yzviAnpvU6
@inproceedings{ wang2024relizo, title={Re{LIZO}: Sample Reusable Linear Interpolation-based Zeroth-order Optimization}, author={Xiaoxing Wang and Xiaohan Qin and Xiaokang Yang and Junchi Yan}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=yzviAnpvU6} }
Gradient estimation is critical in zeroth-order optimization methods, which aims to obtain the descent direction by sampling update directions and querying function evaluations. Extensive research has been conducted including smoothing and linear interpolation. The former methods smooth the objective function, causing a biased gradient estimation, while the latter often enjoys more accurate estimates, at the cost of large amounts of samples and queries at each iteration to update variables. This paper resorts to the linear interpolation strategy and proposes to reduce the complexity of gradient estimation by reusing queries in the prior iterations while maintaining the sample size unchanged. Specifically, we model the gradient estimation as a quadratically constrained linear program problem and manage to derive the analytical solution. It innovatively decouples the required sample size from the variable dimension without extra conditions required, making it able to leverage the queries in the prior iterations. Moreover, part of the intermediate variables that contribute to the gradient estimation can be directly indexed, significantly reducing the computation complexity. Experiments on both simulation functions and real scenarios (black-box adversarial attacks neural architecture search, and parameter-efficient fine-tuning for large language models), show its efficacy and efficiency. Our code is available at https://github.com/Thinklab-SJTU/ReLIZO.git.
ReLIZO: Sample Reusable Linear Interpolation-based Zeroth-order Optimization
[ "Xiaoxing Wang", "Xiaohan Qin", "Xiaokang Yang", "Junchi Yan" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yySpldUsU2
@inproceedings{ nguyen2024changing, title={Changing the Training Data Distribution to Reduce Simplicity Bias Improves In-distribution Generalization}, author={Dang Nguyen and Paymon Haddad and Eric Gan and Baharan Mirzasoleiman}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=yySpldUsU2} }
Can we modify the training data distribution to encourage the underlying optimization method toward finding solutions with superior generalization performance on in-distribution data? In this work, we approach this question for the first time by comparing the inductive bias of gradient descent (GD) with that of sharpness-aware minimization (SAM). By studying a two-layer CNN, we rigorously prove that SAM learns different features more uniformly, particularly in early epochs. That is, SAM is less susceptible to simplicity bias compared to GD. We also show that examples constraining features that are learned early are separable from the rest based on the model’s output. Based on this observation, we propose a method that (i) clusters examples based on the network output early in training, (ii) identifies a cluster of examples with similar network output, and (iii) upsamples the rest of examples only once to alleviate the simplicity bias. We show empirically that USEFUL effectively improves the generalization performance on the original data distribution when training with various gradient methods, including (S)GD and SAM. Notably, we demonstrate that our method can be combined with SAM variants and existing data augmentation strategies to achieve, to the best of our knowledge, state-of-the-art performance for training ResNet18 on CIFAR10, STL10, CINIC10, Tiny-ImageNet; ResNet34 on CIFAR100; and VGG19 and DenseNet121 on CIFAR10.
Changing the Training Data Distribution to Reduce Simplicity Bias Improves In-distribution Generalization
[ "Dang Nguyen", "Paymon Haddad", "Eric Gan", "Baharan Mirzasoleiman" ]
NeurIPS.cc/2024/Conference
2404.17768
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yxjWAJzUyV
@inproceedings{ gao2024rebel, title={{REBEL}: Reinforcement Learning via Regressing Relative Rewards}, author={Zhaolin Gao and Jonathan Daniel Chang and Wenhao Zhan and Owen Oertell and Gokul Swamy and Kiant{\'e} Brantley and Thorsten Joachims and J. Andrew Bagnell and Jason D. Lee and Wen Sun}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=yxjWAJzUyV} }
While originally developed for continuous control problems, Proximal Policy Optimization (PPO) has emerged as the work-horse of a variety of reinforcement learning (RL) applications, including the fine-tuning of generative models. Unfortunately, PPO requires multiple heuristics to enable stable convergence (e.g. value networks, clipping), and is notorious for its sensitivity to the precise implementation of these components. In response, we take a step back and ask what a *minimalist* RL algorithm for the era of generative models would look like. We propose REBEL, an algorithm that cleanly reduces the problem of policy optimization to regressing the *relative reward* between two completions to a prompt in terms of the policy, enabling strikingly lightweight implementation. In theory, we prove that fundamental RL algorithms like Natural Policy Gradient can be seen as variants of REBEL, which allows us to match the strongest known theoretical guarantees in terms of convergence and sample complexity in the RL literature. REBEL can also cleanly incorporate offline data and be extended to handle the intransitive preferences we frequently see in practice. Empirically, we find that REBEL provides a unified approach to language modeling and image generation with stronger or similar performance as PPO and DPO, all while being simpler to implement and more computationally efficient than PPO. When fine-tuning Llama-3-8B-Instruct, REBEL achieves strong performance in AlpacaEval 2.0, MT-Bench, and Open LLM Leaderboard. Implementation of REBEL can be found at <https://github.com/ZhaolinGao/REBEL>, and models trained by REBEL can be found at <https://huggingface.co/Cornell-AGI>.
REBEL: Reinforcement Learning via Regressing Relative Rewards
[ "Zhaolin Gao", "Jonathan Daniel Chang", "Wenhao Zhan", "Owen Oertell", "Gokul Swamy", "Kianté Brantley", "Thorsten Joachims", "J. Andrew Bagnell", "Jason D. Lee", "Wen Sun" ]
NeurIPS.cc/2024/Conference
2404.16767
[ "https://github.com/Owen-Oertell/rlcm" ]
https://huggingface.co/papers/2404.16767
3
2
0
10
[ "Cornell-AGI/REBEL-Llama-3-epoch_2", "Cornell-AGI/REBEL-Llama-3-Armo-iter_3", "Cornell-AGI/REBEL-OpenChat-3.5", "Cornell-AGI/REBEL-Llama-3", "Cornell-AGI/REBEL-Llama-3-Armo-iter_1", "Cornell-AGI/REBEL-Llama-3-Armo-iter_2" ]
[ "Cornell-AGI/Ultrafeedback-Llama-3-Armo-iter_3", "Cornell-AGI/Ultrafeedback-Llama-3-Armo-iter_1", "Cornell-AGI/Ultrafeedback-Llama-3-Armo-iter_2" ]
[]
[ "Cornell-AGI/REBEL-Llama-3-epoch_2", "Cornell-AGI/REBEL-Llama-3-Armo-iter_3", "Cornell-AGI/REBEL-OpenChat-3.5", "Cornell-AGI/REBEL-Llama-3", "Cornell-AGI/REBEL-Llama-3-Armo-iter_1", "Cornell-AGI/REBEL-Llama-3-Armo-iter_2" ]
[ "Cornell-AGI/Ultrafeedback-Llama-3-Armo-iter_3", "Cornell-AGI/Ultrafeedback-Llama-3-Armo-iter_1", "Cornell-AGI/Ultrafeedback-Llama-3-Armo-iter_2" ]
[]
1
poster
null
https://openreview.net/forum?id=yxOrSmS5wR
@inproceedings{ chen2024avcloud, title={{AV}-Cloud: Spatial Audio Rendering Through Audio-Visual Cloud Splatting}, author={Mingfei Chen and Eli Shlizerman}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=yxOrSmS5wR} }
We propose a novel approach for rendering high-quality spatial audio for 3D scenes that is in synchrony with the visual stream but does not rely or explicitly conditioned on the visual rendering. We demonstrate that such an approach enables the experience of immersive virtual tourism - performing a real-time dynamic navigation within the scene, experiencing both audio and visual content. Current audio-visual rendering approaches typically rely on visual cues, such as images, and thus visual artifacts could cause inconsistency in the audio quality. Furthermore, when such approaches are incorporated with visual rendering, audio generation at each viewpoint occurs after the rendering of the image of the viewpoint and thus could lead to audio lag that affects the integration of audio and visual streams. Our proposed approach, AV-Cloud, overcomes these challenges by learning the representation of the audio-visual scene based on a set of sparse AV anchor points, that constitute the Audio-Visual Cloud, and are derived from the camera calibration. The Audio-Visual Cloud serves as an audio-visual representation from which the generation of spatial audio for arbitrary listener location can be generated. In particular, we propose a novel module Audio-Visual Cloud Splatting which decodes AV anchor points into a spatial audio transfer function for the arbitrary viewpoint of the target listener. This function, applied through the Spatial Audio Render Head module, transforms monaural input into viewpoint-specific spatial audio. As a result, AV-Cloud efficiently renders the spatial audio aligned with any visual viewpoint and eliminates the need for pre-rendered images. We show that AV-Cloud surpasses current state-of-the-art accuracy on audio reconstruction, perceptive quality, and acoustic effects on two real-world datasets. AV-Cloud also outperforms previous methods when tested on scenes "in the wild".
AV-Cloud: Spatial Audio Rendering Through Audio-Visual Cloud Splatting
[ "Mingfei Chen", "Eli Shlizerman" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ywEQkCmImh
@inproceedings{ cho2024towards, title={Towards Multi-Domain Learning for Generalizable Video Anomaly Detection}, author={MyeongAh Cho and Taeoh Kim and Minho Shim and Dongyoon Wee and Sangyoun Lee}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ywEQkCmImh} }
Most of the existing Video Anomaly Detection (VAD) studies have been conducted within single-domain learning, where training and evaluation are performed on a single dataset. However, the criteria for abnormal events differ across VAD datasets, making it problematic to apply a single-domain model to other domains. In this paper, we propose a new task called Multi-Domain learning forVAD (MDVAD) to explore various real-world abnormal events using multiple datasets for a general model. MDVAD involves training on datasets from multiple domains simultaneously, and we experimentally observe that Abnormal Conflicts between domains hinder learning and generalization. The task aims to address two key objectives: (i) better distinguishing between general normal and abnormal events across multiple domains, and (ii) being aware of ambiguous abnormal conflicts. This paper is the first to tackle abnormal conflict issue and introduces a new benchmark, baselines, and evaluation protocols for MDVAD. As baselines, we propose a framework with Null(Angular)-Multiple Instance Learning and an Abnormal Conflict classifier. Through experiments on a MDVAD benchmark composed of six VAD datasets and using four different evaluation protocols, we reveal abnormal conflicts and demonstrate that the proposed baseline effectively handles these conflicts, showing robustness and adaptability across multiple domains.
Towards Multi-Domain Learning for Generalizable Video Anomaly Detection
[ "MyeongAh Cho", "Taeoh Kim", "Minho Shim", "Dongyoon Wee", "Sangyoun Lee" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yvUHnBkCzd
@inproceedings{ ghari2024personalized, title={Personalized Federated Learning with Mixture of Models for Adaptive Prediction and Model Fine-Tuning}, author={Pouya M. Ghari and Yanning Shen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=yvUHnBkCzd} }
Federated learning is renowned for its efficacy in distributed model training, ensuring that users, called clients, retain data privacy by not disclosing their data to the central server that orchestrates collaborations. Most previous work on federated learning assumes that clients possess static batches of training data. However, clients may also need to make real-time predictions on streaming data in non-stationary environments. In such dynamic environments, employing pre-trained models may be inefficient, as they struggle to adapt to the constantly evolving data streams. To address this challenge, clients can fine-tune models online, leveraging their observed data to enhance performance. Despite the potential benefits of client participation in federated online model fine-tuning, existing analyses have not conclusively demonstrated its superiority over local model fine-tuning. To bridge this gap, the present paper develops a novel personalized federated learning algorithm, wherein each client constructs a personalized model by combining a locally fine-tuned model with multiple federated models learned by the server over time. Theoretical analysis and experiments on real datasets corroborate the effectiveness of this approach for real-time predictions and federated model fine-tuning.
Personalized Federated Learning with Mixture of Models for Adaptive Prediction and Model Fine-Tuning
[ "Pouya M. Ghari", "Yanning Shen" ]
NeurIPS.cc/2024/Conference
2410.21547
[ "https://github.com/pouyamghari/Fed-POE" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yppcLFeZgy
@inproceedings{ luo2024mutaplm, title={Muta{PLM}: Protein Language Modeling for Mutation Explanation and Engineering}, author={YIZHEN LUO and Zikun Nie and Massimo Hong and Suyuan Zhao and Hao Zhou and Zaiqing Nie}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=yppcLFeZgy} }
Studying protein mutations within amino acid sequences holds tremendous significance in life sciences. Protein language models (PLMs) have demonstrated strong capabilities in broad biological applications. However, due to architectural design and lack of supervision, PLMs model mutations implicitly with evolutionary plausibility, which is not satisfactory to serve as explainable and engineerable tools in real-world studies. To address these issues, we present MutaPLM, a unified framework for interpreting and navigating protein mutations with protein language models. MutaPLM introduces a protein *delta* network that captures explicit protein mutation representations within a unified feature space, and a transfer learning pipeline with a chain-of-thought (CoT) strategy to harvest protein mutation knowledge from biomedical texts. We also construct MutaDescribe, the first large-scale protein mutation dataset with rich textual annotations, which provides cross-modal supervision signals. Through comprehensive experiments, we demonstrate that MutaPLM excels at providing human-understandable explanations for mutational effects and prioritizing novel mutations with desirable properties. Our code, model, and data are open-sourced at https://github.com/PharMolix/MutaPLM.
MutaPLM: Protein Language Modeling for Mutation Explanation and Engineering
[ "YIZHEN LUO", "Zikun Nie", "Massimo Hong", "Suyuan Zhao", "Hao Zhou", "Zaiqing Nie" ]
NeurIPS.cc/2024/Conference
2410.22949
[ "https://github.com/pharmolix/mutaplm" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ypggxVWIv2
@inproceedings{ duan2024gtbench, title={{GTB}ench: Uncovering the Strategic Reasoning Capabilities of {LLM}s via Game-Theoretic Evaluations}, author={Jinhao Duan and Renming Zhang and James Diffenderfer and Bhavya Kailkhura and Lichao Sun and Elias Stengel-Eskin and Mohit Bansal and Tianlong Chen and Kaidi Xu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ypggxVWIv2} }
As Large Language Models (LLMs) are integrated into critical real-world applications, their strategic and logical reasoning abilities are increasingly crucial. This paper evaluates LLMs' reasoning abilities in competitive environments through game-theoretic tasks, e.g., board and card games that require pure logic and strategic reasoning to compete with opponents. We first propose GTBench, a language-driven environment composing 10 widely-recognized tasks, across a comprehensive game taxonomy: complete versus incomplete information, dynamic versus static, and probabilistic versus deterministic scenarios. Then, we (1) Characterize the game-theoretic reasoning of LLMs; and (2) Perform LLM-vs.-LLM competitions as reasoning evaluation. We observe that (1) LLMs have distinct behaviors regarding various gaming scenarios; for example, LLMs fail in complete and deterministic games yet they are competitive in probabilistic gaming scenarios; (2) Most open-source LLMs, e.g., CodeLlama-34b-Instruct and Llama-2-70b-chat, are less competitive than commercial LLMs, e.g., GPT-4, in complex games, yet the recently released Llama-3-70b-Instruct makes up for this shortcoming. In addition, code-pretraining greatly benefits strategic reasoning, while advanced reasoning methods such as Chain-of-Thought (CoT) and Tree-of-Thought (ToT) do not always help. We further characterize the game-theoretic properties of LLMs, such as equilibrium and Pareto Efficiency in repeated games. Detailed error profiles are provided for a better understanding of LLMs' behavior. We hope our research provides standardized protocols and serves as a foundation to spur further explorations in the strategic reasoning of LLMs.
GTBench: Uncovering the Strategic Reasoning Capabilities of LLMs via Game-Theoretic Evaluations
[ "Jinhao Duan", "Renming Zhang", "James Diffenderfer", "Bhavya Kailkhura", "Lichao Sun", "Elias Stengel-Eskin", "Mohit Bansal", "Tianlong Chen", "Kaidi Xu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ypaqE8UwsC
@inproceedings{ rengarajan2024federated, title={Federated Ensemble-Directed Offline Reinforcement Learning}, author={Desik Rengarajan and Nitin Ragothaman and Dileep Kalathil and Srinivas Shakkottai}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ypaqE8UwsC} }
We consider the problem of federated offline reinforcement learning (RL), a scenario under which distributed learning agents must collaboratively learn a high-quality control policy only using small pre-collected datasets generated according to different unknown behavior policies. Na\"{i}vely combining a standard offline RL approach with a standard federated learning approach to solve this problem can lead to poorly performing policies. In response, we develop the Federated Ensemble-Directed Offline Reinforcement Learning Algorithm (FEDORA), which distills the collective wisdom of the clients using an ensemble learning approach. We develop the FEDORA codebase to utilize distributed compute resources on a federated learning platform. We show that FEDORA significantly outperforms other approaches, including offline RL over the combined data pool, in various complex continuous control environments and real-world datasets. Finally, we demonstrate the performance of FEDORA in the real-world on a mobile robot. We provide our code and a video of our experiments at \url{https://github.com/DesikRengarajan/FEDORA}.
Federated Ensemble-Directed Offline Reinforcement Learning
[ "Desik Rengarajan", "Nitin Ragothaman", "Dileep Kalathil", "Srinivas Shakkottai" ]
NeurIPS.cc/2024/Conference
2305.03097
[ "https://github.com/desikrengarajan/fedora" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ypPzyflbYs
@inproceedings{ stammer2024neural, title={Neural Concept Binder}, author={Wolfgang Stammer and Antonia W{\"u}st and David Steinmann and Kristian Kersting}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ypPzyflbYs} }
The challenge in object-based visual reasoning lies in generating concept representations that are both descriptive and distinct. Achieving this in an unsupervised manner requires human users to understand the model's learned concepts and, if necessary, revise incorrect ones. To address this challenge, we introduce the Neural Concept Binder (NCB), a novel framework for deriving both discrete and continuous concept representations, which we refer to as "concept-slot encodings". NCB employs two types of binding: "soft binding", which leverages the recent SysBinder mechanism to obtain object-factor encodings, and subsequent "hard binding", achieved through hierarchical clustering and retrieval-based inference. This enables obtaining expressive, discrete representations from unlabeled images. Moreover, the structured nature of NCB's concept representations allows for intuitive inspection and the straightforward integration of external knowledge, such as human input or insights from other AI models like GPT-4. Additionally, we demonstrate that incorporating the hard binding mechanism preserves model performance while enabling seamless integration into both neural and symbolic modules for complex reasoning tasks. We validate the effectiveness of NCB through evaluations on our newly introduced CLEVR-Sudoku dataset.
Neural Concept Binder
[ "Wolfgang Stammer", "Antonia Wüst", "David Steinmann", "Kristian Kersting" ]
NeurIPS.cc/2024/Conference
2406.09949
[ "https://github.com/ml-research/neuralconceptbinder" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ypFgcT147Z
@inproceedings{ wald2024decoupling, title={Decoupling Semantic Similarity from Spatial Alignment for Neural Networks.}, author={Tassilo Wald and Constantin Ulrich and Priyank Jaini and Gregor Koehler and David Zimmerer and Stefan Denner and Fabian Isensee and Michael Baumgartner and Klaus Maier-Hein}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ypFgcT147Z} }
What representation do deep neural networks learn? How similar are images to each other for neural networks? Despite the overwhelming success of deep learning methods key questions about their internal workings still remain largely unanswered, due to their internal high dimensionality and complexity. To address this, one approach is to measure the similarity of activation responses to various inputs. Representational Similarity Matrices (RSMs) distill this similarity into scalar values for each input pair. These matrices encapsulate the entire similarity structure of a system, indicating which input lead to similar responses. While the similarity between images is ambiguous, we argue that the spatial location of semantic objects does neither influence human perception nor deep learning classifiers. Thus this should be reflected in the definition of similarity between image responses for computer vision systems. Revisiting the established similarity calculations for RSMs we expose their sensitivity to spatial alignment. In this paper we propose to solve this through _semantic RSMs_, which are invariant to spatial permutation. We measure semantic similarity between input responses by formulating it as a set-matching problem. Further, we quantify the superiority of _semantic_ RSMs over _spatio-semantic_ RSMs through image retrieval and by comparing the similarity between representations to the similarity between predicted class probabilities.
Decoupling Semantic Similarity from Spatial Alignment for Neural Networks.
[ "Tassilo Wald", "Constantin Ulrich", "Priyank Jaini", "Gregor Koehler", "David Zimmerer", "Stefan Denner", "Fabian Isensee", "Michael Baumgartner", "Klaus Maier-Hein" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ypEamFKu2O
@inproceedings{ jia2024pgn, title={{PGN}: The {RNN}'s New Successor is Effective for Long-Range Time Series Forecasting}, author={Yuxin Jia and Youfang Lin and Jing Yu and Shuo Wang and Tianhao Liu and Huaiyu Wan}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ypEamFKu2O} }
Due to the recurrent structure of RNN, the long information propagation path poses limitations in capturing long-term dependencies, gradient explosion/vanishing issues, and inefficient sequential execution. Based on this, we propose a novel paradigm called Parallel Gated Network (PGN) as the new successor to RNN. PGN directly captures information from previous time steps through the designed Historical Information Extraction (HIE) layer and leverages gated mechanisms to select and fuse it with the current time step information. This reduces the information propagation path to $\mathcal{O}(1)$, effectively addressing the limitations of RNN. To enhance PGN's performance in long-range time series forecasting tasks, we propose a novel temporal modeling framework called Temporal PGN (TPGN). TPGN incorporates two branches to comprehensively capture the semantic information of time series. One branch utilizes PGN to capture long-term periodic patterns while preserving their local characteristics. The other branch employs patches to capture short-term information and aggregate the global representation of the series. TPGN achieves a theoretical complexity of $\mathcal{O}(\sqrt{L})$, ensuring efficiency in its operations. Experimental results on five benchmark datasets demonstrate the state-of-the-art (SOTA) performance and high efficiency of TPGN, further confirming the effectiveness of PGN as the new successor to RNN in long-range time series forecasting. The code is available in this repository: https://github.com/Water2sea/TPGN.
PGN: The RNN's New Successor is Effective for Long-Range Time Series Forecasting
[ "Yuxin Jia", "Youfang Lin", "Jing Yu", "Shuo Wang", "Tianhao Liu", "Huaiyu Wan" ]
NeurIPS.cc/2024/Conference
2409.17703
[ "https://github.com/water2sea/tpgn" ]
https://huggingface.co/papers/2409.17703
0
0
0
6
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=ynJr0RW6FR
@inproceedings{ mei2024regs, title={Re{GS}: Reference-based Controllable Scene Stylization with Gaussian Splatting}, author={Yiqun Mei and Jiacong Xu and Vishal M. Patel}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ynJr0RW6FR} }
Referenced-based scene stylization that edits the appearance based on a content-aligned reference image is an emerging research area. Starting with a pretrained neural radiance field (NeRF), existing methods typically learn a novel appearance that matches the given style. Despite their effectiveness, they inherently suffer from time-consuming volume rendering, and thus are impractical for many real-time applications. In this work, we propose ReGS, which adapts 3D Gaussian Splatting (3DGS) for reference-based stylization to enable real-time stylized view synthesis. Editing the appearance of a pretrained 3DGS is challenging as it uses discrete Gaussians as 3D representation, which tightly bind appearance with geometry. Simply optimizing the appearance as prior methods do is often insufficient for modeling continuous textures in the given reference image. To address this challenge, we propose a novel texture-guided control mechanism that adaptively adjusts local responsible Gaussians to a new geometric arrangement, serving for desired texture details. The proposed process is guided by texture clues for effective appearance editing, and regularized by scene depth for preserving original geometric structure. With these novel designs, we show ReGs can produce state-of-the-art stylization results that respect the reference texture while embracing real-time rendering speed for free-view navigation.
ReGS: Reference-based Controllable Scene Stylization with Gaussian Splatting
[ "Yiqun Mei", "Jiacong Xu", "Vishal M. Patel" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yltJAlwtW9
@inproceedings{ futami2024informationtheoretic, title={Information-theoretic Generalization Analysis for Expected Calibration Error}, author={Futoshi Futami and Masahiro Fujisawa}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=yltJAlwtW9} }
While the expected calibration error (ECE), which employs binning, is widely adopted to evaluate the calibration performance of machine learning models, theoretical understanding of its estimation bias is limited. In this paper, we present the first comprehensive analysis of the estimation bias in the two common binning strategies, uniform mass and uniform width binning. Our analysis establishes upper bounds on the bias, achieving an improved convergence rate. Moreover, our bounds reveal, for the first time, the optimal number of bins to minimize the estimation bias. We further extend our bias analysis to generalization error analysis based on the information-theoretic approach, deriving upper bounds that enable the numerical evaluation of how small the ECE is for unknown data. Experiments using deep learning models show that our bounds are nonvacuous thanks to this information-theoretic generalization analysis approach.
Information-theoretic Generalization Analysis for Expected Calibration Error
[ "Futoshi Futami", "Masahiro Fujisawa" ]
NeurIPS.cc/2024/Conference
2405.15709
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ylceJ2xIw5
@inproceedings{ xiong2024fair, title={Fair Wasserstein Coresets}, author={Zikai Xiong and Niccolo Dalmasso and Shubham Sharma and Freddy Lecue and Daniele Magazzeni and Vamsi K. Potluru and Tucker Balch and Manuela Veloso}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ylceJ2xIw5} }
Data distillation and coresets have emerged as popular approaches to generate a smaller representative set of samples for downstream learning tasks to handle large-scale datasets. At the same time, machine learning is being increasingly applied to decision-making processes at a societal level, making it imperative for modelers to address inherent biases towards subgroups present in the data. While current approaches focus on creating fair synthetic representative samples by optimizing local properties relative to the original samples, their impact on downstream learning processes has yet to be explored. In this work, we present fair Wasserstein coresets ($\texttt{FWC}$), a novel coreset approach which generates fair synthetic representative samples along with sample-level weights to be used in downstream learning tasks. $\texttt{FWC}$ uses an efficient majority minimization algorithm to minimize the Wasserstein distance between the original dataset and the weighted synthetic samples while enforcing demographic parity. We show that an unconstrained version of $\texttt{FWC}$ is equivalent to Lloyd's algorithm for k-medians and k-means clustering. Experiments conducted on both synthetic and real datasets show that $\texttt{FWC}$: (i) achieves a competitive fairness-performance tradeoff in downstream models compared to existing approaches, (ii) improves downstream fairness when added to the existing training data and (iii) can be used to reduce biases in predictions from large language models (GPT-3.5 and GPT-4).
Fair Wasserstein Coresets
[ "Zikai Xiong", "Niccolo Dalmasso", "Shubham Sharma", "Freddy Lecue", "Daniele Magazzeni", "Vamsi K. Potluru", "Tucker Balch", "Manuela Veloso" ]
NeurIPS.cc/2024/Conference
2311.05436
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yktQNqtepd
@inproceedings{ zheng2024towards, title={Towards Flexible 3D Perception: Object-Centric Occupancy Completion Augments 3D Object Detection}, author={Chaoda Zheng and Feng Wang and Naiyan Wang and Shuguang Cui and Zhen Li}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=yktQNqtepd} }
While 3D object bounding box (bbox) representation has been widely used in autonomous driving perception, it lacks the ability to capture the precise details of an object's intrinsic geometry. Recently, occupancy has emerged as a promising alternative for 3D scene perception. However, constructing a high-resolution occupancy map remains infeasible for large scenes due to computational constraints. Recognizing that foreground objects only occupy a small portion of the scene, we introduce object-centric occupancy as a supplement to object bboxes. This representation not only provides intricate details for detected objects but also enables higher voxel resolution in practical applications. We advance the development of object-centric occupancy perception from both data and algorithm perspectives. On the data side, we construct the first object-centric occupancy dataset from scratch using an automated pipeline. From the algorithmic standpoint, we introduce a novel object-centric occupancy completion network equipped with an implicit shape decoder that manages dynamic-size occupancy generation. This network accurately predicts the complete object-centric occupancy volume for inaccurate object proposals by leveraging temporal information from long sequences. Our method demonstrates robust performance in completing object shapes under noisy detection and tracking conditions. Additionally, we show that our occupancy features significantly enhance the detection results of state-of-the-art 3D object detectors, especially for incomplete or distant objects in the Waymo Open Dataset.
Towards Flexible 3D Perception: Object-Centric Occupancy Completion Augments 3D Object Detection
[ "Chaoda Zheng", "Feng Wang", "Naiyan Wang", "Shuguang Cui", "Zhen Li" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ykQnxko1cJ
@inproceedings{ sun2024cemiface, title={CemiFace: Center-based Semi-hard Synthetic Face Generation for Face Recognition}, author={Zhonglin Sun and Siyang Song and Ioannis Patras and Georgios Tzimiropoulos}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ykQnxko1cJ} }
Privacy issue is a main concern in developing face recognition techniques. Although synthetic face images can partially mitigate potential legal risks while maintaining effective face recognition (FR) performance, FR models trained by face images synthesized by existing generative approaches frequently suffer from performance degradation problems due to the insufficient discriminative quality of these synthesized samples. In this paper, we systematically investigate what contributes to solid face recognition model training, and reveal that face images with certain degree of similarities to their identity centers show great effectiveness in the performance of trained FR models. Inspired by this, we propose a novel diffusion-based approach (namely **Ce**nter-based Se**mi**-hard Synthetic Face Generation (**CemiFace**) which produces facial samples with various levels of similarity to the subject center, thus allowing to generate face datasets containing effective discriminative samples for training face recognition. Experimental results show that with a modest degree of similarity, training on the generated dataset can produce competitive performance compared to previous generation methods. The code will be available at:https://github.com/szlbiubiubiu/CemiFace
CemiFace: Center-based Semi-hard Synthetic Face Generation for Face Recognition
[ "Zhonglin Sun", "Siyang Song", "Ioannis Patras", "Georgios Tzimiropoulos" ]
NeurIPS.cc/2024/Conference
2409.18876
[ "https://github.com/szlbiubiubiu/CemiFace" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ykACV1IhjD
@inproceedings{ ichikawa2024controlling, title={Controlling Continuous Relaxation for Combinatorial Optimization}, author={Yuma Ichikawa}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ykACV1IhjD} }
Unsupervised learning (UL)-based solvers for combinatorial optimization (CO) train a neural network that generates a soft solution by directly optimizing the CO objective using a continuous relaxation strategy. These solvers offer several advantages over traditional methods and other learning-based methods, particularly for large-scale CO problems. However, UL-based solvers face two practical issues: (I) an optimization issue, where UL-based solvers are easily trapped at local optima, and (II) a rounding issue, where UL-based solvers require artificial post-learning rounding from the continuous space back to the original discrete space, undermining the robustness of the results. This study proposes a Continuous Relaxation Annealing (CRA) strategy, an effective rounding-free learning method for UL-based solvers. CRA introduces a penalty term that dynamically shifts from prioritizing continuous solutions, effectively smoothing the non-convexity of the objective function, to enforcing discreteness, eliminating artificial rounding. Experimental results demonstrate that CRA significantly enhances the performance of UL-based solvers, outperforming existing UL-based solvers and greedy algorithms in complex CO problems. Additionally, CRA effectively eliminates artificial rounding and accelerates the learning process.
Controlling Continuous Relaxation for Combinatorial Optimization
[ "Yuma Ichikawa" ]
NeurIPS.cc/2024/Conference
2309.16965
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yiXZZC5qDI
@inproceedings{ pan2024from, title={From Trojan Horses to Castle Walls: Unveiling Bilateral Data Poisoning Effects in Diffusion Models}, author={Zhuoshi Pan and Yuguang Yao and Gaowen Liu and Bingquan Shen and H. Vicky Zhao and Ramana Rao Kompella and Sijia Liu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=yiXZZC5qDI} }
While state-of-the-art diffusion models (DMs) excel in image generation, concerns regarding their security persist. Earlier research highlighted DMs' vulnerability to data poisoning attacks, but these studies placed stricter requirements than conventional methods like 'BadNets' in image classification. This is because the art necessitates modifications to the diffusion training and sampling procedures. Unlike the prior work, we investigate whether BadNets-like data poisoning methods can directly degrade the generation by DMs. In other words, if only the training dataset is contaminated (without manipulating the diffusion process), how will this affect the performance of learned DMs? In this setting, we uncover bilateral data poisoning effects that not only serve an adversarial purpose (compromising the functionality of DMs) but also offer a defensive advantage (which can be leveraged for defense in classification tasks against poisoning attacks). We show that a BadNets-like data poisoning attack remains effective in DMs for producing incorrect images (misaligned with the intended text conditions). Meanwhile, poisoned DMs exhibit an increased ratio of triggers, a phenomenon we refer to as 'trigger amplification', among the generated images. This insight can be then used to enhance the detection of poisoned training data. In addition, even under a low poisoning ratio, studying the poisoning effects of DMs is also valuable for designing robust image classifiers against such attacks. Last but not least, we establish a meaningful linkage between data poisoning and the phenomenon of data replications by exploring DMs' inherent data memorization tendencies. Code is available at https://github.com/OPTML-Group/BiBadDiff.
From Trojan Horses to Castle Walls: Unveiling Bilateral Data Poisoning Effects in Diffusion Models
[ "Zhuoshi Pan", "Yuguang Yao", "Gaowen Liu", "Bingquan Shen", "H. Vicky Zhao", "Ramana Rao Kompella", "Sijia Liu" ]
NeurIPS.cc/2024/Conference
2311.02373
[ "https://github.com/optml-group/bibaddiff" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yhd2kHHNtB
@inproceedings{ du2024avoiding, title={Avoiding Undesired Future with Minimal Cost in Non-Stationary Environments}, author={Wen-Bo Du and Tian Qin and Tian-Zuo Wang and Zhi-Hua Zhou}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=yhd2kHHNtB} }
Machine learning (ML) has achieved remarkable success in prediction tasks. In many real-world scenarios, rather than solely predicting an outcome using an ML model, the crucial concern is how to make decisions to prevent the occurrence of undesired outcomes, known as the *avoiding undesired future (AUF)* problem. To this end, a new framework called *rehearsal learning* has been proposed recently, which works effectively in stationary environments by leveraging the influence relations among variables. In real tasks, however, the environments are usually non-stationary, where the influence relations may be *dynamic*, leading to the failure of AUF by the existing method. In this paper, we introduce a novel sequential methodology that effectively updates the estimates of dynamic influence relations, which are crucial for rehearsal learning to prevent undesired outcomes in non-stationary environments. Meanwhile, we take the cost of decision actions into account and provide the formulation of AUF problem with minimal action cost under non-stationarity. We prove that in linear Gaussian cases, the problem can be transformed into the well-studied convex quadratically constrained quadratic program (QCQP). In this way, we establish the first polynomial-time rehearsal-based approach for addressing the AUF problem. Theoretical and experimental results validate the effectiveness and efficiency of our method under certain circumstances.
Avoiding Undesired Future with Minimal Cost in Non-Stationary Environments
[ "Wen-Bo Du", "Tian Qin", "Tian-Zuo Wang", "Zhi-Hua Zhou" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ygDl8q02gA
@inproceedings{ depavia2024optimal, title={Optimal Algorithms for Learning Partitions with Faulty Oracles}, author={Adela Frances DePavia and Olga Medrano Mart{\'\i}n del Campo and Erasmo Tani}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ygDl8q02gA} }
We consider a clustering problem where a learner seeks to partition a finite set by querying a faulty oracle. This models applications where learners crowdsource information from non-expert human workers or conduct noisy experiments to determine group structure. The learner aims to exactly recover a partition by submitting queries of the form ``are $u$ and $v$ in the same group?'' for any pair of elements $u$ and $v$ in the set. Moreover, because the learner only has access to faulty sources of information, they require an error-tolerant algorithm for this task: i.e. they must fully recover the correct partition, even if up to $\ell$ answers are incorrect, for some error-tolerance parameter $\ell$. We study the question: for any given error-tolerance $\ell$, what is the minimum number of queries needed to learn a finite set partition of $n$ elements into $k$ groups? We design algorithms for this task and prove that they achieve optimal query complexity. To analyze our algorithms, we first highlight a connection between this task and correlation clustering. We then use this connection to build a Rényi-Ulam style analytical framework for this problem, which yields matching lower bounds. Our analysis also reveals an inherent asymmetry between the query complexity necessary to be robust against false negative errors as opposed to false positive errors.
Optimal Algorithms for Learning Partitions with Faulty Oracles
[ "Adela Frances DePavia", "Olga Medrano Martín del Campo", "Erasmo Tani" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yfQwyxiSJ7
@inproceedings{ yuan2024colororiented, title={Color-Oriented Redundancy Reduction in Dataset Distillation}, author={Bowen Yuan and Zijian Wang and Mahsa Baktashmotlagh and Yadan Luo and Zi Huang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=yfQwyxiSJ7} }
Dataset Distillation (DD) is designed to generate condensed representations of extensive image datasets, enhancing training efficiency. Despite recent advances, there remains considerable potential for improvement, particularly in addressing the notable redundancy within the color space of distilled images. In this paper, we propose a two-fold optimization strategy to minimize color redundancy at the individual image and overall dataset levels, respectively. At the image level, we employ a palette network, a specialized neural network, to dynamically allocate colors from a reduced color space to each pixel. The palette network identifies essential areas in synthetic images for model training, and consequently assigns more unique colors to them. At the dataset level, we develop a color-guided initialization strategy to minimize redundancy among images. Representative images with the least replicated color patterns are selected based on the information gain. A comprehensive performance study involving various datasets and evaluation scenarios is conducted, demonstrating the superior performance of our proposed color-aware DD compared to existing DD methods.
Color-Oriented Redundancy Reduction in Dataset Distillation
[ "Bowen Yuan", "Zijian Wang", "Mahsa Baktashmotlagh", "Yadan Luo", "Zi Huang" ]
NeurIPS.cc/2024/Conference
2411.11329
[ "https://github.com/kevinyuan0314/autopalette" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yeFx5NQmr7
@inproceedings{ shao2024learning, title={Learning 3D Garment Animation from Trajectories of A Piece of Cloth}, author={Yidi Shao and Chen Change Loy and Bo Dai}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=yeFx5NQmr7} }
Garment animation is ubiquitous in various applications, such as virtual reality, gaming, and film producing. Recently, learning-based approaches obtain compelling performance in animating diverse garments under versatile scenarios. Nevertheless, to mimic the deformations of the observed garments, data-driven methods require large scale of garment data, which are both resource-wise expensive and time-consuming. In addition, forcing models to match the dynamics of observed garment animation may hinder the potentials to generalize to unseen cases. In this paper, instead of using garment-wise supervised-learning we adopt a disentangled scheme to learn how to animate observed garments: 1). learning constitutive behaviors from the observed cloth; 2). dynamically animate various garments constrained by the learned constitutive laws. Specifically, we propose Energy Unit network (EUNet) to model the constitutive relations in the format of energy. Without the priors from analytical physics models and differentiable simulation engines, EUNet is able to directly capture the constitutive behaviors from the observed piece of cloth and uniformly describes the change of energy caused by deformations, such as stretching and bending. We further apply the pre-trained EUNet to animate various garments based on energy optimizations. The disentangled scheme alleviates the need of garment data and enables us to utilize the dynamics of a piece of cloth for animating garments. Experiments show that while EUNet effectively delivers the energy gradients due to the deformations, models constrained by EUNet achieve more stable and physically plausible performance comparing with those trained in garment-wise supervised manner.
Learning 3D Garment Animation from Trajectories of A Piece of Cloth
[ "Yidi Shao", "Chen Change Loy", "Bo Dai" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ybiUVIxJth
@inproceedings{ alamdari2024policy, title={Policy Aggregation}, author={Parand A. Alamdari and Soroush Ebadian and Ariel D. Procaccia}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ybiUVIxJth} }
We consider the challenge of AI value alignment with multiple individuals that have different reward functions and optimal policies in an underlying Markov decision process. We formalize this problem as one of *policy aggregation*, where the goal is to identify a desirable collective policy. We argue that an approach informed by social choice theory is especially suitable. Our key insight is that social choice methods can be reinterpreted by identifying ordinal preferences with volumes of subsets of the *state-action occupancy polytope*. Building on this insight, we demonstrate that a variety of methods — including approval voting, Borda count, the proportional veto core, and quantile fairness — can be practically applied to policy aggregation.
Policy Aggregation
[ "Parand A. Alamdari", "Soroush Ebadian", "Ariel D. Procaccia" ]
NeurIPS.cc/2024/Conference
[ "https://github.com/Mohammadamin-Barekatain/multipolar" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ybMrn4tdn0
@inproceedings{ bhattacharjee2024auditing, title={Auditing Local Explanations is Hard}, author={Robi Bhattacharjee and Ulrike von Luxburg}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ybMrn4tdn0} }
In sensitive contexts, providers of machine learning algorithms are increasingly required to give explanations for their algorithms' decisions. However, explanation receivers might not trust the provider, who potentially could output misleading or manipulated explanations. In this work, we investigate an auditing framework in which a third-party auditor or a collective of users attempts to sanity-check explanations: they can query model decisions and the corresponding local explanations, pool all the information received, and then check for basic consistency properties. We prove upper and lower bounds on the amount of queries that are needed for an auditor to succeed within this framework. Our results show that successful auditing requires a potentially exorbitant number of queries -- particularly in high dimensional cases. Our analysis also reveals that a key property is the ``locality'' of the provided explanations --- a quantity that so far has not been paid much attention to in the explainability literature. Looking forward, our results suggest that for complex high-dimensional settings, merely providing a pointwise prediction and explanation could be insufficient, as there is no way for the users to verify that the provided explanations are not completely made-up.
Auditing Local Explanations is Hard
[ "Robi Bhattacharjee", "Ulrike von Luxburg" ]
NeurIPS.cc/2024/Conference
2407.13281
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ybLXvqJyQA
@inproceedings{ wanner2024predicting, title={Predicting Ground State Properties: Constant Sample Complexity and Deep Learning Algorithms}, author={Marc Wanner and Laura Lewis and Chiranjib Bhattacharyya and Devdatt Dubhashi and Alexandru Gheorghiu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ybLXvqJyQA} }
A fundamental problem in quantum many-body physics is that of finding ground states of local Hamiltonians. A number of recent works gave provably efficient machine learning (ML) algorithms for learning ground states. Specifically, [Huang et al. Science 2022], introduced an approach for learning properties of the ground state of an $n$-qubit gapped local Hamiltonian $H$ from only $n^{\mathcal{O}(1)}$ data points sampled from Hamiltonians in the same phase of matter. This was subsequently improved by [Lewis et al. Nature Communications 2024], to $\mathcal{O}(\log 𝑛)$ samples when the geometry of the $n$-qubit system is known. In this work, we introduce two approaches that achieve a constant sample complexity, independent of system size $n$, for learning ground state properties. Our first algorithm consists of a simple modification of the ML model used by Lewis et al. and applies to a property of interest known beforehand. Our second algorithm, which applies even if a description of the property is not known, is a deep neural network model. While empirical results showing the performance of neural networks have been demonstrated, to our knowledge, this is the first rigorous sample complexity bound on a neural network model for predicting ground state properties. We also perform numerical experiments that confirm the improved scaling of our approach compared to earlier results.
Predicting Ground State Properties: Constant Sample Complexity and Deep Learning Algorithms
[ "Marc Wanner", "Laura Lewis", "Chiranjib Bhattacharyya", "Devdatt Dubhashi", "Alexandru Gheorghiu" ]
NeurIPS.cc/2024/Conference
2405.18489
[ "https://github.com/marcwannerchalmers/learning_ground_states" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ybHPzL7eYT
@inproceedings{ fan2024large, title={Large Spatial Model: End-to-end Unposed Images to Semantic 3D}, author={Zhiwen Fan and Jian Zhang and Wenyan Cong and Peihao Wang and Renjie Li and Kairun Wen and Shijie Zhou and Achuta Kadambi and Zhangyang Wang and Danfei Xu and Boris Ivanovic and Marco Pavone and Yue Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=ybHPzL7eYT} }
Reconstructing and understanding 3D structures from a limited number of images is a classical problem in computer vision. Traditional approaches typically decompose this task into multiple subtasks, involving several stages of complex mappings between different data representations. For example, dense reconstruction using Structure-from-Motion (SfM) requires transforming images into key points, optimizing camera parameters, and estimating structures. Following this, accurate sparse reconstructions are necessary for further dense modeling, which is then input into task-specific neural networks. This multi-stage paradigm leads to significant processing times and engineering complexity. In this work, we introduce the Large Spatial Model (LSM), which directly processes unposed RGB images into semantic radiance fields. LSM simultaneously estimates geometry, appearance, and semantics in a single feed-forward pass and can synthesize versatile label maps by interacting through language at novel views. Built on a general Transformer-based framework, LSM predicts global geometry via pixel-aligned point maps. To improve spatial attribute regression, we adopt local context aggregation with multi-scale fusion, enhancing the accuracy of fine local details. To address the scarcity of labeled 3D semantic data and enable natural language-driven scene manipulation, we incorporate a pre-trained 2D language-based segmentation model into a 3D-consistent semantic feature field. An efficient decoder parameterizes a set of semantic anisotropic Gaussians, allowing supervised end-to-end learning. Comprehensive experiments on various tasks demonstrate that LSM unifies multiple 3D vision tasks directly from unposed images, achieving real-time semantic 3D reconstruction for the first time.
Large Spatial Model: End-to-end Unposed Images to Semantic 3D
[ "Zhiwen Fan", "Jian Zhang", "Wenyan Cong", "Peihao Wang", "Renjie Li", "Kairun Wen", "Shijie Zhou", "Achuta Kadambi", "Zhangyang Wang", "Danfei Xu", "Boris Ivanovic", "Marco Pavone", "Yue Wang" ]
NeurIPS.cc/2024/Conference
2410.18956
[ "" ]
https://huggingface.co/papers/2410.18956
1
1
0
13
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=yXpfrLMIr2
@inproceedings{ chen2024binarized, title={Binarized Diffusion Model for Image Super-Resolution}, author={Zheng Chen and Haotong Qin and Yong Guo and Xiongfei Su and Xin Yuan and Linghe Kong and Yulun Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=yXpfrLMIr2} }
Advanced diffusion models (DMs) perform impressively in image super-resolution (SR), but the high memory and computational costs hinder their deployment. Binarization, an ultra-compression algorithm, offers the potential for effectively accelerating DMs. Nonetheless, due to the model structure and the multi-step iterative attribute of DMs, existing binarization methods result in significant performance degradation. In this paper, we introduce a novel binarized diffusion model, BI-DiffSR, for image SR. First, for the model structure, we design a UNet architecture optimized for binarization. We propose the consistent-pixel-downsample (CP-Down) and consistent-pixel-upsample (CP-Up) to maintain dimension consistent and facilitate the full-precision information transfer. Meanwhile, we design the channel-shuffle-fusion (CS-Fusion) to enhance feature fusion in skip connection. Second, for the activation difference across timestep, we design the timestep-aware redistribution (TaR) and activation function (TaA). The TaR and TaA dynamically adjust the distribution of activations based on different timesteps, improving the flexibility and representation alability of the binarized module. Comprehensive experiments demonstrate that our BI-DiffSR outperforms existing binarization methods. Code is released at: https://github.com/zhengchen1999/BI-DiffSR.
Binarized Diffusion Model for Image Super-Resolution
[ "Zheng Chen", "Haotong Qin", "Yong Guo", "Xiongfei Su", "Xin Yuan", "Linghe Kong", "Yulun Zhang" ]
NeurIPS.cc/2024/Conference
2406.05723
[ "https://github.com/zhengchen1999/bi-diffsr" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yXW2dCTQdi
@inproceedings{ mastrogiuseppe2024controlled, title={Controlled maximal variability along with reliable performance in recurrent neural networks}, author={Chiara Mastrogiuseppe and Rub{\'e}n Moreno-Bote}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=yXW2dCTQdi} }
Natural behaviors, even stereotyped ones, exhibit variability. Despite its role in exploring and learning, the function and neural basis of this variability is still not well understood. Given the coupling between neural activity and behavior, we ask what type of neural variability does not compromise behavioral performance. While previous studies typically curtail variability to allow for high task performance in neural networks, our approach takes the reversed perspective. We investigate how to generate maximal neural variability while at the same time having high network performance. To do so, we extend to neural activity the maximum occupancy principle (MOP) developed for behavior, and refer to this new neural principle as NeuroMOP. NeuroMOP posits that the goal of the nervous system is to maximize future action-state entropy, a reward-free, intrinsic motivation that entails creating all possible activity patterns while avoiding terminal or dangerous ones. We show that this goal can be achieved through a neural network controller that injects currents (actions) into a recurrent neural network of fixed random weights to maximize future cumulative action-state entropy. High activity variability can be induced while adhering to an energy constraint or while avoiding terminal states defined by specific neurons' activities, also in a context-dependent manner. The network solves these tasks by flexibly switching between stochastic and deterministic modes as needed and projecting noise onto a null space. Based on future maximum entropy production, NeuroMOP contributes to a novel theory of neural variability that reconciles stochastic and deterministic behaviors within a single framework.
Controlled maximal variability along with reliable performance in recurrent neural networks
[ "Chiara Mastrogiuseppe", "Rubén Moreno-Bote" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yWq89o19wf
@inproceedings{ lin2024usercreator, title={User-Creator Feature Polarization in Recommender Systems with Dual Influence}, author={Tao Lin and Kun Jin and Andrew Estornell and Xiaoying Zhang and Yiling Chen and Yang Liu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=yWq89o19wf} }
Recommender systems serve the dual purpose of presenting relevant content to users and helping content creators reach their target audience. The dual nature of these systems naturally influences both users and creators: users' preferences are affected by the items they are recommended, while creators may be incentivized to alter their content to attract more users. We define a model, called user-creator feature dynamics, to capture the dual influence of recommender systems. We prove that a recommender system with dual influence is guaranteed to polarize, causing diversity loss in the system. We then investigate, both theoretically and empirically, approaches for mitigating polarization and promoting diversity in recommender systems. Unexpectedly, we find that common diversity-promoting approaches do not work in the presence of dual influence, while relevancy-optimizing methods like top-$k$ truncation can prevent polarization and improve diversity of the system.
User-Creator Feature Polarization in Recommender Systems with Dual Influence
[ "Tao Lin", "Kun Jin", "Andrew Estornell", "Xiaoying Zhang", "Yiling Chen", "Yang Liu" ]
NeurIPS.cc/2024/Conference
2407.14094
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yWSxjlFsmX
@inproceedings{ dai2024is, title={Is Mamba Compatible with Trajectory Optimization in Offline Reinforcement Learning?}, author={Yang Dai and Oubo Ma and Longfei Zhang and Xingxing Liang and Shengchao Hu and Mengzhu Wang and Shouling Ji and Jincai Huang and Li Shen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=yWSxjlFsmX} }
Transformer-based trajectory optimization methods have demonstrated exceptional performance in offline Reinforcement Learning (offline RL). Yet, it poses challenges due to substantial parameter size and limited scalability, which is particularly critical in sequential decision-making scenarios where resources are constrained such as in robots and drones with limited computational power. Mamba, a promising new linear-time sequence model, offers performance on par with transformers while delivering substantially fewer parameters on long sequences. As it remains unclear whether Mamba is compatible with trajectory optimization, this work aims to conduct comprehensive experiments to explore the potential of Decision Mamba (dubbed DeMa) in offline RL from the aspect of data structures and essential components with the following insights: (1) Long sequences impose a significant computational burden without contributing to performance improvements since DeMa's focus on sequences diminishes approximately exponentially. Consequently, we introduce a Transformer-like DeMa as opposed to an RNN-like DeMa. (2) For the components of DeMa, we identify the hidden attention mechanism as a critical factor in its success, which can also work well with other residual structures and does not require position embedding. Extensive evaluations demonstrate that our specially designed DeMa is compatible with trajectory optimization and surpasses previous methods, outperforming Decision Transformer (DT) with higher performance while using 30\% fewer parameters in Atari, and exceeding DT with only a quarter of the parameters in MuJoCo.
Is Mamba Compatible with Trajectory Optimization in Offline Reinforcement Learning?
[ "Yang Dai", "Oubo Ma", "Longfei Zhang", "Xingxing Liang", "Shengchao Hu", "Mengzhu Wang", "Shouling Ji", "Jincai Huang", "Li Shen" ]
NeurIPS.cc/2024/Conference
2405.12094
[ "https://github.com/AndssY/DeMa" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yW3tlSwusb
@inproceedings{ balcan2024accelerating, title={Accelerating {ERM} for data-driven algorithm design using output-sensitive techniques}, author={Maria Florina Balcan and Christopher Seiler and Dravyansh Sharma}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=yW3tlSwusb} }
Data-driven algorithm design is a promising, learning-based approach for beyond worst-case analysis of algorithms with tunable parameters. An important open problem is the design of computationally efficient data-driven algorithms for combinatorial algorithm families with multiple parameters. As one fixes the problem instance and varies the parameters, the “dual” loss function typically has a piecewise-decomposable structure, i.e. is well-behaved except at certain sharp transition boundaries. Motivated by prior empirical work, we initiate the study of techniques to develop efficient ERM learning algorithms for data-driven algorithm design by enumerating the pieces of the sum dual loss functions for a collection of problem instances. The running time of our approach scales with the actual number of pieces that appear as opposed to worst case upper bounds on the number of pieces. Our approach involves two novel ingredients – an output-sensitive algorithm for enumerating polytopes induced by a set of hyperplanes using tools from computational geometry, and an execution graph which compactly represents all the states the algorithm could attain for all possible parameter values. We illustrate our techniques by giving algorithms for pricing problems, linkage-based clustering and dynamic-programming based sequence alignment.
Accelerating ERM for data-driven algorithm design using output-sensitive techniques
[ "Maria Florina Balcan", "Christopher Seiler", "Dravyansh Sharma" ]
NeurIPS.cc/2024/Conference
2204.03569
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yVzWlFhpRW
@inproceedings{ stolz2024excluding, title={Excluding the Irrelevant: Focusing Reinforcement Learning through Continuous Action Masking}, author={Roland Stolz and Hanna Krasowski and Jakob Thumm and Michael Eichelbeck and Philipp Gassert and Matthias Althoff}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=yVzWlFhpRW} }
Continuous action spaces in reinforcement learning (RL) are commonly defined as multidimensional intervals. While intervals usually reflect the action boundaries for tasks well, they can be challenging for learning because the typically large global action space leads to frequent exploration of irrelevant actions. Yet, little task knowledge can be sufficient to identify significantly smaller state-specific sets of relevant actions. Focusing learning on these relevant actions can significantly improve training efficiency and effectiveness. In this paper, we propose to focus learning on the set of relevant actions and introduce three continuous action masking methods for exactly mapping the action space to the state-dependent set of relevant actions. Thus, our methods ensure that only relevant actions are executed, enhancing the predictability of the RL agent and enabling its use in safety-critical applications. We further derive the implications of the proposed methods on the policy gradient. Using proximal policy optimization ( PPO), we evaluate our methods on four control tasks, where the relevant action set is computed based on the system dynamics and a relevant state set. Our experiments show that the three action masking methods achieve higher final rewards and converge faster than the baseline without action masking.
Excluding the Irrelevant: Focusing Reinforcement Learning through Continuous Action Masking
[ "Roland Stolz", "Hanna Krasowski", "Jakob Thumm", "Michael Eichelbeck", "Philipp Gassert", "Matthias Althoff" ]
NeurIPS.cc/2024/Conference
2406.03704
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yVu5dnPlqA
@inproceedings{ yue2024mammoth, title={{MA}mmo{TH}2: Scaling Instructions from the Web}, author={Xiang Yue and Tianyu Zheng and Ge Zhang and Wenhu Chen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=yVu5dnPlqA} }
Instruction tuning improves the reasoning abilities of large language models (LLMs), with data quality and scalability being the crucial factors. Most instruction tuning data come from human crowd-sourcing or GPT-4 distillation. We propose a paradigm to efficiently harvest 10 million naturally existing instruction data from the pre-training web corpus to enhance LLM reasoning. Our approach involves (1) recalling relevant documents, (2) extracting instruction-response pairs, and (3) refining the extracted pairs using open-source LLMs. Fine-tuning base LLMs on this dataset, we build MAmmoTH2 models, which significantly boost performance on reasoning benchmarks. Notably, MAmmoTH2-7B’s (Mistral) performance increases from 11% to 36.7% on MATH and from 36% to 68.4% on GSM8K without training on any in-domain data. Further training MAmmoTH2 on public instruction tuning datasets yields MAmmoTH2-Plus, achieving state-of-the-art performance on several reasoning and chatbot benchmarks. Our work demonstrates how to harvest large-scale, high-quality instruction data without costly human annotation or GPT-4 distillation, providing a new paradigm for building better instruction tuning data.
MAmmoTH2: Scaling Instructions from the Web
[ "Xiang Yue", "Tianyu Zheng", "Ge Zhang", "Wenhu Chen" ]
NeurIPS.cc/2024/Conference
2405.03548
[ "" ]
https://huggingface.co/papers/2405.03548
3
6
0
4
[ "TIGER-Lab/MAmmoTH2-8B-Plus", "TIGER-Lab/MAmmoTH2-8x7B-Plus", "TIGER-Lab/MAmmoTH2-7B-Plus", "TIGER-Lab/MAmmoTH2-8B", "QuantFactory/MAmmoTH2-7B-GGUF", "TIGER-Lab/MAmmoTH2-7B", "TIGER-Lab/MAmmoTH2-8x7B", "RichardErkhov/TIGER-Lab_-_MAmmoTH2-7B-4bits", "RichardErkhov/TIGER-Lab_-_MAmmoTH2-7B-8bits", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-3_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-3_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-3_75bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-4_25bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-5_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-6_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-4_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-6_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-8_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-3_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-3_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-2_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-2_2bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-4_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-8_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-2_2bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-5_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-3_75bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-4_25bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-6_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-6_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-3_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-4_25bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-4_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-3_75bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-2_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-3_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-5_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-6_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-8_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-6_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-2_2bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-2_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-3_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-3_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-3_75bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-4_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-4_25bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-5_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-6_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-6_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-2_2bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-2_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-3_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-3_75bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-3_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-4_25bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-4_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-Plus-2_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-Plus-3_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-5_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-6_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-8_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-Plus-2_2bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-6_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-Plus-4_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-Plus-3_75bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-Plus-3_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-Plus-4_25bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-Plus-5_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-Plus-8_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-Plus-6_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-Plus-6_5bpw_exl2", "vaclavkosar/MAmmoTH2-8B-Plus-AWQ", "QuantFactory/MAmmoTH2-7B-Plus-GGUF", "QuantFactory/MAmmoTH2-8B-Plus-GGUF", "QuantFactory/MAmmoTH2-8B-GGUF", "RichardErkhov/TIGER-Lab_-_MAmmoTH2-8B-Plus-gguf", "RichardErkhov/TIGER-Lab_-_MAmmoTH2-8B-gguf", "RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-Plus-gguf", "RichardErkhov/TIGER-Lab_-_MAmmoTH2-7B-Plus-gguf", "RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf" ]
[ "TIGER-Lab/WebInstructSub", "TIGER-Lab/WebInstructFull", "TIGER-Lab/Fineweb-Instruct" ]
[ "featherless-ai/try-this-model", "eduagarcia/open_pt_llm_leaderboard", "TIGER-Lab/MAmmoTH2", "Granther/try-this-model", "Darok/Featherless-Feud", "emekaboris/try-this-model", "SC999/NV_Nemotron" ]
[ "TIGER-Lab/MAmmoTH2-8B-Plus", "TIGER-Lab/MAmmoTH2-8x7B-Plus", "TIGER-Lab/MAmmoTH2-7B-Plus", "TIGER-Lab/MAmmoTH2-8B", "QuantFactory/MAmmoTH2-7B-GGUF", "TIGER-Lab/MAmmoTH2-7B", "TIGER-Lab/MAmmoTH2-8x7B", "RichardErkhov/TIGER-Lab_-_MAmmoTH2-7B-4bits", "RichardErkhov/TIGER-Lab_-_MAmmoTH2-7B-8bits", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-3_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-3_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-3_75bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-4_25bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-5_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-6_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-4_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-6_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-Plus-8_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-3_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-3_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-2_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-2_2bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-4_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-8_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-2_2bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-5_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-3_75bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-4_25bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-6_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-6_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-3_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-4_25bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-4_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-3_75bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-2_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-3_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-5_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-6_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-8_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-6_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-2_2bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-2_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-3_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-3_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-3_75bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-4_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-4_25bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-5_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-6_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8x7B-6_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-2_2bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-2_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-3_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-3_75bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-3_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-4_25bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-4_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-Plus-2_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-Plus-3_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-5_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-6_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-8_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-Plus-2_2bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-7B-6_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-Plus-4_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-Plus-3_75bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-Plus-3_5bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-Plus-4_25bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-Plus-5_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-Plus-8_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-Plus-6_0bpw_exl2", "Zoyd/TIGER-Lab_MAmmoTH2-8B-Plus-6_5bpw_exl2", "vaclavkosar/MAmmoTH2-8B-Plus-AWQ", "QuantFactory/MAmmoTH2-7B-Plus-GGUF", "QuantFactory/MAmmoTH2-8B-Plus-GGUF", "QuantFactory/MAmmoTH2-8B-GGUF", "RichardErkhov/TIGER-Lab_-_MAmmoTH2-8B-Plus-gguf", "RichardErkhov/TIGER-Lab_-_MAmmoTH2-8B-gguf", "RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-Plus-gguf", "RichardErkhov/TIGER-Lab_-_MAmmoTH2-7B-Plus-gguf", "RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf" ]
[ "TIGER-Lab/WebInstructSub", "TIGER-Lab/WebInstructFull", "TIGER-Lab/Fineweb-Instruct" ]
[ "featherless-ai/try-this-model", "eduagarcia/open_pt_llm_leaderboard", "TIGER-Lab/MAmmoTH2", "Granther/try-this-model", "Darok/Featherless-Feud", "emekaboris/try-this-model", "SC999/NV_Nemotron" ]
1
poster
null
https://openreview.net/forum?id=yUqUBGioBG
@inproceedings{ slavutsky2024class, title={Class Distribution Shifts in Zero-Shot Learning: Learning Robust Representations}, author={Yuli Slavutsky and Yuval Benjamini}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=yUqUBGioBG} }
Zero-shot learning methods typically assume that the new, unseen classes encountered during deployment come from the same distribution as the the classes in the training set. However, real-world scenarios often involve class distribution shifts (e.g., in age or gender for person identification), posing challenges for zero-shot classifiers that rely on learned representations from training classes. In this work, we propose and analyze a model that assumes that the attribute responsible for the shift is unknown in advance. We show that in this setting, standard training may lead to non-robust representations. To mitigate this, we develop an algorithm for learning robust representations in which (a) synthetic data environments are constructed via hierarchical sampling, and (b) environment balancing penalization, inspired by out-of-distribution problems, is applied. We show that our algorithm improves generalization to diverse class distributions in both simulations and experiments on real-world datasets.
Class Distribution Shifts in Zero-Shot Learning: Learning Robust Representations
[ "Yuli Slavutsky", "Yuval Benjamini" ]
NeurIPS.cc/2024/Conference
2311.18575
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yUckuDjAE0
@inproceedings{ leghettas2024learning, title={Learning Bregman Divergences with Application to Robustness}, author={Mohamed-Hicham LEGHETTAS and Markus P{\"u}schel}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=yUckuDjAE0} }
We propose a novel and general method to learn Bregman divergences from raw high-dimensional data that measure similarity between images in pixel space. As a prototypical application, we learn divergences that consider real-world corruptions of images (e.g., blur) as close to the original and noisy perturbations as far, even if in $L^p$-distance the opposite holds. We also show that the learned Bregman divergence excels on datasets of human perceptual similarity judgment, suggesting its utility in a range of applications. We then define adversarial attacks by replacing the projected gradient descent (PGD) with the mirror descent associated with the learned Bregman divergence, and use them to improve the state-of-the-art in robustness through adversarial training for common image corruptions. In particular, for the contrast corruption that was found problematic in prior work we achieve an accuracy that exceeds the $L^p$- and the LPIPS-based adversarially trained neural networks by a margin of 27.16\% on the CIFAR-10-C corruption data set.
Learning Bregman Divergences with Application to Robustness
[ "Mohamed-Hicham LEGHETTAS", "Markus Püschel" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yURca4wi2L
@inproceedings{ cai2024temporally, title={Temporally Consistent Atmospheric Turbulence Mitigation with Neural Representations}, author={Haoming Cai and Jingxi Chen and Brandon Y. Feng and Weiyun Jiang and Mingyang Xie and Kevin Zhang and Cornelia Fermuller and Yiannis Aloimonos and Ashok Veeraraghavan and Christopher Metzler}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=yURca4wi2L} }
Atmospheric turbulence, caused by random fluctuations in the atmosphere's refractive index, introduces complex spatio-temporal distortions in imagery captured at long range. Video Atmospheric Turbulence Mitigation (ATM) aims to restore videos affected by these distortions. However, existing video ATM methods, both supervised and self-supervised, struggle to maintain temporally consistent mitigation across frames, leading to visually incoherent results. This limitation arises from the stochastic nature of atmospheric turbulence, which varies across space and time. Inspired by the observation that atmospheric turbulence induces high-frequency temporal variations, we propose ConVRT, a novel framework for consistent video restoration through turbulence. ConVRT introduces a neural video representation that explicitly decouples spatial and temporal information into a spatial content field and a temporal deformation field, enabling targeted regularization of the network's temporal representation capability. By leveraging the low-pass filtering properties of the regularized temporal representations, ConVRT effectively mitigates turbulence-induced temporal frequency variations and promotes temporal consistency. Furthermore, our training framework seamlessly integrates supervised pre-training on synthetic turbulence data with self-supervised learning on real-world videos, significantly improving the temporally consistent mitigation of ATM methods on diverse real-world data. More information can be found on our project page: https://convrt-2024.github.io/
Temporally Consistent Atmospheric Turbulence Mitigation with Neural Representations
[ "Haoming Cai", "Jingxi Chen", "Brandon Y. Feng", "Weiyun Jiang", "Mingyang Xie", "Kevin Zhang", "Cornelia Fermuller", "Yiannis Aloimonos", "Ashok Veeraraghavan", "Christopher Metzler" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yTTomSJsSW
@inproceedings{ kong2024aligning, title={Aligning Large Language Models with Representation Editing: A Control Perspective}, author={Lingkai Kong and Haorui Wang and Wenhao Mu and Yuanqi Du and Yuchen Zhuang and Yifei Zhou and Yue Song and Rongzhi Zhang and Kai Wang and Chao Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=yTTomSJsSW} }
Aligning large language models (LLMs) with human objectives is crucial for real-world applications. However, fine-tuning LLMs for alignment often suffers from unstable training and requires substantial computing resources. Test-time alignment techniques, such as prompting and guided decoding, do not modify the underlying model, and their performance remains dependent on the original model's capabilities. To address these challenges, we propose aligning LLMs through representation editing. The core of our method is to view a pre-trained autoregressive LLM as a discrete-time stochastic dynamical system. To achieve alignment for specific objectives, we introduce external control signals into the state space of this language dynamical system. We train a value function directly on the hidden states according to the Bellman equation, enabling gradient-based optimization to obtain the optimal control signals at test time. Our experiments demonstrate that our method outperforms existing test-time alignment techniques while requiring significantly fewer resources compared to fine-tuning methods. Our code is available at [https://github.com/Lingkai-Kong/RE-Control](https://github.com/Lingkai-Kong/RE-Control).
Aligning Large Language Models with Representation Editing: A Control Perspective
[ "Lingkai Kong", "Haorui Wang", "Wenhao Mu", "Yuanqi Du", "Yuchen Zhuang", "Yifei Zhou", "Yue Song", "Rongzhi Zhang", "Kai Wang", "Chao Zhang" ]
NeurIPS.cc/2024/Conference
2406.05954
[ "https://github.com/lingkai-kong/re-control" ]
https://huggingface.co/papers/2406.05954
0
0
0
10
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=yS9xU6ANiA
@inproceedings{ chen2024exogenous, title={Exogenous Matching: Learning Good Proposals for Tractable Counterfactual Estimation}, author={Yikang Chen and Dehui du and Lili Tian}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=yS9xU6ANiA} }
We propose an importance sampling method for tractable and efficient estimation of counterfactual expressions in general settings, named Exogenous Matching. By minimizing a common upper bound of counterfactual estimators, we transform the variance minimization problem into a conditional distribution learning problem, enabling its integration with existing conditional distribution modeling approaches. We validate the theoretical results through experiments under various types and settings of Structural Causal Models (SCMs) and demonstrate the outperformance on counterfactual estimation tasks compared to other existing importance sampling methods. We also explore the impact of injecting structural prior knowledge (counterfactual Markov boundaries) on the results. Finally, we apply this method to identifiable proxy SCMs and demonstrate the unbiasedness of the estimates, empirically illustrating the applicability of the method to practical scenarios.
Exogenous Matching: Learning Good Proposals for Tractable Counterfactual Estimation
[ "Yikang Chen", "Dehui du", "Lili Tian" ]
NeurIPS.cc/2024/Conference
2410.13914
[ "https://github.com/cyisk/exom" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yRuJqoWoCs
@inproceedings{ xu2024se, title={\${SE}(3)\$ Equivariant Ray Embeddings for Implicit Multi-View Depth Estimation}, author={Yinshuang Xu and Dian Chen and Katherine Liu and Sergey Zakharov and Rares Andrei Ambrus and Kostas Daniilidis and Vitor Campagnolo Guizilini}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=yRuJqoWoCs} }
Incorporating inductive bias by embedding geometric entities (such as rays) as input has proven successful in multi-view learning. However, the methods adopting this technique typically lack equivariance, which is crucial for effective 3D learning. Equivariance serves as a valuable inductive prior, aiding in the generation of robust multi-view features for 3D scene understanding. In this paper, we explore the application of equivariant multi-view learning to depth estimation, not only recognizing its significance for computer vision and robotics but also addressing the limitations of previous research. Most prior studies have either overlooked equivariance in this setting or achieved only approximate equivariance through data augmentation, which often leads to inconsistencies across different reference frames. To address this issue, we propose to embed $SE(3)$ equivariance into the Perceiver IO architecture. We employ Spherical Harmonics for positional encoding to ensure 3D rotation equivariance, and develop a specialized equivariant encoder and decoder within the Perceiver IO architecture. To validate our model, we applied it to the task of stereo depth estimation, achieving state of the art results on real-world datasets without explicit geometric constraints or extensive data augmentation.
SE(3) Equivariant Ray Embeddings for Implicit Multi-View Depth Estimation
[ "Yinshuang Xu", "Dian Chen", "Katherine Liu", "Sergey Zakharov", "Rares Andrei Ambrus", "Kostas Daniilidis", "Vitor Campagnolo Guizilini" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yRhrVaDOWE
@inproceedings{ sayar2024diffusionbased, title={Diffusion-based Curriculum Reinforcement Learning}, author={Erdi Sayar and Giovanni Iacca and Ozgur S. Oguz and Alois Knoll}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=yRhrVaDOWE} }
Curriculum Reinforcement Learning (CRL) is an approach to facilitate the learning process of agents by structuring tasks in a sequence of increasing complexity. Despite its potential, many existing CRL methods struggle to efficiently guide agents toward desired outcomes, particularly in the absence of domain knowledge. This paper introduces DiCuRL (Diffusion Curriculum Reinforcement Learning), a novel method that leverages conditional diffusion models to generate curriculum goals. To estimate how close an agent is to achieving its goal, our method uniquely incorporates a $Q$-function and a trainable reward function based on Adversarial Intrinsic Motivation within the diffusion model. Furthermore, it promotes exploration through the inherent noising and denoising mechanism present in the diffusion models and is environment-agnostic. This combination allows for the generation of challenging yet achievable goals, enabling agents to learn effectively without relying on domain knowledge. We demonstrate the effectiveness of DiCuRL in three different maze environments and two robotic manipulation tasks simulated in MuJoCo, where it outperforms or matches nine state-of-the-art CRL algorithms from the literature.
Diffusion-based Curriculum Reinforcement Learning
[ "Erdi Sayar", "Giovanni Iacca", "Ozgur S. Oguz", "Alois Knoll" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yRRCH1OsGW
@inproceedings{ jing2024generative, title={Generative Modeling of Molecular Dynamics Trajectories}, author={Bowen Jing and Hannes Stark and Tommi Jaakkola and Bonnie Berger}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=yRRCH1OsGW} }
Molecular dynamics (MD) is a powerful technique for studying microscopic phenomena, but its computational cost has driven significant interest in the development of deep learning-based surrogate models. We introduce generative modeling of molecular trajectories as a paradigm for learning flexible multi-task surrogate models of MD from data. By conditioning on appropriately chosen frames of the trajectory, we show such generative models can be adapted to diverse tasks such as forward simulation, transition path sampling, and trajectory upsampling. By alternatively conditioning on part of the molecular system and inpainting the rest, we also demonstrate the first steps towards dynamics-conditioned molecular design. We validate the full set of these capabilities on tetrapeptide simulations and show preliminary results on scaling to protein monomers. Altogether, our work illustrates how generative modeling can unlock value from MD data towards diverse downstream tasks that are not straightforward to address with existing methods or even MD itself. Code is available at https://github.com/bjing2016/mdgen.
Generative Modeling of Molecular Dynamics Trajectories
[ "Bowen Jing", "Hannes Stark", "Tommi Jaakkola", "Bonnie Berger" ]
NeurIPS.cc/2024/Conference
2409.17808
[ "https://github.com/bjing2016/mdgen" ]
https://huggingface.co/papers/2409.17808
0
1
0
4
[ "blanchon/mdgen" ]
[]
[]
[ "blanchon/mdgen" ]
[]
[]
1
poster